How a press measures impact will depend on the range of outputs it publishes, how the press is funded and the stage of development the press has reached.
Contents
- Measuring impact
- Usage data
- Using non-traditional metrics
- Citations
- Share evaluation and feedback with authors
- Measuring the success of the press
Measuring impact
Roemer and Borchardt (2015)[1] divide impact measurement into levels: individual publications, different formats, disciplines, individual authors, edited series, journal titles and so on. It can also include the value of distributing knowledge outside academia, such as citizen impact (Tanner, 2018)[2]. The value of a given metric may therefore differ depending on the level being assessed and the aim of the evaluation (Wennström et al., 2019)[3].
Measuring impact also depends on the aims of the press, its business model and the type of outputs published. A number of measures are required to provide a balanced view.
Metrics should be treated with caution. "High impact" on one measure, such as altmetrics, is not necessarily a predictor of high impact on others. For example, in a data sample from Stockholm University Press it was observed that "there is no clear evidence about a relationship between high altmetric scores and high citation numbers. There was, however, a correlation between the number of downloads and citations" (Wennström et al., 2019).
Measures of impact
Impact measures include:
- Download statistics across all platforms used (totals, by title, countries reached etc)
- Number of book proposal submissions the press receives
- Engagement with the press's social media accounts, both in general and as a result of individual campaigns
- Book reviews and other indicators of reception
- Engagement with the press's website
- General public engagement activity of the press and its authors
- Testimonials from authors
- Citations
- Books submitted to research assessment exercises, such as the UK's Research Excellence Framework (REF) and to REF Impact Case Studies
Evaluation methods
There are also a number of methods to collate and analyse this data, including:
- Collation of download statistics from all platforms (some platforms may provide download statistics reports; see the sketch below)
- Google Analytics
- Social media analytics tools on Twitter and other networks
- Manual gathering of book reviews received
- Manual gathering of information about proposals received
- Manual reporting of public engagement activity
However, the size of the press and the number of outputs it publishes will have an effect on the amount of collection and analysis that can be undertaken. Gathering and reporting on this range of data systematically will require some staff resource and technical investment, and the degree to which this is undertaken may also depend on institutional and author requirements and priorities.
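For a press with a small list, much of this collation can be handled with a short script. The sketch below, in Python using pandas, assumes that each platform's report has been exported as a CSV and already mapped to a shared set of columns; the file names and column names are illustrative assumptions, not the export format of any real platform.

```python
# A minimal sketch of collating download statistics from several platforms.
# Assumes each export has been normalised to the columns: title, country, downloads.
import pandas as pd

# One (hypothetical) CSV export per platform.
exports = {
    "publisher_platform": "publisher_platform.csv",
    "oapen": "oapen.csv",
    "jstor": "jstor.csv",
}

frames = []
for platform, path in exports.items():
    df = pd.read_csv(path)
    df["platform"] = platform  # keep the source for per-platform breakdowns
    frames.append(df)

usage = pd.concat(frames, ignore_index=True)

# Headline figures: totals, downloads by title, and countries reached.
print("Total downloads:", usage["downloads"].sum())
print(usage.groupby("title")["downloads"].sum().sort_values(ascending=False))
print("Countries reached:", usage["country"].nunique())
```

At larger scale the same approach extends to scheduled harvesting, for example via the COUNTER SUSHI protocol where platforms support it.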
Usage data
Usage, whether measured in print sales or digital download statistics, is an essential form of evaluation for a press.
It allows the press to inform authors of the potential impact of their work, and it can help the press evaluate which of its chosen dissemination platforms are most effective and whether different platforms are reaching different audiences and/or different parts of the world.
Measuring publications across multiple platforms
The very nature of open access means that a publication may be available on a number of different platforms, some more difficult to trace than others. However, it should be possible for presses to obtain usage statistics from platforms where publishing or distribution agreements with third-party vendors are in place (read more about choosing a platform and sales channels).
Examples of usage analysis include:
- Research by Montgomery et al. (2017)[4], where a detailed analysis of open access e-book usage at JSTOR was performed
- Stockholm University Press (SUP) where usage was analysed at OAPEN and the SUP publisher platform hosted by Ubiquity Press (Wennström et al., 2019)
Open access publications will also be available at many authors' institutional repositories. The Bielefeld Academic Search Engine (BASE) indexes publications deposited in institutional repositories, academic collections and other similar resources, as long as a DOI is used. CORE is another resource to track down material available in repositories. However, not all repositories will keep usage statistics.
Furthermore, this does not account for all usage (e.g. academics' own web pages or their Academia.edu and ResearchGate profiles), but it will give some indication of the impact of a publication. SUP found that the majority of usage came from its own platform, which is perhaps unsurprising given that this is where the DOI resolves. However, UCL Press found that having its titles on JSTOR dramatically increased usage (UCL Press, 2018)[5].
Measuring and comparing different outputs
Not all usage is measured in the same way. Some platforms are COUNTER compliant, so their data can be compared like for like; other platforms are not.
If in any doubt, always refer to the list of compliant publishers and platforms at COUNTER.
Repositories that use the IRUS-UK service will also keep COUNTER compliant statistics. This includes the majority of UK repositories, as well as many repositories in Australia and New Zealand and, most recently, a pilot in the United States. Other repositories may also display usage data. There is also a difference between output types (e.g. journal issues versus articles, or books versus book chapters), which affects how figures can be compared.
For transparency, a press should list on its website:
- The platforms it uses and from which the data is derived
- The format in which they host the content
- Whether the individual platforms are COUNTER compliant or not (the sketch below illustrates why this matters when totalling figures)
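Because COUNTER-compliant and non-compliant figures are not directly comparable, it is safer to report them separately than to fold them into a single total. A minimal sketch, with invented platform names and figures:

```python
# Keep COUNTER-compliant and non-compliant usage separate when reporting.
# All platform names and numbers here are invented for illustration.
sources = [
    {"platform": "Publisher platform", "downloads": 12400, "counter": True},
    {"platform": "OAPEN", "downloads": 8100, "counter": True},
    {"platform": "Repositories via IRUS-UK", "downloads": 3900, "counter": True},
    {"platform": "Aggregator X", "downloads": 2200, "counter": False},
]

compliant = sum(s["downloads"] for s in sources if s["counter"])
other = sum(s["downloads"] for s in sources if not s["counter"])

print(f"COUNTER-compliant downloads: {compliant}")
print(f"Other reported downloads (not directly comparable): {other}")
```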
Additionally, the authors and editors of a multi-author work each have the opportunity to upload their contribution to multiple platforms and repositories; chapter-level DOIs will help to track this usage in repositories (see more on metadata). It is therefore not surprising that SUP recorded higher usage for edited works than for single-author monographs.
However, the accuracy of audited COUNTER reports may not be required in order to demonstrate the value of the press. A download, like a print purchase, is not a guarantee that something has been read.
Using non-traditional metrics
A number of non-traditional metrics have emerged in recent years.
Gadd and Rowlands (2018)[6] surveyed which bibliometric and altmetric tools respondents used most regularly. The results (published under the Creative Commons Attribution 4.0 International, CC BY 4.0, licence) were:
- Scopus - 78.6%
- Google Scholar - 78.6%
- Web of Science - 69.0%
- Altmetric - 64.3%
- SciVal - 59.5%
- Publish or Perish - 33.3%
- Plum Analytics - 23.8%
- Dimensions - 23.8%
- Microsoft Academic - 21.4%
- InCites - 16.7%
- Kudos - 14.3%
- ImpactStory - 14.3%
- Other - 11.9%
As many of these metrics are still relatively immature, they can be open to misunderstanding and misrepresentation (Wilsdon, 2015)[7]. However, this does not mean that they are of no use. Used with caution (like any metric), non-traditional bibliometrics can greatly enhance evaluation of a press's publications.
Advice
In addition to Wilsdon, a good place to start is the Metrics Toolkit - a resource that helps researchers and evaluators find the best metric to use. The toolkit "provides evidence-based information about research metrics across disciplines, including how each metric is calculated, where you can find it, and how each should (and should not) be applied." The toolkit also provides examples of how to use different metrics.
Using Altmetric data
Perhaps the most recognisable non-traditional metric is the Altmetric donut. Described as complementary to citation-based metrics, Altmetric is a useful way to understand and evaluate the impact of a publication through social media as well as media coverage.
To a certain extent, resources such as altmetrics rely on self-marketing by the author to make sure that their article or book is read by their peers or the wider scholarly community. If authors promote the book via social media themselves, and/or the press's own activities contribute, then non-traditional bibliometrics can indicate how a publication was received by the network of researchers in a particular field, provided that those researchers also use social media to communicate.
Once again, these metrics rely on good quality metadata, most importantly a DOI (see more on metadata).
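Altmetric also offers a free, rate-limited lookup API keyed on DOI, which a press can use to check attention data for its own titles. A minimal sketch follows; the response fields are assumptions to verify against Altmetric's current API documentation.

```python
# A minimal sketch: look up Altmetric attention data for a DOI.
import requests

def altmetric_attention(doi: str) -> dict | None:
    """Return Altmetric data for a DOI, or None if no attention is recorded."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # the API returns 404 when nothing is recorded
        return None
    resp.raise_for_status()
    return resp.json()

# Example DOI: the Ottaviani (2016) article cited in the footnotes below.
data = altmetric_attention("10.1371/journal.pone.0159614")
print(data.get("score") if data else "No Altmetric attention recorded.")
```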
Citations
The number of citations attributed to a research output is often used as a proxy to quantify the impact and value of scholarly publications. Indeed, open access content tends to attract citations at a faster rate than paywalled content (Ottaviani, 2016)[8]. However, this depends on the format (journal vs book) and the discipline (the arts, humanities and social sciences often display lower citation rates than the sciences).
Citation databases are often based on a selection of sources, and unless the press’s publications are indexed by resources such as Scopus and Web of Science, citations to these publications will not appear within them.
The major issue with citations, especially for start-up journals, and in the arts, humanities and social sciences in particular, is that they take time to build up. Getting publications indexed in the major citation indexes can take years, and even many of the major commercial publishers' journals are not indexed. It is still possible, however, to find references to some publications. For example:
- Dimensions (Hook et al., 2018)[9] indexes and tracks citation data from open access sources
- Crossref also tracks citations and references between publications with DOIs (see the sketch at the end of this section)
- Google Scholar tracks citations to all sources that it has classified as scholarly literature, although these references may come from any source, such as conference presentations
Given its wide catchment of sources, Google Scholar is perhaps the most appropriate tool for a new press to use at the start.
Finally, it is worth noting the Initiative for Open Citations (I4OC), launched in April 2017: "a collaboration between scholarly publishers, researchers, and other interested parties to promote the unrestricted availability of scholarly citation data" (n.d.)[10]. It is one to watch in the future.
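As an illustration of DOI-based citation tracking, the sketch below queries Crossref's public REST API for the citation count it records against a DOI (here, the Gadd and Rowlands article cited in this section's footnotes). This count reflects only citations from publishers that deposit reference data with Crossref, so it is best read as a lower bound.

```python
# A minimal sketch: fetch the citation count Crossref records for a DOI.
import requests

def crossref_citation_count(doi: str) -> int:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    # "is-referenced-by-count" is Crossref's tally of open citation links.
    return resp.json()["message"].get("is-referenced-by-count", 0)

print(crossref_citation_count("10.1629/uksg.437"))
```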
Share evaluation and feedback with authors
It is vital that the press engages with its authors after publication. One such way is to report back on evaluation and feedback. The ability of the press to market an author's work is one of the things that attracts authors to publish in the first place. A robust marketing strategy and plan will also involve feedback to the author. Good feedback from the press based on metrics, evaluation and marketing feedback may create a virtuous circle by encouraging the authors to come back and to pass on positive feedback to colleagues.
Authors may not be aware of newer forms of evaluation, such as altmetrics (Wennström et al., 2019). Feeding back to authors will therefore raise awareness of the recognition their work can receive beyond traditional print sales and citations. This is particularly important for the press, as it is often the authors themselves who drive these metrics through self-promotion of their own work.
Advice
The OAPEN toolkit gives good advice for authors in this area (OAPEN, 2020)[11].
Measuring the success of the press
To finish this toolkit, we asked each of the contributors to comment on what they saw as a measure of success for a press.
UCL Press
"For UCL Press, there are multiple measures of success that are reviewed in combination on a regular basis for different purposes.
"The key quantitative measure is the number of downloads and their global spread, which helps to evaluate how and where readers are using books and journals, to ensure that the Press is delivering its open access mission as well as other strategic priorities of the institution. Other measures include: the number and quality of proposals received, author and editor feedback, book reviews, social media and website engagement, and outreach activities.
"From this, UCL Press can build up a picture of global impact, reception and engagement that helps to inform strategies and activities."
Lara Speicher, UCL Press
Liverpool University Press
"Success at Liverpool University Press (LUP) is measured in various ways.
"As a non-subsidised, mission-based press, the aim to break even each year with any surplus passed to the University of Liverpool for reinvestment into academia is the main one as this ensures our viability and longevity.
"However, at LUP success is also measured through the impact of its publications (reviews, awards, citations, downloads), the engagement of authors (the number of submissions received, whether LUP is an author's first choice publisher, the number of returning authors, and author feedback) and the external partnerships developed (which reflects on how peers and the industry view LUP).
"All of this contributes to the reputation of the press, which is the most important indicator of success, especially as the University of Liverpool's name is carried on everything LUP does. It is important to remember when considering how success is measured that this should align with the overall mission statement and business plan of the press.
"Not all successes are visible on a balance sheet and it is therefore important to measure what is valued. For more information on a value-enacted approach, see HuMetricsHSS."
Alison Welsby, Liverpool University Press
White Rose University Press
"White Rose University Press (WRUP) was initially set up to test the possibilities of library-led, open access publishing as part of White Rose Libraries wider commitment to open research and scholarship.
"As an early example of a new University Press, our initial measure of success was to prove our model was both attractive to academics and researchers, and robust enough to lead to concrete outputs.
"We wanted to demonstrate that a new press of this type could commission and produce high-quality, open access research publications that would be well received and well-used. We've met these initial goals, having established robust governance structures; peer review, editorial and production processes; and having built a strong relationship with our production partner.
"To date, we have published eight open access monograph volumes, with more monograph commissions in process, and support five research journals. Our first monographs (a two volume set) have now passed 35,000 combined views and downloads.
"As we move into our next stage of development, WRUP is looking at setting new success measures around increasing our numbers of publications and sustainability, while maintaining quality and our ability to build close, supportive relationships with authors and editors."
Kate Petherbridge, White Rose University Press
Source acknowledgements
- Part of the usage data section was adapted from Wennström et al. (2019) under a CC BY 4.0 licence.
- Part of the non-traditional metrics section was adapted from Emery et al. (2020)[12] under a CC BY-NC 4.0 licence and Wennström et al. (2019) under a CC BY 4.0 licence
- Part of the citations section was adapted from Wennström et al. (2019) under a CC BY 4.0 licence
Footnotes
1. Roemer, R.C. & Borchardt, R. (2015). Meaningful Metrics: A 21st-Century Librarian's Guide to Bibliometrics, Altmetrics, and Research Impact. Chicago, IL: The Association of College & Research Libraries. Retrieved from http://www.ala.org/acrl/sites/ala.org.acrl/files/content/publications/bo...
2. Tanner, S. (2018). Focusing on European Citizens and the Impact of Open Access Monographs for them. Keynote presented at the Knowledge Exchange Workshop on Open Access and Monographs, Brussels, Belgium. Retrieved from https://www.slideshare.net/KDCS/focusing-on-european-citizens-and-the-im...
3. Wennström, S., Schubert, G., Stone, G., & Sondervan, J. (2019). The significant difference in impact: An exploratory study about the meaning and value of metrics for open access monographs. ELPUB 2019: 23rd International Conference on Electronic Publishing, June 2019, Marseille, France. https://hal.archives-ouvertes.fr/hal-02141879
4. Montgomery, L., Saunders, N., Pinter, F., & Ozaygen, A. (2017). Exploring Usage of Open Access Books via the JSTOR platform. Retrieved from http://kuresearch.org/PDF/jstor_report.pdf
5. UCL Press. (2018, August 13). UCL Press' open access ebooks top most-read chart [web log]. Retrieved from https://www.ucl.ac.uk/news/2018/aug/ucl-press-open-access-ebooks-top-mos...
6. Gadd, E., & Rowlands, I. (2018). How can bibliometric and altmetric suppliers improve? Messages from the end-user community. Insights, 31(38). http://doi.org/10.1629/uksg.437
7. Wilsdon, J., et al. (2015). The Metric Tide: Report of the independent review of the role of metrics in research assessment and management. London: Higher Education Funding Council for England. DOI: 10.13140/RG.2.1.4929.1363. Archived at http://webarchive.nationalarchives.gov.uk/20180322111254/www.hefce.ac.uk...
8. Ottaviani, J. (2016). The Post-Embargo Open Access Citation Advantage: It Exists (Probably), It's Modest (Usually), and the Rich Get Richer (of Course). PLOS ONE, 11(8), e0159614. https://doi.org/10.1371/journal.pone.0159614
9. Hook, D.W., Porter, S.J., & Herzog, C. (2018). Dimensions: Building Context for Search and Evaluation. Frontiers in Research Metrics and Analytics, 3(23). https://doi.org/10.3389/frma.2018.00023
10. I4OC. (n.d.). Initiative for Open Citations. Retrieved from https://i4oc.org
11. OAPEN. (2020). OA books toolkit: Self-promotion. Retrieved from https://oabooks-toolkit.org/lifecycle/4016750-dissemination-marketing/ar...
12. Emery, J., Stone, G., & McCracken, P. (2020). Techniques for Electronic Resource Management: TERMS and the Transition to Open. Chicago: American Library Association. https://doi.org/10.15760/lib-01