Scientometrics, Volume 102, Issue 3, pp 2161–2164

Usage metrics versus altmetrics: confusing terminology?


Recently, an increasingly controversial discussion about the concepts of usage metrics and altmetrics has been under way at conferences and meetings in our field. While a small group regards the two concepts as clearly distinct, a large part of the community tends to treat usage metrics as a subset of altmetrics.

From our point of view this use of terminology is inappropriate and can easily lead to unnecessary confusion and misunderstandings, ultimately distorting scientific communication.

In what follows we argue why a distinction should be made between the two terms ‘usage metrics’ and ‘altmetrics’.

The main reason is historical. Usage metrics have been around much longer than altmetrics. In fact, usage metrics are even older than citation metrics, because librarians have been tracking usage since the beginning of their profession, ranging from basic user surveys, to the usage tracking of physical journal issues and monographs, to library loan statistics, to the sophisticated analysis of e-media usage (e-metrics).

There is an abundance of statistics and models on library-related usage data, based on different sampling techniques, cumbersome procedures or comprehensive methods of gathering usage data for all subscribed publication types (Coombs 2005; Kraemer 2006; Franklin et al. 2009).

Acquisition strategies in libraries have therefore always relied on such usage data, complemented by citation analyses, which were themselves initially introduced by librarians (Gross and Gross 1927) and gained momentum after the launch of the Science Citation Index by Eugene Garfield (Garfield 1972).

Versatile models of library circulation were proposed as early as the 1980s, when information scientists interpreted informetric distributions as dynamic systems changing in time (cf. Burrell 1980, 1990a, b; Gelman and Sichel 1987; Ajiferuke and Tague 1990) and thus brought library circulation models into the methodological context of citation processes (cf. Glänzel and Schubert 1995; Burrell 2003).
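For readers unfamiliar with this family of models, the prediction step that such gamma-Poisson circulation models support can be sketched as follows. This is a minimal illustration, not Burrell's exact formulation: the function name, the Gamma shape/scale parameterisation, and the example numbers are assumptions chosen for clarity.

```python
def expected_future_loans(a, b, observed, t1, t2):
    """Posterior-predictive mean of a gamma-Poisson (negative binomial)
    circulation model in the spirit of Burrell (1980, 1990a).

    Illustrative assumption: an item's loan rate lambda is Gamma-distributed
    with shape `a` and scale `b`, and loans in a period of length t are
    Poisson(lambda * t). After `observed` loans in a period of length `t1`,
    the posterior rate is Gamma(a + observed, scale = b / (1 + b * t1)),
    so the expected loan count over a further period `t2` is the posterior
    mean rate times t2, as returned below.
    """
    return (a + observed) * b * t2 / (1.0 + b * t1)

# Example: with a prior mean rate of a*b = 1 loan per period, an item
# borrowed 3 times in one period is expected to be borrowed
# (1 + 3) * 1 / 2 = 2.0 times in the next period, while an item never
# borrowed drops to an expectation of 0.5.
print(expected_future_loans(1.0, 1.0, 3, 1.0, 1.0))  # -> 2.0
print(expected_future_loans(1.0, 1.0, 0, 1.0, 1.0))  # -> 0.5
```

The same Bayesian updating scheme is what lets circulation histories predict future circulations, and it is formally analogous to the citation-process models cited above.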

The advent of electronic resources, particularly electronic journals, and their increasing acceptance resulted in a rapid change in user preferences, especially since 2002 (Kraemer 2006; Schloegl and Gorraiz 2010). Usage metrics have become increasingly popular in scientometric analyses beyond library practice. Analyses gradually shifted from the local (e.g. Darmoni et al. 2002; Coombs 2005; McDonald 2007; Bollen and van de Sompel 2008) to the global level (Moed 2005; Schloegl and Gorraiz 2009, 2010; Guerrero-Bote and Moya-Anegón 2013). As a result, usage metrics were reintroduced as an interesting alternative to traditional citation metrics (Bollen et al. 2005, 2008; Brody et al. 2006; Duy and Vaughan 2006; Rowlands and Nicholas 2007; Wan et al. 2010), although they should rather be regarded as supplementary metrics.

The importance of usage metrics has been stressed not only for journals (cf. Bollen et al. 2005; Gorraiz and Gumpenberger 2010; Haustein 2011); new studies have also suggested analysing views and downloads of e-books as well as loan statistics for monographs (Cabezas-Clavijo et al. 2013).

The term “altmetrics” was introduced later than “usage metrics” (Priem et al. 2010; Priem and Hemminger 2010). As the name suggests, altmetrics are also meant as an alternative to citation metrics. Unlike usage metrics, which so far rely on e-content from publishers and other information providers, altmetrics are based on the repercussions of publications of any kind on the web, notably in social media. The whole concept is still in its infancy, still lacking standardization of what exactly is measured and how. Usage metrics target downloads and views, currently the most common proxies for usage, even if these rather measure the intention to use something than its actual use (Gorraiz et al. 2014). Altmetrics, by contrast, comprise an abundance of very heterogeneous indicators, from mentions and captures to links, bookmarks, storage, and conversations. Of course, many social media platforms also record views and downloads, but these are collected at different levels (often only tool-specific) and have neither the same dimension nor the same relevance as global data from publishers and information providers or local data based on library-licensed or archived e-content.

A clear distinction between usage, citations and altmetrics is also made in the altmetrics manifesto (http://altmetrics.org/manifesto/) and in the description of PLOS Article-Level Metrics (ALMs; http://article-level-metrics.plos.org/).

Another issue concerns data “instability”, which also results in a noteworthy difference between usage metrics and altmetrics. Downloads remain as stable as citations, since they are linked to clearly defined document spaces, even if the “user” space may vary. As a consequence, these metrics are replicable if all sources and data-production procedures are properly documented. By contrast, altmetrics indicators are variable, as they depend on changes both on the source side and in user activity. Significant analyses of their evolution, historical reconstruction and instability are still missing. Keeping in mind the different intentions and targets of these metrics, we therefore recommend the use of explicit terminology within the scientometric community.

One of the future challenges of scientometrics is to improve the quality and the scope of impact assessment when analyzing research performance. Citations are an acceptable and correct proxy for measuring publication impact, however only for a subset of the scientific community, namely the “publish or perish” group, and only for the impact reflected in documented scholarly communication. It is common knowledge that many disciplines address much broader audiences within the scholarly community and even beyond it (societal impact). Usage metrics and altmetrics both allow the development of new indicators that provide a much broader and more complete picture of scientific communication.

Finally, the current and future role of social media in the promotion of research output should be analysed, as should the question of how this behaviour will affect science itself.

References

  1. Ajiferuke, I., & Tague, J. (1990). A model for the full circulation data. In L. Egghe & R. Rousseau (Eds.), Informetrics 89/90. Amsterdam: Elsevier Science Publishers B. V. Online: https://doclib.uhasselt.be/dspace/bitstream/1942/855/1/Ajiferuke1.PDF (accessed October 2014).
  2. Bollen, J., & Van de Sompel, H. (2008). Usage impact factor: The effects of sample characteristics on usage-based impact metrics. Journal of the American Society for Information Science and Technology, 59(1), 136–149.
  3. Bollen, J., Van de Sompel, H., Smith, J. A., & Luce, R. (2005). Toward alternative metrics of journal impact: A comparison of download and citation data. Information Processing and Management, 41, 1419–1440. Online: http://public.lanl.gov/herbertv/papers/ipm05jb-final.pdf (accessed July 1, 2010).
  4. Brody, T., Harnad, S., & Carr, L. (2006). Earlier web usage statistics as predictors of later citation impact. Journal of the American Society for Information Science and Technology, 57(8), 1060–1072.
  5. Burrell, Q. L. (1980). A simple stochastic model for library loans. Journal of Documentation, 36(2), 115–132.
  6. Burrell, Q. L. (1990a). Using the gamma-Poisson model to predict library circulations. Journal of the American Society for Information Science, 41(3), 164–170.
  7. Burrell, Q. L. (1990b). Empirical prediction of library circulations based on negative binomial processes. In L. Egghe & R. Rousseau (Eds.), Informetrics 87/88 (pp. 54–57). Amsterdam: Elsevier Science Publishers B. V.
  8. Burrell, Q. L. (2003). Predicting future citation behavior. Journal of the American Society for Information Science and Technology, 54(5), 372–378.
  9. Cabezas-Clavijo, A., Robinson-García, N., Torres-Salinas, D., Jiménez-Contreras, E., Mikulka, T., Gumpenberger, C., Wemisch, A., & Gorraiz, J. (2013). Most borrowed is most cited? Library loan statistics as a proxy for monograph selection in citation indexes. Online: http://arxiv.org/abs/1305.1488.
  10. Coombs, K. A. (2005). Lessons learned from analyzing library database usage data. Library Hi Tech, 23(4), 598–609.
  11. Darmoni, S. J., Roussel, F., Benichou, J., Thirion, B., & Pinhas, N. (2002). Reading factor: A new bibliometric criterion for managing digital libraries. Journal of the Medical Library Association, 90(3), 323–327.
  12. Duy, J., & Vaughan, L. (2006). Can electronic journal usage data replace citation data as a measure of journal use? An empirical examination. The Journal of Academic Librarianship, 32(5), 512–517.
  13. Franklin, B., Kyrillidou, M., & Plum, T. (2009). From usage to user: Library metrics and expectations for the evaluation of digital libraries. In G. Tsakonas & C. Papatheodorou (Eds.), Evaluation of digital libraries: An insight into useful applications and methods (pp. 17–40). Oxford: Chandos Publishing.
  14. Garfield, E. (1972). Citation analysis as a tool in journal evaluation: Journals can be ranked by frequency and impact of citations for science policy studies. Science, 178(4060), 471–479.
  15. Gelman, E., & Sichel, H. S. (1987). Library book circulation and the beta-binomial distribution. Journal of the American Society for Information Science, 38(1), 4–12.
  16. Glänzel, W., & Schubert, A. (1995). Predictive aspects of a stochastic model for citation processes. Information Processing and Management, 31(1), 69–80.
  17. Gorraiz, J., & Gumpenberger, C. (2010). Going beyond citations: SERUM–a new tool provided by a network of libraries. Liber Quarterly, 20, 80–93.
  18. Gorraiz, J., Gumpenberger, C., & Schloegl, C. (2014). Usage versus citation behaviours in four subject areas. Scientometrics, 101(2), 1077–1095.
  19. Gross, P. L. K., & Gross, E. M. (1927). College libraries and chemical education. Science, 66(1713), 385–389.
  20. Guerrero-Bote, V. P., & Moya-Anegón, F. (2013). Relationship between downloads and citations and the influence of language. In J. Gorraiz, E. Schiebel, C. Gumpenberger, M. Hörlesberger, & H. Moed (Eds.), Proceedings of the 14th International Conference on Scientometrics and Informetrics (ISSI) (pp. 1469–1484). Vienna: Austrian Institute of Technology.
  21. Haustein, S. (2011). Taking a multidimensional approach toward journal evaluation. In Proceedings of the 13th ISSI Conference, Durban, South Africa, 4–7 July 2011, Vol. 1 (pp. 280–291).
  22. Kraemer, A. (2006). Ensuring consistent usage statistics, part 2: Working with use data for electronic journals. The Serials Librarian, 50(1/2), 163–172.
  23. McDonald, J. D. (2007). Understanding journal usage: A statistical analysis of citation and use. Journal of the American Society for Information Science and Technology, 58(1), 39–50.
  24. Moed, H. F. (2005). Statistical relationships between downloads and citations at the level of individual documents within a single journal. Journal of the American Society for Information Science and Technology, 56, 1088–1097.
  25. Priem, J., & Hemminger, B. H. (2010). Scientometrics 2.0: New metrics of scholarly impact on the social Web. First Monday, 15(7).
  26. Priem, J., Taraborelli, D., Groth, P., & Neylon, C. (2010). Alt-metrics: A manifesto. Online: http://altmetrics.org/manifesto/ (accessed September 2014).
  27. Rowlands, I., & Nicholas, D. (2007). The missing link: Journal usage metrics. Aslib Proceedings: New Information Perspectives, 59(3), 222–228.
  28. Schloegl, C., & Gorraiz, J. (2009). Global usage vs global citation metrics using Science Direct pharmacology journals. Proceedings of the International Conference on Scientometrics and Informetrics, 1, 455–459.
  29. Schloegl, C., & Gorraiz, J. (2010). Comparison of citation and usage indicators: The case of oncology journals. Scientometrics, 82(3), 567–580.
  30. Wan, J.-K., Hua, P.-H., Rousseau, R., & Sun, X.-K. (2010). The download immediacy index (DII): Experiences using the CNKI full-text database. Scientometrics, 82(3), 555–566.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2014

Authors and Affiliations

  1. Vienna University Library, Bibliometrics Department, University of Vienna, Vienna, Austria
  2. Centre for R&D Monitoring (ECOOM) and Department of MSI, KU Leuven, Leuven, Belgium
  3. Department of Science Policy & Scientometrics, Library of the Hungarian Academy of Sciences, Budapest, Hungary
