Scientometrics, Volume 101, Issue 2, pp. 1491–1513

How well developed are altmetrics? A cross-disciplinary analysis of the presence of ‘alternative metrics’ in scientific publications


Abstract

In this paper we analyse the presence and possibilities of altmetrics for bibliometric and performance analysis. Using the web-based tool Impact Story, we collected metrics for a random sample of 20,000 publications from the Web of Science. We studied the presence and distribution of altmetrics in the set of publications across fields, document types and publication years, as well as the extent to which altmetrics correlate with citation indicators. The main result of the study is that Mendeley is the altmetrics source providing the most metrics, with readership counts for 62.6% of all publications studied; all other sources provide only marginal information. In terms of the relationship with citations, a moderate Spearman correlation (r = 0.49) was found between Mendeley readership counts and citation indicators. Further possibilities and limitations of these indicators are discussed, and future lines of research are outlined.
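As an illustration of the kind of analysis described in the abstract, the sketch below computes a Spearman rank correlation between Mendeley readership counts and citation counts for a set of publications. It is a minimal example using SciPy, assuming the per-publication metrics have already been collected into two parallel lists; the variable names and sample values are hypothetical, not the study's data.

```python
# Minimal sketch: Spearman rank correlation between Mendeley readership
# counts and citation counts for a set of publications.
# NOTE: the data below are hypothetical placeholders; in the study the
# metrics were collected via Impact Story for Web of Science publications.
from scipy.stats import spearmanr

# Hypothetical per-publication metrics (parallel lists, one entry per paper).
mendeley_readers = [12, 0, 45, 3, 7, 150, 0, 22, 5, 9]
citation_counts = [10, 1, 60, 2, 4, 130, 0, 18, 3, 12]

# spearmanr returns the rank correlation coefficient and a two-sided p value.
rho, p_value = spearmanr(mendeley_readers, citation_counts)
print(f"Spearman r = {rho:.2f} (p = {p_value:.3g})")
```

A rank-based correlation such as Spearman's is the natural choice here because both readership and citation distributions are typically highly skewed, which would distort a Pearson correlation on the raw counts.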

Keywords

Altmetrics; Impact Story; Citation indicators; Research evaluation

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2014

Authors and Affiliations

  1. Centre for Science and Technology Studies (CWTS), Leiden University, Leiden, The Netherlands
