Scientometrics, Volume 113, Issue 3, pp 1721–1731

Are Mendeley reader counts useful impact indicators in all fields?

Mike Thelwall

Abstract

Reader counts from the social reference sharing site Mendeley are known to be valuable for early research evaluation. They correlate strongly with citation counts for journal articles but appear about a year earlier. There are disciplinary differences in the value of Mendeley reader counts, but systematic evidence at the level of narrow fields is needed to reveal their extent. In response, this article compares Mendeley reader counts with Scopus citation counts for journal articles from 2012 in 325 narrow Scopus fields. Despite strong positive correlations in most fields, averaging 0.671, the correlations in some fields are as weak as 0.255. Technical reasons explain most of the weaker correlations, suggesting that the underlying relationship is almost always strong. The exceptions are caused by unusually high educational or professional use, or by topics of interest within countries that avoid Mendeley. The findings suggest that, with care, Mendeley reader counts can be used as early citation impact evidence in almost all fields, and as evidence of related impact in some of the remainder. As an additional application of the results, cross-checking against Mendeley data can identify indexing anomalies in citation databases.
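The core computation behind these figures is a per-field correlation between each article's Mendeley reader count and its Scopus citation count. The sketch below is not the paper's actual pipeline, only a minimal illustration of the approach: it assumes article-level records carrying a Scopus field label, a reader count, and a citation count, and uses Spearman's rank correlation, a common choice for heavily skewed count data. The field names and numbers are invented for illustration.

```python
# Minimal sketch: per-field rank correlation between Mendeley reader
# counts and Scopus citation counts. Field names and values below are
# hypothetical; the article's real data covers 325 narrow Scopus fields.
from scipy.stats import spearmanr

# One record per article: (Scopus field, Mendeley readers, Scopus citations)
records = [
    ("Analytical Chemistry", 12, 9),
    ("Analytical Chemistry", 30, 22),
    ("Analytical Chemistry", 4, 1),
    ("Analytical Chemistry", 55, 40),
    ("Literature and Literary Theory", 3, 0),
    ("Literature and Literary Theory", 8, 2),
    ("Literature and Literary Theory", 1, 1),
    ("Literature and Literary Theory", 15, 4),
]

# Group articles by field, then correlate readers with citations within
# each field; Spearman's rho is robust to the skew typical of count data.
by_field = {}
for field, readers, citations in records:
    by_field.setdefault(field, []).append((readers, citations))

for field, pairs in sorted(by_field.items()):
    readers, citations = zip(*pairs)
    rho, p = spearmanr(readers, citations)
    print(f"{field}: rho={rho:.3f} (p={p:.3f}, n={len(pairs)})")
```

With real data, a low rho in one field relative to the others is the kind of signal the article investigates, distinguishing genuine field differences in reader behaviour from technical artefacts such as indexing anomalies.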

Keywords

Mendeley · Readership counts · Citation analysis · Disciplinary differences

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2017

Authors and Affiliations

Statistical Cybermetrics Research Group, University of Wolverhampton, Wolverhampton, UK
