Volume 105, Issue 3, pp 2237–2248

CiteULike bookmarks are correlated to citations at journal and author levels in library and information science

  • Hajar Sotudeh
  • Zahra Mazarei
  • Mahdieh Mirzabeigi


Aiming to explore the applicability of bookmarking data in measuring scientific impact, the present study investigates the correlation between conventional impact indicators (i.e. impact factors and mean citations) and bookmarking metrics (mean bookmarks and percentage of bookmarked articles) at the author and journal aggregation levels in the library and information science (LIS) field. Applying the citation analysis method, it studies a purposeful sample of LIS articles indexed in SSCI during 2004–2012 and bookmarked in CiteULike. Data were collected via WoS, Journal Citation Reports, and CiteULike. There is a positive, though weak, correlation between LIS authors’ mean citations and their mean bookmarks, as well as a moderate to large correlation between LIS journals’ impact factors on the one hand, and their mean bookmarks and the percentage of their bookmarked articles on the other. Given the correlation between the citation- and bookmark-based indicators at the author and journal levels, bookmarking data can be used as a complement to, but not a substitute for, the traditional indicators, yielding a more inclusive evaluation of journals and authors.
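The effect-size labels used above ("weak", "moderate to large") follow Cohen's (1988) conventional thresholds for correlation coefficients. A minimal sketch of this kind of journal-level correlation analysis, using hypothetical impact-factor and bookmark figures rather than the study's actual data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohen_effect_size(r):
    """Label |r| using Cohen's (1988) conventional thresholds:
    small >= 0.1, moderate >= 0.3, large >= 0.5."""
    r = abs(r)
    if r >= 0.5:
        return "large"
    if r >= 0.3:
        return "moderate"
    if r >= 0.1:
        return "small"
    return "negligible"

# Hypothetical journal-level data: impact factors and mean CiteULike
# bookmarks per article (illustrative values, not from the study).
impact_factors = [0.8, 1.2, 1.9, 2.4, 3.1]
mean_bookmarks = [0.5, 0.9, 1.1, 2.0, 2.6]

r = pearson_r(impact_factors, mean_bookmarks)
print(f"r = {r:.2f} ({cohen_effect_size(r)})")
```

In practice the study reports rank-based (Spearman-type) associations would also be appropriate for skewed bookmark counts; the Pearson version above is only the simplest illustration of correlating two journal-level indicators and labeling the effect size.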


Keywords: Altmetrics · Citations · CiteULike · Bookmarks · Library and information science



Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2015

Authors and Affiliations

  • Hajar Sotudeh (1, Email author)
  • Zahra Mazarei (1)
  • Mahdieh Mirzabeigi (1)

  1. Department of Knowledge and Information Sciences, Faculty of Education and Psychology, Shiraz University, Shiraz, Iran
