Scientometrics, Volume 98, Issue 2, pp 1131–1143

Evaluating altmetrics

Abstract

The rise of the social web and its uptake by scholars has led to the creation of altmetrics: social web metrics for academic publications. In theory, these new metrics can be used in an evaluative role, giving early estimates of the impact of publications or estimates of non-traditional types of impact. They can also serve as an information-seeking aid, helping to draw a digital library user’s attention to papers that have attracted social web mentions. If altmetrics are to be trusted, however, they must be evaluated to see whether the claims made about them are reasonable. Drawing upon previous citation analysis debates and web citation analysis research, this article discusses altmetric evaluation strategies, including correlation tests, content analyses, interviews and pragmatic analyses. It recommends that a range of methods is needed for altmetric evaluations, that these methods should focus on identifying the relative strengths of the influences on altmetric creation, and that such evaluations should be prioritised in a logical order.
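The first of these strategies, correlation testing, typically collects an altmetric score and a citation count for each article in a sample and correlates the two. The sketch below is a minimal illustration, assuming SciPy is available; the counts, and the choice of tweets as the altmetric, are hypothetical placeholders for this example. Spearman's rank correlation is used rather than Pearson's because citation and altmetric counts typically follow highly skewed distributions.

    # Minimal sketch of a correlation test for an altmetric.
    # The counts below are hypothetical; in practice they would be
    # harvested from an altmetric aggregator and a citation index
    # for the same set of articles.
    from scipy.stats import spearmanr

    tweet_counts = [12, 0, 3, 45, 1, 7, 0, 19, 2, 5]      # altmetric scores
    citation_counts = [30, 2, 8, 51, 0, 15, 1, 22, 4, 9]  # later citation counts

    # Spearman's rank correlation is robust to the skewed, heavy-tailed
    # distributions that both kinds of counts usually exhibit.
    rho, p_value = spearmanr(tweet_counts, citation_counts)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

A significant positive correlation is usually read as evidence that the altmetric reflects something related to scholarly impact, although, as the article argues, correlation alone cannot reveal which influences drive altmetric creation.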

Keywords

Altmetrics · Indicators · Webometrics

Acknowledgments

This study is part of the FP7 EU-funded project ACUMEN on assessing Web indicators in research evaluation.


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2013

Authors and Affiliations

Statistical Cybermetrics Research Group, School of Technology, University of Wolverhampton, Wolverhampton, UK
