
Scientometrics, Volume 115, Issue 2, pp 913–928

Can Microsoft Academic be used for citation analysis of preprint archives? The case of the Social Science Research Network

Michael Thelwall

Abstract

Preprint archives play an important scholarly communication role within some fields. The impact of archives and of individual preprints is nevertheless difficult to analyse because online repositories are not indexed by the Web of Science or Scopus. In response, this article assesses whether the new Microsoft Academic can be used for citation analysis of preprint archives, focusing on the Social Science Research Network (SSRN). Although Microsoft Academic seems to index SSRN comprehensively, it groups only a small fraction of SSRN papers into an easily retrievable set, and the character of that set varies over time, making any field normalisation or citation comparison based on it untrustworthy. A brief parallel analysis of arXiv suggests that similar results would occur for other online repositories. Given Microsoft Academic's promising coverage and citation results, systematic analyses of preprint archives are nevertheless possible with it when complete lists of archive publications are available from other sources.
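The retrieval step the abstract alludes to, pulling the set of papers that Microsoft Academic groups under SSRN, could be approximated through the Project Academic Knowledge "evaluate" endpoint that backed Microsoft Academic until the service's 2021 retirement. The Python sketch below is a minimal illustration of that kind of query, not the paper's actual method: the endpoint URL, the attribute codes (Ti, Y, CC), and the Composite(J.JN=...) expression syntax follow the API's documented schema, but the normalised journal name 'social science research network' and the subscription key are assumptions introduced here for illustration.

```python
# A minimal sketch of querying the (now retired) Project Academic
# Knowledge "evaluate" endpoint for records that Microsoft Academic
# grouped under an SSRN journal entity. The journal name below is an
# assumption; it is not confirmed by the paper.
import requests

API_URL = "https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate"
API_KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder; the service required a key


def fetch_ssrn_records(year, count=100):
    """Fetch up to `count` records grouped under an SSRN journal entity
    for one publication year, with titles and citation counts."""
    params = {
        # Composite(J.JN=...) restricts results to a journal entity;
        # Y filters by publication year.
        "expr": f"And(Composite(J.JN='social science research network'), Y={year})",
        "attributes": "Ti,Y,CC",  # title, year, citation count
        "count": count,
    }
    headers = {"Ocp-Apim-Subscription-Key": API_KEY}
    response = requests.get(API_URL, params=params, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json().get("entities", [])


if __name__ == "__main__":
    # Print citation counts for a small sample of records from one year.
    for record in fetch_ssrn_records(2015, count=10):
        print(record.get("CC", 0), record.get("Ti", ""))
```

Because the grouped set covers only a small fraction of SSRN, as the abstract notes, queries of this kind would need to be complemented with a complete publication list obtained from SSRN itself before any systematic citation analysis.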

Keywords

Microsoft Academic · SSRN · arXiv · Digital repositories · Preprint archives

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2018

Authors and Affiliations

1. School of Mathematics and Computing, University of Wolverhampton, Wolverhampton, UK
