
The influence of highly cited papers on field normalised indicators

Published in Scientometrics

Abstract

Field normalised average citation indicators are widely used to compare countries, universities and research groups. The most common variant, the Mean Normalised Citation Score (MNCS), is known to be sensitive to individual highly cited articles but the extent to which this is true for a log-based alternative, the Mean Normalised Log Citation Score (MNLCS), is unknown. This article investigates country-level highly cited outliers for MNLCS and MNCS for all Scopus articles from 2013 and 2012. The results show that MNLCS is influenced by outliers, as measured by kurtosis, but at a much lower level than MNCS. The largest outliers were affected by the journal classifications, with the Science-Metrix scheme producing much weaker outliers than the internal Scopus scheme. The high Scopus outliers were mainly due to uncitable articles reducing the average in some humanities categories. Although outliers have a numerically small influence on the outcome for individual countries, changing indicator or classification scheme influences the results enough to affect policy conclusions drawn from them. Future field normalised calculations should therefore explicitly address the influence of outliers in their methods and reporting.
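The two indicators compared in the abstract can be illustrated with a minimal sketch, assuming their standard definitions: MNCS divides each article's citation count by the mean citation count of its field ("world") reference set and averages the ratios, while MNLCS applies a ln(1 + c) transform to citation counts before normalising, which damps the influence of highly cited outliers. The toy data below are illustrative, not from the article.

```python
import math

def mncs(citations, world_citations):
    """Mean Normalised Citation Score: each article's citation count is
    divided by the mean citation count of its field reference set, then
    the ratios are averaged. A single highly cited article can dominate."""
    world_mean = sum(world_citations) / len(world_citations)
    return sum(c / world_mean for c in citations) / len(citations)

def mnlcs(citations, world_citations):
    """Mean Normalised Log Citation Score: ln(1 + c) is applied to every
    citation count before normalising by the field mean of the same
    transform, reducing the weight of extreme values."""
    world_mean = sum(math.log(1 + c) for c in world_citations) / len(world_citations)
    return sum(math.log(1 + c) / world_mean for c in citations) / len(citations)

# Toy example: one extreme outlier in the evaluated group.
world = [0, 1, 2, 3, 4, 5, 10, 20]   # hypothetical field reference set
group = [1, 2, 3, 1000]              # hypothetical country/group sample
print(mncs(group, world))   # inflated far above 1 by the single outlier
print(mnlcs(group, world))  # stays close to 1: the log transform damps it
```

Running the sketch shows the contrast the article investigates at scale: the outlier pushes MNCS to roughly forty times the world average, while MNLCS remains below two.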





Author information

Correspondence to Mike Thelwall.

Appendix

See Table 7.

Table 7 MNLCS and MNCS values for the 75 countries with the most articles in Scopus in 2013, calculated using data from 2012 or 2013 and either the Scopus or Science-Metrix classification scheme


About this article


Cite this article

Thelwall, M. The influence of highly cited papers on field normalised indicators. Scientometrics 118, 519–537 (2019). https://doi.org/10.1007/s11192-018-03001-y

