Field Normalization of Scientometric Indicators

  • Ludo Waltman
  • Nees Jan van Eck
Part of the Springer Handbooks book series (SHB)

Abstract

When scientometric indicators are used to compare research units active in different scientific fields, there is often a need to correct for differences between fields, for instance in publication, collaboration, and citation practices. Field-normalized indicators aim to make such corrections. Designing these indicators well is a significant challenge. We discuss the main issues in the design of field-normalized indicators and present an overview of the approaches that have been developed to deal with the problem of field normalization. We also discuss how field-normalized indicators can be evaluated and consider the sensitivity of scientometric analyses to the choice of a field-normalization approach.

Keywords: field normalization · field classification system · scientometrics · scientometric indicator · impact indicator · productivity indicator
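
As a rough illustration of the kind of correction the abstract describes, the sketch below computes a mean-based field-normalized citation score in the spirit of the mean normalized citation score (MNCS) treated in this chapter. It is a minimal sketch, not the chapter's own method; all publication records, field labels, and baseline values are hypothetical.

```python
# Minimal sketch of a mean-based field-normalized impact indicator,
# in the spirit of the mean normalized citation score (MNCS).
# All publication records and field baselines below are hypothetical.

from statistics import mean

# (citation_count, field) for each publication of a research unit.
publications = [
    (12, "cell biology"),
    (7, "cell biology"),
    (3, "mathematics"),
    (1, "mathematics"),
]

# Expected citations per publication in each field, e.g. the world
# average for publications of the same field, year, and document type.
field_baseline = {
    "cell biology": 10.0,
    "mathematics": 2.0,
}

def mncs(pubs, baseline):
    """Average of c_i / e_i: each publication's citation count divided
    by the expected citation count of its field."""
    return mean(c / baseline[field] for c, field in pubs)

print(f"MNCS = {mncs(publications, field_baseline):.2f}")  # 0.97
# A score above 1 indicates citation impact above the world average of
# the unit's own fields; a raw mean of citations would instead reward
# publishing in high-citation fields such as cell biology.
```

In practice, baselines are further stratified, for instance by publication year and document type, and the choice of field classification system used to define the baselines is itself a central design issue discussed in this chapter.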

Acknowledgements

The authors would like to thank Lutz Bornmann, Robin Haunschild, Loet Leydesdorff, Javier Ruiz-Castillo, and an anonymous referee for their helpful comments on an earlier version of this chapter. These comments have led to numerous improvements.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  1. Centre for Science and Technology Studies (CWTS), Leiden University, Leiden, The Netherlands
