Scientometrics, Volume 96, Issue 3, pp 699–716

Source normalized indicators of citation impact: an overview of different approaches and an empirical comparison

Abstract

Different scientific fields have different citation practices. Citation-based bibliometric indicators need to normalize for such differences between fields in order to allow meaningful between-field comparisons of citation impact. Traditionally, normalization for field differences has been based on a field classification system: each publication belongs to one or more fields, and the citation impact of a publication is calculated relative to the other publications in the same field. Recently, the idea of source normalization was introduced as an alternative approach, in which normalization is achieved by looking at the referencing behavior of citing publications or citing journals. In this paper, we provide an overview of several source normalization approaches and empirically compare them with a traditional normalization approach based on a field classification system. We also consider the selection of the journals to be included in a normalization for field differences. Our analysis reveals a number of problems with the traditional classification-system-based normalization approach, suggesting that source normalization approaches may yield more accurate results.
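The abstract contrasts two families of normalization. As a minimal illustrative sketch (not the paper's actual method or data; the field labels, citation counts, and reference counts below are invented), the following Python fragment computes a classification-system-based normalized score, in which a publication's citation count is divided by the mean citation count of its field, and a source-normalized score in the fractional-counting spirit, in which each citation is weighted by the reciprocal of the number of references in the citing publication.

```python
# Illustrative sketch of the two normalization approaches discussed in the
# abstract. All data below (fields, citation counts, reference counts) are
# hypothetical and serve only to show the mechanics of each approach.

from statistics import mean

# --- Normalization based on a field classification system ---
# Each publication belongs to a field; its citation count is divided by the
# mean citation count of all publications in that field.
publications = [
    {"id": "p1", "field": "cell biology", "citations": 40},
    {"id": "p2", "field": "cell biology", "citations": 10},
    {"id": "p3", "field": "mathematics", "citations": 4},
    {"id": "p4", "field": "mathematics", "citations": 2},
]

field_mean = {
    field: mean(p["citations"] for p in publications if p["field"] == field)
    for field in {p["field"] for p in publications}
}

for p in publications:
    score = p["citations"] / field_mean[p["field"]]
    print(f'{p["id"]} ({p["field"]}): normalized score = {score:.2f}')

# --- Source normalization (citing-side, fractional counting) ---
# No field classification is needed: each citation received is weighted by
# the reciprocal of the number of references in the citing publication, so
# citations originating in reference-dense fields contribute less.
citing_reference_counts = [45, 30, 60]  # references in each citing publication
source_normalized = sum(1 / r for r in citing_reference_counts)
print(f"source-normalized citation score = {source_normalized:.3f}")
```

In the classification-based part, the mathematics paper with 4 citations scores higher (1.33) than the cell biology paper with 10 citations (0.40), because scores are relative to field-specific citation densities. The source-normalized part achieves a similar correction without any field classification, since citations from reference-dense citing publications are automatically down-weighted.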

Keywords

Bibliometric indicator · Citation analysis · Field normalization · Source normalization

Acknowledgments

We would like to thank Javier Ruiz-Castillo for his comments on an earlier draft of this paper. We are also grateful to an anonymous referee for various useful comments.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2012

Authors and Affiliations

  1. Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands
