
Scientometrics, Volume 92, Issue 2, pp 485–503

The journal impact factor: angel, devil, or scapegoat? A comment on J.K. Vanclay’s article 2011

  • Michel Zitt

Abstract

J.K. Vanclay’s article is a bold attempt to review recent work on the journal impact factor (JIF) and to call for alternative certifications of journals. Its very broad scope did not allow the author to fulfill all of his aims. Attempting, after many others, to organize the various forms of criticism, whose targets are often broader than the JIF itself, we shall comment on a few points; this will hopefully enable us to infer in which cases the JIF is an angel, a devil, or a scapegoat. We shall also expand on a crucial question that Vanclay could not really develop within the reduced article format: field normalization. After a short recall of classical cited-side (ex post) normalization and of the powerful influence measures, we shall devote some attention to the novel approach of citing-side (ex ante) normalization, not only for its own interest, but because it proceeds directly from disassembling the JIF clockwork.
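As a reading aid, a minimal sketch of the quantities at stake may help; the notation below is illustrative, not Vanclay’s or the present paper’s own. Garfield’s classical two-year impact factor of a journal $j$ in census year $y$ is

$$\mathrm{JIF}_y(j) = \frac{C_y(j;\ y-1,\ y-2)}{P_{y-1}(j) + P_{y-2}(j)},$$

where the numerator counts citations received in year $y$ by items published in $j$ during the two preceding years, and the denominator counts the citable items $j$ published in those years. Cited-side (ex post) normalization rescales observed impact by the expected citation rate of the cited field $F$, in the spirit of relative indicators (Schubert and Braun 1986):

$$\mathrm{RI}_F(j) = \frac{\bar{c}(j)}{\mu_F},$$

with $\bar{c}(j)$ the mean citation rate of $j$’s papers and $\mu_F$ the field average. Citing-side (ex ante) normalization instead reweights each citation at its source, so that fields with long reference lists no longer inflate impact mechanically; under fractional counting (Leydesdorff and Opthof 2010), a citation issued by a paper $p$ carrying $r(p)$ active references counts for $1/r(p)$:

$$\mathrm{CSN}(j) = \frac{1}{P(j)} \sum_{p \to j} \frac{1}{r(p)}.$$

Journal-level variants weight instead by the citing journal’s relative citing propensity, as in the audience factor (Zitt and Small 2008) and SNIP (Moed 2010).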

Keywords

Bibliometric measures · Impact factor · Impact factor limitations · Field-normalized impact factor · Citation behavior · Citation normalization · Citing-side normalization · Source-level normalization


Acknowledgments

The author thanks S. Ramanana-Rahary and E. Bassecoulard for their help.

References

  1. Adams, J., Gurney, K., & Marshall, S. (2007). Profiling citation impact: a new methodology. Scientometrics, 72(2), 325–344.
  2. Adler, R., Ewing, J., & Taylor, P. (2008). Citation statistics. A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS). Summarized in: Statistical Science (2009), 24(1), 1–14.
  3. Bar-Ilan, J. (2008). Which h-index? – A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74(2), 257–271.
  4. Bergstrom, C. (2007). Eigenfactor: measuring the value and prestige of scholarly journals. College & Research Libraries News, 68(5). www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2007/may2007/eigenfactor.cfm.
  5. Bollen, J., Van de Sompel, H., Hagberg, A., & Chute, R. (2009). A principal component analysis of 39 scientific impact measures. PLoS ONE, 4(6), e6022. doi: 10.1371/journal.pone.0006022.
  6. Bourdieu, P. (1975). The specificity of the scientific field and the social conditions of the progress of reason. Social Science Information, 14(6), 19–47.
  7. Bouyssou, D., & Marchant, T. (2011). Bibliometric rankings of journals based on impact factors: an axiomatic approach. Journal of Informetrics, 5(1), 75–86. doi: 10.1016/j.joi.2010.09.001.
  8. Braun, T., Glänzel, W., & Schubert, A. (2006). A Hirsch-type index for journals. Scientometrics, 69, 169–173.
  9. Callon, M., & Latour, B. (1981). Unscrewing the big Leviathan: how actors macrostructure reality and how sociologists help them to do so. In K. D. Knorr-Cetina & A. V. Cicourel (Eds.), Advances in social theory and methodology: toward an integration of micro- and macro-sociologies (pp. 277–303). Boston: Routledge and Kegan Paul.
  10. Cronin, B. (1984). The citation process: the role and significance of citations in scientific communication. London: Taylor Graham.
  11. Czapski, G. (1997). The use of deciles of the citation impact to evaluate different fields of research in Israel. Scientometrics, 40(3), 437–443.
  12. de Moya-Anegón, F. (2007). SCImago. SJR: SCImago Journal & Country Rank.
  13. de Solla Price, D. J. (1963). Little science, big science. New York: Columbia University Press.
  14. Garfield, E. (1955). Citation indexes for science: a new dimension in documentation through association of ideas. Science, 122, 108–111.
  15. Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178(4060), 471–479.
  16. Garfield, E. (2006). The history and meaning of the journal impact factor. Journal of the American Medical Association, 295, 90–93.
  17. Garfield, E., & Sher, I. H. (1963). New factors in the evaluation of scientific literature through citation indexing. American Documentation, 14(3), 195–201.
  18. Geller, N. L. (1978). On the citation influence methodology of Pinski and Narin. Information Processing and Management, 14, 93–95.
  19. Glänzel, W. (2008). On some new bibliometric applications of statistics related to the h-index. Scientometrics, 77(1), 187–196.
  20. Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193.
  21. Glänzel, W., Schubert, A., Thijs, B., & Debackere, K. (2011). A priori vs. a posteriori normalisation of citation indicators. The case of journal ranking. Scientometrics, 87(2), 415–424.
  22. Hagstrom, W. O. (1965). The scientific community. New York: Basic Books.
  23. Hicks, D. (2004). The four literatures of social science. In H. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology research. New York: Kluwer Academic.
  24. Hirsch, J. E. (2007). Does the h index have predictive power? Proceedings of the National Academy of Sciences, 104(49), 19193–19198.
  25. Hoeffel, C. (1998). Journal impact factors. Allergy, 53, 1225.
  26. Hönekopp, J., & Khan, J. (2011). Future publication success in science is better predicted by traditional measures than by the h index. Scientometrics, 90(3), 843–853.
  27. Ingwersen, P., Larsen, B., Rousseau, R., & Russell, J. (2001). The publication-citation matrix and its derived quantities. Chinese Science Bulletin, 46(6), 524–528.
  28. Katz, S. J. (1999). The self-similar science system. Research Policy, 28(5), 501–517.
  29. Leydesdorff, L., & Opthof, T. (2010). Scopus’s source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations. Journal of the American Society for Information Science and Technology, 61(11), 2365–2369.
  30. Lundberg, J. (2007). Lifting the crown: citation z-score. Journal of Informetrics, 1(2), 145–154.
  31. Luukkonen, T. (1997). Why has Latour’s theory of citations been ignored by the bibliometric community? Discussion of sociological interpretations of citation analysis. Scientometrics, 38(1), 27–37.
  32. Marchant, T. (2009). An axiomatic characterization of the ranking based on the h-index and some other bibliometric rankings of authors. Scientometrics, 80(2), 325–342.
  33. Marshakova-Shaikevich, I. (1996). The standard impact factor as an evaluation tool of science fields and scientific journals. Scientometrics, 35(2), 283–290.
  34. Merton, R. K. (1942). Science and technology in a democratic order. Journal of Legal and Political Sociology, 1, 115–126. (Reprinted as: The normative structure of science. In N. W. Storer (Ed.) (1973), The sociology of science: theoretical and empirical investigations (pp. 267–278). Chicago: University of Chicago Press.)
  35. Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277.
  36. Moed, H. F., & van Leeuwen, T. N. (1995). Improving the accuracy of Institute for Scientific Information’s journal impact factors. Journal of the American Society for Information Science, 46(6), 461–467.
  37. Moed, H. F., & Vriens, M. (1989). Possible inaccuracies occurring in citation analysis. Journal of Information Science, 15, 95–107.
  38. Murugesan, P., & Moravcsik, M. J. (1978). Variation of the nature of citation measures with journal and scientific specialties. Journal of the American Society for Information Science, 29(3), 141–155.
  39. Narin, F. (1976). Evaluative bibliometrics: the use of publication and citation analysis in the evaluation of scientific activity (Report prepared for the National Science Foundation, Contract NSF C-627). Cherry Hill: Computer Horizons.
  40. Nicolaisen, J., & Frandsen, T. F. (2008). The reference return ratio. Journal of Informetrics, 2(2), 128–135. doi: 10.1016/j.joi.2007.12.001.
  41. Palacios-Huerta, I., & Volij, O. (2004). The measurement of intellectual influence. Econometrica, 72(3), 963–977.
  42. Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: theory, with application to the literature of physics. Information Processing and Management, 12, 297–312.
  43. Radicchi, F., Fortunato, S., & Castellano, C. (2008). Universality of citation distributions: toward an objective measure of scientific impact. Proceedings of the National Academy of Sciences, 105(45), 17268–17272.
  44. Ramanana-Rahary, S., Zitt, M., & Rousseau, R. (2009). Aggregation properties of relative impact and other classical indicators: convexity issues and the Yule-Simpson paradox. Scientometrics, 79(1–2), 311–327.
  45. Rousseau, R. (2008). Woeginger’s axiomatisation of the h-index and its relation to the g-index, the h(2)-index and the r2-index. Journal of Informetrics, 2(4), 335–340.
  46. Rousseau, R., & Egghe, L. (2003). A general framework for relative impact indicators. Canadian Journal of Information and Library Science, 27(1), 29–48.
  47. Schubert, A., & Braun, T. (1986). Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics, 9(5–6), 281–291.
  48. Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43, 628–638.
  49. Sen, B. K. (1992). Documentation note: normalized impact factor. Journal of Documentation, 48(3), 318–325.
  50. Small, H., & Sweeney, E. (1985). Clustering the science citation index using co-citations: 1. A comparison of methods. Scientometrics, 7(3–6), 391–409.
  51. van Raan, A. F. J. (2000). On growth, ageing, and fractal differentiation of science. Scientometrics, 47(2), 347–362.
  52. van Raan, A. F. J. (2001). Competition amongst scientists for publication status: toward a model of scientific publication and citation distributions. Scientometrics, 51(1), 347–357.
  53. Vanclay, J. K. (2009). Bias in the journal impact factor. Scientometrics, 78(1), 3–12.
  54. Vanclay, J. K. (2011). Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics. doi: 10.1007/s11192-011-0561-0.
  55. Vieira, E. S., & Gomes, J. A. N. F. (2011). The journal relative impact: an indicator for journal assessment. Scientometrics, 89(2), 631–651.
  56. Vinkler, P. (2002). Subfield problems in applying the Garfield (impact) factors in practice. Scientometrics, 53(2), 267–279.
  57. Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. J. (2011). Towards a new crown indicator: some theoretical considerations. Journal of Informetrics, 5, 37–47.
  58. Waltman, L., & van Eck, N. J. (2009). A taxonomy of bibliometric performance indicators based on the property of consistency. Proceedings of the 12th International Conference on Scientometrics and Informetrics, 1002–1003.
  59. Wouters, P. (1997). Citation cycles and peer review cycles. Scientometrics, 38(1), 39–55.
  60. Zitt, M. (2010). Citing-side normalization of journal impact: a robust variant of the audience factor. Journal of Informetrics, 4(3), 392–406.
  61. Zitt, M. (2011). Behind citing-side normalization of citations: some properties of the journal impact factor. Scientometrics, 89(1), 329–344.
  62. Zitt, M., Ramanana-Rahary, S., & Bassecoulard, E. (2003). Correcting glasses help fair comparisons in international science landscape: country indicators as a function of ISI database delineation. Scientometrics, 56(2), 259–282.
  63. Zitt, M., Ramanana-Rahary, S., & Bassecoulard, E. (2005). Relativity of citation performance and excellence measures: from cross-field to cross-scale effects of field-normalisation. Scientometrics, 63(2), 373–401.
  64. Zitt, M., & Small, H. (2008). Modifying the journal impact factor by fractional citation weighting: the audience factor. Journal of the American Society for Information Science and Technology, 59(11), 1856–1860.
  65. Zucker, L. G., & Darby, M. R. (1996). Star scientists and institutional transformation: patterns of invention and innovation in the formation of the biotechnology industry. Proceedings of the National Academy of Sciences, 93(23), 12709–12716.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2012

Authors and Affiliations

  1. INRA, LERECO (SAE2), U1134, Nantes Cedex 03, France
  2. Observatoire des Sciences et des Techniques (OST), Paris, France
