The evaluation of scientific production: Towards a neutral impact factor

Abstract

The measurement of research activity remains a controversial question. The impact factor produced by the Institute for Scientific Information (ISI) is nowadays widely used to carry out evaluations of all kinds; however, the formula ISI employs to calculate its impact factors biases the results in favour of knowledge fields that are better represented in the sample, that cite more on average, and whose citations are concentrated in the early years of an article's life.
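
For context, the standard ISI impact factor of a journal j in year t is a two-year citation ratio (the notation below is ours, not the paper's):

$$\mathrm{IF}_j(t) = \frac{C_j(t,\,t-1) + C_j(t,\,t-2)}{P_j(t-1) + P_j(t-2)}$$

where C_j(t, t−k) is the number of citations received in year t by the items journal j published in year t−k, and P_j(t−k) is the number of citable items published in that year. Each of the biases named above enters through this ratio: better-covered fields contribute more countable citations to the numerator, densely citing fields inflate C for every journal in them, and the fixed two-year window favours fields whose citations arrive early.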

In the present work, we put forward a theoretical proposal on how an aggregated normalization that corrects these biases should be carried out, allowing scientific production to be compared across fields, institutions and/or authors in a neutral manner. The technical complexity of such a task, together with data limitations, leads us to propose some adjustments to the impact factor published by ISI which, although they do not completely solve the problem, reduce it and point the way towards more neutral evaluations. The proposal is applied empirically at three levels of analysis: individual journals, knowledge fields and the full set of journals in the Journal Citation Reports.
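
The abstract does not reproduce the adjustment formulas, so the following Python sketch illustrates only the general idea behind such an aggregated normalization: each journal's impact factor is divided by the mean impact factor of its JCR field, so that a value above 1 signals above-average impact within the field regardless of how densely the field cites. The journal names, categories and figures below are invented for illustration and are not taken from the paper.

    # Minimal sketch of a field-normalized ("neutral") impact factor.
    # Illustrative data only; not the authors' actual adjustment.
    from collections import defaultdict

    journals = {
        # journal name: (JCR category, two-year impact factor)
        "Biochem. Journal A": ("Biochemistry", 5.2),
        "Biochem. Journal B": ("Biochemistry", 3.1),
        "Math. Journal A":    ("Mathematics", 1.1),
        "Math. Journal B":    ("Mathematics", 0.7),
    }

    # Aggregation step: mean impact factor per field.
    by_field = defaultdict(list)
    for field, impact in journals.values():
        by_field[field].append(impact)
    field_mean = {f: sum(v) / len(v) for f, v in by_field.items()}

    # Normalized impact factor: > 1 means above the field average, so
    # journals from sparsely citing fields are no longer penalized.
    for name, (field, impact) in journals.items():
        print(f"{name}: {impact / field_mean[field]:.2f}")

With these figures, both "Biochem. Journal A" (5.2 / 4.15 ≈ 1.25) and "Math. Journal A" (1.1 / 0.9 ≈ 1.22) come out nearly equal after normalization, even though their raw impact factors differ by almost a factor of five.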

Author information

Corresponding author

Correspondence to José María Gómez-Sancho.

Cite this article

Gómez-Sancho, J.M., Mancebón-Torrubia, M.J. The evaluation of scientific production: Towards a neutral impact factor. Scientometrics 81, 435 (2009). https://doi.org/10.1007/s11192-008-2137-1

Keywords

  • Impact Factor
  • Bias Correction
  • Journal Citation Report
  • Journal Impact Factor
  • Theoretical Proposal