
Citations versus journal impact factor as proxy of quality: could the latter ever be preferable?

Published in: Scientometrics

Abstract

In recent years, bibliometricians have paid increasing attention to the methodological problems of research evaluation, among them the choice of the most appropriate indicators for evaluating the quality of scientific publications, and thus the work of individual scientists, research groups, and entire organizations. Much of the literature analyzes the robustness of various indicators, and many works warn against the risks of using easily available and relatively simple proxies, such as the journal impact factor. The present work continues this line of research, examining whether the impact factor should indeed always be avoided in favour of citations, or whether its use could be acceptable, or even preferable, in certain circumstances. The evaluation covers all scientific publications in the hard sciences by Italian universities for the period 2004–2007. Sensitivity analyses of performance were conducted with varying indicators of quality and years of observation.


Fig. 1


Notes

  1. We will use the two as synonyms.

  2. Thomson Reuters classifies each article indexed in Web of Science under a specific ISI subject category. For details see http://science.thomsonreuters.com/cgi-bin/jrnlst/jloptions.cgi?PC=D.

  3. Data standardization serves to eliminate bias due to the different publication “fertility” of the various sectors within a single area, while data weighting takes account of the diverse presence of the SDSs, in terms of staff numbers, in each UDA (Abramo et al. 2008a).

  4. Civil engineering and architecture was excluded from the analysis because WoS listings are not sufficiently representative of research output in this area.

  5. http://cercauniversita.cineca.it/php5/docenti/cerca.php.
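The standardization and weighting described in Note 3 can be sketched as follows. This is a minimal illustration only: the sector labels, numbers, and the averaging scheme are invented for the example and are not the authors' actual data or procedure.

```python
# Minimal sketch of field standardization and staff-number weighting,
# in the spirit of Note 3. All figures and labels below are invented
# for illustration; they are not data from the study.

# Hypothetical raw output per research staff member in two SDSs
# (scientific disciplinary sectors) belonging to one UDA (area).
sds_data = {
    "SDS-A": {"output_per_staff": 4.0, "national_avg": 2.0, "staff": 30},
    "SDS-B": {"output_per_staff": 1.5, "national_avg": 1.0, "staff": 10},
}

# Standardization: divide each sector's output by its national average,
# removing bias from the different publication "fertility" of sectors.
standardized = {
    sds: d["output_per_staff"] / d["national_avg"] for sds, d in sds_data.items()
}

# Weighting: aggregate to the UDA level, weighting each SDS by its
# share of the area's research staff.
total_staff = sum(d["staff"] for d in sds_data.values())
uda_score = sum(
    standardized[sds] * d["staff"] / total_staff for sds, d in sds_data.items()
)

print(standardized)  # {'SDS-A': 2.0, 'SDS-B': 1.5}
print(uda_score)     # 2.0 * 0.75 + 1.5 * 0.25 = 1.875
```

The key point of the two-step scheme is that standardization makes sectors comparable before any aggregation, so a staff-weighted area score is not dominated by high-fertility sectors.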

References

  • Abramo, G., D’Angelo, C. A., & Di Costa, F. (2008a). Assessment of sectoral aggregation distortion in research productivity measurements. Research Evaluation, 17(2), 111–121.


  • Abramo, G., D’Angelo, C. A., & Pugini, F. (2008b). The measurement of Italian universities’ research productivity by a non parametric-bibliometric methodology. Scientometrics, 76(2), 225–244.


  • Aksnes, D. W., & Rip, A. (2009). Researchers’ perceptions of citations. Research Policy, 38(6), 895–905.


  • Bordons, M., Fernández, M. T., & Gómez, I. (2002). Advantages and limitations in the use of impact factor measures for the assessment of research performance in a peripheral country. Scientometrics, 53(2), 195–206.


  • Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178, 471–479.


  • Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193.


  • Moed, H. F. (2005). Citation analysis in research evaluation. Netherlands: Springer.


  • Moed, H. F., & Van Leeuwen, Th. N. (1995). Improving the accuracy of the Institute for Scientific Information’s journal impact factor. Journal of the American Society for Information Science, 46(6), 461–467.


  • Moed, H. F., & Van Leeuwen, Th. N. (1996). Impact factors can mislead. Nature, 381, 186.


  • Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314(7079), 497–502.


  • Weingart, P. (2004). Impact of bibliometrics upon the science system: inadvertent consequences? In H. F. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook on quantitative science and technology research. Dordrecht (The Netherlands): Kluwer Academic Publishers.



Author information


Correspondence to Giovanni Abramo.


About this article

Cite this article

Abramo, G., D’Angelo, C. A., & Di Costa, F. Citations versus journal impact factor as proxy of quality: could the latter ever be preferable? Scientometrics 84, 821–833 (2010). https://doi.org/10.1007/s11192-010-0200-1

