Abstract
In recent years bibliometricians have paid increasing attention to the methodological problems of research evaluation, among them the choice of the most appropriate indicators for evaluating the quality of scientific publications, and thus for evaluating the work of individual scientists, research groups and entire organizations. Much of the literature has analyzed the robustness of various indicators, and many works warn against the risks of using easily available and relatively simple proxies, such as the journal impact factor. The present work continues this line of research, examining whether the impact factor should indeed always be avoided in favour of citation counts, or whether its use could be acceptable, even preferable, in certain circumstances. The evaluation draws on all scientific publications in the hard sciences produced by Italian universities in the period 2004–2007. Sensitivity analyses of performance were conducted while varying the quality indicator and the years of observation.
Notes
We will use the two as synonyms.
Thomson Reuters classifies each article indexed in Web of Science under a specific ISI subject category. For details see http://science.thomsonreuters.com/cgi-bin/jrnlst/jloptions.cgi?PC=D.
Data standardization serves to eliminate bias due to the different publication “fertility” of the various sectors within a single area, while data weighting takes account of the diverse presence of the SDSs, in terms of staff numbers, in each UDA (Abramo et al. 2008a).
Civil engineering and architecture was excluded from the analysis because WoS listings are not sufficiently representative of research output in this area.
References
Abramo, G., D’Angelo, C. A., & Di Costa, F. (2008a). Assessment of sectoral aggregation distortion in research productivity measurements. Research Evaluation, 17(2), 111–121.
Abramo, G., D’Angelo, C. A., & Pugini, F. (2008b). The measurement of Italian universities’ research productivity by a non parametric-bibliometric methodology. Scientometrics, 76(2), 225–244.
Aksnes, D. W., & Rip, A. (2009). Researchers’ perceptions of citations. Research Policy, 38(6), 895–905.
Bordons, M., Fernández, M. T., & Gómez, I. (2002). Advantages and limitations in the use of impact factor measures for the assessment of research performance in a peripheral country. Scientometrics, 53(2), 195–206.
Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178, 471–479.
Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193.
Moed, H. F. (2005). Citation analysis in research evaluation. Dordrecht, The Netherlands: Springer.
Moed, H. F., & Van Leeuwen, Th. N. (1995). Improving the accuracy of the Institute for Scientific Information’s journal impact factor. Journal of the American Society for Information Science, 46(6), 461–467.
Moed, H. F., & Van Leeuwen, Th. N. (1996). Impact factors can mislead. Nature, 381, 186.
Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314(7079), 497–502.
Weingart, P. (2004). Impact of bibliometrics upon the science system: inadvertent consequences? In H. F. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook on quantitative science and technology research. Dordrecht (The Netherlands): Kluwer Academic Publishers.
Cite this article
Abramo, G., D’Angelo, C. A., & Di Costa, F. (2010). Citations versus journal impact factor as proxy of quality: could the latter ever be preferable? Scientometrics, 84, 821–833. https://doi.org/10.1007/s11192-010-0200-1