Citation time window choice for research impact evaluation
This paper aims to inform the choice of citation time window for research evaluation by answering three questions: (1) How accurately do citation counts in short time windows approximate total citations? (2) How does citation ageing vary by research field, document type, publication month, and total citation count? (3) Can field normalization improve the accuracy of short citation time windows? We investigate the 31-year lifetime non-self-citation processes of all Thomson Reuters Web of Science journal papers published in 1980. The correlation between non-self-citation counts in each time window and total non-self-citations over all 31 years is calculated; it is lower for highly cited papers than for less highly cited ones. There are significant differences in citation ageing across research fields, document types, total citation counts, and publication months. However, the within-group differences are more striking: many papers in the slowest-ageing field may still age faster than many papers in the fastest-ageing field. Furthermore, field normalization cannot improve the accuracy of using short citation time windows. Implications and recommendations for choosing adequate citation time windows are discussed.
Keywords: Citation time window, Citation ageing, Research evaluation, Field normalization
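To make the methodology described in the abstract concrete, here is a minimal sketch of the window-versus-total correlation analysis together with a simple mean-based field normalization. It is not the paper's actual pipeline: synthetic Poisson counts drawn from two hypothetical exponential ageing curves stand in for the real Web of Science data, the field labels and window lengths are illustrative, and Spearman correlation is only one of several measures one might use.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-in for the Web of Science data: 1,000 papers split
# across two hypothetical fields with different citation ageing speeds.
rng = np.random.default_rng(42)
years = np.arange(1, 32)                    # years 1..31 after publication
fast = np.exp(-0.30 * years)                # fast-ageing yearly citation curve
slow = np.exp(-0.05 * years)                # slow-ageing yearly citation curve
yearly = np.vstack([
    rng.poisson(5 * fast, size=(500, 31)),  # field A: fast ageing
    rng.poisson(5 * slow, size=(500, 31)),  # field B: slow ageing
])
field = np.repeat([0, 1], 500)
total = yearly.sum(axis=1)                  # total citations over all 31 years

# For each window length w, correlate citations received in years 1..w
# with the 31-year total, both raw and after a simple mean-based field
# normalization (each paper's window count divided by its field's mean).
for w in (1, 3, 5, 10):
    window = yearly[:, :w].sum(axis=1).astype(float)
    normalized = window.copy()
    for f in (0, 1):
        mean = normalized[field == f].mean()
        if mean > 0:
            normalized[field == f] /= mean
    rho_raw, _ = spearmanr(window, total)
    rho_norm, _ = spearmanr(normalized, total)
    print(f"w={w:2d}: raw rho={rho_raw:.3f}, normalized rho={rho_norm:.3f}")
```

Under this setup the raw and field-normalized correlations can be compared across window lengths; the paper's finding, based on the real data, is that such normalization does not improve the accuracy of short windows.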
The author would like to thank Stefan Hornbostel, Sybille Hinze, and William Dinkel for their efforts in the early stages of project initiation and research design; Diana Hicks and Daniel Sirtes for their suggestions, which were most helpful in improving the paper; Jasmin Schmitz, Haiko Lietz, Marion Schmidt, Pei-Shan Chi, and Jana Schütze for their many helpful ideas and collegial support; and two anonymous reviewers for their critical and constructive comments. The research underlying this paper was supported by the German Federal Ministry for Education and Research (BMBF, project number 01PQ08004A). The data used in this paper are from a bibliometrics database developed and maintained by the Competence Center for Bibliometrics for the German Science System (KB) and derived from the 1980 to 2011 Science Citation Index Expanded (SCIE), Social Sciences Citation Index (SSCI), and Arts and Humanities Citation Index (AHCI) prepared by Thomson Reuters (Scientific) Inc. (TR®), Philadelphia, Pennsylvania, USA: © Copyright Thomson Reuters (Scientific) 2012. The author thanks the KB team for its collective effort in developing the KB database.