
What is the appropriate length of the publication period over which to assess research performance?


Abstract

National research assessment exercises are conducted in different nations over varying publication periods. The choice of the publication period to observe must balance often conflicting needs: it must ensure the reliability of the evaluation results, but also permit frequent assessments. In this work we attempt to identify the most appropriate, or optimal, publication period to observe. To do so, we analyze how individual researchers’ productivity rankings vary with the length of the publication period, drawing on the 2003–2008 output of over 30,000 Italian university scientists in the hard sciences. First we analyze the variation in rankings for pairs of contiguous and overlapping publication periods, and show that the variations diminish markedly for periods longer than 3 years. We then show the strong randomness of performance rankings over publication periods shorter than 3 years. We conclude that a 3-year publication period seems a reliable choice, particularly for physics, chemistry, biology and medicine.
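The comparison described above can be pictured with a toy simulation: rank researchers by average annual output over two contiguous windows of equal length, and measure how well the two rankings agree as the window grows. The sketch below is illustrative only; the scores are synthetic (a stable per-researcher ability plus yearly noise), not the paper’s data, and the paper’s actual indicator is the fractional, field-standardized one described in the notes.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_researchers, n_years = 1000, 6  # mirrors the 2003-2008 observation period

    # Hypothetical productivity: a stable per-researcher ability plus yearly noise.
    ability = rng.gamma(shape=2.0, scale=1.0, size=(n_researchers, 1))
    noise = rng.gamma(shape=2.0, scale=1.0, size=(n_researchers, n_years))
    scores = ability + noise

    # Rank agreement between two contiguous windows of the same length.
    for window in (1, 2, 3):
        first = scores[:, :window].mean(axis=1)
        second = scores[:, window:2 * window].mean(axis=1)
        rho, _ = spearmanr(first, second)
        print(f"{window}-year windows: Spearman rho = {rho:.2f}")

Longer windows average out the yearly noise, so rankings computed from contiguous windows agree more closely; this is the stabilizing effect the paper quantifies with real data.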


Notes

  1. The complete list is accessible at http://attiministeriali.miur.it/UserFiles/115.htm. Last accessed on March 12, 2012.

  2. Mathematics and computer sciences; physics; chemistry; earth sciences; biology; medicine; agricultural and veterinary sciences; civil engineering; industrial and information engineering.

  3. As frequently observed in the literature (Lundberg 2007), standardizing citations with respect to the median rather than the mean is justified by the fact that the distribution of citations is highly skewed in almost all disciplines (see the sketch following these notes).

  4. The subject category of a publication corresponds to that of the journal where it is published. For publications in multidisciplinary journals the median is calculated as a weighted average of the standardized values for each subject category.

  5. This indicator is similar to the “total field normalized citation score” of the Karolinska Institute (Rehn et al. 2007). The difference is that we standardize by the Italian median rather than the world average. Moreover, we consider fractional counting of citations based on co-authorship.

  6. Publications without citations are excluded from calculation of the median.

  7. For the life sciences, different weights are given to each co-author according to his/her position in the byline and the character of the co-authorship (intra-mural or extra-mural). If the first and last authors belong to the same university, 40% of citations are attributed to each of them and the remaining 20% are divided among all other authors. If the first two and last two authors belong to different universities, 30% of citations are attributed to the first and last authors, 15% each to the second and the second-to-last author, and the remaining 10% are divided among all others (a sketch of this weighting rule follows these notes).

  8. http://cercauniversita.cineca.it/php5/docenti/cerca.php. Last accessed on March 12, 2012.

  9. www.orp.researchvalue.it/. Last accessed on March 12, 2012.

  10. This is generally the category taken as reference in defining the specifics of incentive systems.
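Notes 3–7 together define the productivity indicator: citations are standardized by the median of the subject category, then fractionally attributed to co-authors. Below is a minimal sketch of the two rules under stated assumptions: the function names are hypothetical, note 7’s two cases are collapsed into a single boolean flag, short author lists are unspecified in the note for the different-universities case, and the authors’ actual implementation may differ.

    import statistics

    def standardized_impact(citations, field_citations):
        # Standardize a publication's citations by the field median (note 3);
        # publications without citations are excluded from the median (note 6).
        cited = [c for c in field_citations if c > 0]
        return citations / statistics.median(cited)

    def life_science_weights(n_authors, first_last_same_university):
        # Per-author citation shares under the life-sciences rule of note 7.
        # Sketch assumes n_authors >= 3 in the same-university case and
        # n_authors >= 5 otherwise; smaller lists are unspecified in the note.
        w = [0.0] * n_authors
        if first_last_same_university:
            w[0] = w[-1] = 0.40                # first and last: 40% each
            for i in range(1, n_authors - 1):  # remaining 20% split among the others
                w[i] = 0.20 / (n_authors - 2)
        else:
            w[0] = w[-1] = 0.30                # first and last: 30% each
            w[1] = w[-2] = 0.15                # second and second-to-last: 15% each
            for i in range(2, n_authors - 2):  # remaining 10% split among the others
                w[i] = 0.10 / (n_authors - 4)
        return w

For a five-author paper whose first and last authors are at different universities, the shares come out as 30%, 15%, 10%, 15% and 30% of the publication’s standardized citations.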

References

  • Abramo, G., Cicero, T., & D’Angelo, C. A. (2011). Assessing the varying level of impact measurement accuracy as a function of the citation window length. Journal of Informetrics, 5(4), 659–667.


  • Abramo, G., Cicero, T., & D’Angelo, C. A. (2012). A sensitivity analysis of researchers’ productivity rankings to citation window length. Journal of Informetrics, 6(2), 192–201.


  • Abramo, G., & D’Angelo, C. A. (2007). Measuring science: irresistible temptations, easy shortcuts and dangerous consequences. Current Science, 93(6), 762–766.


  • Abramo, G., & D’Angelo, C. A. (2011). National-scale research performance assessment at the individual level. Scientometrics, 86(2), 347–364.


  • Amat, C. B. (2008). Editorial and publication delay of papers submitted to 14 selected food research journals. Influence of online posting. Scientometrics, 74(3), 379–389.


  • Bhattacharya, A., & Newhouse, H. (2010). Allocative efficiency and an incentive scheme for research. Discussion Papers 10/02, Department of Economics, University of York.

  • Burrell, Q. L. (2002). Modeling citation age data: simple graphical methods from reliability theory. Scientometrics, 55, 273–285.


  • Butler, L. (2003). Modifying publication practices in response to funding formulas. Research Evaluation, 12(1), 39–46.


  • D’Angelo, C. A., Giuffrida, C., & Abramo, G. (2011). A heuristic approach to author name disambiguation in large-scale bibliometric databases. Journal of the American Society for Information Science and Technology, 62(2), 257–269.


  • ERA (2010). The excellence in research for Australia (ERA) initiative. http://www.arc.gov.au/era/. Accessed 16 December 2011.

  • Geuna, A., & Martin, B. R. (2003). University research evaluation and funding: an international comparison. Minerva, 41(4), 277–304.


  • Glänzel, W. (2004). Towards a model for diachronous and synchronous citation analyses. Scientometrics, 60(3), 511–522.


  • Lundberg, J. (2007). Lifting the crown—citation z-score. Journal of Informetrics, 1(2), 145–154.


  • Luwel, M., & Moed, H. F. (1998). Publication delays in the science field and their relationship to the ageing of scientific literature. Scientometrics, 41(1–2), 29–40.


  • Moed, H. F. (2005). Citation analysis in research evaluation. Springer, ISBN:978-1-4020-3713-9.

  • OECD (2010). Performance-based funding for public research in tertiary education institutions. Workshop proceedings OECD 2010, 187, ISBN:978-92-64-09461-1.

  • Rehn, C., Kronman, U., & Wadskog, D. (2007). Bibliometric indicators: definitions and usage at Karolinska Institutet. Karolinska Institutet University Library. http://kib.ki.se/sites/kib.ki.se/files/Bibliometric_indicators_definitions_1.0.pdf. Accessed 8 March 2012.

  • Trivedi, P. K. (1993). An analysis of publication lags in econometrics. Journal of Applied Econometrics, 8(1), 93–100.



Author information


Correspondence to Giovanni Abramo.


About this article

Cite this article

Abramo, G., D’Angelo, C.A. & Cicero, T. What is the appropriate length of the publication period over which to assess research performance? Scientometrics 93, 1005–1017 (2012). https://doi.org/10.1007/s11192-012-0714-9
