
National-scale research performance assessment at the individual level

Scientometrics

Abstract

There is an evident and rapid trend towards the adoption of evaluation exercises for national research systems, for purposes that include improving allocative efficiency in the public funding of individual institutions. However, the desired macroeconomic aims could be compromised if the internal redistribution of government resources within each research institution does not follow a consistent logic: the intended effects of national evaluation systems can result only if a “funds for quality” rule is followed at all levels of decision-making. The objective of this study is to propose a bibliometric methodology for: (i) large-scale comparative evaluation of the research performance of individual scientists, research groups and departments within research institutions, to inform selective funding allocations; and (ii) assessment of strengths and weaknesses by field of research, to inform strategic planning and control. The proposed methodology has been applied to the hard science disciplines of the Italian university research system for the period 2004–2006.


Notes

  1. The peer-review approach is used for the social sciences, arts and humanities.

  2. University of Rome “Tor Vergata”, Milan, Luiss, Pavia, Udine, and Cagliari.

  3. The m-quotient is the h-index divided by the research age (Hirsch 2005); a computational sketch follows these notes.

  4. The algorithm is presented in a manuscript which is currently under consideration for publication in another journal. A short abstract is available at http://www.disp.uniroma2.it/laboratorioRTT/TESTI/Working%20paper/Giuffrida.pdf.

  5. At this time, for the identification of the authorship of all publications by Italian university researchers indexed in the WoS between 2004 and 2006, the harmonic mean of precision and recall (the F-measure) is close to 95% (2% sampling error, 98% confidence interval); see the F-measure sketch following these notes.

  6. “Civil engineering and architecture” is not considered because the WoS does not cover a satisfactory range of research output in this area.

  7. The ISI subject categories are the scientific disciplines that the WoS uses for the classification of publications. The complete list can be seen at http://science.thomsonreuters.com/cgi-bin/jrnlst/jlsubcatg.cgi?PC=D.

  8. The authors adhere to the school of thought that a reasonable share of author self-citations is a natural part of scientific communication, and that alarm over author self-citation lacks empirical foundation.

  9. Alternatively, the denominator could be the average number of citations of all WoS-indexed publications, in which case the standardization benchmark would be international (see the standardization sketch following these notes).

  10. Research productivity by individual scientists is standardized neither with respect to effective hours of research nor with respect to other production factors and intangible resources, because of the lack of data attributable to individuals.

  11. More specific indications of fractional productivity could be given for disciplines where the order of the author names conveys a meaning concerning the level of contribution to the publication. For example, in medicine the first and last authors could be given more weight than the others (see the fractional-counting sketch following these notes).

  12. The exact authorship of publications could also be subsequently verified by each individual author, to reduce errors and assure the transparency of the evaluation process.

  13. As of December 31, 2005, this SDS had 206 university scientists in all of Italy.

  14. The authors note that the work by Costas et al. (2010) did not inspire the current work, as they became aware of it only when the present paper was submitted for publication.
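
The four sketches below illustrate calculations referenced in notes 3, 5, 9 and 11. First, a minimal computation of the m-quotient from note 3; the function name, the inclusive counting of research age and the example values are illustrative assumptions, not taken from the paper.

```python
def m_quotient(h_index: int, first_pub_year: int, current_year: int) -> float:
    """Hirsch's m-quotient: the h-index divided by research age."""
    # Research age is counted inclusively from the year of first
    # publication; the inclusive convention is an assumption.
    research_age = current_year - first_pub_year + 1
    return h_index / research_age

# Example: h = 12, first paper in 1997, evaluated in 2006 -> m = 1.2
print(m_quotient(12, 1997, 2006))
```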
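Next, the F-measure of note 5: the harmonic mean of precision and recall. The example precision and recall values are invented; only the resulting figure of roughly 95% comes from the note.

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (balanced F-measure)."""
    return 2 * precision * recall / (precision + recall)

# Illustrative values: 96% precision and 94% recall give F close to 0.95,
# consistent with the figure quoted in note 5.
print(round(f_measure(0.96, 0.94), 3))  # 0.95
```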
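The standardization of note 9 divides a publication's citations by a benchmark average. A minimal sketch, assuming invented citation counts; the choice of reference set over which the average is taken determines whether the benchmark is national or international.

```python
from statistics import mean

# Hypothetical citation counts for same-year publications in one
# WoS subject category; the values are invented for illustration.
category_cites = [12, 3, 7, 0, 5, 9]

# Benchmark: the average citations of the chosen reference set.
benchmark = mean(category_cites)

# Standardized impact of a publication with 12 citations. With an
# international benchmark, the denominator would instead be the average
# citations of all WoS-indexed publications, as note 9 suggests.
print(12 / benchmark)  # 2.0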
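Finally, the position-weighted fractional counting suggested in note 11. A sketch under the assumption that the first and last authors receive a weight of 1.5 against 1.0 for the others; the paper does not prescribe specific weights.

```python
def fractional_shares(n_authors: int, end_weight: float = 1.5) -> list[float]:
    """Divide one publication among its co-authors, weighting the first
    and last positions more heavily (the medicine convention of note 11).
    The end_weight value is an illustrative assumption."""
    weights = [1.0] * n_authors
    if n_authors >= 2:
        weights[0] = weights[-1] = end_weight
    total = sum(weights)
    return [w / total for w in weights]

# Five co-authors: the first and last each receive a 0.25 share.
print(fractional_shares(5))
```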

References

  • Abramo, G., D’Angelo, C. A., & Di Costa, F. (2008a). Assessment of sectoral aggregation distortion in research productivity measurements. Research Evaluation, 17(2), 111–121.


  • Abramo, G., D’Angelo, C. A., & Caprasecca, A. (2008b). Gender differences in research productivity: A bibliometric analysis of the Italian academic system. Scientometrics, 79(3), 517–539.


  • Bhattacharya, A., & Newhouse, H. (2008). Allocative efficiency and an incentive scheme for research. University of California-San Diego Working Paper. Retrieved June 18, 2010 from http://econ.ucsd.edu/~hnewhous/research/Bhattacharya-Newhouse-RAE.pdf.

  • Bornmann, L., Mutz, R., Neuhaus, C., & Daniel, H.-D. (2008). Citation counts for research evaluation: standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics, 8, 93–102.


  • Butler, L. (2003). Explaining Australia’s increased share of ISI publications. The effects of a funding formula based on publication counts. Research Policy, 32(1), 143–155.


  • Costas, R., van Leeuwen, T., & Bordons, M. (2010). A bibliometric classificatory approach for the study and assessment of research performance at the individual level: The effects of age on productivity and impact. Journal of the American Society for Information Science and Technology, 61(8), 1564–1581.


  • ERA (2010). The Excellence in Research for Australia (ERA) Initiative. Retrieved June 18, 2010 from http://www.arc.gov.au/era/default.htm.

  • Franceschet, M. (2009). A cluster analysis of scholar and journal bibliometric indicators. Journal of the American Society for Information Science and Technology, 60(10), 1950–1964.


  • Geuna, A., & Martin, B. R. (2003). University research evaluation and funding: an international comparison. Minerva, 41, 277–304.


  • Hicks, D. (2009). Evolving regimes of multi-university research evaluation. Higher Education, 57(4), 393–404.


  • Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the USA, 102(46), 16569–16572.


  • Leydesdorff, L. (2008). Caveats for the use of citation indicators in research and journal evaluations. Journal of the American Society for Information Science and Technology, 59(2), 278–287.


  • ORP (2009). Observatory on Public Research in Italy. Retrieved June 18, 2010 from www.orp.researchvalue.it.

  • Orr, D., Jaeger, M., & Schwarzenberger, A. (2007). Performance-based funding as an instrument of competition in German higher education. Journal of Higher Education Policy and Management, 29(1), 3–23.


  • PBRF (2008). Performance-Based Research Fund in New Zealand. Retrieved June 18, 2010 from http://www.tec.govt.nz/templates/standard.aspx?id=588.

  • RAE (2008). Research Assessment Exercise. Retrieved June 18, 2010 from www.rae.ac.uk.

  • REF (2010). Research Excellence Framework. Retrieved June 18, 2010 from http://www.hefce.ac.uk/Research/ref/.

  • Rousseau, R., & Smeyers, M. (2000). Output-financing at LUC. Scientometrics, 47(2), 379–387.


  • Sandström, U., & Sandström, E. (2009). Meeting the micro-level challenges: Bibliometrics at the individual level. 12th International Conference on Scientometrics and Informetrics, Rio de Janeiro, Brazil, July 14–17.

  • Shattock, M. (2004). Managing successful universities. Perspectives: Policy and Practice in Higher Education, 8(4), 119–120.


  • Strehl, F., Reisinger, S., & Kalatschan, M. (2007). Funding systems and their effects on higher education systems. OECD Education Working Papers, No. 6. OECD Publishing. doi:10.1787/220244801417.

  • Van den Berghe, H., Houben, J. A., de Bruin, R. E., Moed, H. F., Kint, A., Spruyt, E. H. J., et al. (1998). Bibliometric indicators of university research performance in Flanders. Journal of the American Society for Information Science, 49(1), 59–67.


  • Van Leeuwen, Th. N., & Moed, H. F. (2002). Development and application of journal impact measures in the Dutch science system. Scientometrics, 53, 249–266.


  • VTR (2006). Italian Triennial Research Evaluation. VTR 2001–2003. Risultati delle valutazioni dei Panel di Area [Results of the evaluations by the Area Panels]. Retrieved June 18, 2010 from http://vtr2006.cineca.it/.


Author information

Correspondence to Giovanni Abramo.

Appendix

See Table 12.

Table 12 Indicators of research performance at individual level


Cite this article

Abramo, G., D’Angelo, C.A. National-scale research performance assessment at the individual level. Scientometrics 86, 347–364 (2011). https://doi.org/10.1007/s11192-010-0297-2
