Abstract
There is an evident and rapid trend towards the adoption of evaluation exercises for national research systems, for purposes that include improving allocative efficiency in the public funding of individual institutions. However, the desired macroeconomic aims could be compromised if the internal redistribution of government resources within each research institution does not follow a consistent logic: the intended effects of national evaluation systems can result only if a “funds for quality” rule is followed at all levels of decision-making. The objective of this study is to propose a bibliometric methodology for: (i) large-scale comparative evaluation of research performance by individual scientists, research groups and departments within a research institution, to inform selective funding allocations; and (ii) assessment of strengths and weaknesses by field of research, to inform strategic planning and control. The proposed methodology has been applied to the hard science disciplines of the Italian university research system for the period 2004–2006.
Notes
The peer-review approach is used for the social sciences, arts and humanities.
University of Rome “Tor Vergata”, Milan, Luiss, Pavia, Udine, and Cagliari.
The m-quotient is the h-index divided by the research age (Hirsch 2005).
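The two indicators named in this note can be sketched briefly. The following is a minimal illustration, not code from the study; the citation counts and research age used are invented for the example.

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def m_quotient(citations, research_age_years):
    """m-quotient (Hirsch 2005): h-index divided by research age in years."""
    return h_index(citations) / research_age_years

# Illustrative figures only:
print(h_index([10, 8, 5, 4, 3]))        # 4
print(m_quotient([10, 8, 5, 4, 3], 8))  # 0.5
```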
The algorithm is presented in a manuscript which is currently under consideration for publication in another journal. A short abstract is available at http://www.disp.uniroma2.it/laboratorioRTT/TESTI/Working%20paper/Giuffrida.pdf.
At this time, for the identification of authorship of all publications by Italian university researchers indexed in the WoS between 2004 and 2006, the harmonic mean of precision and recall (F-measure) is close to 95% (2% sampling error, 98% confidence level).
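The F-measure mentioned here is the standard harmonic mean of precision and recall. A minimal sketch, with illustrative precision and recall values (the 95% figure in the note is the study's own result, not reproduced here):

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (the F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# With precision and recall both near 0.95 (illustrative values only),
# the harmonic mean is likewise close to 95%:
print(round(f_measure(0.96, 0.94), 3))  # 0.95
```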
“Civil engineering and architecture” is not considered because the WoS does not cover a satisfactory range of research output in this area.
The ISI subject categories are the scientific disciplines that the WoS uses for the classification of publications. The complete list can be seen at http://science.thomsonreuters.com/cgi-bin/jrnlst/jlsubcatg.cgi?PC=D.
The authors adhere to the school of thought that a reasonable share of author self-citations is a natural part of scientific communication, and that alarm over author self-citation lacks empirical foundation.
Alternatively, the denominator could be the average number of citations of all WoS indexed publications. In this case the standardization benchmark would be international.
Research productivity by individual scientists is not standardized with respect to effective hours of research, or to other production factors and intangible resources, because of the lack of data attributable to individuals.
More specific indications of fractional productivity could be given for disciplines where the order of the author names conveys a meaning concerning level of contribution to the publication. For example, in the case of Medicine, the first and last authors could be given more weight than the others.
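The positional weighting suggested in this note can be sketched as a variant of fractional counting. This is an illustration under assumed weights (first and last author counted double), not the scheme used in the study:

```python
def fractional_counts(authors, weights=None):
    """Split one publication's credit across its co-authors.

    With no weights, each of n authors receives 1/n (plain fractional
    counting); otherwise shares are proportional to the positional weights.
    """
    if weights is None:
        weights = [1.0] * len(authors)
    total = sum(weights)
    return {author: w / total for author, w in zip(authors, weights)}

# Plain fractional counting: each of four authors gets 0.25.
print(fractional_counts(["A", "B", "C", "D"]))
# Hypothetical medicine-style weighting: first and last authors weighted double,
# so they receive 1/3 each and the middle authors 1/6 each.
print(fractional_counts(["A", "B", "C", "D"], [2, 1, 1, 2]))
```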
The exact authorship of publications could also be subsequently verified by each individual author, to reduce errors and assure the transparency of the evaluation process.
As of December 31, 2005, this SDS had 206 university scientists in all of Italy.
The authors note that the work by Costas et al. did not inspire the current work, as they became aware of it only when the current paper was submitted for publication.
References
Abramo, G., D’Angelo, C. A., & Di Costa, F. (2008a). Assessment of sectoral aggregation distortion in research productivity measurements. Research Evaluation, 17(2), 111–121.
Abramo, G., D’Angelo, C. A., & Caprasecca, A. (2008b). Gender differences in research productivity: A bibliometric analysis of the Italian academic system. Scientometrics, 79(3), 517–539.
Bhattacharya, A., & Newhouse, H. (2008). Allocative efficiency and an incentive scheme for research. University of California-San Diego Working Paper. Retrieved June 18, 2010 from http://econ.ucsd.edu/~hnewhous/research/Bhattacharya-Newhouse-RAE.pdf.
Bornmann, L., Mutz, R., Neuhaus, C., & Daniel, H.-D. (2008). Citation counts for research evaluation: Standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics, 8, 93–102.
Butler, L. (2003). Explaining Australia’s increased share of ISI publications. The effects of a funding formula based on publication counts. Research Policy, 32(1), 143–155.
Costas, R., van Leeuwen, T., & Bordons, M. (2010). A bibliometric classificatory approach for the study and assessment of research performance at the individual level: The effects of age on productivity and impact. Journal of the American Society for Information Science and Technology, 61(8), 1564–1581.
ERA (2010). The Excellence in Research for Australia (ERA) Initiative. Retrieved June 18, 2010 from http://www.arc.gov.au/era/default.htm.
Franceschet, M. (2009). A cluster analysis of scholar and journal bibliometric indicators. Journal of the American Society for Information Science and Technology, 60(10), 1950–1964.
Geuna, A., & Martin, B. R. (2003). University research evaluation and funding: An international comparison. Minerva, 41, 277–304.
Hicks, D. (2009). Evolving regimes of multi-university research evaluation. Higher Education, 57(4), 393–404.
Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the USA, 102(46), 16569–16572.
Leydesdorff, L. (2008). Caveats for the use of citation indicators in research and journal evaluations. Journal of the American Society for Information Science and Technology, 59(2), 278–287.
ORP (2009). Observatory on Public Research in Italy. Retrieved June 18, 2010 from www.orp.researchvalue.it.
Orr, D., Jaeger, M., & Schwarzenberger, A. (2007). Performance-based funding as an instrument of competition in German higher education. Journal of Higher Education Policy and Management, 29(1), 3–23.
PBRF (2008). Performance-Based Research Fund in New Zealand. Retrieved June 18, 2010 from http://www.tec.govt.nz/templates/standard.aspx?id=588.
RAE (2008). Research Assessment Exercise. Retrieved June 18, 2010 from www.rae.ac.uk.
REF (2010). Research Excellence Framework. Retrieved June 18, 2010 from http://www.hefce.ac.uk/Research/ref/.
Rousseau, R., & Smeyers, M. (2000). Output-financing at LUC. Scientometrics, 47(2), 379–387.
Sandström, U., & Sandström, E. (2009). Meeting the micro-level challenges: Bibliometrics at the individual level. 12th International Conference on Scientometrics and Informetrics, Rio de Janeiro, Brazil, July 14–17.
Shattock, M. (2004). Managing successful universities. Perspectives: Policy and Practice in Higher Education, 8(4), 119–120.
Strehl, F., Reisinger, S., & Kalatschan, M. (2007). Funding systems and their effects on higher education systems. OECD Education Working Papers, No. 6. OECD Publishing. doi:10.1787/220244801417.
Van den Berghe, H., Houben, J. A., de Bruin, R. E., Moed, H. F., Kint, A., Spruyt, E. H. J., et al. (1998). Bibliometric indicators of university research performance in Flanders. Journal of the American Society for Information Science, 49(1), 59–67.
Van Leeuwen, Th. N., & Moed, H. F. (2002). Development and application of journal impact measures in the Dutch science system. Scientometrics, 53, 249–266.
VTR (2006). Italian Triennial Research Evaluation. VTR 2001–2003. Risultati delle valutazioni dei Panel di Area. Retrieved June 18, 2010 from http://vtr2006.cineca.it/.
Appendix
See Table 12.
Cite this article
Abramo, G., D’Angelo, C.A. National-scale research performance assessment at the individual level. Scientometrics 86, 347–364 (2011). https://doi.org/10.1007/s11192-010-0297-2