Abstract
Measuring the efficiency of scientific research activity raises critical methodological issues, many of which have not been sufficiently studied. Although many studies have assessed the relation of research productivity and quality to academic rank, little is known about the extent of distortion in national university performance rankings when academic rank and other labor factors are not used to normalize the input. This work presents a comparative analysis that quantifies the sensitivity of bibliometric rankings to the choice of input: either the number of researchers on staff alone, or alternatively their cost as well. The field of observation consists of all 69 Italian universities active in the hard sciences. Performance measures are based on the 81,000 publications produced during the 2004–2006 triennium by all 34,000 research staff, with analysis carried out at the level of 187 individual disciplines. The effect of the switch from labor to cost appears minimal, except for a few outliers.
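Operationally, the comparison described above amounts to computing, for each university, output per researcher and output per unit of staff cost, and then examining how the two rankings differ. The following Python sketch illustrates the mechanics with invented universities and figures (none taken from the study); in the study itself the analysis is carried out at the level of 187 disciplines before aggregation, but the rank comparison works the same way.

```python
# Illustrative comparison of labor-based vs. cost-based productivity rankings.
# All universities and figures below are invented for the example; they are
# not data from the study.

# (university, bibliometric output score, research staff, staff cost in M EUR)
data = [
    ("Univ A", 1200.0, 300, 30.0),
    ("Univ B",  950.0, 220, 14.5),
    ("Univ C",  700.0, 210, 16.8),
    ("Univ D",  400.0,  90,  5.9),
]

def rank(scores):
    """Return rank positions (1 = best) for a list of scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranks = [0] * len(scores)
    for position, i in enumerate(order, start=1):
        ranks[i] = position
    return ranks

# Productivity with labor as the input (output per researcher)...
labor_productivity = [output / staff for _, output, staff, _ in data]
# ...and with cost as the input (output per million EUR of staff cost).
cost_productivity = [output / cost for _, output, _, cost in data]

labor_ranks = rank(labor_productivity)
cost_ranks = rank(cost_productivity)

for (name, *_), rl, rc in zip(data, labor_ranks, cost_ranks):
    print(f"{name}: labor rank {rl}, cost rank {rc}, shift {rc - rl:+d}")
```

With these invented numbers most ranks are unchanged and only two universities swap positions, which mirrors the kind of outcome the abstract reports: the switch of input matters mainly for a few outliers.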
Notes
For the hard sciences, unlike the social sciences and the arts and humanities, articles in international journals provide a good proxy of overall research output.
Through a geographic proximity effect, concentration of public and private research organizations in a specific area can favor scientific collaboration and research productivity (Abramo et al. 2009b).
Abramo et al. (2009b) demonstrate that publications co-authored with other organizations have a higher mean quality than those authored within a single institution. Since location affects opportunities for collaboration with other organizations, it can thus have an effect on the quality of output.
On the subject of address reconciliation, Geuna and Martin (2003) report: “… The main problem consists in having to ‘clean up’ institutional addresses, a task that can take many person-years of effort”.
At this time, for disambiguation of authorship of the 215,000 Italian academic publications indexed in the WoS between 2001 and 2007, the harmonic mean of precision and recall (F-measure) is close to 95% (2% sampling error at the 98% confidence level). Further details are reported in Abramo et al. (2008a).
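For reference, the F-measure used here is the harmonic mean of precision P and recall R, F = 2PR/(P + R). As an illustrative computation (the specific split between precision and recall is assumed here, not reported in the note): P = 0.96 and R = 0.94 give F = 2(0.96)(0.94)/(0.96 + 0.94) ≈ 0.95, consistent with the figure cited above.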
The complete list is available at http://www.miur.it/atti/2000/alladm001004_01.htm.
“Civil engineering and architecture” UDA was not considered because the WoS does not cover the full range of research output in this area.
The basic assumption of bibliometrics, i.e., that the level of citation corresponds to a quantum of research quality, has been criticized by some scholars (Warner, 2000). In this study, though, we are not interested in absolute ratings, but in the shift of rankings when passing from labor input to cost input.
The ISI subject categories are the scientific disciplines that the WoS uses for classification of articles.
The rankings of the Italian peer-review VTR were carried out at the UDA level.
References
Abramo, G., D’Angelo, C. A., & Caprasecca, A. (2009a). Allocative efficiency in public research funding: Can bibliometrics help? Research Policy, 38(1), 206–215.
Abramo, G., D’Angelo, C. A., & Caprasecca, A. (2009c). Gender differences in research productivity: A bibliometric analysis of the Italian academic system. Scientometrics, 79(3), 517–539.
Abramo, G., D’Angelo, C. A., & Di Costa, F. (2008b). Assessment of sectoral aggregation distortion in research productivity measurements. Research Evaluation, 17(2), 111–121.
Abramo, G., D’Angelo, C. A., & Di Costa, F. (2009d). Testing the trade-off between productivity and quality in research activities. Journal of the American Society for Information Science and Technology, 61(1), 132–140.
Abramo, G., D’Angelo, C. A., Di Costa, F., & Solazzi, M. (2009b). University-industry collaboration in Italy: An extensive bibliometric survey. Technovation, 29(6–7), 498–507.
Abramo, G., D’Angelo, C. A., & Pugini, F. (2008a). The measurement of Italian universities’ research productivity by a non parametric–bibliometric methodology. Scientometrics, 76(2), 225–244.
Ben-David, D. (2009). Ranking Israel’s economists. Scientometrics. doi: 10.1007/s11192-009-0049-3.
Blackburn, R. T., Behymer, C. E., & Hall, D. E. (1978). Research notes: Correlates of faculty publication. Sociology of Education, 51(2), 132–141.
Bordons, M., Morillo, F., Fernández, M. T., & Gómez, I. (2003). One step further in the production of bibliometric indicators at the micro level: Differences by gender and professional category of scientists. Scientometrics, 57(2), 159–173.
Dickson, V. A. (1983). The determinants of publication rates of faculty members at a Canadian university. Canadian Journal of Higher Education, 13(2), 41–49.
Geuna, A., & Martin, B. R. (2003). University research evaluation and funding: An international comparison. Minerva, 41(4), 277–304.
Hicks, D. (2009). Evolving regimes of multi-university research evaluation. Higher Education, 57(4), 393–404.
Kalaitzidakis, P., Mamuneas, T. P., & Stengos, T. (2003). Rankings of academic journals and institutions in economics. Journal of the European Economic Association, 1(6), 1346–1366.
Kyvik, S. (1990). Motherhood and scientific productivity. Social Studies of Science, 20(1), 149–160.
Macri, J., & Sinha, D. (2006). Rankings methodology for international comparisons of institutions and individuals: An application to economics in Australia and New Zealand. Journal of Economic Surveys, 20(1), 111–146.
Moed, H. F. (2005). Citation analysis in research evaluation. Dordrecht, The Netherlands: Springer.
Pomfret, R., & Wang, L. C. (2003). Evaluating the research output of Australian universities’ economics departments. Australian Economic Papers, 42(4), 418–441.
Prpic, K. (1996). Characteristics and determinants of eminent scientists’ productivity. Scientometrics, 36(2), 185–206.
Rinia, E. J., van Leeuwen, Th. N., van Vuren, H. G., & van Raan, A. F. J. (1998). Comparative analysis of a set of bibliometric indicators and central peer review criteria. Evaluation of condensed matter physics in the Netherlands. Research Policy, 27(1), 95–107.
van Raan, A. F. J. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133–143.
Warner, J. (2000). A critical review of the application of citation studies to the Research Assessment Exercises. Journal of Information Science, 26(6), 453–459.
Zainab, A. N. (1999). Personal, academic and departmental correlates of research productivity: A review of literature. Malaysian Journal of Library & Information Science, 4(2), 73–110.
Cite this article
Abramo, G., D’Angelo, C.A. & Solazzi, M. National research assessment exercises: a measure of the distortion of performance rankings when labor input is treated as uniform. Scientometrics 84, 605–619 (2010). https://doi.org/10.1007/s11192-010-0164-1