Abstract
Many different measures are used to assess academic research excellence, and these remain subjects of ongoing discussion and debate within the scientometric, university-management and policy-making communities internationally. One topic of continued importance is the extent to which citation-based indicators compare with peer-review-based evaluation. Here we analyse the correlations between values of a particular citation-based impact indicator and peer-review scores in several academic disciplines, from the natural to the social sciences and humanities. We perform the comparison for research groups rather than for individuals, and on two levels. At an absolute level, we compare the total impact and overall strength of each group as a whole. At a specific level, we compare academic impact and quality normalised by the size of the group. We find very high correlations at the former level for some disciplines and poor correlations at the latter level for all disciplines. This means that, although citation-based scores can help to describe research-group strength, in particular for the so-called hard sciences, they should not be used as a proxy for ranking or comparing research groups. Moreover, the correlation between peer-evaluated and citation-based scores is weaker for the soft sciences.
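The distinction between the absolute and specific levels can be illustrated with a minimal sketch. The code below is not the authors' method; it simply shows, for hypothetical group-level data (group size, total citation impact, total peer-review strength), how a Pearson correlation at the absolute level (totals) can differ from one at the specific level (per-head values). All numbers and field names are invented for illustration.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-group data: staff count n, total citation impact,
# and total peer-review strength (all values illustrative only).
groups = [
    {"n": 5,  "impact": 120.0, "strength": 28.0},
    {"n": 12, "impact": 340.0, "strength": 75.0},
    {"n": 20, "impact": 500.0, "strength": 110.0},
    {"n": 8,  "impact": 150.0, "strength": 50.0},
]

# Absolute level: correlate total impact with overall strength.
r_abs = pearson([g["impact"] for g in groups],
                [g["strength"] for g in groups])

# Specific level: correlate size-normalised (per-head) impact with quality.
r_spec = pearson([g["impact"] / g["n"] for g in groups],
                 [g["strength"] / g["n"] for g in groups])
```

Because larger groups tend to have larger totals of everything, size alone can drive a strong absolute-level correlation even when the per-head (specific) correlation is weak, which is the effect the abstract describes.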
Acknowledgments
This work was supported in part by the 7th FP, IRSES project No. 269139 “Dynamics and cooperative phenomena in complex physical and biological environments” and IRSES project No. 295302 “Statistical physics in diverse realizations”. The authors thank Jonathan Adams from Thomson Reuters Research Analytics for the data and Ihor Mryglod for fruitful discussions.
Cite this article
Mryglod, O., Kenna, R., Holovatch, Y. et al. Comparison of a citation-based indicator and peer review for absolute and specific measures of research-group excellence. Scientometrics 97, 767–777 (2013). https://doi.org/10.1007/s11192-013-1058-9