
Scientometrics, Volume 87, Issue 3, pp 499–514

Evaluating research: from informed peer review to bibliometrics

  • Giovanni Abramo
  • Ciriaco Andrea D’Angelo
Article

Abstract

National research assessment exercises are becoming regular events in ever more countries. The present work contrasts the peer-review and bibliometrics approaches in the conduct of these exercises. The comparison is conducted in terms of the essential parameters of any measurement system: accuracy, robustness, validity, functionality, time and costs. Empirical evidence shows that for the natural and formal sciences, the bibliometric methodology is by far preferable to peer-review. Setting up national databases of publications by individual authors, derived from Web of Science or Scopus databases, would allow much better, cheaper and more frequent national research assessments.
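The abstract's closing proposal rests on linking publication records from Web of Science or Scopus to individual researchers in a national roster, which requires author name disambiguation. The sketch below is purely illustrative and not the authors' method: the record fields, roster, and matching key are hypothetical, and real disambiguation draws on much richer evidence (affiliations, co-authors, subject categories) than this toy example.

```python
# Illustrative sketch only: hypothetical national roster and publication records,
# matched on a coarse key (surname + first initial), with homonyms flagged.
from collections import defaultdict

# Hypothetical national roster of researchers.
roster = [
    {"id": "IT-0001", "surname": "Rossi", "initials": "M", "institution": "Univ A"},
    {"id": "IT-0002", "surname": "Rossi", "initials": "M", "institution": "Univ B"},
    {"id": "IT-0003", "surname": "Bianchi", "initials": "L", "institution": "Univ A"},
]

# Hypothetical records as they might appear in a WoS/Scopus extract.
publications = [
    {"title": "Paper 1", "author": ("Rossi", "M"), "affiliation": "Univ A"},
    {"title": "Paper 2", "author": ("Bianchi", "L"), "affiliation": "Univ A"},
]

# Index roster entries by surname + first initial.
index = defaultdict(list)
for person in roster:
    index[(person["surname"].lower(), person["initials"][0].lower())].append(person)

for pub in publications:
    surname, initials = pub["author"]
    candidates = index.get((surname.lower(), initials[0].lower()), [])
    if len(candidates) == 1:
        print(f'{pub["title"]}: matched to {candidates[0]["id"]}')
    elif len(candidates) > 1:
        # Homonyms: use the affiliation string as a weak tie-breaker,
        # otherwise flag the record for manual review.
        same_inst = [p for p in candidates if p["institution"] == pub["affiliation"]]
        if len(same_inst) == 1:
            print(f'{pub["title"]}: matched to {same_inst[0]["id"]} via affiliation')
        else:
            print(f'{pub["title"]}: ambiguous, needs manual check')
    else:
        print(f'{pub["title"]}: no roster match')
```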

Keywords

Decision support systems · Research assessment · Peer review · Bibliometrics · Research productivity


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2011

Authors and Affiliations

  1. National Research Council of Italy, Rome, Italy
  2. Department of Management, Laboratory for Studies of Research and Technology Transfer, School of Engineering, University of Rome “Tor Vergata”, Rome, Italy
