Scientometrics, Volume 107, Issue 2, pp 671–683

Large-scale assessment of research outputs through a weighted combination of bibliometric indicators

  • Alberto Anfossi
  • Alberto Ciolfi
  • Filippo Costa
  • Giorgio Parisi
  • Sergio Benedetto

Abstract

The paper describes a method for combining information on the number of citations of a publication and the relevance of the publishing journal (as measured by the Impact Factor or a similar impact indicator) in order to rank the publication with respect to the world scientific production in its specific subfield. The linear or non-linear combination of the two indicators is represented on a scatter plot of the papers in the subfield, so that the effect of a change in weights can be visualized immediately. The final rank of the papers is then obtained by partitioning the two-dimensional space with linear or higher-order curves. The procedure is intuitive and versatile: after a few parameters are adjusted, it yields an automatic and calibrated assessment at the level of the subfield. The resulting evaluation is homogeneous across scientific domains and can be used to assess research quality at the departmental (or higher) level of aggregation. The method is designed to be feasible on the scale of a national evaluation exercise and to be effective in terms of cost and time. We apply it to some instances of the Thomson Reuters Web of Science database and discuss the results in light of what was recently done in Italy for the 2004–2010 Evaluation of Research Quality exercise, showing how the main limitations of the bibliometric methodology used in that context can easily be overcome.
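As an illustration of the scoring scheme described above, the following Python sketch ranks papers by a weighted combination of their citation-count percentile and journal-indicator percentile within a subfield, then partitions the scores into merit classes. It is a minimal sketch under stated assumptions: the percentile normalization, the power-mean form of the non-linear combination, and the threshold values are illustrative choices, not the authors' actual calibration.

    import numpy as np

    def percentile_ranks(values):
        # Map raw indicator values to percentile ranks (0-100) within the subfield.
        values = np.asarray(values, dtype=float)
        ranks = values.argsort().argsort()          # 0 = lowest, n-1 = highest
        return 100.0 * (ranks + 0.5) / len(values)

    def combined_score(citations, journal_metric, w=0.5, q=1.0):
        # Combine the two percentile indicators.
        #   w -- weight on citations (1 - w goes to the journal indicator)
        #   q -- exponent: q = 1 gives a linear combination (straight iso-score
        #        lines in the scatter plot); q != 1 bends the partition curves,
        #        a stand-in for the paper's "higher order" boundaries.
        c = percentile_ranks(citations)
        j = percentile_ranks(journal_metric)
        return (w * c**q + (1.0 - w) * j**q) ** (1.0 / q)

    def rank_classes(scores, thresholds=(50, 70, 80, 90)):
        # Assign each paper to a merit class (0 = lowest band) by thresholding.
        return np.digitize(scores, thresholds)

    # Hypothetical subfield of five papers: citation counts and journal SJR values.
    citations = [0, 3, 12, 45, 7]
    journal_sjr = [0.8, 1.2, 3.5, 9.1, 0.4]
    scores = combined_score(citations, journal_sjr, w=0.6, q=2.0)
    print(rank_classes(scores))  # -> [0 0 2 4 0]

Changing w tilts the straight (or, for q != 1, curved) boundary between the citation axis and the journal axis, which is exactly the effect the scatter-plot representation is meant to make visible before the weights are fixed.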

Keywords

Bibliometric evaluation · Institutional rankings · Evaluation processes · University policy


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2016

Authors and Affiliations

  1. National Agency for the Evaluation of Universities and Research Institutes (ANVUR), Rome, Italy
  2. Compagnia di San Paolo Sistema Torino, Turin, Italy
  3. Dipartimento Ingegneria dell’Informazione, Università di Pisa, Pisa, Italy
  4. Università “La Sapienza” di Roma, Rome, Italy
