
Large-scale assessment of research outputs through a weighted combination of bibliometric indicators

Abstract

The paper describes a method that combines the number of citations of a publication with the relevance of its publishing journal (as measured by the Impact Factor or a similar impact indicator) to rank the publication against the world scientific production in its specific subfield. The linear or non-linear combination of the two indicators is represented on a scatter plot of the papers in the subfield, so that the effect of a change in weights can be visualized immediately. The final ranking of the papers is then obtained by partitioning the two-dimensional space with linear or higher-order curves. The procedure is intuitive and versatile: after adjusting a few parameters, it yields an automatic and calibrated assessment at the subfield level. The resulting evaluation is homogeneous across scientific domains and can be used to assess research quality at the departmental (or higher) level of aggregation. We apply this method, which is designed to be feasible on the scale of a national evaluation exercise and to be effective in terms of cost and time, to some instances of the Thomson Reuters Web of Science database, and we discuss the results in light of what was recently done in Italy for the Evaluation of Research Quality exercise 2004–2010. We show how the main limitations of the bibliometric methodology used in that context can easily be overcome.
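To make the weighted-combination idea concrete, the following is a minimal sketch, assuming each paper has already been reduced to two normalized coordinates (its citation percentile and its journal-indicator percentile within the subject category). The weight, thresholds, and class labels are assumptions chosen purely for illustration; they are not the calibrated values discussed in the paper.

# Illustrative sketch only: a paper is placed in the (citation percentile,
# journal-indicator percentile) plane of its subject category, a weighted
# linear combination of the two coordinates is computed, and thresholds on
# that score partition the plane into merit classes with straight lines.
# Weight, thresholds, and labels below are hypothetical.

def linear_score(cit_percentile: float, if_percentile: float, weight: float = 0.5) -> float:
    """Weighted linear combination of the two normalized indicators (both in [0, 1])."""
    return weight * cit_percentile + (1.0 - weight) * if_percentile

def merit_class(score: float, thresholds=(0.8, 0.6, 0.4)) -> str:
    """Map a combined score to an ordinal merit class via straight-line partitions."""
    labels = ("A", "B", "C", "D")  # hypothetical class labels
    for label, cut in zip(labels, thresholds):
        if score >= cut:
            return label
    return labels[-1]

# Example: a paper at the 90th citation percentile, published in a journal at
# the 70th percentile of its subject category's impact-indicator distribution.
print(merit_class(linear_score(0.90, 0.70, weight=0.5)))  # -> "A"

Replacing linear_score with a non-linear function of the two coordinates corresponds to partitioning the plane with higher-order curves rather than straight lines, which is the generalization described in the abstract.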



Notes

  1.

    As defined by the Web of Science (Thomson Reuters®) or Scopus (Elsevier®) databases, respectively.

  2.

    CIT: by ordering the papers published in that SC in that year in decreasing order, from the most to the least cited; IF: by ordering the journals belonging to that SC in that year in decreasing order, from the highest IF to the lowest (a minimal illustration of this ordering appears after these notes). This is not the only strategy for building the cumulative distribution function of the IF variable, as we discuss later in the paper.

  3.

    Except for the Physical Sciences panel (“GEV 02”).

  4.

    By relevant we mean that a large number (more than 100) of the papers to be evaluated fell under that SC.
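As an illustration of the ordering described in note 2, here is a minimal sketch, assuming the cumulative distribution is built empirically from the papers (or journals) of a given subject category and year; the data values and function name are hypothetical and chosen only for illustration.

# Hedged sketch: a value's position in the empirical distribution of its
# subject category and year, expressed as the fraction of items it equals
# or exceeds (1.0 = top of the decreasing-order ranking).

from bisect import bisect_left

def decreasing_cdf_rank(value: float, population: list[float]) -> float:
    """Fraction of the population that `value` equals or exceeds."""
    ordered = sorted(population)            # ascending order for bisect
    below = bisect_left(ordered, value)     # items strictly smaller than value
    ties = ordered.count(value)             # items equal to value
    return (below + ties) / len(ordered)

# Example: a paper with 42 citations, among hypothetical citation counts of
# the papers published in the same SC and year.
citations_in_sc = [0, 1, 3, 5, 8, 12, 20, 42, 42, 90]
print(decreasing_cdf_rank(42, citations_in_sc))  # -> 0.9

The same function could be applied to journal IF values within the SC, which is the alternative distribution mentioned in note 2.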


Acknowledgments

The authors would like to thank Dr. Marco Malgarini for useful discussions.

Author information

Corresponding author

Correspondence to Alberto Ciolfi.


About this article

Cite this article

Anfossi, A., Ciolfi, A., Costa, F. et al. Large-scale assessment of research outputs through a weighted combination of bibliometric indicators. Scientometrics 107, 671–683 (2016). https://doi.org/10.1007/s11192-016-1882-9

Keywords

  • Bibliometric evaluation
  • Institutional rankings
  • Evaluation processes
  • University policy