
Evaluating research institutions: the potential of the success-index


Abstract

Like the h-index and related indicators, the success-index is a recently proposed indicator that identifies, within a general group of papers, those of greater citation impact. It implements field normalization at the level of the individual paper and can therefore be applied to multidisciplinary groups of articles. It is also well suited to normalizations aimed at achieving so-called size-independency. Thanks to these (and other) properties, the indicator is particularly versatile for evaluating the publication output of entire research institutions. This paper exemplifies the potential of the success-index through several practical applications: (i) comparison of groups of researchers within the same scientific field but affiliated with different universities, (ii) comparison of different departments of the same university, and (iii) comparison of entire research institutions. A sensitivity analysis highlights the success-index’s robustness. Empirical results suggest that the success-index may be conveniently extended to large-scale assessments, i.e., those involving a large number of researchers and research institutions.
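The core idea can be illustrated with a minimal sketch. It assumes each paper carries its citation count and a paper-level comparison threshold (e.g., the mean citations of same-field, same-year papers); the data and function names below are illustrative, not taken from the paper.

```python
def success_index(papers):
    """Count the papers whose citations meet or exceed their
    field-specific comparison threshold (CT)."""
    return sum(1 for citations, threshold in papers if citations >= threshold)

# Illustrative data: (citations received, comparison threshold) per paper.
papers = [(12, 8.5), (3, 8.5), (40, 15.0), (7, 6.2), (1, 6.2)]

s = success_index(papers)            # 3 "successful" papers out of 5
p = s / len(papers)                  # size-independent variant: 0.6
print(s, p)
```

Because the threshold is attached to each paper rather than to the whole set, papers from different fields can be pooled in one evaluation; dividing by the number of papers (or researchers) gives the size-independent form mentioned in the abstract.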


Notes

  1. In Italy, each university scientist belongs to one of 370 disciplinary sectors. The complete list is accessible at http://cercauniversita.cineca.it/php5/settori/index.php, last accessed July 2012.



Author information

Correspondence to Domenico Maisano.


About this article

Cite this article

Franceschini, F., Maisano, D. & Mastrogiacomo, L. Evaluating research institutions: the potential of the success-index. Scientometrics 96, 85–101 (2013). https://doi.org/10.1007/s11192-012-0887-2


Keywords

  • Success-index
  • Hirsch index
  • Field normalization
  • Citation propensity
  • Groups of researchers
  • Research institutions