
Evaluating research institutions: the potential of the success-index

Scientometrics

Abstract

Like the h-index and related indicators, the success-index is a recent indicator that identifies, within a general group of papers, those of greater citation impact. It implements field-normalization at the level of the single paper and can therefore be applied to multidisciplinary groups of articles. It is also well suited to normalizations aimed at achieving so-called size-independency. Thanks to these (and other) properties, the indicator is particularly versatile for evaluating the publication output of entire research institutions. This paper exemplifies the potential of the success-index through several practical applications: (i) comparing groups of researchers within the same scientific field but affiliated with different universities, (ii) comparing different departments of the same university, and (iii) comparing entire research institutions. A sensitivity analysis highlights the success-index’s robustness. Empirical results suggest that the success-index may conveniently be extended to large-scale assessments, i.e., those involving large numbers of researchers and research institutions.
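To make the mechanism concrete, here is a minimal sketch (not the authors' implementation) of the indicator as defined in the success-index literature: each paper carries its own field- and year-specific citation threshold, a paper is "successful" when its citations meet or exceed that threshold, and the success-index of a group is the count of successful papers. The `Paper` class, the example thresholds, and the normalized variant (dividing by group size to achieve size-independency) are illustrative assumptions, not values from this paper.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    citations: int
    threshold: float  # paper-specific comparison term (field- and year-dependent)

def success_index(papers):
    """Count papers whose citations meet or exceed their own threshold."""
    return sum(1 for p in papers if p.citations >= p.threshold)

def normalized_success_index(papers):
    """Illustrative size-independent variant: fraction of successful papers."""
    return success_index(papers) / len(papers) if papers else 0.0

# Hypothetical multidisciplinary group: thresholds differ per paper,
# which is what enables field normalization at the single-paper level.
group = [Paper(12, 8.0), Paper(3, 8.0), Paper(25, 30.0), Paper(7, 4.5)]
print(success_index(group))             # → 2
print(normalized_success_index(group))  # → 0.5
```

Because the comparison is made paper by paper against a local threshold, papers from fields with very different citation habits can be pooled directly, which is the property the abstract exploits for institution-level comparisons.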


Notes

  1. In Italy, each university scientist belongs to one specific disciplinary sector (370 in all). The complete list is accessible at http://cercauniversita.cineca.it/php5/settori/index.php, last accessed July 2012.



Corresponding author

Correspondence to Domenico Maisano.


Cite this article

Franceschini, F., Maisano, D. & Mastrogiacomo, L. Evaluating research institutions: the potential of the success-index. Scientometrics 96, 85–101 (2013). https://doi.org/10.1007/s11192-012-0887-2
