Scientometrics, Volume 92, Issue 3, pp 621–641

The success-index: an alternative approach to the h-index for evaluating an individual’s research output

  • Fiorenzo Franceschini
  • Maurizio Galetto
  • Domenico Maisano
  • Luca Mastrogiacomo

Abstract

Among the most recent bibliometric indicators proposed for normalizing differences in citation behaviour across fields of science, Kosmulski (J Informetr 5(3):481–485, 2011) introduced the NSP (number of successful papers) index. In the authors' view, the NSP-index deserves attention for its great simplicity and immediate meaning, comparable to those of the h-index; on the other hand, it is prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper, we introduce the success-index, which aims to reduce the limitations of the NSP-index, although it requires more computing effort. Next, we present a detailed analysis of the operational properties of the success-index and compare them with those of the h-index. Particularly interesting is the examination of the success-index's scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis, for example (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, and (4) scientific journals.
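To make the counting rules concrete before the formal treatment, the following minimal Python sketch contrasts the h-index with NSP-style and success-style paper counts. It is only an illustration: it assumes Kosmulski's rule that a paper is "successful" when it receives more citations than the number of references it lists, and it treats the success-index comparison term as a generic, paper-specific threshold supplied by the caller, since the actual field-normalized threshold is defined in the body of the paper. All names and data in the example are hypothetical.

```python
# Illustrative sketch of three indicators discussed in the paper.
# Assumptions: the NSP rule follows Kosmulski (2011) (citations must
# exceed the paper's own reference count); the success-index threshold
# is taken here as an arbitrary per-paper value, standing in for the
# field-normalized comparison term defined in the paper itself.

def h_index(citations):
    """Hirsch (2005): the largest h such that h papers have >= h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cits in enumerate(ranked, start=1) if cits >= rank)

def nsp_index(papers):
    """Count 'successful' papers, i.e., citations exceed the reference count.
    papers: list of (citations, n_references) tuples."""
    return sum(1 for cits, refs in papers if cits > refs)

def success_index(citations, thresholds):
    """Count papers whose citations reach their paper-specific threshold."""
    return sum(1 for cits, thr in zip(citations, thresholds) if cits >= thr)

if __name__ == "__main__":
    # (citations, references) for five hypothetical papers
    papers = [(25, 30), (40, 12), (8, 45), (15, 10), (3, 20)]
    citations = [cits for cits, _ in papers]
    # hypothetical field-based comparison terms, one per paper
    thresholds = [12.0, 18.5, 9.0, 11.2, 14.7]
    print("h-index:      ", h_index(citations))                    # 4
    print("NSP-index:    ", nsp_index(papers))                     # 2
    print("success-index:", success_index(citations, thresholds))  # 3
```

Unlike the h-index, which ties a paper's contribution to its rank in the citation list, the two paper-counting indices credit any paper that clears its own threshold; this is one reason the success-index can take a richer set of values, as discussed in the paper.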

Keywords

Successful paper · NSP-index · Field normalization · Reference practices · Operational properties · Hirsch index

References

  1. Amin, M., & Mabe, M. (2000). Impact factors: Use and abuse. Perspectives in Publishing No. 1. Oxford: Elsevier.
  2. Batista, P. D., Campiteli, M. G., Kinouchi, O., & Martinez, A. S. (2006). Is it possible to compare researchers with different scientific interests? Scientometrics, 68(1), 179–189.
  3. Bornmann, L. (2011). Mimicry in science? Scientometrics, 86(1), 173–177.
  4. Bornmann, L., & de Moya Anegon, F. (2011). Some interesting insights from aggregated data published in the World Report SIR 2010. Journal of Informetrics, 5(3), 486–488.
  5. Davis, P. (2010). Reference list length and citations: A spurious relationship. Retrieved on August 1, 2011, from http://scholarlykitchen.sspnet.org/2010/08/18/reference-list-length-and-citations-a-spurious-relationship/
  6. Egghe, L. (2008). The influence of merging on h-type indices. Journal of Informetrics, 2(3), 252–262.
  7. Egghe, L., & Rousseau, R. (1990). Introduction to informetrics: Quantitative methods in library, documentation and information science. Elsevier. Retrieved on November 21, 2011, from http://hdl.handle.net/1942/587
  8. Egghe, L., & Rousseau, R. (2006). An informetric model for the Hirsch-index. Scientometrics, 69(1), 121–129.
  9. Franceschini, F., Galetto, M., & Maisano, D. (2007). Management by measurement: Designing key indicators and performance measurements. Berlin: Springer.
  10. Franceschini, F., & Maisano, D. (2010a). Analysis of the Hirsch index's operational properties. European Journal of Operational Research, 203(2), 494–504.
  11. Franceschini, F., & Maisano, D. (2010b). The Hirsch spectrum: A novel tool for analysing scientific journals. Journal of Informetrics, 4(1), 64–73.
  12. Franceschini, F., & Maisano, D. (2010c). The citation triad: An overview of a scientist's publication output based on Ferrers diagrams. Journal of Informetrics, 4(4), 503–511.
  13. Franceschini, F., & Maisano, D. (2011). Bibliometric positioning of scientific manufacturing journals: A comparative analysis. Scientometrics, 86(2), 463–485.
  14. Garfield, E. (1979a). Citation indexing: Its theory and application in science, technology and humanities. New York: Wiley.
  15. Garfield, E. (1979b). Is citation analysis a legitimate evaluation tool? Scientometrics, 1(4), 359–375.
  16. Garfield, E. (2005). Agony and the ecstasy—the history and meaning of the impact factor. In Proceedings of the international congress on peer review and biomedical publication, Chicago, USA, September 16, 2005.
  17. Glänzel, W., Schubert, A., Thijs, B., & Debackere, K. (2011). A priori vs. a posteriori normalisation of citation indicators: The case of journal ranking. Scientometrics, 87(2), 415–424.
  18. Guns, R., & Rousseau, R. (2010). New journal impact indicators take references into account: A comparison. ISSI Newsletter, 6(1), 9–14.
  19. Harzing, A. W., & van der Wal, R. (2008). Google Scholar as a new source for citation analysis. Ethics in Science and Environmental Politics, 8(11), 61–73.
  20. Haslam, N., Ban, L., Kaufmann, L., Loughnan, S., Peters, K., Whelan, J., et al. (2008). What makes an article influential? Predicting impact in social and personality psychology. Scientometrics, 76(1), 169–185.
  21. Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572.
  22. Iglesias, J. E., & Pecharromán, C. (2007). Scaling the h-index for different scientific ISI fields. Scientometrics, 73(3), 303–320.
  23. Jackson, M. O., & Rogers, B. W. (2007). Meeting strangers and friends of friends: How random are social networks? American Economic Review, 97(3), 890–915.
  24. JCQAR—Joint Committee on Quantitative Assessment of Research. (2010). Citation statistics. Retrieved on August 1, 2011, from http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf
  25. Kochen, M. (1974). Principles of information retrieval (p. 21). Los Angeles: Melville.
  26. Kosmulski, M. (2011). Successful papers: A new idea in evaluation of scientific output. Journal of Informetrics, 5(3), 481–485.
  27. Krampen, G. (2010). Acceleration of citing behavior after the millennium? Exemplary bibliometric reference analyses for psychology journals. Scientometrics, 83(2), 507–513.
  28. Larivière, V., Archambault, É., Gingras, Y., & Wallace, M. L. (2008). The fall of uncitedness. In Book of abstracts of the 10th international conference on science and technology indicators (ISSI) (pp. 279–282). Vienna: Austrian Research Centers GmbH.
  29. Leydesdorff, L., & Bornmann, L. (2011). Integrated impact indicators compared with impact factors: An alternative research design with policy implications. Journal of the American Society for Information Science and Technology, 62(11), 2133–2146.
  30. Leydesdorff, L., & Opthof, T. (2010). Normalization at the field level: Fractional counting of citations. Journal of Informetrics, 4(4), 644–646.
  31. Leydesdorff, L., & Shin, J. C. (2011). How to evaluate universities in terms of their relative citation impacts: Fractional counting of citations and the normalization of differences among disciplines. Journal of the American Society for Information Science and Technology, 62(6), 1146–1155.
  32. Moed, H. F. (2010a). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277.
  33. Moed, H. F. (2010b). CWTS crown indicator measures citation impact of a research group's publication oeuvre. Journal of Informetrics, 4(3), 436–438.
  34. Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: Theory, with application to the literature of physics. Information Processing and Management, 12(5), 297–312.
  35. Ravichandra Rao, I. K. (2011). Relations among the number of citations, references and authors: Revisited. In Proceedings of COLLNET 2011, 7th international conference on webometrics, informetrics and scientometrics (WIS), 20–23 September 2011, Istanbul, Turkey.
  36. Roberts, F. S. (1979). Measurement theory: With applications to decisionmaking, utility, and the social sciences. Encyclopedia of mathematics and its applications (Vol. 7). Reading, MA: Addison-Wesley.
  37. Rousseau, R. (2006). New developments related to the Hirsch index. Science Focus, 1(4), 23–25.
  38. Rousseau, R., & Ye, F. Y. (2011). A simple impact measure and its evolution over time. Journal of Library & Information Studies, 9(2).
  39. Small, H. (2004). On the shoulders of Robert Merton: Towards a normative theory of citation. Scientometrics, 60(1), 71–79.
  40. Small, H., & Sweeney, E. (1985). Clustering the Science Citation Index using co-citations. I. A comparison of methods. Scientometrics, 7(3–6), 391–409.
  41. Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. J. (2011a). Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics, 5(1), 37–47.
  42. Waltman, L., Yan, E., & van Eck, N. J. (2011b). A recursive field-normalized bibliometric performance indicator: An application to the field of library and information science. Scientometrics, 89(1), 301–314.
  43. Williamson, J. R. (2009). My h-index turns 40: My midlife crisis of impact. ACS Chemical Biology, 4(5), 311–313.
  44. Zhou, P., & Leydesdorff, L. (2011). Fractional counting of citations in research evaluation: A cross- and interdisciplinary assessment of the Tsinghua University in Beijing. Journal of Informetrics, 5(3), 360–368.
  45. Zitt, M. (2010). Citing-side normalization of journal impact: A robust variant of the Audience Factor. Journal of Informetrics, 4(3), 392–406.
  46. Zitt, M., & Small, H. (2008). Modifying the journal impact factor by fractional citation weighting: The audience factor. Journal of the American Society for Information Science and Technology, 59(11), 1856–1860.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2011

Authors and Affiliations

  • Fiorenzo Franceschini (corresponding author)¹
  • Maurizio Galetto¹
  • Domenico Maisano¹
  • Luca Mastrogiacomo¹

  1. Department of Production Systems and Business Economics (DISPEA), Politecnico di Torino, Torino, Italy
