Scientometrics, Volume 101, Issue 3, pp 1731–1745

Comparing scientific performance among equals

  • C. O. S. Sorzano
  • J. Vargas
  • G. Caffarena-Fernández
  • A. Iriarte

Abstract

Measuring scientific performance is currently a common practice among funding agencies, fellowship committees and hiring institutions. However, as many authors have already recognized, comparing performance across different scientific fields is difficult because each field has its own publication and citation patterns. In this article, we argue that the scientific performance of an individual scientist, laboratory or institution should be analysed within its corresponding context, and we provide objective tools to perform this kind of comparative analysis. The use of the new tools is illustrated with two control groups, against which several performance measurements are referenced: one group comprising the Physics and Chemistry Nobel laureates from 2007 to 2012, the other consisting of a list of outstanding scientists affiliated with two different institutions.
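As a minimal sketch of the kind of control-group comparison described above (the specific indicators and reference groups used in the article are not given in the abstract, so the choice of the h-index and the example numbers below are assumptions for illustration only), one can place a researcher's indicator within the distribution of the same indicator over a chosen control group and report a z-score and a percentile rank.

```python
from statistics import mean, stdev
from bisect import bisect_left

def relative_performance(candidate_h: float, control_h: list[float]) -> dict:
    """Compare a candidate's indicator value against a control group of peers.

    Returns the z-score and percentile rank of the candidate within the
    control group. The h-index is used here only as an illustrative choice;
    any other performance indicator could be substituted.
    """
    mu = mean(control_h)
    sigma = stdev(control_h)
    ranked = sorted(control_h)
    # Fraction of control-group members whose value is below the candidate's.
    percentile = 100.0 * bisect_left(ranked, candidate_h) / len(ranked)
    return {
        "z_score": (candidate_h - mu) / sigma,
        "percentile": percentile,
    }

# Hypothetical example: h-indices of a control group (made-up numbers)
# and a candidate researcher with h = 45.
control_group = [38, 52, 61, 47, 55, 70, 43, 49]
print(relative_performance(45, control_group))
```

Because citation-based indicators are typically heavily skewed, a rank-based measure such as the percentile is often more robust than the z-score for this kind of within-group comparison.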

Keywords

Scientific performance · Relative measurements · Control groups


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2014

Authors and Affiliations

  • C. O. S. Sorzano 1, 2
  • J. Vargas 1
  • G. Caffarena-Fernández 2
  • A. Iriarte 2

  1. National Center of Biotechnology (CSIC), Madrid, Spain
  2. Department of Information and Telecommunication Systems, University CEU San Pablo, Madrid, Spain
