Comparison of bibliometric measures for assessing relative importance of researchers
Quantitative evaluation of citation data to support funding decisions has become widespread. Many measures (indices) exist for this purpose, and while their properties are well studied, there is little comprehensive experimental comparison of the ranking lists produced by different methods. A further problem of existing studies is that the lack of available data on net citations prevents researchers from studying the effect of measuring scientific impact with net citations (all citations minus self-citations). In this paper we use simulated data to study factors that could influence the degree of agreement between the rankings obtained with different indices, with emphasis on comparing the number of net citations per author to other, more established indices. We observe three systematic effects: researchers publishing papers with many co-authors are ranked higher under the h-index or total citations (TC) than under the number of citations per author (TCA); researchers whose output contains a small proportion of highly cited papers, with the rest receiving only few citations, are ranked higher under TCA or TC than under the h-index; and authors with a lower proportion of self-citations are ranked higher under indices based on net citations than under indices based on the total citation count. The results are verified and illustrated by analyzing a large dataset from the field of medical science in Slovenia for the period 1986–2007.
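The indices compared in the abstract can be made concrete with a short sketch. The following is a minimal illustration, not the authors' implementation; in particular, it assumes TCA is computed by fractional counting (each paper's citations divided by its number of authors), which is one common convention.

```python
def h_index(citations):
    """h = largest h such that the author has h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def author_indices(papers):
    """papers: list of (citations, self_citations, n_authors) per paper.

    Returns (TC, TCA, net citations, h-index) as the abstract defines them;
    the 1/n_authors fractional split for TCA is an assumption for illustration.
    """
    tc = sum(c for c, _, _ in papers)                 # total citations
    tca = sum(c / n for c, _, n in papers)            # citations per author
    net = sum(c - s for c, s, _ in papers)            # citations minus self-citations
    h = h_index([c for c, _, _ in papers])
    return tc, tca, net, h

# Example: three papers with (citations, self-citations, co-author count)
tc, tca, net, h = author_indices([(10, 2, 2), (3, 0, 1), (1, 1, 3)])
```

For this toy author, TC = 14 while TCA is only about 8.3, illustrating how heavy co-authorship inflates TC relative to TCA; net citations (11) discount the three self-citations.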
Keywords: Bibliometric evaluation · Number of citations per author · Net citations · h-index
Our sincere thanks go to Dr. Hristovski, who developed a programme to automatically analyze the Science Citation Index database (and later the Web of Science).