Scientometrics, Volume 101, Issue 1, pp 125–158

A review of the characteristics of 108 author-level bibliometric indicators

  • Lorna Wildgaard
  • Jesper W. Schneider
  • Birger Larsen


An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators, as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and the ease of end-user application. We provide a schematic overview of author-level indicators, broadly categorised into indicators of publication count, indicators that qualify output (at the level of the researcher and the journal), indicators of the effect of output (effect as citations, citations normalised to the field or to the researcher's body of work), indicators that rank the individual's work, and indicators of impact over time. Supported by an extensive appendix, we present how the indicators are computed, the complexity of the mathematical calculations and the demands they place on data collection, their advantages and limitations, and references to the surrounding discussion in the bibliometric community. The appendix supporting this study is available online as supplementary material.
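To illustrate the computational-complexity dimension the review examines, the best-known author-level indicator, the h-index (the largest h such that an author has h papers with at least h citations each), can be computed in a few lines. This is a minimal sketch, not code from the paper; the function name and the example citation counts are our own:

```python
def h_index(citations):
    """Return the largest h such that h of the papers have >= h citations each.

    `citations` is a list of per-paper citation counts, in any order.
    """
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the paper at this rank still "supports" an h of `rank`
            h = rank
        else:
            break           # counts only decrease from here, so h cannot grow
    return h

# Hypothetical author with five papers:
print(h_index([10, 8, 5, 4, 3]))  # 4 papers have at least 4 citations each
```

Many of the variants surveyed in the review (g-index, R-index, and so on) start from this same ranked citation list and change only the threshold condition applied at each rank.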


Keywords: Author-level bibliometrics, Research evaluation, Impact factors, Self-assessment, Researcher performance, Indicators, Curriculum vitaes



This work was supported by funding from ACUMEN (Academic Careers Understood through Measurement and Norms), FP7 European Commission 7th Framework “Capacities, Science in Society”, Grant Agreement: 266632. Opinions and suggestions contained in this article are solely those of the authors and do not necessarily reflect those of the ACUMEN collaboration.

Supplementary material

Supplementary material 1: 11192_2014_1423_MOESM1_ESM.docx (DOCX, 187 kb)



Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2014

Authors and Affiliations

  • Lorna Wildgaard (1), Email author
  • Jesper W. Schneider (2)
  • Birger Larsen (3)

  1. Royal School of Library and Information Science, Copenhagen, Denmark
  2. Danish Centre for Studies in Research and Research Policy, Department of Political Science and Government, Aarhus University, Aarhus C, Denmark
  3. Aalborg University Copenhagen, Copenhagen SV, Denmark
