hα: the scientist as chimpanzee or bonobo

  • Loet Leydesdorff
  • Lutz Bornmann
  • Tobias Opthof
Open Access
Article

Abstract

In a recent paper, Hirsch (hα: an index to quantify an individual’s scientific leadership, 2019, https://doi.org/10.1007/s11192-018-2994-1) proposes to attribute the credit for a co-authored paper to the α-author, the author with the highest h-index, regardless of his or her actual contribution, effectively reducing the role of the other co-authors to zero. The indicator hα inherits most of the disadvantages of the h-index from which it is derived, but adds the normative element of reinforcing the Matthew effect in science. Using an example, we show that hα can be extremely unstable. The empirical attribution of credit among co-authors is not captured by abstract models such as h, \(\bar{h}\), or hα.

Keywords

h-index · hα · Co-authorship · Credit · Citation

Introduction

Unlike bonobos, chimpanzees are organized in groups where the alpha male is the winner who “takes all” (de Waal 2000, 2006). In a recent paper (2019), the physicist Jorge E. Hirsch proposes to attribute the credit for a co-authored paper to the α-author, regardless of his or her actual contribution, effectively reducing the role of the other co-authors to zero. The α-author is defined (at p. 2) as “the co-author with the highest h-index.” The h-index itself was defined by Hirsch (2005, p. 16569) as the number of papers of a scientist with at least h citations. Despite its obvious shortcomings, this h-index has been incorporated into the bibliometric databases (Web of Science, Scopus, and Google Scholar).
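Hirsch’s (2005) definition is easy to operationalize: sort a scientist’s papers by citation count and find the largest h such that the h-th paper has at least h citations. A minimal sketch (the function name and example data are ours):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each (Hirsch 2005)."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```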

The h-index combines the number of publications and citations into a single measure that can easily be determined. Many decision makers in science prefer to have such a clear and controllable result. Furthermore, h-values can be attributed to all sets of publications with citations, such as departments, universities, or journals. However, the h-index is mathematically inconsistent (Waltman and Van Eck 2012) and there are no convincing arguments why the numbers of publications and citations should be combined in this way; other counting rules for identifying the h-core among the papers are equally possible (e.g., Egghe 2006; Ye 2017).

In response to the critique that the h-index does not take the number of co-authors of a paper into account, Hirsch (2010) extended his original h-index with \(\bar{h}\) as follows: “A scientist has index \(\bar{h}\) if \(\bar{h}\) of his/her papers belong to his/her \(\bar{h}\)-core. A paper belongs to the \(\bar{h}\) core of a scientist if it has ≥ \(\bar{h}\) citations and in addition belongs to the \(\bar{h}\)-core of each of the co-authors of the paper” (p. 742). The contribution to the \(\bar{h}\) of an individual scientist is thus made dependent on the achievements of his/her co-authors. The newly proposed hα dissolves this dependency and focuses exclusively on the seniority (“leadership”) of the individual scientist. But it also generates some new problems.

The numbers of citations of each of the co-authors at the moment of publication are not retrievable in the bibliometric databases at later moments. As in the case of \(\bar{h}\), Hirsch makes a pragmatic concession for the operationalization by using the number of papers in the current h-core of the scientist as a proxy. The hα can then be obtained as follows: “One simply has to go through the list of papers in the h-core of a scientist and eliminate those papers for which a coauthor has higher h-index than the h-index of the author under consideration” (Hirsch 2019, at p. 2). Division of this value of hα by the h-value provides a ratio rα = (hα/h) between zero and one which can also be expressed as a percentage.
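This recipe can be sketched as follows (a hypothetical illustration; the papers and h-values are invented, and the co-authors’ current h-indices are assumed to be known):

```python
def h_alpha(h_core_papers, own_h):
    """Count the papers in the h-core for which no co-author has a
    higher current h-index than the scientist (Hirsch 2019).
    Each paper is represented by the list of its co-authors' h-indices."""
    return sum(1 for coauthor_hs in h_core_papers
               if all(hc <= own_h for hc in coauthor_hs))

# Hypothetical scientist with h = 3; per h-core paper, the co-authors' h-indices
# (an empty list means a single-authored paper):
papers = [[2, 3], [5], []]    # the second paper has a co-author with h = 5
ha = h_alpha(papers, own_h=3)
r_alpha = ha / 3              # r_alpha = h_alpha / h, between 0 and 1
print(ha, round(r_alpha, 2))  # -> 2 0.67
```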

The consequences of this operationalization are devastating for the value of hα. For example, let us assume that three authors (A, B, and C) share an h-core of 50 papers. The papers are cited 110 times for paper #1, with one citation fewer for each subsequent paper, ending at 61 citations for paper #50. Therefore, both their h-index and their hα-index are 50. Additionally, each of the three authors has a single-authored paper which is cited 49 times. This is the situation at time t. Two months later, however, A’s single-authored paper receives two new citations, bringing her h-index as well as her hα-index to 51. At that very moment, the hα of authors B and C decreases from 50 to zero as a consequence of the citation of a paper which may have no relation to the collaboration among these three authors. One month later, the single-authored paper of B also receives two additional citations: the hα-index of B increases from 0 to 51, but the hα of C remains zero, although the h-indices of the three authors are now 51, 51, and 50, respectively. In short, hα can be extremely unstable.
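The scenario can be reproduced computationally. The following sketch (our own illustration of the example above; the helper names are ours) models each paper as a set of authors with a citation count and recomputes h and hα before and after A’s single-authored paper gains two citations:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations."""
    cites = sorted(citations, reverse=True)
    return max((i for i, c in enumerate(cites, 1) if c >= i), default=0)

def compute(papers):
    """papers: list of (authors, citations) tuples.
    Returns per-author h and h_alpha values."""
    authors = {a for auth, _ in papers for a in auth}
    h = {a: h_index([c for auth, c in papers if a in auth]) for a in authors}
    h_alpha = {}
    for a in authors:
        # the h-core: the author's h most-cited papers
        core = sorted((p for p in papers if a in p[0]),
                      key=lambda p: p[1], reverse=True)[:h[a]]
        # keep only the papers for which no co-author has a higher h-index
        h_alpha[a] = sum(1 for auth, _ in core
                         if all(h[b] <= h[a] for b in auth))
    return h, h_alpha

# 50 co-authored papers cited 110, 109, ..., 61 times,
# plus one single-authored paper per author cited 49 times
papers = [({"A", "B", "C"}, c) for c in range(110, 60, -1)]
papers += [({"A"}, 49), ({"B"}, 49), ({"C"}, 49)]

h, ha = compute(papers)
print(h["B"], ha["B"])        # 50 50 at time t

papers[-3] = ({"A"}, 51)      # two months later: A's solo paper gains 2 citations
h, ha = compute(papers)
print(h["A"], ha["A"])        # 51 51
print(h["B"], ha["B"])        # 50 0: B's h_alpha collapses to zero
```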

Analytical and normative use and assessment

In our opinion, indicators such as h, \(\bar{h}\), or hα (and the many h-index variants proposed hitherto; see Bornmann et al. 2011) can be evaluated (1) analytically and empirically as a methodology in bibliometrics and science studies, and (2) normatively as an indicator providing management information. The h-index itself, for example, has virtually no analytical value, as has been shown extensively in the scientometric literature (e.g., Bornmann 2014), but it is frequently used in research management and by policy-makers. Normatively successful indicators can function performatively in competitive environments (Dahler-Larsen 2014). For example, indicators can be incorporated into bureaucratic processes and then function as institutional incentives (Wouters 2014). Applicants, for example, nowadays routinely report their h-index.

The newly proposed indicator hα inherits most of the disadvantages of the h-index from which it is derived (e.g., Marchant 2009), but adds the normative element of reinforcing the Matthew effect in science, which was defined by Merton (1968) based on the following passage from the Gospel: “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath” (Matthew 25:29, King James version). This tendency will prevail in some sciences more than others, but it can be reinforced by using the hα for the attribution of credit, implying that “the winner takes all.”

However, Hirsch’s models do not describe the attribution of credit in empirical situations. The literature informs us that the attribution of credit differs among the disciplines (e.g., Moed 2000; Price 1970; Wagner 2008). The order of authorship in the byline of the article is accordingly pluriform. In the life sciences, for example, papers are often attributed to the PhD student or postdoc as the first author and to the supervisor as the last one, while in economics the names of co-authors are commonly listed in alphabetical order. A senior with the largest h-value may also be involved, but not necessarily in one of these two (junior or senior) functions; perhaps, for legitimatory purposes or in relation to funding agencies. In other words, the empirical attribution of credit among co-authors is not captured by abstract models such as \(\bar{h}\) or hα.

Evaluation using publication and citation measures should consider the field-specific environments in which the evaluated scientists operate and the objectives of the evaluation: are research groups in biomedicine being compared, or candidates for a full professorship in economics? Bornmann and Marewski (2018) introduced the term “bibliometrics-based heuristics,” which emphasizes the meaning of the environment in which the evaluation takes place. One cannot make performance judgements without information about the international network of the evaluees, the quality of the journals in which their papers were published, the number of single-authored papers compared to the number of co-authored papers, the concrete topics of the scientists’ research, and the most important papers in their careers.

If, for other reasons, a single number is needed that reflects both impact and output dimensions in a comparison, the number of papers which belong to the 10% most frequently cited in the corresponding fields and publication years is probably the best candidate (Leydesdorff et al. 2011; Narin 1987; Tijssen et al. 2002). An age-normalized variant of this indicator (at the individual level) can be obtained by dividing this number by the years since publishing one’s first paper (Bornmann and Marx 2014).
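As a sketch of this indicator (with invented citation counts and thresholds; in practice the field- and year-specific top-10% thresholds come from a bibliometric database):

```python
def top10_count(citations, thresholds):
    """Number of papers among the 10% most frequently cited in their
    field and publication year; `thresholds` holds the field- and
    year-specific top-decile citation threshold for each paper
    (assumed to be obtained from a bibliometric database)."""
    return sum(1 for c, t in zip(citations, thresholds) if c >= t)

# Hypothetical career: four papers, the first published 10 years ago
n_top = top10_count([120, 15, 40, 8], [50, 30, 35, 20])
print(n_top, n_top / 10)      # 2 papers in the top 10%; age-normalized: 0.2
```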

As against these empirically elaborated bibliometric indicators, h, \(\bar{h}\), and hα are just mathematical constructs which are formal and thus devoid of meaning. A mathematical model of how to combine publication and citation analysis without empirical testing and theoretical backing tells us more about the imagination than about the modeled system. Scientists, for example, could be scaled on behaving like chimpanzees or bonobos, and one could design a research project testing the differences in α-behavior among the disciplines. The current proposal of hα, however, claims validity across the disciplines but is both untestable and uninformed; it provides us rather with a perspective. Is this, perhaps, the perspective “which forces a man to become a physicist” (Leydesdorff and van Erkelens 1981; Mitroff 1974)?

References

  1. Bornmann, L. (2014). h-Index research in scientometrics: A summary. Journal of Informetrics, 8(3), 749–750.
  2. Bornmann, L., & Marewski, J. N. (2018). Heuristics as conceptual lens for understanding and studying the usage of bibliometrics in research evaluation. Retrieved July 27, 2018, from https://arxiv.org/abs/1807.05115.
  3. Bornmann, L., & Marx, W. (2014). How to evaluate individual researchers working in the natural and life sciences meaningfully? A proposal of methods based on percentiles of citations. Scientometrics, 98(1), 487–509. https://doi.org/10.1007/s11192-013-1161-y.
  4. Bornmann, L., Mutz, R., Hug, S., & Daniel, H. (2011). A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants. Journal of Informetrics, 5(3), 346–359. https://doi.org/10.1016/j.joi.2011.01.006.
  5. Dahler-Larsen, P. (2014). Constitutive effects of performance indicators: Getting beyond unintended consequences. Public Management Review, 16(7), 969–986.
  6. De Waal, F. B. (2000). Primates—A natural heritage of conflict resolution. Science, 289(5479), 586–590.
  7. De Waal, F. (2006). The animal roots of human morality. New Scientist, 192(2573), 60–61.
  8. Egghe, L. (2006). Theory and practise of the g-index. Scientometrics, 69(1), 131–152.
  9. Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the USA, 102(46), 16569–16572.
  10. Hirsch, J. (2010). An index to quantify an individual’s scientific research output that takes into account the effect of multiple coauthorship. Scientometrics, 85(3), 741–754.
  11. Hirsch, J. (2019). hα: An index to quantify an individual’s scientific leadership. Scientometrics. https://doi.org/10.1007/s11192-018-2994-1.
  12. Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the tables in citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, 62(7), 1370–1381.
  13. Leydesdorff, L., & van Erkelens, H. (1981). Some social-psychological aspects of becoming a physicist. Scientometrics, 3(1), 27–45. https://doi.org/10.1007/bf02021862.
  14. Marchant, T. (2009). An axiomatic characterization of the ranking based on the h-index and some other bibliometric rankings of authors. Scientometrics, 80(2), 325–342.
  15. Merton, R. K. (1968). The Matthew effect in science. Science, 159(3810), 56–63.
  16. Mitroff, I. I. (1974). The subjective side of science. Amsterdam: Elsevier.
  17. Moed, H. F. (2000). Bibliometric indicators reflect publication and management strategies. Scientometrics, 47(2), 323–346.
  18. Narin, F. (1987). Bibliometric techniques in the evaluation of research programs. Science and Public Policy, 14(2), 99–106.
  19. Price, D. J. de Solla (1970). Citation measures of hard science, soft science, technology, and nonscience. In C. E. Nelson & D. K. Pollock (Eds.), Communication among scientists and engineers (pp. 3–22). Lexington, MA: Heath.
  20. Tijssen, R. J. W., Visser, M. S., & Van Leeuwen, T. N. (2002). Benchmarking international scientific excellence: Are highly cited research papers an appropriate frame of reference? Scientometrics, 54(3), 381–397.
  21. Wagner, C. S. (2008). The new invisible college. Washington, DC: Brookings Press.
  22. Waltman, L., & Van Eck, N. J. (2012). The inconsistency of the h-index. Journal of the American Society for Information Science and Technology, 63(2), 406–415.
  23. Wouters, P. (2014). The citation: From culture to infrastructure. In B. Cronin & C. Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional indicators of scholarly impact (pp. 47–66). Cambridge, MA: MIT Press.

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Amsterdam School of Communication Research (ASCoR), University of Amsterdam, Amsterdam, The Netherlands
  2. Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Munich, Germany
  3. Experimental Cardiology Group, Heart Failure Research Center, Academic Medical Center AMC, Amsterdam, The Netherlands