Introduction

In 2005, Hirsch introduced his famous h-index. It combines two important measures of scientometrics, namely the publication count of a researcher (as an indicator of his or her research productivity) and the citation count of those publications (as an indicator of his or her research impact). Hirsch (2005, p. 16,569) defines, “A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤ h citations each.” If a researcher has written 100 articles, for instance, 20 of these having been cited at least 20 times and the other 80 less often, then the researcher’s h-index is 20 (Stock and Stock 2013, p. 382). Following Hirsch, the h-index “gives an estimate of the importance, significance, and broad impact of a scientist’s cumulative research contribution” (Hirsch 2005, p. 16,572). Hirsch (2007) assumed that his h-index could predict researchers’ future achievements. In retrospect, Hirsch had hoped to create an “objective measure of scientific achievement” (Hirsch 2020, p. 4) but has come to believe that it may have achieved the opposite. Indeed, it became a measure of scientific achievement, albeit a highly questionable one.
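To make the verbal definition concrete: given a researcher’s per-publication citation counts, the h-index is the largest rank h at which the h-th most cited publication still has at least h citations. The following short Python sketch is our own illustration, not part of Hirsch’s paper:

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most cited publication first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # there are still 'rank' publications with >= 'rank' citations
        else:
            break
    return h

# Example from the text: 20 papers cited at least 20 times each,
# 80 papers cited fewer than 20 times -> h = 20
print(h_index([25] * 20 + [5] * 80))  # 20
```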

Also in 2005, Hirsch derived the m-index with the researcher’s “research age” in mind. Let the number of years since a researcher’s first publication be tp. The m-index is the quotient of the researcher’s h-index and her or his research age: mp = hp/tp (Hirsch 2005, p. 16,571). An m-value of 2 would mean, for example, that a researcher has reached an h-value of 20 after 10 research years. Meanwhile, the h-index is firmly embedded in the scientific system. It has become one of the “standard indicators” of scientific information services and can be found in many general scientific bibliographic databases. It is used in various contexts and has generated a great deal of research and discussion. Depending on one’s point of view, this indicator is used, or rather misused, in decisions about researchers’ career paths, e.g. as part of academics’ evaluation concerning awards, funding allocations, promotion, and tenure (Ding et al. 2020; Dinis-Oliveira 2019; Haustein and Larivière 2015; Kelly and Jennions 2006). For Jappe (2020, p. 13), one of the arguments for the use of the h-index in evaluation studies is its “robustness with regards to incomplete publication and citation data.” By contrast, the index is well known for its inconsistencies, its unsuitability for comparisons between researchers at different career stages, and its lack of field normalization (Costas and Bordons 2007; Waltman and van Eck 2012). Various lists of the h-index’s advantages and disadvantages already exist (e.g. Rousseau et al. 2018). And it remains questionable what underlying concept the h-index actually represents, as it conflates the two concepts of productivity and impact into one single number (Sugimoto and Larivière 2018).
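Continuing the sketch above, the m-index is a simple quotient of the h-index and the research age; the following lines are again only an illustration with hypothetical inputs:

```python
def m_index(h, first_publication_year, current_year):
    """m = h / t, where t is the number of years since the first publication."""
    research_age = current_year - first_publication_year  # assumed to be >= 1
    return h / research_age

# Example from the text: an h-index of 20 reached after 10 research years -> m = 2.0
print(m_index(20, 2010, 2020))  # 2.0
```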

It is easy to identify numerous variants of the h-index, concerning both the underlying data and the concrete calculation formula. Working with publication counts and their citations, there are data based upon the leading general bibliographic information services Web of Science (WoS), Scopus, and Google Scholar and, additionally, upon ResearchGate (da Silva and Dobranszki 2018); working with publication counts and the number of reads of those publications, there are data based upon Mendeley (Askeridis 2018). Depending on an author’s visibility on an information service (Dorsch 2017), we see different values of the h-index on WoS, Scopus, and Google Scholar (Bar-Ilan 2008), mostly following the inequality h(R)WoS < h(R)Scopus < h(R)Google Scholar for a given researcher R (Dorsch et al. 2018). Bearing in mind that WoS consists of many databases (Science Citation Index Expanded, Social Sciences Citation Index, Arts & Humanities Citation Index, Emerging Sources Citation Index, Book Citation Index, Conference Proceedings Citation Index, etc.) and that libraries do not always provide access to all of them (nor to all years), it is no surprise that we will find different h-indices on WoS depending on the subscribed sources and years (Hu et al. 2020).

After Hirsch published the two initial formulas (i.e. the h-index and the time-adjusted m-index), many scientists felt compelled to produce similar, only slightly modified formulas that do not lead to genuinely new scientific insights (Alonso et al. 2009; Bornmann et al. 2008; Jan and Ahmad 2020), as the values of the variants correlate highly with each other (Bornmann et al. 2011).

How do researchers estimate the importance of the h-index? Do they really know its concrete definition and formula? In a survey for Springer Nature (N = 2734 authors of Springer Nature and BioMed Central), Penny (2016, slide 22) found that 67% of the surveyed scientists use the h-index and a further 22% are aware of it but have not used it before; however, 10% of the respondents do not know what the h-index is. Rousseau and Rousseau (2017) asked members of the International Association of Agricultural Economists and gathered 138 answers. Here, more than two fifths of all respondents did not know what the h-index is (Rousseau and Rousseau 2017, p. 481). Among Taiwanese researchers (n = 417), 28.78% self-reported having heard about the h-index and fully understanding the indicator, whereas 22.06% had never heard about it. The remainder stated that they had heard about it but knew only some aspects of its meaning, or none at all (Chen and Lin 2018). For academics in Ireland (n = 19), “journal impact factor, h-index, and RG scores” are familiar concepts, but “the majority cannot tell how these metrics are calculated or what they represent” (Ma and Ladisch 2019, p. 214). Likewise, the interviewed academics (n = 9) could name “more intricate metrics like h-index or Journal Impact Factor, [but] were barely able to explain correctly how these indicators are calculated” (Lemke et al. 2019, p. 11). Knowledge about scientometric indicators in general “is quite heterogeneous among researchers,” Rousseau and Rousseau (2017, p. 482) state. This is confirmed by further studies on the familiarity with, perception of, and usage of research evaluation metrics in general (Aksnes and Rip 2009; Derrick and Gillespie 2013; Haddow and Hammarfelt 2019; Hammarfelt and Haddow 2018).

In a blog post, Tetzner (2019) speculates on concrete numbers for a “good” h-index for academic positions. Accordingly, an h-index between 3 and 5 is good for a new assistant professor, an index between 8 and 12 for a tenured associate professor, and, finally, an index of more than 15 for a full professor. However, these numbers are gross generalizations without a sound empirical foundation. As our data are from Germany, the question arises: What kinds of tools do German funders, universities, etc. use for research evaluation? Unfortunately, there are only a few publications on this topic. For scientists at German universities, bibliometric indicators (including the h-index and the impact factor) are important or very important for scientific reputation according to more than 55% of the respondents (Neufeld and Johann 2016, p. 136). More than 40% of the respondents also consider those indicators relevant or even highly relevant for hiring decisions on academic positions (Neufeld and Johann 2016, p. 129). In a ranking of reputational aspects of medical scientists, the h-index takes rank 7 (with a mean value of 3.4, 5 being the best) out of 17 evaluation criteria. Top-ranked indicators are the reputation of the journals in which the scientists publish (4.1), the scientists’ citations (4.0), and their publication count (3.7) (Krempkow et al. 2011, p. 37). For the hiring of psychology professors in Germany, the h-index had factual relevance for the tenure decision, with a mean value of 3.64 (on a six-point scale), ranking 12th out of more than 40 criteria for a professorship (Abele-Brehm and Bühner 2016); here, the number of peer-reviewed publications is top-ranked (mean value of 5.11). These few studies highlight that the h-index indeed has relevance for research evaluation in Germany, alongside publication and citation counts.

What remains a research desideratum is an in-depth description of researchers’ personal estimations of the h-index and an analysis of possible differences concerning researchers’ generation, gender, and discipline.

And what about the researchers’ state of knowledge of the h-index? Of course, we may ask, “What’s your knowledge of the h-index? Estimate it on a scale from 1 to 5!” But personal estimations are subjective and do not substitute for a test of knowledge (Kruger and Dunning 1999). Tests of researchers’ knowledge concerning the h-index are—to the best of our knowledge—a research desideratum, too.

In this article, we pursue two goals: on the one hand—similar to Buela-Casal and Zych (2012) on the impact factor—collecting data about researchers’ personal estimations of the importance of the h-index for themselves as well as for their discipline, and on the other hand collecting data on researchers’ concrete knowledge of the h-index and the way it is calculated. In short, these are our research questions:

  • RQ1: How do researchers estimate the importance of the h-index?

  • RQ2: What is the researchers’ knowledge of the h-index?

In order to answer RQ1, we asked researchers about their personal opinions; to answer RQ2, we additionally tested their knowledge.

Methods

Online survey

Online-survey-based questionnaires provide a means of generating quantitative data. Furthermore, they ensure anonymity and thus a high degree of candor in disclosing personal information, preferences, and one’s own knowledge. Therefore, we decided to work with an online survey. As we live and work in Germany, we know the German academic landscape well and thus restricted ourselves to professors working at German universities. We focused on university professors as the sample population (and skipped other academic staff at universities as well as professors at universities of applied sciences), because we wanted to concentrate on persons who (1) have an established career path (in contrast to other academic staff) and (2) are to a high extent oriented towards publishing their research results (in contrast to professors at universities of applied sciences, formerly called Fachhochschulen, i.e. polytechnics, who are primarily oriented towards practice).

The online questionnaire (see Appendix 1), written in German, contained three sections. In Sect. 1, we asked for personal data (gender, age, academic discipline, and university). Section 2 covers the professors’ personal estimations of the importance of publications, citations, their visibility on WoS, Scopus, and Google Scholar, the h-index on the three platforms, the importance of the h-index in their academic discipline, and, finally, their preferences concerning the h-index or the m-index. We chose those three information services as they are the most prominent general scientific bibliographic information services (Linde and Stock 2011, p. 237) and all three present their specific h-index in a clearly visible way. Section 3 includes the knowledge test on the h-index and a question concerning the m-index.

In this article, we report on all aspects related to the h-index (for other aspects, see Kamrani et al. 2020). For the estimations, we used a 5-point Likert scale (from 1: very important via 3: neutral to 5: very unimportant) (Likert 1932). For all estimations, it was also possible to select “prefer not to say.” The test in Sect. 3 was composed of two questions, namely a subjective estimation of one’s own knowledge of the h-index and an objective multiple-choice test of this knowledge (items: one correct answer, four incorrect answers as distractors, and the option “I’m not sure”). These were the five items (the third one counted as correct):

  • h is the quotient of the number of citations of journal articles in a reference period and the number of published journal articles in the same period;

  • h is the quotient of the general number of citations of articles (in a period of three years) and the number of citations of a researcher’s articles (in the same three years);

  • h is the number of a researcher’s articles that were each cited at least h times;

  • h is the number of all citations concerning the h-index, with h² subtracted;

  • h is the quotient of the number of citations of a research publication and the age of this publication.

A selected-response format was chosen for the objective knowledge test since it is recommended as the best choice for measuring knowledge (Haladyna and Rodriguez 2013). For the development of the test items, we predominantly followed the 22 recommendations given by Haladyna and Rodriguez (2013, Section II). A three-option multiple-choice item is considered superior to four- or five-option items for several reasons; however, we decided to use five options because our test contained only one question. The “I’m not sure” option was added because our test is not a typical (classroom) assessment test: we did not want to force an answer, for example through guessing, but rather wanted to know whether participants do not know the correct answer. Creating plausible distractors can be seen as the most difficult part of test development, and validation is a crucial task as well. Here, we tested and validated the question to the best of our knowledge.

As no ethical review board was involved in our research, we had to determine the ethical harmlessness of the research project ourselves and followed suggestions for ethical research using online surveys, concerning consent, risk, privacy, anonymity, confidentiality, and autonomy (Buchanan and Hvizdak 2009). We found the participants’ e-mail addresses in a publicly accessible source (a handbook of all German faculty members; Deutscher Hochschulverband 2020); participation was voluntary, and the participants knew that their answers would be stored. At no time were participants individually identifiable through our data collection or preparation, as we strictly anonymized all questionnaires.

Participants

The addresses of the university professors were randomly extracted from the German Hochschullehrer-Verzeichnis (Deutscher Hochschulverband 2020). Thus, our procedure was non-probability sampling, more precisely convenience sampling in combination with volunteer sampling (Vehovar et al. 2016). Starting with volume 1 of the 2020 edition of the handbook, we randomly picked entries and noted the e-mail addresses. The link to the questionnaire was sent to every single professor via the e-mail addresses found; to host the survey, we used UmfrageOnline. To strengthen the power of the statistical analysis, we predefined a minimum of 1000 usable questionnaires. The power tables provided by Cohen (1988) go up to n = 1000 participants; therefore, we chose this sample size to ensure statistically meaningful results, also for smaller subsets such as single genders, generations, and disciplines (Cohen 1992). We started the mailing in June 2019 and stopped in March 2020, when we had received more than 1000 valid questionnaires. All in all, we contacted 5722 professors by e-mail and arrived at 1081 completed questionnaires, which corresponds to a response rate of 18.9%.

Table 1 compares our sample of German university professors with the population as found in the official statistics (Destatis 2019). There are only minor differences concerning the gender distribution and few divergences concerning most disciplines; however, Table 1 exhibits two large differences: in our sample, we find more (natural) scientists than in the official statistics and fewer scholars from the humanities and the social sciences.

Table 1 Representativity of our sample.

Analysis

In our analysis, we always distinguished between the results for all participants and, additionally, the results by gender (Geraci et al. 2015), generation (Fietkiewicz et al. 2016), and field of knowledge (Hirsch and Buela-Casal 2014). We differentiated two genders (men, women; note that the questionnaire also provided the options “diverse” and “prefer not to say,” which were excluded from further calculations concerning gender); four generations: Generation Y (born after 1980), Generation X (born between 1960 and 1980), Baby Boomers (born after 1946 and before 1960), and Silent Generation (born before 1946); and six academic disciplines: (1) geosciences, environmental sciences, agriculture, and forestry, (2) humanities and social sciences, (3) sciences (including mathematics), (4) medicine, (5) law, and (6) economics. This division of knowledge fields is in line with the faculty structure of many German universities. As some participants answered some questions with “prefer not to say” (excluded from further calculations), the sum of all answers is not always 1081.

As our Likert scale is an ordinal scale, we calculated in each case the median as well as the interquartile range (IQR). For the analysis of significant differences, we applied the Mann–Whitney U test (Mann and Whitney 1947) for the two values of gender and the Kruskal–Wallis H test (Kruskal and Wallis 1952) for more than two values, as with the generations and academic disciplines. The data on the researchers’ knowledge of the h-index are on a nominal scale, so we calculated relative frequencies for three values (1: the researcher knows the h-index in her/his self-estimation and passed the test; 2: the researcher does not know the h-index in her/his self-estimation; 3: the researcher knows the h-index in her/his self-estimation but failed the test) and used the chi-squared test (Pearson 1900) for the analysis of differences between genders, knowledge areas, and generations. We distinguish three levels of statistical significance, namely *: p ≤ 0.05 (significant), **: p ≤ 0.01 (very significant), and ***: p ≤ 0.001 (extremely significant); however, such values always have to be interpreted with caution (Amrhein et al. 2019). All calculations were done with the help of SPSS (see a sketch of the data analysis plan in Appendix 2).
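Although the analyses were run in SPSS, the same tests can be reproduced with standard statistical libraries. The following Python sketch only illustrates the tests named above; the column names (“gender”, “discipline”, “importance_h_index”, “knowledge_group”) and the file name are hypothetical placeholders, not part of our data set.

```python
# Illustrative analysis sketch (the actual study used SPSS); all names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the anonymized questionnaires

# Ordinal Likert answers: median and interquartile range (IQR)
likert = df["importance_h_index"].dropna()
print("median:", likert.median(), "IQR:", likert.quantile(0.75) - likert.quantile(0.25))

# Mann-Whitney U test for differences between the two gender groups
men = df.loc[df["gender"] == "male", "importance_h_index"].dropna()
women = df.loc[df["gender"] == "female", "importance_h_index"].dropna()
u_stat, p_u = stats.mannwhitneyu(men, women, alternative="two-sided")

# Kruskal-Wallis H test for differences across the six academic disciplines
by_discipline = [grp["importance_h_index"].dropna() for _, grp in df.groupby("discipline")]
h_stat, p_h = stats.kruskal(*by_discipline)

# Chi-squared test for the nominal knowledge categories
# (1: knows and passed, 2: does not know, 3: claims to know but failed)
contingency = pd.crosstab(df["discipline"], df["knowledge_group"])
chi2, p_chi2, dof, expected = stats.chi2_contingency(contingency)

print(f"Mann-Whitney U: p={p_u:.4f}, Kruskal-Wallis H: p={p_h:.4f}, chi-squared: p={p_chi2:.4f}")
```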

Researchers’ estimations of the h-index

How do researchers estimate the importance of the h-index for their academic discipline? And how important is the h-index (on WoS, Scopus, and Google Scholar) for themselves? In this section, we answer our research question 1.

Table 2 shows the researchers’ estimations of the importance of the h-index for their discipline. While for all participants the h-index is “important” (2) for their academic field (median 2, IQR 1), there are massive and extremely significant differences between the single disciplines. For the sciences, medicine, and geosciences (including environmental sciences, agriculture, and forestry), the h-index is much more important (median 2, IQR 1) than for economics (median 3, IQR 1), the humanities and social sciences (median 4, IQR 2), and law (median 5, IQR 0). The most votes for “very important” come from medicine (29.1%), the fewest from the humanities and social sciences (1.0%) and from law (0.0%). Conversely, the most very negative estimations (5: “very unimportant”) can be found among lawyers (78.6%) and scholars from the humanities and social sciences (30.4%). There is a clear divide between the sciences (including geosciences, etc., and medicine) on the one hand and the humanities and all social sciences (including law and economics) on the other, with a strong importance of the h-index for the former disciplines and a weak importance for the latter.

Table 2 Researchers’ estimations of the importance of the h-index in their academic discipline

In Tables 3, 4 and 5, we find the results for the researchers’ estimations of the importance of their h-index on WoS (Table 3), Scopus (Table 4), and Google Scholar (Table 5). For all participants, the h-index on WoS is the most important one (median 2, however with a wide dispersion of IQR 3), leaving Scopus and Google Scholar behind it (median 3, IQR 2 for both services). For all three bibliographic information services, the estimations of men and women do not differ statistically. For scientists (including geoscientists, etc.), a high h-index on WoS and Scopus is important (median 2); interestingly, economists join the scientists when it comes to the importance of the h-index on Google Scholar (all three disciplines having a median of 2). For scholars from the humanities and social sciences, the h-indices on all three services are unimportant (median 4); for lawyers, they are even very unimportant (median 5). For researchers in the area of medicine, there is a decisive ranking: most important is their h-index on WoS (median 2, IQR 2, and 41.5% votes for “very important”), followed by Scopus (median 2, IQR 1, but only 18.4% votes for “very important”) and, finally, Google Scholar (median 3, IQR 1, with the mode also equal to 3, “neutral”). For economists, the highest share of (1)-votes (“very important”) is found for Google Scholar (29.9%), in contrast to the fee-based services WoS (19.7%) and Scopus (12.2%).

Table 3 Researchers’ estimations of the importance of their h-index on Web of Science
Table 4 Researchers’ estimations of the importance of their h-index on Scopus
Table 5 Researchers’ estimations of the importance of their h-index on Google Scholar

Similar to the results for the knowledge areas, there is also a clear result concerning the generations: the older a researcher, the less important his or her h-index is to him or her. We see a declining share of (1)-votes for all three information services, and a median moving across the generations from 2 to 3 (WoS), 2 to 4 (Scopus), and 2 to 3 (Google Scholar). The youngest generation has a preference for the h-index on Google Scholar ((1)-votes: 34.9%) over the h-indices on WoS ((1)-votes: 25.9%) and Scopus ((1)-votes: 19.8%).

A very interesting result of our study is the striking difference in the importance estimations of the h-index by discipline (Fig. 1). With three tiny exceptions, the estimations of the general importance and of the importance of the h-indices on WoS, Scopus, and Google Scholar are consistent within each discipline. For the natural sciences, geosciences, etc., and medicine, the h-index is important (median 2); for economics, it is neutral (median 3); for the humanities and social sciences, it is unimportant (median 4); and, finally, for law this index is even very unimportant (median 5).

Fig. 1 Researchers’ estimations of the h-index by discipline (medians). N = 1001 (general importance), N = 961 (WoS), N = 946 (Scopus), N = 966 (Google Scholar); scale: (1) very important, (2) important, (3) neutral, (4) unimportant, (5) very unimportant

We do not want to withhold a side result on the estimation of a modification of the h-index by the time-adjusted m-index. 567 participants made a decision: for 50.8% of them the h-index is the better one, while 49.2% prefer the m-index. More women (61.1%) than men (47.3%) chose the m-index over the original h-index. All academic disciplines except one prefer the m-index; the scientists are the exception (only 42.8% approval for the m-index). For members of Generation Y, the Baby Boomers, and the Silent Generation, the m-index is the preferred index; Generation X mainly (54.3%) prefers the h-index. Within the youngest generation, Generation Y (which is disadvantaged by the h-index), the majority of researchers (65.5%) like the m-index better than the h-index.

Researchers’ state of knowledge on the h-index

Answering our research question 2, the overall result is presented in Fig. 2. It combines three questions, as we initially asked the researchers about their personal estimations of their general familiarity with the h-index (Appendix 1, Q10) and their knowledge of its calculation (Q13). Only participants who confirmed having knowledge of the indicator’s calculation (Q10 and Q13) took the knowledge test (Q14). About three fifths of the professors know the h-index according to their self-estimations and passed the test, one third of all answering participants do not know the h-index according to their self-estimations, and, finally, 7.2% wrongly estimated their knowledge of the h-index, as they believed they knew it but failed the test.

Fig. 2 Researchers’ state of knowledge on the h-index: the basic distribution. N = 1017

In contrast to many of our results concerning the researchers’ estimations of the importance of the h-index, we see differences in the knowledge of the h-index by gender (Table 6). Only 41.6% of the women have substantiated knowledge (men: 64.6%), 50.0% do not know the definition or the formula of the h-index (men: 28.7%), and 8.3% wrongly estimate their knowledge as sufficient (men: 6.9%). However, these differences are not statistically significant.

Table 6 Researchers’ state of knowledge on the h-index

In the sciences (incl. geosciences, etc.) and in medicine, more than 70% of the participants know how to calculate the h-index. Scientists have the highest level of knowledge of the h-index (79.1% passed the knowledge test). Participants from the humanities and social sciences (21.1%) as well as from law (7.1%) exhibit the lowest levels of knowledge concerning the h-index. With a share of 48.3%, economists take a middle position between the two main groups of researchers; however, 13.8% of the economists wrongly overestimate their state of knowledge.

We found a clear result concerning the generations: the older the researcher, the lower the knowledge of the h-index. While 62.9% of Generation X know how the h-index is calculated, only 53.2% of the Baby Boomers possess this knowledge. The differences in the researchers’ states of knowledge of the h-index across knowledge areas and across generations are each extremely significant.

Discussion

Main results

Our main results concern the researchers’ estimations of the h-index and their state of knowledge of this scientometric indicator. We found a clear binary division between the academic knowledge fields: for the sciences (including geosciences, agriculture, etc.) and medicine, the h-index is important for the researchers themselves and for their disciplines, while for the humanities and social sciences, economics, and law the h-index is considerably less important. For the respondents from the sciences and medicine, the h-index on WoS is most important, followed by the h-indices on Google Scholar and Scopus. Surprisingly, for economists Google Scholar’s h-index is very attractive. We did not find significant differences in the estimations of the importance of the h-index between men and women; however, there are differences concerning the generations: the older the participants, the less important they rate the h-index.

Probably, for older professors the h-index does not have the same significance as for their younger colleagues, as they are not so much in need of planning their further career or applying for new research projects. On average, the productivity of researchers aged 60 and above declines compared to that of younger colleagues (Kyvik 1990). And perhaps some of them are simply not aware of the existence of more recent services and new scientometric indicators. Younger researchers are more tolerant of novelty in their work (Packalen and Bhattacharya 2015), and such novelty includes new information services (such as Scopus and Google Scholar) as well as new indicators (such as the h-index). It is known that young researchers rely heavily on search engines like Google (Rowlands et al. 2008), which may partly explain the high values for Google Scholar, especially from Generation Y. Furthermore, the increasing publication pressure and the use of the h-index in decisions about early career researchers’ professional paths also add to the importance of the indicator for those young professors (Farlin and Majewski 2013).

All in all, two fifths of the professors do not know the concrete calculation of the h-index or—which is rather alarming—wrongly believe they know what the h-index is and failed our simple knowledge test. The women fare even worse, as only about two fifths of them really know what the h-index is and how it is defined and calculated, although we should keep in mind that this gender difference is not statistically significant. The older the researcher, the higher the share of participants who do not know the definition and calculation of the h-index. The researchers’ knowledge of the h-index is much lower in the academic disciplines of the humanities and the social sciences.

The h-index in the academic areas

Especially the obvious differences between the academic areas demand further explanation. Participants from the natural sciences and from medicine rate the importance of the h-index as “important” or even “very important,” and they know the details of this indicator to a high extent. The participants from the humanities, the social sciences, economics, and law are quite different: they rate the h-index’s importance as “neutral,” “unimportant,” or even “very unimportant,” and the share of researchers with profound knowledge of the h-index is quite low. Haddow and Hammarfelt (2019) also report a lower use of the h-index within these fields; similar to our study, especially researchers in the field of law (n = 24) did not make use of the h-index. All researchers publish, and all cite, too. There are differences in their publication channels, as scientists publish mostly in journals while researchers from the humanities publish in monographs and sometimes also in journals (Kulczycki et al. 2018), but this may not explain the differences concerning the importance of and the state of knowledge of the h-index. Furthermore, more information on how researchers’ h-index perceptions across the different disciplines relate to the (mis)use of the h-index for research evaluation within those disciplines would add another dimension to this topic.

The admittedly very large general information services WoS and Scopus are quite incomplete compared to researchers’ personal publication lists (Hilbert et al. 2015). There is also pronounced unequal coverage of certain disciplines (Mongeon and Paul-Hus 2016) and of many languages other than English (Vera-Baceta et al. 2019). Perhaps these facts, in particular, prevent representatives of the disadvantaged disciplines and languages (including German—and we asked German professors) from rating the relevance of their h-index on these platforms as high. In that case, however, the observable rejection of the h-index on Google Scholar is surprising, because this information service is by far the most complete (Martin-Martin et al. 2018). Economists are very well informed here, as they—as the only academic representatives—highly value their h-index on Google Scholar. On the other hand, the use of Google Scholar for research evaluation is debated in general. Although its coverage is usually broader than that of more controlled databases and its collection is steadily expanding, there are widely known issues, for example its low accuracy (Halevi et al. 2017). Depending on a researcher’s own opinion on this topic, this could also be a reason for seeing no importance in the h-index provided by Google Scholar.

Another attempt at an explanation may be the different cultures in the different research areas. According to Kagan (2009, p. 4), natural scientists see their main interest in explanation and prediction, while for humanists it is understanding (following Snow 1959 and Dilthey 1895, p. 10). The h-index is considered an indicator that allows explanation and prediction of scientific achievement (Hirsch 2007); it is typical of the culture of the natural sciences. Researchers from the natural sciences and from medicine are accustomed to numbers, while humanists seldom work quantitatively. In the humanities, other indicators such as book reviews and the quality of book publishers are components of research evaluation; however, such aspects are not reflected by the h-index. And if humanities scholars are never asked for their h-index, why should they know or use it?

Following Kagan (2009, p. 5) a second time, humanists exhibit only minimal dependence on outside support, while natural scientists are highly dependent on external sources of financing. The h-index can serve as an argument for the allocation of outside support. So, for natural scientists, the h-index is part of the common fabric and they need it for their academic survival; humanists are not as familiar with numerical indicators, and for them the h-index is not as necessary as it is for their colleagues from the science and medicine faculties. However, this dichotomous classification of research and researchers may be an oversimplification (Kowalski and Mrdjenovich 2016), and there is a trend towards consulting and using such research evaluation indicators in the humanities and social sciences, too. To prepare a satisfying theory of researchers’ behavior concerning the h-index (or scientometric indicators in general)—also in dependence on their background in an academic field—more research is needed.

Limitations, outlook, and recommendations

A clear limitation of the study is the studied population, namely university professors from Germany. Of course, researchers in other countries should be included in further studies. It also seems necessary to broaden the view towards all researchers and all occupational areas, including, for instance, lecturers at polytechnics and researchers in private companies. Another limitation is the consideration of only three h-indices (on WoS, Scopus, and Google Scholar). As there are other databases for the calculation of an h-index (e.g., ResearchGate), the study should be broadened to all variants of the h-index.

Another interesting research question may be: Are there any correlations between the estimations of the importance of the h-index, or the researcher’s knowledge of the h-index, and the researcher’s own h-index? Does a researcher with a high h-index on, for instance, WoS rate the importance of this indicator higher than a researcher with a low h-index? Hirsch (2020) speculates that people with high h-indices are more likely to think that this indicator is important. A more in-depth analysis of researchers’ self-estimation of their h-index knowledge might also consider the Dunning-Kruger effect, according to which certain people can be wrongly confident about their limited knowledge within a domain and lack the ability to realize this (Kruger and Dunning 1999).

As the h-index still has an important impact on the evaluation of scientists and as not all researchers are very knowledgeable about this author-specific research indicator, it seems to be a good idea to strengthen their knowledge in the broader area of “metric-wiseness” (Rousseau et al. 2018; Rousseau and Rousseau 2015). With a stronger focus on educating researchers and research support staff in the application and interpretation of metrics, as well as on reducing the misuse of indicators, Haustein (2018) speaks of better (scholarly) “metrics literacies.” Following Hammarfelt and Haddow (2018), we should further discuss possible effects of indicators within the “metrics culture.” Likewise, this also applies to all knowledgeable researchers as well as to research evaluators, who may or may not be researchers themselves. Here, the focus lies rather on raising awareness of metrics literacies and on fostering fair research evaluation practices that do not incorporate any kind of misuse. This leads directly to a research gap in scientometrics: further research producing concrete data about the level of researchers’ knowledge not only of the h-index, but also of other indicators such as WoS’s impact factor, Google’s i10-index, Scopus’ CiteScore, the source normalized impact per paper (SNIP), etc., also in a comparative perspective, would draw a more comprehensive picture of current indicator knowledge. All of the by now “classical” scientometric indicators are based upon publication and citation measures (Stock 2001). Alternative indicators based upon social media metrics, called “altmetrics,” are available today (Meschede and Siebenlist 2018; Thelwall et al. 2013). How do researchers estimate the importance of these alternative indicators, and do they know their definitions and calculation formulae? First insights on this are given by Lemke et al. (2019), also with regard to researchers’ personal preferences and concerns.

Following Hirsch (2020), the h-index is by no means a valid indicator of research quality; however, it is very common, especially in the sciences and medicine. Probably, it is a convenient indicator for some researchers who want to avoid the laborious and time-consuming effort of reviewing and scrutinizing other researchers’ œuvres. Apart from its convenience and popularity, and seen from an ethical perspective, one should consider what significance a single metric should have and how we—in general—want to further shape the future of research evaluation.