
The HF-rating as a universal complement to the h-index


Abstract

Interdisciplinary comparison has been a constant objective of bibliometrics. The well-known h-index and its alternatives have not achieved this objective. Based on the gh-rating or ghent-rating, a categorization of academic articles into tiers of publications within similar citation ranges, a new ratio is proposed: the high-fame hf-ratio. This ratio is calculated as the adjusted average of the weighted factors of a researcher's best articles; it leads to an associated rating, designated by symbols such as AAA, AA, A, BBB, B, C and D, comparable to financial ratings such as those of Moody's and S&P. Adding this rating to the h-index forms the high-fame HF-rating. The HF-rating provides the average grade of a researcher's best papers, benchmarked within their field. This new rating introduces qualitative elements into the evaluation of research, adds selectivity and mediates between classic h-indices. The universal HF-rating complements the well-known h-index with a relative indication of a researcher's influence in their field, which also allows inter-field comparison. The methodology is illustrated with examples of researchers from different disciplines with different citation distributions.


Change history

  • 30 October 2020

In the original publication of the article, the author name was published incorrectly. The correct name is given in this Correction.

Notes

  1. I will further use the symbols h2 and h3, rather than h(2) and h(3).

  2. This gives a weighted factor of 1.25 for an article in the g-core that also falls within the top 10% (grade B), or 1.50 if it also falls within the top 5% (BB). An article within the top 1% (BBB) has a weighted factor of 2 points; if it is also in the h-core, the factor rises to 2.5; if in the h2-core, it reaches 4. (These factors are collected in the sketch after these notes.)

  3. h-type percentiles can overlap the standard percentiles (Fassin 2018). In practice, the h-type grade overrules the grade of the standard percentiles: if h (grade A) is larger than the 1% threshold, the grade BBB is overruled by A for the whole 1% class; the part of the 2% class (grade BBC) within the g-core is upgraded to BA, and the part within the h-core to A (as in the example of bibliometrics). In that case there are no BBB-grade papers.

  4. The selection of datasets in databases displays many imperfections due to shortcomings in the classification on the basis of keywords or journals. When analyzing an author's work, I therefore start from a separate search on the author, and then check on the basis of the title or the abstract whether the article or the journal should be eliminated. The check can be limited to the h-core of the author's list of publications. All selected articles are then positioned in the field dataset, including those publications of the author that were not selected in the original dataset.

  5. Unlike the f2-index (Fassin 2018), which sums the weighted factors of all the articles in the author's h2-core.

  6. An alternative is an H2F-rating, in which the rating would follow the h2-index.

  7. The highly-cited paper in this approach should not be confused with the definition of HCP used in the Web of Science (top 0.1%, limited to the last 10 years); here it means a paper in the h3-core or in the h-core of the field.
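To keep the factor scheme of Notes 2 and 3 at hand, the following minimal sketch (in Python, not the author's own code) collects only the weighted factors that Note 2 states explicitly; the complete gh-rating scale is defined in Fassin (2018), so grades not mentioned in the note are deliberately omitted.

```python
# Weighted factors explicitly stated in Note 2; the complete gh-rating
# scale is defined in Fassin (2018). Labels here are descriptive only.
WEIGHTED_FACTORS = {
    "B (top 10%, in g-core)": 1.25,
    "BB (top 5%, in g-core)": 1.50,
    "BBB (top 1%)": 2.0,
    "BBB, also in h-core": 2.5,
    "BBB, also in h2-core": 4.0,
}
```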

References

  • Anonymous. (2015). Editorials. Nature, 523, 127–128.

  • Batista, P., Campiteli, M., Kinouchi, O., & Martinez, A. (2006). Is it possible to compare researchers with different scientific interests? Scientometrics, 68(1), 179–189.

  • Berker, Y. (2018). Golden-ratio as a substitute to geometric and harmonic counting to determine multi-author publication credit. Scientometrics, 114(3), 839–857.

  • Bornmann, L. (2013). How to analyze percentile citation impact data meaningfully in bibliometrics: The statistical analysis of distributions, percentile rank classes, and top-cited papers. Journal of the American Society for Information Science and Technology, 64(3), 587–595.

  • Bornmann, L., & Daniel, H.-D. (2009). The state of h index research. Is the h index the ideal way to measure research performance? EMBO Reports, 10(1), 2–6.

  • Bornmann, L., & Leydesdorff, L. (2018). Count highly-cited papers instead of papers with h citations: Use normalized citation counts and compare "like with like"! Scientometrics, 115(2), 1119–1123.

  • Bornmann, L., & Marx, W. (2014). How to evaluate individual researchers working in the natural and life sciences meaningfully? A proposal of methods based on percentiles of citations. Scientometrics, 98(1), 487–509.

  • Bornmann, L., & Mutz, R. (2011). Further steps towards an ideal method of measuring citation performance: The avoidance of citation (ratio) averages in field-normalization. Journal of Informetrics, 5(1), 228–230.

  • Bornmann, L., Mutz, R., & Daniel, H. D. (2008). Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology, 59(5), 830–837.

  • Bornmann, L., Mutz, R., Hug, S. E., & Daniel, H. D. (2011). A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants. Journal of Informetrics, 5(3), 346–359.

  • Bouyssou, D., & Marchant, T. (2011). Ranking scientists and departments in a consistent manner. Journal of the American Society for Information Science and Technology, 62(9), 1761–1769.

  • Costas, R., & Bordons, M. (2007). The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level. Journal of Informetrics, 1, 193–203.

  • Cronin, B. (2001). Hyperauthorship: A postmodern perversion or evidence of a structural shift in scholarly communication practices? Journal of the American Society for Information Science and Technology, 52(7), 558–569.

  • Da Silva, J. A. T., & Dobránszki, J. (2018). Multiple versions of the h-index: Cautionary use for formal academic purposes. Scientometrics, 115(2), 1107–1113.

  • Egghe, L. (2006). Theory and practice of the g-index. Scientometrics, 69(1), 131–152.

  • Fang, H. (2018). Normalized paper credit assignment: A solution for the ethical dilemma induced by multiple important authors. Science and Engineering Ethics, 24(5), 1589–1601.

  • Fassin, Y. (2018). A new qualitative rating system for scientific publications and a fame index for academics. Journal of the Association for Information Science and Technology, 69(11), 1396–1399.

  • Fassin, Y. (2019). The HF-rating as a universal complement to the h-index. In 17th International Conference on Scientometrics & Informetrics, Rome.

  • Fassin, Y., & Rousseau, R. (2019). The h(3)-index of academic journals. Malaysian Journal of Library & Information Science, 24(2), 41–53.

  • Glänzel, W., & Moed, H. F. (2013). Opinion paper: Thoughts and facts on bibliometric indicators. Scientometrics, 96(1), 381–394.

  • Hagen, N. T. (2010). Harmonic publication and citation counting: Sharing authorship credit equitably—Not equally, geometrically or arithmetically. Scientometrics, 84, 785–793.

  • Henriksen, D. (2016). The rise in co-authorship in the social sciences (1980–2013). Scientometrics, 107(2), 455–476.

  • Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429–431.

  • Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences USA, 102, 16569–16572.

  • Kosmulski, M. (2006). A new Hirsch-type index saves time and works equally well as the original h-index. ISSI Newsletter, 2(3), 4–6.

  • Kosmulski, M. (2018). Are you in top 1% (1‰)? Scientometrics, 114(2), 557–565.

  • Lee, S., & Bozeman, B. (2005). The impact of research collaboration on scientific productivity. Social Studies of Science, 35(5), 673–702.

  • Leydesdorff, L., & Bornmann, L. (2011a). Integrated impact indicators compared with impact factors: An alternative research design with policy implications. Journal of the American Society for Information Science and Technology, 62(11), 2133–2146.

  • Leydesdorff, L., & Bornmann, L. (2011b). How fractional counting of citations affects the impact factor: Normalization in terms of differences in citation potentials among fields of science. Journal of the American Society for Information Science and Technology, 62(2), 217–229.

  • Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the tables on citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, 62(7), 1370–1381.

  • Leydesdorff, L., Wouters, P., & Bornmann, L. (2016). Professional and citizen bibliometrics: Complementarities and ambivalences in the development and use of indicators—A state-of-the-art report. Scientometrics, 109(3), 2129–2150.

  • Leydesdorff, L., Bornmann, L., & Adams, J. (2019). The integrated impact indicator revisited (I3*): A non-parametric alternative to the journal impact factor. Scientometrics, 119, 1669.

  • Lindsey, D. (1980). Production and citation measures in the sociology of science: The problem of multiple authorship. Social Studies of Science, 10(2), 145–162.

  • Radicchi, F., Fortunato, S., & Castellano, C. (2008). Universality of citation distributions: Toward an objective measure of scientific impact. Proceedings of the National Academy of Sciences, 105(45), 17268–17272.

  • Rousseau, R. (2016). Citation data as proxy for quality or scientific influence are at best PAC (Probably Approximately Correct). Journal of the Association for Information Science and Technology, 67(12), 3092–3094.

  • Sahoo, S. (2016). Analyzing research performance: Proposition of a new complementary index. Scientometrics, 108(2), 489–504.

  • Schreiber, M. (2009). A case study of the modified Hirsch index hm accounting for multiple coauthors. Journal of the American Society for Information Science and Technology, 60(6), 1274–1282.

  • Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628–638.

  • Sivertsen, G., Rousseau, R., & Zhang, L. (2019). Measuring scientific contributions with modified fractional counting. Journal of Informetrics, 13(2), 679–694.

  • Van Hooydonk, G. (1997). Fractional counting of multiauthored publications: Consequences for the impact of authors. Journal of the American Society for Information Science, 48(10), 944–945.

  • Vinkler, P. (2010). Indicators are the essence of scientometrics and bibliometrics. Scientometrics, 85(3), 861–866.

  • Waltman, L., & Schreiber, M. (2013). On the calculation of percentile-based bibliometric indicators. Journal of the American Society for Information Science and Technology, 64(2), 372–379.

  • Waltman, L., & Van Eck, N. J. (2012). The inconsistency of the h-index. Journal of the American Society for Information Science and Technology, 63(2), 406–415.

  • Wendl, M. C. (2007). H-index: However ranked, citations need context. Nature, 449, 403.

  • Wilsdon, J. (2015). We need a measured approach to metrics. Nature, 523, 129.

  • Wuchty, S., Jones, B. F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036–1039.

  • Xu, J., Ding, Y., Song, M., & Chambers, T. (2016). Author credit-assignment schemas: A comparison and analysis. Journal of the Association for Information Science and Technology, 67(8), 1973–1989.

  • Yan, Z., Wu, Q., & Li, X. (2016). Do Hirsch-type indices behave the same in assessing single publications? An empirical study of 29 bibliometric indicators. Scientometrics, 109(3), 1815.

  • Zhang, C. T. (2009). The e-index, complementing the h-index for excess citations. PLoS ONE, 4(5), e5429.


Author information


Corresponding author

Correspondence to Yves Fassin.


Appendices

Appendix 1: Calculation of the HF-rating

The proposed HF-rating is easy to calculate by following a number of successive steps.

First of all, the dataset has to be defined (field, sub-field). A search on the Web of Science (or another database) gives the total number of articles selected, for example 50,000 articles.

I calculate thresholds for this dataset through percentiles, starting with the standard percentiles 0.1%, 1%, 2%, 5%, 10%, 25% and 50%: in my example of 50,000 articles, I look up the articles ranked at positions 50, 500, 1000, 2500, 5000, 12,500 and 25,000, and note the corresponding citation counts.
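As a minimal sketch (not the author's own code), the standard percentile thresholds can be read off a descending list of the field's citation counts as follows; the citations list is an assumed input downloaded from the database.

```python
# Minimal sketch: citation thresholds at the standard percentiles of a
# field dataset. `citations` is the assumed list of per-article counts.

def percentile_thresholds(citations,
                          fractions=(0.001, 0.01, 0.02, 0.05, 0.10, 0.25, 0.50)):
    """Return {fraction: citations of the article ranked at n * fraction}."""
    ranked = sorted(citations, reverse=True)
    n = len(ranked)
    return {f: ranked[max(int(n * f) - 1, 0)] for f in fractions}

# For a 50,000-article dataset this looks up ranks 50, 500, 1000, 2500,
# 5000, 12,500 and 25,000, as in the example above.
```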

Then follow the variable h-type percentiles. The h-index is retrieved from the WoS citation report. In the same way as the h-index, I calculate the h2- and h3-indices from the top of the citation distribution (mostly a maximum of 50 and 20 publications, respectively).
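The h-type indices can be computed from the same ranked list; here is a minimal sketch using the Kosmulski-type definition, under which the top n papers need at least n to the power k citations (k = 1 gives the classic h-index):

```python
# Minimal sketch: Kosmulski-type h(k) indices. k=1 is the classic
# h-index; k=2 and k=3 give h2 and h3 (at least n**2 / n**3 citations).

def h_type_index(citations, k=1):
    ranked = sorted(citations, reverse=True)
    n = 0
    while n < len(ranked) and ranked[n] >= (n + 1) ** k:
        n += 1
    return n

# h, h2, h3 = (h_type_index(cits, k) for k in (1, 2, 3))
```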

A simple way to determine the g-index is to use the downloaded savedrecs Excel extract of the top articles from the WoS citation report (generally between 1.5 and 3 times the h-index), selected through the 'Marked list' search if the total sample is larger than 10,000 articles. Define a column with the cumulated sum of citations of the top articles (up to about 3 times the h-index value). Define a column with the rank of the top articles, from 1 to 3 × h, and another with the square of those ranks. The last rank whose square remains lower than the cumulated sum of citations defines the g-index.
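The same spreadsheet logic, comparing the cumulated citation sum against the square of the rank, translates directly into a short sketch; the top-list input is assumed to cover roughly 3 × h articles:

```python
# Minimal sketch: the g-index (Egghe 2006) as described above -- the
# largest rank whose square does not exceed the cumulated citations of
# the top articles. `top_citations` should cover roughly 3*h items.

def g_index(top_citations):
    ranked = sorted(top_citations, reverse=True)
    cumulated, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        cumulated += c
        if rank ** 2 <= cumulated:
            g = rank
    return g
```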

I then find the citation counts at these ranks, which give the thresholds ch, ch2, ch3 and cg.

Any article from any author can now be positioned on this continuum of thresholds and be assigned its category and weighted factor.

In a shorter, simplified way, I can use the simplified categories AAA (h2), A (h), B (10%) and C (25%), which need only four citation thresholds, as illustrated below.
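A minimal sketch of this simplified positioning, assuming the four thresholds have already been read off the field dataset:

```python
# Minimal sketch: assign the simplified category from four assumed
# field thresholds (citations at ranks h2, h, top 10% and top 25%).

def simplified_grade(citations, c_h2, c_h, c_10, c_25):
    if citations >= c_h2:
        return "AAA"
    if citations >= c_h:
        return "A"
    if citations >= c_10:
        return "B"
    if citations >= c_25:
        return "C"
    return None  # below the top 25% of the field
```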

The calculation would be even easier if databases such as WoS or Scopus added the g-index and the h2- and h3-indices to their search engines. Harzing's 'Publish or Perish', which uses Google Scholar data, already provides the g-index (https://harzing.com/resources/publish-or-perish).

An additional simplification would be to add the gh-rating thresholds to WoS or Scopus, calculated automatically as a function of the selected dataset.

Appendix 2: Calculation of the adapted fractional HF-rating

Fractional counting of the h-index has not often been applied, as it is not easy to calculate. Indeed, the number of authors and the status of each author are not automatically provided in the publication list extracted from a search on the Web of Science, and retrieving this information manually would take a lot of time. I therefore develop a heuristic that limits the number of articles for which authorship has to be checked. By definition, the fractional h-index is never higher than the classic h-index based on complete counting.

The pure fractional index

I start from a downloaded savedrecs Excel extract for the researcher under study. I use three columns: the number of authors, the researcher's rank in the author list, and a factor reflecting the status of the researcher (2 for a first or corresponding author, n for another author). The adapted citation count is calculated in a further column as the total citations divided by the number of authors.

I rank the publications alphabetically by author and select the papers where the researcher under study is the single author, filling in 1 in columns 1, 2 and 3. I determine a temporary h-index ht1 from the new distribution of adapted citations.

I then select the papers with two authors that have more than 2 × ht1 citations. The temporary index of the enlarged sample is now ht2, somewhat larger than ht1. Then come the papers with three authors and more than 3 × ht2 citations, and so on.

If few papers with more than i × hti citations remain, it may be faster to count the number of authors of the remaining papers whose citations exceed hti multiplied by their number of authors. The final h-index of this selection gives the (pure) fractional h-index; a direct computation is sketched below.
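For comparison, a minimal sketch that computes the pure fractional h-index directly, assuming the number of authors of every paper is already known (the heuristic above exists precisely to avoid looking all of this up by hand):

```python
# Minimal sketch: pure fractional h-index, computed directly from
# (citations, n_authors) pairs. Adapted citations = citations / authors.

def fractional_h_index(papers):
    adapted = sorted((c / n for c, n in papers), reverse=True)
    h = 0
    while h < len(adapted) and adapted[h] >= h + 1:
        h += 1
    return h
```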

The adapted fractional index

For the adapted fractional index, the heuristic is somewhat different. The first part is unchanged: first the single-authored papers. I then select the papers where the researcher is first author (they follow the single-authored papers alphabetically) and fill in n in column 1, 1 in column 2 and 2 in column 3, as the citations are divided by 2 for first authors. I calculate a new ht1 index. I then select the papers where the researcher is corresponding author (generally the last author) with more than ht1 citations; the temporary index of the enlarged sample is now ht2, somewhat larger than ht1. Finally, I check the authors of the remaining papers whose citations exceed ht2 multiplied by the number of authors. The final h-index of this selection gives the adapted fractional h-index, higher than the pure fractional h-index but generally lower than the h-index, and in exceptional cases equal to it (when the researcher is single author of the h-core articles or first or corresponding author of the highly-cited papers).
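Again for comparison, a minimal sketch of the adapted fractional h-index computed directly, under the rule above: citations are divided by 2 when the researcher is first or corresponding author of a multi-authored paper, by 1 for single-authored papers, and by the number of authors otherwise; authorship data are an assumed input.

```python
# Minimal sketch: adapted fractional h-index from assumed
# (citations, n_authors, is_first_or_corresponding) records.

def adapted_fractional_h_index(papers):
    def divisor(n, lead):
        if n == 1:
            return 1              # single-authored paper
        return 2 if lead else n   # first/corresponding vs. other author
    adapted = sorted((c / divisor(n, lead) for c, n, lead in papers),
                     reverse=True)
    h = 0
    while h < len(adapted) and adapted[h] >= h + 1:
        h += 1
    return h
```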

The fhf- and ahf-ratios

For the fhf-ratio and the ahf-ratio, only the 4 (or i) most-cited papers are needed. The same heuristic is simplified, as I only need to investigate the few multi-authored papers with more citations than the 4th most-cited single-authored article of that author, f4: the articles with more than 2 × f4 citations where the researcher under study is first or last author, and the articles with at least f4 citations multiplied by the number of authors where the researcher is not first or last author. In practice, this will often mean the 5 to 10 highly-cited papers with multiple authors. A sketch of this selection follows.
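A minimal sketch of that candidate selection, with f4 (the citations of the 4th most-cited single-authored article) as an assumed input:

```python
# Minimal sketch: select the multi-authored papers that can still matter
# for the fhf/ahf-ratios, given the threshold f4 described above.

def fhf_candidates(papers, f4):
    """papers: (citations, n_authors, is_first_or_last) records."""
    return [
        (c, n, lead) for c, n, lead in papers
        if (lead and c > 2 * f4) or (not lead and c > f4 * n)
    ]
```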

The calculation would be easier if the Web of Science and other databases automatically added the number of authors and the position and status of the researcher under study to the extracted publication list. That feature would probably stimulate the use of the adapted fractional method.


About this article


Cite this article

Fassin, Y. The HF-rating as a universal complement to the h-index. Scientometrics 125, 965–990 (2020). https://doi.org/10.1007/s11192-020-03611-5

