Introduction

Since its introduction by Garfield in the 1960s, first mentioned in 1963 (Garfield, 1972, 1976; Garfield & Sher, 1963), the Journal Impact Factor (JIF) remains one of the most common bibliometric indicators for measuring journal impact (Archambault & Larivière, 2009). Its popularity is unbroken, and not only because its introduction was a revolution for the scientific community (Larivière & Sugimoto, 2019). The simple fact that, despite the development of a multitude of new indicators, none of the alternatives has prevailed testifies to the high acceptance of the JIF when it is used reasonably (Garfield, 2005; Gorraiz et al., 2020). The past has clearly shown that the JIF is not an all-in-one solution for every purpose, which has led to controversial discussions and justified criticism (Alberts, 2013; Glänzel & Moed, 2002; Gorraiz, Gumpenberger, et al., 2012; Moed, 2002; Moed & van Leeuwen, 1995; Todorov & Glänzel, 1988). In response, several manifestos and statements have been published, especially in reaction to its increasingly frequent misuse in research-assessment practices (American Society for Cell Biology [ASCB], 2012).

The first edition of the Journal Citation Reports (JCR), including the Journal Impact Factor for the first time, was launched in 1976 and was based on the fundamental understanding that citations can be used as a valuable criterion for the assessment of scientific journals (Garfield, 1976). The more frequently a journal is cited, the higher the recognition of its importance and prestige as an information channel in its respective research field.

Researchers started to use the JIF to identify adequate publication venues and to optimize their publication strategies. Since its introduction, editors and publishers have relied on the JIF to estimate the reputation, prestige and market value of their journal portfolios. Furthermore, the JIF gave librarians a new support tool to back up decisions about subscriptions, to guarantee the presence of indispensable journals in their collections and to optimize their acquisition strategies. Finally, policy makers gained a quantitative indicator for evaluation purposes, which additionally drove the expansion from its use for scientific information to its application in evaluative contexts (cf. Glänzel, 2006).

The JIF has been further developed and improved over the years (extension of the citation window to 5 years, consideration of journal self-citations, etc.), and nowadays a number of alternative journal citation indicators are available, such as the h-index for journals (Braun et al., 2006), the Eigenfactor metrics (Bergstrom et al., 2008; West et al., 2010), the SJR (González-Pereira et al., 2010; Guerrero-Bote & Moya-Anegón, 2012), the SNIP indicator (Moed, 2010a, 2010b) or the CiteScore (cf. van Noorden, 2016). Nevertheless, the new edition of the JCR is eagerly awaited each year, which shows the continuing importance of this analytical tool for the scholarly community and for research assessment.

Research assessment exercises are often performed for recent time periods. In these cases, impact analyses relying on citations are not very useful, because in many disciplines the citation window is practically too short to retrieve a significant number of citations. Although it is not the appropriate indicator to measure the impact of a publication (Waltman & Traag, 2020), the JIF does provide quick information on the impact and prestige of the journals in which a researcher, group or institution has been able to publish. Publishing in journals with a high JIF is much more difficult (higher rejection rates), and successful publication in these journals provides recognition. The JIF also helps to identify the top journals in each field according to their impact or prestige. This is why the JIF plays such a key role. The competition for journals to be indexed in the Web of Science (WoS) Core Collection, and for authors to publish in those journals, continues unabated (Osterloh & Frey, 2014) and is inextricably linked to the question of how the citation impact and prestige of a journal is measured.

However, since the introduction of the JIF, many analytical tools have been developed that enable a very quick and automatic calculation of the percentiles of the most cited publications for each publication year and each subject category (Lozano et al., 2012). Nowadays, normalized citation counts like the Category Normalized Citation Impact (CNCI) and the number and percentage of Top 10% and Top 1% most cited publications are essential indicators in citation analyses (Adams et al., 2007; Gorraiz & Gumpenberger, 2015; Gorraiz, Reimann, et al., 2012). In addition, in June 2021 Clarivate Analytics presented the 2021 JCR, offering a revamped user interface with new interactive graphics that permit a more complete, dynamic view of the data. The new JCR also included a new indicator, the Journal Citation Indicator (JCI), which is essentially a journal-level CNCI and has been designed to complement the JIF.

Therefore, it can be quite interesting to use these normalized indicators as an alternative to the JIF. Does the JIF reflect the number of excellent publications contained in a journal or in a subject category? Are there other approaches that paint a more precise picture of journal excellence? This is the subject of our study. A previous study focusing on five selected WoS subject categories was presented at the 18th International Conference on Scientometrics and Informetrics (ISSI 2021) and published in the corresponding proceedings (Gorraiz et al., 2021). That study has now been expanded to a total of 20 WoS subject categories for the present paper.

Research questions

The main objective of this article is to present new indicators of scientific excellence that complement the JIF and are designed to provide a robust, size-independent measure of journal performance. To achieve this objective, we have established the following research questions, classified into two blocks. The first block includes questions related to the design of the excellence indicators and their relationship with other bibliometric indicators:

1. Is it possible to design complementary and alternative indicators for scientific journals considering the number of highly cited publications in the JCR categories?

2. Could these new indicators of research excellence, based on a percentile approach, supplement the JIF as an improved assessment of the citation impact of a journal?

3. How do the JIF and other bibliometric indicators correlate with the proposed percentile-based indicators?

Once the indicators have been designed and presented, two further research questions have been defined. The aim of these two questions is to probe the inner workings of the indicators.

4. One issue when calculating journal indicators is that categories are not closed lists. In the JCR, the category lists include only the journals assigned to that category; however, when counting citations, the citations received from all journals are included. Given this, how do multidisciplinary journals (e.g., journals of the WoS category "Multidisciplinary Sciences") affect the indicators and the JCR calculations? We refer to the effect of papers published in multidisciplinary journals (PLOS ONE, Nature, Science, …) that belong to a specific field according to the InCites recategorization.

5. There is an asymmetry in the calculation of the JIF: in the numerator, the citations to all types of documents are summed up, while in the denominator only research articles and reviews are considered. Given this, how sensitive are our indicators to the choice of document types, in particular to using only the so-called 'citable items' (i.e., articles and reviews) instead of all document types? (The standard two-year JIF formula, which makes this asymmetry explicit, is sketched after this list.)
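For illustration, the standard two-year JIF for JCR year y can be written as follows (a well-known formulation, given here only to make the asymmetry explicit; the notation is ours, not part of this study's indicator set):

$$\mathrm{JIF}_{y}=\frac{c_{y}(y-1)+c_{y}(y-2)}{n_{\mathrm{cit}}(y-1)+n_{\mathrm{cit}}(y-2)}$$

where c_y(t) denotes the citations received in year y by items of all document types published in the journal in year t, and n_cit(t) denotes the number of citable items (articles and reviews) published in year t. Numerator and denominator thus range over different sets of document types.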

The article is organized as follows: (1) the methodology section gives specific details of the calculation of the indicators; (2) the results section provides a descriptive overview of the 20 categories analyzed; (3) a more detailed case study of the excellence indicators, applied exclusively to the category "Information Science & Library Science" (ILS), is then presented; (4) the next section summarizes the correlations of the excellence indicators across the 20 categories; finally, (5) separate sections analyze the results concerning the effect of document typologies and of multidisciplinary journals.

In order to provide more information and to ensure the reproducibility and validity of the data, this paper is complemented by the following materials deposited in the Zenodo repository: (1) the work in progress presented at the ISSI 2021 conference (https://doi.org/10.5281/zenodo.5679387); (2) the dataset with all the collected data, distributed in five tab-separated values (TSV) files (https://doi.org/10.5281/zenodo.5676184). Finally, the video of Juan Gorraiz's oral presentation at the ISSI is available at https://www.youtube.com/watch?v=Imwryb_pNhk.

Methodology

All documents assigned to the 20 selected WoS subject categories and published in the years 2009 to 2018 were retrieved from InCites. ESCI documents were excluded, since, to identify the journals, we used the sources of the publications that had a JIF associated with them. Table 1 lists the 20 WoS subject categories considered in this study according to the JCR. The categories were chosen to give as broad a view as possible of the various publishing cultures.

Table 1 List of selected JCR categories, their abbreviations and JCR edition

In this study, we are considering only journals with a JIF, and we are performing the analyses for two different groups:

  • Group 1: only journals assigned to each WoS subject category according to JCR (“JCR Cat.”).

  • Group 2: including all multidisciplinary journals that, according to InCites, have likewise contributed to this category ("JCR Cat. + Multidisciplinary").

For each journal, we list:

  • Number of publications published in this journal in JCR Cat.: p(J).

  • Number of excellent publications published in this journal in JCR Cat.: x(J).

For each category, we list:

  • Total number of publications in JCR Cat.: p(T).

  • Total number of excellent publications in JCR Cat.: x(T).

In this study, the term "excellent publications" or "excellence" is used as a synonym for publications belonging to the Top 10% most cited documents in the same JCR category, publication year and document type.
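To make this operational definition concrete, the following is a minimal sketch, not the authors' actual pipeline, of how publications could be flagged as Top 10% most cited within each (JCR category, publication year, document type) cell; the DataFrame schema and values are assumptions for illustration only:

```python
import pandas as pd

# Toy publication-level data; the column names are illustrative, not the
# actual InCites export schema.
pubs = pd.DataFrame({
    "category":  ["ILS"] * 6,
    "pub_year":  [2015] * 6,
    "doc_type":  ["Article"] * 6,
    "citations": [120, 45, 3, 0, 17, 88],
})

# Within each (category, publication year, document type) cell, compute the
# 90th-percentile citation threshold and flag publications at or above it.
threshold = (pubs.groupby(["category", "pub_year", "doc_type"])["citations"]
                 .transform(lambda c: c.quantile(0.90)))
pubs["excellent"] = pubs["citations"] >= threshold

# Caveat: ties at the threshold can push the flagged share slightly above 10%;
# production systems such as InCites apply their own tie-handling rules.
print(pubs)
```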

Besides the JIF retrieved from the JCR Edition 2020, we have calculated the following indicators for each journal:

1. Journal Percentage of Excellent Publications (JPEP) = x(J)/p(J) = number of excellent publications published in this journal in this WoS category in PY 2009–2018 / total number of publications published in this journal in this WoS category in PY 2009–2018.

2. Journal Contribution to the Excellence of the Category (JCEC) = x(J)/x(T) = number of excellent publications published in this journal in this WoS category in PY 2009–2018 / total number of excellent publications published in this WoS category in PY 2009–2018.

Both indicators are size dependent: the first one (JPEP) can reach very high values for journals with just a few publications in the category, and the second one (JCEC) benefits journals with a large number of publications. Therefore, we have also calculated two further indicators:

3. Journal Brute Excellence (JBE) = 100 × JPEP × JCEC = 100 × x(J)² / (p(J) × x(T)).

4. Journal Normalized Excellence (JNE) = (x(J)/x(T)) / (p(J)/p(T)) = Journal Contribution to the Excellence of the Category (JCEC) / Journal Contribution to the Category.

The first one reflects the total brute excellence force, or brute contribution, of the journal to the category. The second one provides the normalized excellence contribution of the journal to the category. Together they paint a more complete picture of journal excellence. We use the JNE especially for the analysis limited to the journals assigned to the JCR category under study ("JCR Cat."), because the number of publications of these journals is significant, resulting in relevant JNE values. Note that JNE is inspired by the "Attractivity Index" by Schubert and Braun (1996), which is, in turn, defined based on the model of the Activity Index introduced into scientometrics by Frame (1977). Both indicators have been used since the late 1980s to reflect a country's, region's or other unit's relative contribution to research productivity and citation impact in given subject fields (cf. Schubert et al., 1989). JNE here expresses a journal's contribution to the excellence in a given subject. As such, JNE, analogously to the above-mentioned indicators by the Hungarian research group, is a balance measure with neutral value 1: a journal contributes relatively more (less) to the subject's excellence according as JNE is greater (less) than 1, and it does not contribute at all if JNE = 0. Indeed, since x(T) and p(T) are the sums of x(J) and p(J) over the journals covering the category, the average of JNE weighted by the publication shares p(J)/p(T) equals exactly 1. The only conceptual deviation of JNE from activity/attractivity is that the balance is considered not across subjects but across units (i.e., journals). A consequence of this "balance" property is that not all journals can contribute relatively more (less) than expected: some journals assigned to the subject category reflect relatively more excellence than the subject standards, while others contribute to subject excellence to a lesser extent. A computational sketch of all the indicators is given at the end of this section.

When analyzing the effect of the multidisciplinary journals, we use the JBE. High-impact multidisciplinary journals (like Nature or Science) that contribute rather few publications to a category can yield high JNE values even though, according to the JBE, no significant contribution is achieved. Pearson correlations were then computed between the JIF, JBE and JNE for the 20 categories, considering only journals assigned to the subject category ("JCR Cat.") as well as including the multidisciplinary journals ("JCR Cat. + Multidisciplinary"). Furthermore, we have compared the Q1 journals assigned to each category according to the JCR Edition 2020 with the top journals according to the two new indicators, JNE ("JCR Cat.") and JBE ("JCR Cat. + Multidisciplinary").

In order to address research question 4, we analyzed and discussed the contribution to the excellence of the category made by journals not directly assigned to the corresponding category, such as the multidisciplinary journals. For this purpose, we introduced two more indicators:

5. Category Percentage of Multidisciplinarity (CPM) = number of publications added by multidisciplinary journals not directly assigned to this category according to the JCR (e.g., Nature, Science, PLOS ONE) / total number of publications in the category.

6. Category Excellence Degree Multidisciplinarity (CEDM) = number of excellent publications added by journals not directly assigned to this category according to the JCR (e.g., Nature, Science, PLOS ONE) / total number of excellent publications in the category.

Finally, in order to address research question 5, we have performed our analysis not only for the document types articles and reviews but also for all document types.
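Since all six indicators reduce to simple ratios over the counts x(J), p(J), x(T) and p(T), they are straightforward to compute once the excellent publications have been flagged. The following is a minimal sketch with invented journal names and counts (the figures are purely illustrative, not real data):

```python
import pandas as pd

# Hypothetical per-journal counts for one category, PY 2009-2018:
# p = p(J) publications, x = x(J) excellent publications.
journals = pd.DataFrame({
    "journal": ["Journal A", "Journal B", "Multidisciplinary C"],
    "p":       [500, 1200, 80],
    "x":       [90, 110, 12],
    "multidisciplinary": [False, False, True],  # added via InCites recategorization
})

# Category totals for Group 2 ("JCR Cat. + Multidisciplinary"); for Group 1
# ("JCR Cat.") the multidisciplinary rows would simply be dropped first.
p_T = journals["p"].sum()  # p(T)
x_T = journals["x"].sum()  # x(T)

# Journal-level indicators as defined above.
journals["JPEP"] = journals["x"] / journals["p"]              # x(J)/p(J)
journals["JCEC"] = journals["x"] / x_T                        # x(J)/x(T)
journals["JBE"]  = 100 * journals["JPEP"] * journals["JCEC"]  # 100*x(J)^2/(p(J)*x(T))
journals["JNE"]  = journals["JCEC"] / (journals["p"] / p_T)   # balance measure, neutral value 1

# Category-level multidisciplinarity indicators.
multi = journals[journals["multidisciplinary"]]
CPM  = multi["p"].sum() / p_T   # share of publications added by multidisciplinary journals
CEDM = multi["x"].sum() / x_T   # share of excellent publications added by them

print(journals[["journal", "JPEP", "JCEC", "JBE", "JNE"]].round(3))
print(f"CPM = {CPM:.3f}, CEDM = {CEDM:.3f}")
```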

Results

General overview

Table 2 gives an overview of the number of journals, publications and excellent publications for each category considered in this study. The differences between all document types (All types) and articles and reviews (Art./Rev.) are also shown. The table makes clear that the selected categories represent very different communities and publishing cultures: "Physics, Condensed Matter" (PHCM), "Virology" (VIR) or "History" (HIS) stand for small document sets and small scientific communities. On the other hand, we have large categories with a considerable number of journals, such as "Economics" (ECO), "Neurosciences" (NEUR) or "Pharmacology & Pharmacy" (PHAR).

Table 2 Overview of the 20 categories analyzed in this study including number of publications considering JCR categories and multidisciplinary journals as well as articles and reviews versus other document typologies

With the aim of providing an overview of the categories and their characteristics, Table 3 shows the Category Percentage of Multidisciplinarity (CPM) and the Category Excellence Degree Multidisciplinarity (CEDM), as well as the percentage contribution of document types other than articles or reviews to the total excellence in the category. The category percentage and degree of multidisciplinarity vary considerably across the subject categories. The highest values are reported for the categories "Ecology" (ECOL), "Statistics & Probability" (STPR) and "Microbiology" (MICR), followed by "Virology" (VIR), "Neurosciences" (NEUR) and "History" (HIS). More than 13% of the excellent publications are published in multidisciplinary journals in the category "Ecology" (ECOL), and around 10% in the category "History". However, in "Education" (EDU), "Pharmacology & Pharmacy" (PHAR), "Political Science" (POLS), "Business" (BUS), "Economics" (ECO) and "Chemistry, Analytical" (CHEM) the effect of the multidisciplinary journals is almost nonexistent or very low. In "Information Science & Library Science" (ILS) the effect is much greater on the total number of publications (CPM) than on the number of excellent publications (CEDM), both for articles and reviews and for all document types. These results show that the effect of multidisciplinary journals can affect both science and social science categories, although with different intensity.

Table 3 Category Percentage of Multidisciplinarity (CPM) and the Category Excellence Degree Multidisciplinarity (CEDM) for the 20 subject categories according to InCites

Articles and reviews account for most of the excellent publications in all categories. This is even true for the categories related to the social sciences, where big differences between the total number of all document types and the number of articles and reviews can be observed (see Table 2). However, the results compiled in Table 4 show that the consideration of document types other than articles and reviews may be of significance in some categories of the Science Edition (SCIE) as well as of the Social Sciences Edition (SSCI). The lowest percentage of articles and reviews within the excellent publications is observed for "Neurosciences" (NEUR), "Psychology" (PSYC) and "Nursing" (NURS), with around 73%. Document types other than articles and reviews (especially editorial materials and letters) are responsible for almost a fourth of the excellence in these three categories. They are followed by "Pharmacology & Pharmacy" (PHAR), "Information Science & Library Science" (ILS) and "History" (HIS) with around 80%, and by "Virology" (VIR) and "Economics" (ECO) with around 88%. The highest percentage of articles and reviews within the excellent publications is reported for "Chemistry, Physical" (CHPH), "Physics, Condensed Matter" (PHCM) and "Computer Science, Artificial Intelligence" (COMP), with almost 98%. In this study, we focus on the document types articles (Art.) and reviews (Rev.); in the "Final remarks" section, the effect of the document types is further analyzed and discussed.

Table 4 Percentage of excellence contribution from document types other than articles and reviews, considering the presence or absence of multidisciplinary journals

Case study: “Information Science & Library Science” (ILS)

In order to offer a first approximation of the indicators, a first study has been carried out by applying them to the category "Information Science & Library Science" (ILS). Table 5 provides an example of the results obtained for this category and includes all the indicators mentioned in the methodology; the table shows only the First Quartile (Q1) journals according to the JIF. Figure 1 shows the correlation between the JIF and the two new indicators for all journals of the category. The correlation with the JIF is rather moderate for the JBE (r = 0.763, see Table 6) and markedly higher for the normalized JNE (r = 0.906, see Table 6), but some journals change their position when a normalized and size-independent indicator (JNE) is used (e.g., Journal of the Association for Information Science and Technology and Scientometrics). Table 6 shows the Pearson correlations between all five indicators (JPEP, JCEC, JIF, JBE and JNE) for the category "Information Science & Library Science" (ILS), for (a) only articles and reviews (lower left triangle) and (b) all document types (upper right triangle).

Table 5 Excerpt of the table of indicators calculated for "Information Science & Library Science" (ILS); only Q1 journals according to the JIF 2019 (articles and reviews, published between 2009 and 2018)
Fig. 1

Correlations of the JIF with the JBE and JNE in “Information Science & Library Science” (ILS)

Table 6 Pearson correlations between all measures and indicators for all JCR journals in “Information Science & Library Science” (ILS) (lower left triangle: articles and reviews; upper right triangle: all document types; published between 2009 and 2018)
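A correlation table of this lower/upper-triangle form could be assembled as in the following sketch, assuming two per-journal DataFrames (one restricted to articles and reviews, one covering all document types), each carrying the five indicator columns; the function name and schema are illustrative, not part of the study's actual code:

```python
import numpy as np
import pandas as pd

INDICATORS = ["JPEP", "JCEC", "JIF", "JBE", "JNE"]

def triangle_table(art_rev: pd.DataFrame, all_types: pd.DataFrame) -> pd.DataFrame:
    """Lower left triangle: Pearson correlations for articles and reviews;
    upper right triangle: Pearson correlations for all document types."""
    lower = art_rev[INDICATORS].corr(method="pearson").to_numpy()
    upper = all_types[INDICATORS].corr(method="pearson").to_numpy()
    combined = lower.copy()
    iu = np.triu_indices_from(combined, k=1)  # strictly upper-triangular positions
    combined[iu] = upper[iu]                  # overwrite with the all-types values
    return pd.DataFrame(combined, index=INDICATORS, columns=INDICATORS)
```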

Table 7 lists the journals in the category "Information Science & Library Science" (ILS) and shows the changes in ranking position, traditionally based on the JIF, when the excellence indicators JBE and JNE are applied. Portal: Libraries and the Academy and Journal of Health Communication are the journals that improve their rank positions the most under the excellence indicators. Malaysian Journal of Library & Information Science and Information Technology for Development are the ones that decrease the most in the brute and normalized excellence rankings.

Table 7 Ranking changes for journals in "Information Science & Library Science" (ILS) according to the JIF in comparison with the JBE and/or JNE (articles and reviews, published between 2009 and 2018)

Comparisons between the 20 categories analyzed

Figure 2 shows the correlations between the JIF and the two excellence indicators for all 20 categories considered in our study. The results show that the correlation between the JIF and the JNE is higher than that between the JIF and the JBE. This is expected, because the JIF and JNE are both size independent. The highest correlation between the JIF and the JNE is found in the JCR category "Virology" (VIR), followed by "Chemistry, Physical" (CHPH), "Physics, Condensed Matter" (PHCM), "Ecology" (ECOL), "Neurosciences" (NEUR), "Nursing" (NURS) and "Computer Science, Artificial Intelligence" (COMP) (all above 0.9). The lowest is reported for "Statistics & Probability" (STPR), followed by "History" (HIS) and "Pharmacology & Pharmacy" (PHAR) (all under 0.8). The correlations between the JIF and the JBE are only moderate and much lower than those between the JIF and JNE: the lowest is reported for the category "Pharmacology & Pharmacy" (PHAR) (0.378) and the highest for "Environmental Sciences" (ENV) (0.789). These results reveal that the journal excellence content is not completely reflected in the JIF, and this affects both Science Edition (SCIE) and Social Sciences Edition (SSCI) categories, as can be seen very clearly in Fig. 2.

Fig. 2

Correlations between the JIF, JBE and JNE for the 20 subject categories analyzed (articles and reviews, published between 2009 and 2018)

Effect of the multidisciplinary journals

Table 8 lists the journals and categories where multidisciplinary journals make the highest contribution to the indicator JCEC; only journals with a JCEC greater than 0.5 have been included. The results illustrate strong differences in the effects of the multidisciplinary journals across the 20 selected subject categories. Categories related to the life sciences and natural sciences show the strongest influence of such journals, compared with the social sciences, which are less affected or where these journals are not representative. We have to keep in mind that the humanities and most fields in the social sciences have a lesser weight in the big multidisciplinary journals. In more than half of the categories studied, at least one multidisciplinary journal would be Q1 if it were considered part of the category in the JCR. In particular, "Ecology" (ECOL), "Virology" (VIR), "Neurosciences" (NEUR) and "Microbiology" (MICR) are the categories with the highest presence of multidisciplinary journals. In these categories, six multidisciplinary journals are responsible for a very large brute excellence contribution and can be considered "Q1 journals" according to this indicator.

Table 8 Effect of the multidisciplinary journals in the JBE ranking for the journals of the 20 subject categories (articles and reviews, published between 2009 and 2018)

If we consider our indicators of excellence, the Proceedings of the National Academy of Sciences (PNAS) would be one of the most important multidisciplinary journals: PNAS ranks 3rd in "Ecology" (ECOL) and 4th in both "Virology" (VIR) and "Neurosciences" (NEUR). The journal Science also stands out, in 5th position in "Ecology" (ECOL) and "Virology" (VIR). Open access journals or megajournals also stand out as journals contributing excellent papers to the categories; we refer to three journals: Nature Communications, Scientific Reports and PLOS ONE. The latter stands out for being present in almost all categories with a substantial contribution of excellent papers. The only multidisciplinary journal ascending to the first quartile in "Information Science & Library Science" (ILS) is PLOS ONE. As is well known, PLOS ONE has a special section for research assessment and bibliometrics. However, relative to its size, its excellence contribution is not as high as expected.

Effect of the document types

Finally, we analyzed the effect of considering all types of documents instead of only articles and reviews. It is common knowledge that there is an asymmetry in the calculation of the JIF: in the numerator, the citations to all types of documents are summed up, while in the denominator only research articles and reviews are considered. In the "General overview" we have already analyzed the document types in each category and their contribution to the excellence (see Table 4). The results corroborate that in the subject categories related to the social sciences, e.g., "Information Science & Library Science" (ILS) and "History" (HIS), document types other than articles and reviews may play a significant role, accounting for around 18% of the category excellence. Furthermore, the two new excellence indicators have also been calculated both for all document types and for articles and reviews only (see Table 6). The results underline the role of research articles and reviews in scientific journals: any reasonable correlation of the number of documents with the excellence measures is absent, and even slightly negative. Thus, it is plausible that the observed Pearson correlation between the JIF and JNE is distinctly higher for articles and reviews than for all document types (0.906 versus 0.755), while it is just the opposite for the brute excellence contribution (JBE), where the total number of publications in the category plays a role (0.763 versus 0.897).

Figure 3 shows the distribution of the Journal Impact Factor and the two excellence indicators (JBE and JNE) for the Q1 journals of the category "Information Science & Library Science" (ILS) when considering only articles and reviews (columns 2 and 3) and all document types (columns 4 and 5), respectively. The results show that, even if the actual indicator values change, the distribution of the JBE or JNE as such is not much affected by considering all document types instead of only "citable items". This suggests that our excellence indicators are quite robust, or at least not very sensitive, to the types of documents considered. In particular, the correlations are very strong (e.g., 0.986 for the JNE and 0.99 for the JBE) and corroborate the robustness of both indicators with respect to the document types used in their calculation. One possible reason is that the normalizations performed for defining excellent publications are also done by document type (Top 10% most cited publications of the same document type and publication year in the same category).

Fig. 3

Distribution of the JIF, JBE and JNE for all Q1 journals in "Information Science & Library Science" (ILS), for only articles and reviews (columns 2 and 3) and all document types (columns 4 and 5)

Final remarks

Because citations for recent publications are scarce and citation distributions have long half-lives, the identification of the top journals in each discipline is one of the most requested and used tools in academic evaluation exercises focusing on the research performance of the most recent years. Despite the enormous criticism it has received in scientific articles and manifestos, the JIF has established itself as one of the most consolidated instruments for assessing the impact and prestige of the journals in which scientists, research groups, organizations and countries have published. To provide a broader view of each journal's contribution to the excellence in each category or field, we have introduced two new indicators, which ideally complement the JIF. The first one, the Journal Normalized Excellence (JNE), measures the normalized excellence contribution of a journal to its subject category: a journal contributes relatively more (less) to the subject's excellence according as JNE is greater (less) than 1. On the other hand, it is also interesting to know the total contribution of a journal to the category excellence, independently of its size. The Journal Brute Excellence (JBE) reflects this total brute excellence force, or brute contribution, of the journal to the category. Similarly, in those cases in which a journal is present in several categories, the JBE allows a better comparison of its performance in each of them.

The case study of "Information Science & Library Science" (ILS) has shown how the indicators work. The JBE has allowed us to identify which journals contribute the most significant, i.e., excellent, papers to the category. Likewise, the JNE indicator has allowed us to contextualize this brute contribution with the total number of documents of the journal. In this sense, the proposal yields positive results, firstly because the indicators provide different and complementary information, as demonstrated by their different correlations with the Journal Impact Factor. The correlations are similar in almost all the categories analyzed: the correlation is moderate between the JIF and JBE and high between the JIF and JNE, with the exception of singular cases such as "Statistics & Probability" (STPR). Between the two proposed indicators, JBE and JNE, the correlation was moderate to low, with cases of no correlation (e.g., "Education" (EDU) and "Economics" (ECO)). This situation is interesting, as we are dealing with two different and complementary indicators whose behavior varies by category, although less markedly than for the JIF (Dorta-González & Dorta-González, 2013). On the other hand, although percentile-based indicators may have limitations (Bornmann et al., 2013), the correlations of the JIF with the JNE indicate that no information is lost, while the JNE also overcomes certain limitations of the JIF, such as the citation window. Although it is well known that journal impact measures do not work well in the arts and humanities and can lead to false interpretations (Repiso et al., 2019), the results for the two indicators in "History" (HIS) are comparable to those of the other categories.

In this study, covering 20 WoS subject categories, our excellence indicators have shown robustness with respect to considering all types of documents instead of only articles and reviews. They therefore ameliorate the inherent asymmetry in the definition and calculation of Garfield's Impact Factor. Another advantage of our excellence indicators lies in the practical aspect of measuring the visibility of publications. When using the JIF for this purpose, there is always a controversial decision: which JCR edition should be used? There are three possibilities: (a) using the JIF values of the last JCR edition for all publications, independently of their publication year; (b) using the JCR edition corresponding to the publication year of each publication; and (c) using the mean value of the last x years according to the time period under study. None of them is completely satisfactory (Glänzel et al., 2016). Excellence indicators circumvent this problem because they are based on accumulated measures covering the last ten complete publication years and are not restricted to 2 years or a selected JCR edition.

This study also revealed that the effect of the multidisciplinary journals differs according to the category, and that this effect is generally stronger in the so-called 'hard sciences'. One of the possible applications of our study is to caution against the use of JCR categories for the delineation of scientific areas, as has been done in many previous bibliometric studies. The study warns of the serious consequences of this approach, as contributions from multidisciplinary journals are not considered in some categories. For example, restricting a study to only the journals of the category in "Ecology" (ECOL), "Statistics & Probability" (STPR) or "Microbiology" (MICR) would mean missing a large part of the scientific breakthroughs and excellent publications, which are regularly published in multidisciplinary journals, especially PNAS, Science, Nature and PLOS ONE.

In relation to PLOS ONE, our study agrees with previous results, which show the multidisciplinary nature of this journal and its significant impact in certain JCR categories (Repiso et al., 2020).

Another interesting question is the effect of interdisciplinarity. Unfortunately, InCites does not offer the possibility to measure this effect, because the subject classification is made at the journal level, except for the multidisciplinary journals (at the publication level). The recent introduction of the publication-based "Citation Topics" may be an improvement in InCites; this topic will also be part of our future analyses.

In the most recent edition of the JCR, a new indicator has been introduced, the JCI, which is based on the CNCI. This indicator is mainly used for Arts & Humanities and ESCI journals, for which the Journal Impact Factor is not calculated, following Garfield's recommendations. However, this indicator is also a mean value (like the JIF) and does not consider the skewness of the distribution of citations (Bornmann et al., 2013). Therefore, single outliers, i.e., extremely highly cited papers, can dramatically distort its values (Antonoyiannakis, 2019; Dimitrov et al., 2010). The use of the excellence indicators suggested in this study would provide a much better assessment of the impact of journals.