Introduction

Scientific and technological innovation has become a decisive factor in enhancing a country's core competitiveness. Sci-tech journals are an important platform for researchers to exchange academic achievements, and the way these journals are evaluated shapes the value orientation of the research they publish. Since Garfield (Garfield, 1972) proposed in 1972 that "citation analysis can be used as a tool for journal evaluation", mainstream evaluation systems have been based on citation indicators. However, a problem that cannot be ignored is that citation indicators essentially represent a journal's influence rather than its academic innovation, and external emergencies (such as COVID-19) can have a devastating impact on them (Fassin, 2021).

On June 28, 2022, JCR2021 was officially released. This was the first time that publications related to COVID-19 were included in the calculation of journal impact factors, and the impact indicators of journals in infectious diseases, intensive care and public health rose sharply. Zhang et al. (Zhang et al., 2022) found that for the 10 medical journals in which COVID-related papers contributed most to the journal impact factor, such papers accounted for about 50% of the impact factor. This phenomenon poses an unprecedented challenge to the existing evaluation system. If evaluation continues to rely on citation indicators, it will continue to distort the development of academic journals and even academic research, creating artificial obstacles for less popular fields (Waltman & van Eck, 2013) and for disruptive research (Du et al., 2016).

As early as 2012, the San Francisco Declaration on Research Assessment (DORA) urged that journal impact factors not be used as a surrogate measure of the quality of individual research articles (O'Connor, 2022). In February 2020, the Chinese Ministry of Education and the Ministry of Science and Technology issued the notice "Several Opinions on Standardizing the Use of SCI Paper Indicators in Colleges and Universities and Establishing a Correct Evaluation Orientation", clearly stating that innovation and actual contribution should be highlighted when evaluating innovation capability and that SCI paper indicators should not be used as the direct basis for evaluation. On July 8, 2022, the European Union released its research assessment reform plan, proposing that the scientific community reject the inappropriate use of journal- and publication-based indicators in research evaluation and that evaluation rest on qualitative assessment supported by the responsible use of quantitative indicators.

On November 9, 2022, eight Chinese government departments, namely the Ministry of Science and Technology, the Ministry of Education, the Ministry of Industry and Information Technology, the Ministry of Finance, the Ministry of Water Resources, the Ministry of Agriculture and Rural Affairs, the National Health Commission and the Chinese Academy of Sciences, issued the "Work Plan for the Pilot Reform of Scientific and Technological Talent Evaluation". The plan also highlighted the problem of setting new standards after "breaking the four onlys" (the paper-only, title-only, education-only and award-only tendencies) (Pan et al., 2022). It is therefore both important and necessary to find a scientific and reasonable method of journal innovation evaluation that has both theoretical and practical value. To address this issue, this study attempts to construct the Journal Disruption Index (JDI) from the perspective of measuring the disruption of journal articles and to conduct an empirical study of the differences and correlations between impact indicators and disruption indicators, as well as of the evaluation effect of the disruption indicators.

Concept and research status of the disruption index

Concept of the disruption index

Innovation is not a new concept. More than 1000 years ago it was already mentioned repeatedly in ancient Chinese texts: "the 50th Biography in the Book of Wei" speaks of "reform and innovation", and "the Book of Zhou" of "innovation and renewing the old". It was not until 1912, however, that the Austrian economist Schumpeter (Schumpeter, 1992) first set out the basic connotation of innovation in his classic book "The Theory of Economic Development: An Inquiry into Profits, Capital, Credit, Interest and the Business Cycle", arguing that innovation is the "establishment of a new production function", a new combination of production factors and production conditions in the production system. This work created a precedent for innovation theory research.

Since then, in order to better explore the essence of innovation, Henderson and Clark (Henderson & Clark, 1990) divided innovation into four categories from the perspective of knowledge management: incremental, architectural, modular and fundamental innovation. In 1996, Christensen and Bower (Bower & Christensen, 1996) of Harvard Business School were the first to propose the "disruptive innovation theory"; Christensen divided innovation into sustaining and disruptive innovation according to the value networks on which innovation depends and laid out the basic framework of disruptive innovation theory in his book "The Innovator's Dilemma" (Christensen, 1997). Disruptive innovation has since become an important paradigm in innovation research.

Building on this theory, Huang et al. (Huang et al., 2013) proposed the 'Disruption Score', arguing that the emergence of disruptive research destroys the existing citation path and forms a new research paradigm. Funk and Owen-Smith (Funk & Owen-Smith, 2017) proposed an index to measure technological disruption based on the dynamic citation network of patents; it reflects the degree of disruption of a new patent by measuring its impact on the existing citation network. Wu et al. (Wu et al., 2019) published the article 'Large teams develop and small teams disrupt science and technology' in Nature and proposed the Disruption Index (abbreviated D index, as shown in formula 1), which measures disruption by calculating the citation substitution of focus papers in the citation network (as shown in Fig. 1, where an arrow indicates a citation relationship running from the citing document at its tail to the cited reference at its tip). Based on this index, Bornmann et al. (Bornmann et al., 2020a) studied disruptive papers published in Scientometrics, while Horen et al. (Horen et al., 2021), Sullivan et al. (Sullivan et al., 2021), Meyer et al. (Meyer et al., 2021) and Jiang and Liu (Jiang & Liu, 2023a) identified disruptive papers in craniofacial surgery, pediatric surgery, synthetic biology and energy security respectively. This line of research has made the innovation-oriented evaluation of scientific papers feasible and increasingly mature.

$$D = \frac{N_{F} - N_{B}}{N_{F} + N_{B} + N_{R}}$$
(1)
Fig. 1 Classification of citation types

In formula 1, \({N}_{F}\) is the number of subsequent papers that cite only the focus paper (FP), \({N}_{B}\) is the number of subsequent papers that cite both the focus paper and at least one of its references (R) and \({N}_{R}\) is the number of subsequent papers that cite at least one reference (R) of the focus paper but not the focus paper itself.
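
As an illustration of formula 1, the following minimal Python sketch computes the D index for a focus paper, assuming the citation relations have already been collected into in-memory sets; all names and the data layout are illustrative and not part of the original study.

```python
from typing import Dict, Set

def disruption_index(fp_doi: str,
                     references: Set[str],
                     citers_of: Dict[str, Set[str]]) -> float:
    """Compute the D index of a focus paper (formula 1).

    fp_doi     : DOI of the focus paper (FP)
    references : DOIs of the references (R) of the focus paper
    citers_of  : maps a DOI to the set of DOIs of later papers citing it
                 (assumed already restricted to the chosen citation window)
    """
    fp_citers = citers_of.get(fp_doi, set())
    ref_citers = set()
    for ref in references:
        ref_citers |= citers_of.get(ref, set())
    ref_citers.discard(fp_doi)          # the focus paper itself is not a citer here

    n_f = len(fp_citers - ref_citers)   # cite only the focus paper
    n_b = len(fp_citers & ref_citers)   # cite FP and at least one of its references
    n_r = len(ref_citers - fp_citers)   # cite only the references, not FP

    denominator = n_f + n_b + n_r
    return (n_f - n_b) / denominator if denominator else 0.0
```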

Data sources for studies related to the disruption index

With the development of computer technology and the rise of the Internet, online citation databases have greatly pushed the boundaries of scientometrics and strengthened researchers' ability to conduct large-scale studies. Because scientometrics is an important branch of the science of science, this improvement in large-scale research capability has greatly enhanced its practical value and policy significance. In the first two decades of the twenty-first century, however, most research relied on commercial citation databases, which offer fast data updates, sustainable operation and rich functionality.

However, the mainstream international commercial citation databases (such as Web of Science and Scopus) usually do not allow researchers to make unrestricted, in-depth use of their data, nor to redistribute data obtained from them. This creates great difficulties for scholars who wish to verify published results, study them from multiple angles or make baseline comparisons (Freese & Peterson, 2017; Vasilevsky et al., 2017). Shotton (Shotton, 2018) pointed out that the citation links of scientific papers are provided by the papers' authors, who should therefore be able to obtain citation data for free, yet these data are held by major publishers. This reflects the sharp conflict between the intellectual property and data disposal rights of database vendors and the academic community's rights to data, knowledge and autonomy.

Two main types of data are needed in this study: journal index data and citation relationship data. Journal index data can easily be obtained through JCR, but acquiring citation relationship data involves a substantial workload. This study therefore first analyzes the sources of citation relationship data used in related disruption-index studies. We surveyed the citation data sources and data acquisition methods of 29 articles that use the D index and its variants to measure disruptive innovation; the chronological results are shown in Table 1.

Table 1 Research, data sources and acquisition methods related to the disruption index

Of the 29 research papers, 13 used commercial databases and 17 used open data. The main commercial database used in the 13 papers is Web of Science, but it was obtained in different ways: the research teams led by Wu, Li and Bornmann relied mainly on citation data resources provided by Clarivate (obtained through Clarivate, MPDL and Indiana University respectively), whereas Song obtained the data through web crawlers; in the other papers the source of the WOS data is not stated. The 17 papers using open data drew on many types of open datasets and libraries, namely USPTO Open Data, ORCID, APS Open Data, third-party datasets, MAG, PubMed, etc.

Variants of the disruption index

After Wu proposed the D index, many scholars improved it and applied it further in specific scenarios (as shown in Table 2). Table 2 also shows that every indicator variant ultimately depends on the three parameters \({N}_{F}\), \({N}_{B}\) and \({N}_{R}\). Therefore, obtaining these three parameters for a focus paper is sufficient to measure its disruption.

Table 2 Disruption Index and its variants

Concept and calculation method of journal disruption index

The above review of research on the D index shows that no scholar has yet evaluated disruptive innovation at the journal level. This paper therefore identifies the disruption of research papers by measuring the degree to which a research paper (focus paper) substitutes for its references, and then evaluates the disruption of journals by constructing a new journal evaluation index based on academic disruptive innovation rather than academic influence: the Journal Disruption Index (JDI, as shown in formula 2).

$$JDI = \frac{\sum\nolimits_{i=1}^{n} \ln \left( D_{Z_{i}} + 1 \right)}{n}$$
(2)

In formula 2, n is the number of 'article'-type papers in the journal and \({D}_{{Z}_{i}}\) is the \({D}_{Z}\) of the ith paper in the journal, which indicates the absolute disruption of the paper.

First of all, it should be noted that \({D}_{Z}\) is called the absolute disruption index but is not the absolute value of the D index; this name simply follows the expression used by the proposer of the indicator. Secondly, \({D}_{Z}\) is chosen over the D index and its other variants for calculating the disruption of papers in JDI for the following reasons: (1) among the three parameters needed to calculate the disruption of a paper, the disruption itself is reflected only in \({N}_{F}\), so the influence of \({N}_{F}\) on the evaluation result should be strengthened; (2) since papers counted in \({N}_{R}\) may differ greatly from the research topic of the focus paper (e.g., a bibliometric paper that cites a research paper on a natural science topic is cited by a subsequent bibliometric paper), the effect of \({N}_{R}\) on the disruption index should be somewhat smaller than that of the other two types of citations; (3) if the scientific validity of the conclusions of all research papers is tacitly acknowledged, the degree of scientific disruption of any article should be non-negative. Finally, 1 is added inside the logarithm so that the calculation remains defined when \({D}_{Z}\) is 0.
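
A minimal sketch of formula 2 in Python is given below; it assumes the absolute disruption index \({D}_{Z}\) of each 'article'-type paper has already been computed (following Liu et al., 2020), and the function name is illustrative.

```python
import math
from typing import Iterable

def journal_disruption_index(dz_values: Iterable[float]) -> float:
    """Compute JDI (formula 2) from the absolute disruption index D_Z of
    each 'article'-type paper published in a journal."""
    dz_list = list(dz_values)
    if not dz_list:
        raise ValueError("the journal has no 'article'-type papers")
    # ln(D_Z + 1): adding 1 keeps the logarithm defined when D_Z is 0
    return sum(math.log(dz + 1) for dz in dz_list) / len(dz_list)

# Usage with illustrative values:
# journal_disruption_index([0.0, 2.0, 5.0])
```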

As for the evaluation effect of \({D}_{Z}\), Liu et al. (Liu et al., 2020) verified it using journal papers and selected milestone papers of the American Physical Society (APS). As shown in Table 3, \({D}_{Z}\) identifies disruptive innovations better than the original D index. On balance, therefore, \({D}_{Z}\) is the more appropriate choice for the disruptive innovation evaluation of research papers.

Table 3 Average percentile ranking of disruption index for APS papers, PRL and Milestone papers with different algorithms

Empirical research

Research object

Because most scholars consider that the \({\mathrm{D}}_{Z}\) index is not an interdisciplinary index and cannot be used for evaluation across disciplines, this paper studies only journals in a single discipline, virology. In 2016, Web of Science indexed 34 virology journals. Since the measurement of disruptive innovation is aimed mainly at research papers and review literature has no disruptive innovation attribute, this study excludes journals in which research papers account for less than 50% of all published papers, as well as journals whose citation relationships are not covered by COCI (Acta Virologica, Advances in Virus Research, AIDS Research and Human Retroviruses, Annual Review of Virology, Antiviral Therapy, Current HIV Research, Future Virology, Intervirology, Journal of Neurovirology, Retrovirology, Reviews in Medical Virology and Virology). A total of 22 journals were eventually included (as shown in Table 4). In addition, this study selected Faculty Opinions and the "High-quality Journals Classification Catalogue" published by the China Association for Science and Technology as peer review results against which to compare the disruption-based evaluation results.

Table 4 Journals in virology in 2016

Data acquisition and processing

After full consideration based on a survey of related research, this study uses COCI, the main database of OpenCitations, as the source of citation relationship data. OpenCitations is an independent not-for-profit infrastructure organization dedicated to publishing open bibliographic and citation data through the use of semantic technologies. It grew out of the Initiative for Open Citations (I4OC) proposed in 2017.

The purpose of this initiative is to promote structured, separable and open citation data. Structured means that the data representing each publication and each citation instance are expressed in a common machine-readable format and can be accessed programmatically. Separable means that citation instances can be accessed and analyzed without access to the original documents (such as journal articles and books) in which the citations were created. Open means the data can be freely accessed and reused.

To achieve these goals, the COCI database treats each citation as an independent data entity (Heibi et al., 2019). The advantages of this storage model are: (1) descriptive attributes can be assigned to citations; (2) all information about each citation is found in one place, because it is defined as an attribute of the citation itself; (3) citations become easier to describe, differentiate, count and process; (4) the data are easier to analyze with bibliometric methods.

COCI now contains more than 77 million bibliographic resources and more than 1.463 billion citation links (as of January 2023) across all scholarly subject areas. According to an independent analysis published by Martín-Martín et al. (Martín-Martín et al., 2021), the coverage of OpenCitations is approaching that of the two major proprietary citation indexes, Web of Science and Scopus. Compared with the commercial database Web of Science, COCI thus achieves a reasonable balance between accessibility and domain coverage.

OpenCitations provides four ways to access the data in COCI: (1) querying via the SPARQL endpoint; (2) retrieval through the REST API; (3) searching via the OpenCitations search interface; (4) downloading the CSV, N-Triples and Scholix data dumps available on Figshare. Offering multiple access methods meets the needs of different types of users in different scenarios and effectively relieves the network load on the service provider. For large-scale computation, the most suitable option is to download the dumps published on Figshare to the researcher's local environment for in-depth research (as shown in Fig. 2).
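
For small-scale retrieval, the REST API can be used directly. The following sketch shows one possible way to fetch the citation records of a single DOI in Python; the endpoint path and field names follow the COCI API documentation as we understand it and should be checked against the current documentation, and the placeholder DOI must be replaced by a real one.

```python
import requests

COCI_API = "https://opencitations.net/index/coci/api/v1"

def citations_of(doi: str) -> list:
    """Return the COCI citation records in which `doi` is the cited work.

    Each record is expected to carry the same seven fields as the dumps
    (oci, citing, cited, creation, timespan, journal_sc, author_sc).
    """
    resp = requests.get(f"{COCI_API}/citations/{doi}", timeout=60)
    resp.raise_for_status()
    return resp.json()

# Usage (replace the placeholder with the DOI of a focus paper):
# for record in citations_of("10.xxxx/placeholder-doi"):
#     print(record["citing"], record["creation"])
```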

Fig. 2 The overall flow of data acquisition and processing

The data downloaded from Figshare must be processed before researchers can use it efficiently. Take the CSV-format dump stored on Figshare as an example. A CSV file is a plain-text file containing a list of records and is commonly used to exchange data between applications; its value lies in storing, transmitting and sharing data across different environments rather than in direct use. Given the scale of the COCI dataset, converting the data in this format into a local database is more convenient for measuring disruption at all levels.

Because of the file storage settings of Figshare, the COCI dump is split into multiple CSV files labeled by time period. The data import tool of a visual database manager such as Navicat can convert these scattered files into a single database. If the hardware environment does not support visual operation, the same goal can be achieved directly with local database software such as SQLite or with a programming language that can manipulate databases, such as Python (via the sqlite3 module).
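
As an illustration of the latter route, the following sketch loads the time-sliced CSV dump files into a single SQLite table using only the Python standard library; the directory name and table layout are assumptions, and the seven column names follow the COCI fields summarized in Table 5.

```python
import csv
import glob
import sqlite3

# Load the time-sliced COCI CSV dump files into one local SQLite table.
conn = sqlite3.connect("coci.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS citations (
        oci        TEXT,
        citing     TEXT,
        cited      TEXT,
        creation   TEXT,
        timespan   TEXT,
        journal_sc TEXT,
        author_sc  TEXT
    )
""")

for path in glob.glob("coci_dump/*.csv"):      # directory name is illustrative
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        rows = ((r["oci"], r["citing"], r["cited"], r["creation"],
                 r["timespan"], r["journal_sc"], r["author_sc"])
                for r in reader)
        conn.executemany("INSERT INTO citations VALUES (?,?,?,?,?,?,?)", rows)
    conn.commit()

conn.close()
```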

After the basic format conversion, this study slices the whole database into multi-dimensional, multi-level data according to attributes such as citation creation time, journal source and author, and builds appropriate indexes on the data tables (as shown in Fig. 3). This saves time in the subsequent measurement of the disruption of focus papers and further streamlines the measurement process. Considering the data scale and the performance requirements of actual research, this model is better suited to a single researcher or a small team carrying out small-scale disruption measurement. If the researcher's institution can provide high-performance equipment, an in-memory database such as Redis is a better choice for data processing; such equipment also makes interactive research operations possible (Light et al., 2014).
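
For the SQLite route sketched above, the indexing step can be as simple as creating indexes on the columns that the later look-ups use; the statements below are a hedged sketch assuming the `citations` table defined earlier.

```python
import sqlite3

conn = sqlite3.connect("coci.db")
# Indexes on the columns used when slicing the data and when computing
# the three parameters of a focus paper (citing, cited, creation).
conn.executescript("""
    CREATE INDEX IF NOT EXISTS idx_citing   ON citations (citing);
    CREATE INDEX IF NOT EXISTS idx_cited    ON citations (cited);
    CREATE INDEX IF NOT EXISTS idx_creation ON citations (creation);
""")
conn.commit()
conn.close()
```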

Fig. 3 The specific flow of data processing

Once the acquired COCI data has been processed, the local data can be used to measure disruption. The COCI database provides a total of 7 fields for researchers (as shown in Table 5). After selecting a focus paper, researchers can construct 'Journal' and 'Article' tables from the journal collection information obtained from JCR and from the COCI database, and then calculate the three parameters \({N}_{F}\), \({N}_{B}\) and \({N}_{R}\) of the focus paper from its DOI and the 'Citing', 'Cited' and 'Creation' fields of COCI. With these three parameters, the disruption index at the paper level and at the journal level can be obtained from the corresponding formulas. The entity relationship diagram of the local database is shown in Fig. 4.
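
Continuing the SQLite sketch above, the three parameters of a focus paper can be derived with a few queries over the `citations` table. The function below is an illustrative assumption about how this step might be coded (in particular, the handling of the 'creation' field as a date string prefixed by the year), not the exact program used in the study.

```python
import sqlite3
from typing import Tuple

def focus_paper_parameters(conn: sqlite3.Connection, fp_doi: str,
                           start_year: str = "2016",
                           end_year: str = "2020") -> Tuple[int, int, int]:
    """Derive N_F, N_B and N_R for a focus paper from the local 'citations'
    table, restricting citing papers to the chosen citation window via the
    year prefix of the 'creation' field."""

    def citers(doi: str) -> set:
        rows = conn.execute(
            "SELECT citing FROM citations "
            "WHERE cited = ? AND substr(creation, 1, 4) BETWEEN ? AND ?",
            (doi, start_year, end_year))
        return {row[0] for row in rows}

    # references of the focus paper (the papers it cites)
    references = [row[0] for row in conn.execute(
        "SELECT cited FROM citations WHERE citing = ?", (fp_doi,))]

    fp_citers = citers(fp_doi)
    ref_citers = set()
    for ref in references:
        ref_citers |= citers(ref)
    ref_citers.discard(fp_doi)

    n_f = len(fp_citers - ref_citers)
    n_b = len(fp_citers & ref_citers)
    n_r = len(ref_citers - fp_citers)
    return n_f, n_b, n_r
```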

Table 5 The meaning of each field of COCI
Fig. 4 Entity relationship diagram of local database

Comparative indicators

To ensure the accuracy of innovation measurements based on the disruption index, researchers need to let the focus paper accumulate citations for a period of time before measuring. Bornmann and Tekles (Bornmann & Tekles, 2019a) argue that, whatever the discipline, a citation accumulation window of at least 3 years is necessary. According to Liu et al. (Liu et al., 2021b), who studied the stability time windows of the disruption index across disciplines, the stability window for biology, biochemistry and immunology, the disciplines most relevant to virology, is 4 years after publication. Therefore, to ensure the validity of the results, the citation time window of the focus papers in the calculation of JDI is set to 5 years (from 2016 to 2020). In addition, to reflect the relationship between journal influence and academic innovation, several important journal evaluation indicators were selected for comparison. The comparative indicators selected in this paper are:

(1) Cumulative Impact Factor for 5 years (\({\mathrm{CIF}}_{5}\)). \({\mathrm{CIF}}_{5}\) is obtained by dividing the sum of the citation frequencies recorded in the COCI database from 2016 to 2020 by the number of research papers published in the journal. In formula 3, \({a}_{i,t}\) is the number of citations of the ith research paper of the journal in year t and n is the number of research papers published in the journal.

    $${\mathrm{CIF}}_{5}=\frac{\sum_{i=1}^{n}\left({a}_{i,2016}+{a}_{i,2017}+{a}_{i,2018}+{a}_{i,2019}+{a}_{i,2020}\right)}{n}$$
    (3)
(2) Average Percentile in Subject Area (aPSA). In 2015, the InCites database began to give the exact percentile (Percentile in Subject Area, PSA) of the citation frequency of each paper within its subject area, where the pth percentile means that p% of the data items are less than or equal to this value. aPSA is the average PSA of all papers in a journal; the smaller the value, the greater the journal's influence. This indicator turns the exact citation percentile of the citable literature published by a journal into a journal evaluation indicator and can be regarded as an exact percentile indicator (Liu et al., 2021a). In formula 4, \({PSA}_{i}\) is the PSA of the ith research paper of the journal and n is the number of research papers published in the journal.

    $$\mathrm{aPSA}=\frac{\sum_{i}^{n}{PSA}_{i}}{n}$$
    (4)
(3) Journal Index of Percentile Ranking with 6 Classifications (JIPR6). All papers are arranged in ascending order of PSA and divided into 6 percentile segments, namely (0, 1%], (1%, 5%], (5%, 10%], (10%, 25%], (25%, 50%] and (50%, 100%], and the Sp of the papers in each segment is set to 60, 50, 40, 30, 20 and 10 respectively (Bornmann & Mutz, 2011). The average Sp of a journal's papers is its JIPR6. In formula 5, \({Sp}_{i}\) is the Sp of the ith research paper of the journal and n is the number of research papers published in the journal (a brief computational sketch of the three comparative indicators follows this list).

    $$\mathrm{JIPR}6=\frac{\sum_{i}^{n}{Sp}_{i}}{n}$$
    (5)
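
The three comparative indicators can be computed directly from per-paper citation counts and PSA values. The following sketch is an illustrative rendering of formulas 3–5; the function names are ours, per-paper citation totals over 2016–2020 are used for \({\mathrm{CIF}}_{5}\), and the PSA values are taken as given from InCites.

```python
from typing import List

def cif5(citations_2016_2020: List[int]) -> float:
    """CIF5 (formula 3): total 2016-2020 citations of the journal's research
    papers divided by the number of research papers."""
    return sum(citations_2016_2020) / len(citations_2016_2020)

def apsa(psa_values: List[float]) -> float:
    """aPSA (formula 4): mean Percentile in Subject Area of the journal's papers."""
    return sum(psa_values) / len(psa_values)

def sp(psa: float) -> int:
    """Map a paper's PSA (in percent) to its Sp score over the six segments:
    (0,1%] -> 60, (1%,5%] -> 50, (5%,10%] -> 40, (10%,25%] -> 30,
    (25%,50%] -> 20, (50%,100%] -> 10."""
    for upper, score in [(1, 60), (5, 50), (10, 40), (25, 30), (50, 20), (100, 10)]:
        if psa <= upper:
            return score
    return 10

def jipr6(psa_values: List[float]) -> float:
    """JIPR6 (formula 5): mean Sp of the journal's research papers."""
    return sum(sp(p) for p in psa_values) / len(psa_values)
```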

Statistical processing methods

In this study, IBM SPSS Statistics 25 was used for statistical processing. A Kolmogorov–Smirnov normality test conducted in SPSS showed that none of the indicators is normally distributed, so the Spearman correlation test was used for the correlation analysis among the indicators.
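
Equivalently, the same normality check and rank correlation can be reproduced with open tools; the snippet below is a minimal sketch using scipy with purely hypothetical indicator values, not the study's data.

```python
import numpy as np
from scipy.stats import kstest, spearmanr

# Hypothetical indicator values for a handful of journals (illustration only)
jdi = np.array([0.21, 0.35, 0.18, 0.42, 0.27])
cif5 = np.array([12.4, 30.1, 9.8, 41.7, 15.2])

# Kolmogorov-Smirnov test against a normal distribution fitted to the data
print(kstest(jdi, "norm", args=(jdi.mean(), jdi.std())))

# Spearman rank correlation between JDI and CIF5
rho, p_value = spearmanr(jdi, cif5)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```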

Results and analysis

Comparison of journal rankings based on the disruption index and citation indicators

The 22 journals were ranked by JDI, CIF5, JIPR6 and aPSA (in reverse order) and the results are shown in Table 6. Of the 22 journals, 12 (54.5%) rank higher by JDI than by \({\mathrm{CIF}}_{5}\), JIPR6 or aPSA (in reverse order). For 17 journals (77%), the gap between the JDI ranking and the rankings by \({\mathrm{CIF}}_{5}\), JIPR6 or aPSA (in reverse order) is 5 places or more. This shows that there is a substantial difference between journal evaluation results based on traditional citation indicators and those based on the innovativeness of the research papers the journals publish.

Table 6 Comparison of journal ranking based on JDI and citation indicators

Correlation analysis of each index

The results of the Spearman test of the correlations between the journal indicators are shown in Table 7. The innovation indicator JDI is moderately correlated with the citation indicators CIF5, JIPR6 and aPSA, with correlation coefficients of 0.486, 0.471 and -0.448 respectively. The selected citation indicators CIF5, JIPR6 and aPSA are highly correlated with each other, with correlation coefficients close to 1. The Spearman test results for the indicators of individual journal research papers are shown in Table 8: \({D}_{Z}\) is moderately correlated with the citation indicators CI, PR6 and PSA, with correlation coefficients of 0.593, 0.575 and -0.593 respectively.

Table 7 Spearman correlation test between JDI and citation indicators
Table 8 Spearman correlation test between \({D}_{Z}\) and citation indicators of research papers

Comparison of evaluation results based on the disruption index and peer review

Article level

Faculty Opinions has been one of the most authoritative peer review databases in the global biomedical field over the past 20 years. It draws on the joint efforts of more than 8000 leading international experts and serves as a knowledge discovery tool for evaluating published research. Faculty Opinions reviewers are leading experts in the life sciences and medicine who provide comments, opinions and validation of key papers in their fields, and their quality and rigor assure the quality of the recommended papers. In this study, the 22 journals contained a total of 5566 research papers (focus papers), of which 140 were indexed by Faculty Opinions. The average \({D}_{Z}\) of all focus papers and of the Faculty Opinions indexed papers is shown in Table 9; the average \({D}_{Z}\) of the Faculty Opinions indexed papers is much higher than that of all focus papers.

Table 9 Average \({D}_{Z}\) of all focus papers and faculty opinions indexed papers

Journal level

Since 2019, in order to build a sci-tech journal system befitting a leading science and technology nation and to promote the high-quality development of Chinese sci-tech journals, the China Association for Science and Technology has guided and supported its national societies in publishing graded catalogues of high-quality domestic and foreign sci-tech journals in various disciplines according to the principles of "peer review, value orientation and homogeneity", providing a reference for researchers and research institutions in publishing papers and in academic evaluation. This study selected 13 Chinese SCI journals in the fields of mechanical engineering and environmental science for verification (the relevant indicators are shown in Tables 10 and 11). The average JDI of T1 journals in each field is higher than the average JDI of T2 and T3 journals. The JDI of some journals with lower ratings is higher than that of journals with higher ratings, which may be due to differences between the journals' sub-disciplines and to differences in the evaluation habits of the experts of the societies that grade the journals.

Table 10 Comparison of relevant indicators of journals in the field of mechanical engineering
Table 11 Comparison of relevant indicators of journals in the field of environmental science

Conclusion

JDI is moderately correlated with traditional journal citation indicators

In this paper, 22 virology journals were selected for measurement and the results show that JDI is correlated to a certain degree with the selected citation indicators. We believe the two evaluation systems are correlated because the threshold-breaking advances brought about by disruptive innovations drive the rapid development of related research, generating a large number of subsequent citations that acknowledge the disruptive work, which appears at the level of citation indicators as an increase in citation frequency.

JDI is significantly different from traditional journal citation indicators

This journal evaluation method assesses the innovativeness of journals at the level of knowledge structure, based on measuring the disruption of their papers. The journal evaluation results based on JDI differ considerably from those based on impact indicators. The method brings new research ideas to the field of journal evaluation and may help relevant institutions and scholars break free of the constraints of the impact factor, promoting the sound development of scientific research, sci-tech journals and the academic ecosystem.

JDI reflects the innovation level of journals to a certain extent

By comparing with the peer review results of Faculty Opinions and with the high-quality journal classification catalogue published by the China Association for Science and Technology, this study found that \({D}_{Z}\) and JDI reflect the differences in innovativeness among the measured samples at the paper level and the journal level respectively. Therefore, JDI can be used as a reference index for evaluating the innovation level of journals.

Limitations and prospects

Lack of the gold standard for comparison

Bornmann and Leydesdorff pointed out that comparing indicators with peer evaluations is widely recognized as a way of validating indicators (Bornmann & Leydesdorff, 2013). The academic community generally regards peer review as the best way to achieve innovation-oriented evaluation, because it compensates for the inability of bibliometric evaluation to assess journal content. However, existing studies show that reviewers' cognitive bias (Truex et al., 2009) and emotional bias (Serenko & Bontis, 2011) can significantly affect evaluation results. In addition, if too few experts take part in peer review the results may be unreliable (Serenko & Bontis, 2011), while too many experts are difficult to organize (Liu & Guo, 2020). A true gold standard against which to validate the evaluation effect of JDI is therefore still lacking.

The application details of JDI need to be further explored

With the continuous development of interdisciplinary research, journals and the research they publish are becoming increasingly interdisciplinary. However, since the D index and its improvements (\({D}_{Z}\)) cannot be used for evaluation across disciplines, the JDI built on them is likewise unsuitable for comparing journals from different disciplines. We will explore the application of JDI in interdisciplinary fields in follow-up research. In addition, Ruan et al. (Ruan et al., 2021) pointed out that the more references a focus paper has, the harder it is for the focus paper to replace all of them and be cited by subsequent research papers, while too few references are likely to bias the calculation of the D index. Bornmann et al. (Bornmann et al., 2020b) argued that the D index should be computed only for research papers with at least 10 citations and 10 references. So far, however, no empirical research has verified the effect of the number of references on the D index. Solving this problem is crucial to improving the usability of JDI.

There are still problems to be improved in data acquisition and processing

In this study, open citation data were used to calculate JDI, avoiding the need to obtain citation data directly from commercial databases, effectively reducing the difficulty of acquiring large-scale citation data (Narock & Wimmer, 2017) and ensuring the repeatability of the research. The index calculation method based on citation relationships follows the automatic D index calculation program based on Web of Science established by Leydesdorff and Bornmann (Leydesdorff & Bornmann, 2021). As a consequence, this study also suffers from missing references and from citations without DOIs, which affects the accuracy of the innovation-based evaluation results to a certain extent. In future research, more accurate measurements could be obtained by combining multiple data sources.