Recently, a new indicator measuring disruption has been introduced (Funk and Owen-Smith 2017; Wu et al. 2019). The indicator is connected to the ‘scientific revolutions’ concept of Kuhn (1962) and tries to identify exceptional research that is characterized by an overthrow of established thinking (Casadevall and Fang 2016). It is based on combining the cited references of a focal paper (FP) with the cited references of the papers citing the FP. The disruptiveness of an FP is determined by the extent to which the cited references of the papers citing the FP overlap with the cited references of the FP. Disruptiveness is indicated if many citing papers do not refer to the FP’s cited references: in this case, the FP appears to present something new that is independent of its own intellectual context.

Disruptiveness is described by Wu et al. (2019) as a weighted index \({DI}_{1}\) (see Fig. 1) calculated for an FP: the difference between the number of papers citing the FP without citing any of its cited references (\({N}_{i}\)) and the number of papers citing both the FP and at least one of its cited references (\({N}_{j}^{1}\)) is divided by the sum of \({N}_{i}\), \({N}_{j}^{1}\), and \({N}_{k}\), where \({N}_{k}\) is the number of papers citing at least one of the FP’s cited references without citing the FP itself. According to Wu et al. (2019), high positive values reflect disruptive research; small negative values indicate developmental research continuing previous research directions.
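Written out, this verbal definition corresponds to the following expression (the formula depicted in Fig. 1):

\[
{DI}_{1}=\frac{{N}_{i}-{N}_{j}^{1}}{{N}_{i}+{N}_{j}^{1}+{N}_{k}}
\]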

Fig. 1 Different roles of papers in citation networks (see Wu and Wu 2019) for calculating the disruption indices \({DI}_{1}\) (Wu et al. 2019) and \({DI}_{5}\) (Bornmann et al. 2019)

Bornmann et al. (2019) discussed and empirically investigated \({DI}_{1}\) and some variants. They followed the idea of considering how strongly the cited references of the FP’s citing papers are coupled with the cited references of the FP. This idea is reflected in the formula in Fig. 1: the subscript of \({DI}_{l}\) corresponds to the threshold for the number of bibliographic coupling links. With \(l=1\), there is no restriction on the number of bibliographic coupling links, which corresponds to the index originally proposed by Wu et al. (2019); \(l=5\) means that a threshold of five bibliographic coupling links is used. Bornmann et al. (2019) compared (among other indicators measuring disruption) the variants \({DI}_{1}\) and \({DI}_{5}\). The empirical results show that \({DI}_{5}\) might be better able than \({DI}_{1}\) (and also better than the other indicators) to identify disruptive research. Thus, we took this indicator as a starting point for a modified variant which makes it possible to focus on disruptive effects in a certain field.
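The following is a minimal sketch in Python of how \({DI}_{l}\) can be computed, assuming citation data is available in simple dictionaries (all function and variable names are illustrative, not taken from the cited papers). It follows the threshold logic described above: citing papers with at least \(l\) bibliographic coupling links count toward \({N}_{j}^{l}\); we assume the remaining citing papers fall into \({N}_{i}\), which reduces to the original definition for \(l=1\):

```python
def disruption_index(fp, ref_set, refs_of, citers_of, all_papers, l=1):
    """Compute DI_l for a focal paper (FP).

    fp        : identifier of the focal paper
    ref_set   : set of cited references against which coupling is measured
                (for the original DI_l this is the FP's own cited references)
    refs_of   : dict mapping each paper to the set of papers it cites
    citers_of : dict mapping each paper to the set of papers citing it
    all_papers: iterable of all candidate papers (needed for N_k)
    l         : threshold for the number of bibliographic coupling links
    """
    citers = citers_of.get(fp, set())
    n_i = n_j = 0
    for p in citers:
        # number of coupling links between the citing paper and ref_set
        if len(refs_of.get(p, set()) & ref_set) >= l:
            n_j += 1  # cites the FP and at least l of the references
        else:
            n_i += 1  # cites the FP but fewer than l of the references
    # N_k: papers citing at least one reference in ref_set but not the FP
    n_k = sum(
        1 for p in all_papers
        if p != fp and p not in citers and refs_of.get(p, set()) & ref_set
    )
    denominator = n_i + n_j + n_k
    return (n_i - n_j) / denominator if denominator else 0.0
```

For the original index, `ref_set` is simply `refs_of[fp]`; passing a different reference set allows the field-specific variant introduced below to reuse the same function.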

We calculated this variant for all papers published in Scientometrics and compared the results with \({DI}_{1}\). Bornmann and Tekles (2019) calculated \({DI}_{1}\) for Scientometrics papers and showed a “concentration of disruption values around zero” (p. 332). Most of the papers identified by Bornmann and Tekles (2019) with the highest \({DI}_{1}\) values could not be characterized as landmark papers in the scientometrics field upon manual inspection. To focus \({DI}_{5}\) on the scientometrics field in this study, we defined \({N}_{j}^{l}\) and \({N}_{k}\) based on the papers cited by any paper published in Scientometrics in the same year as the FP, instead of the FP’s cited references alone. The corresponding definitions of \({N}_{j}^{l}\) and \({N}_{k}\) are as follows (a code sketch follows the list):

  • \({N}_{j}^{l}\): Number of papers citing the FP and at least \(l\) of the cited references of all Scientometrics papers published in the same year as the FP

  • \({N}_{k}\): Number of papers citing at least one of the cited references of all Scientometrics papers published in the same year as the FP, but not the FP itself
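A sketch of this field-specific variant under the same assumptions as above: the reference set against which \({N}_{j}^{l}\) and \({N}_{k}\) are defined is built from the cited references of all Scientometrics papers published in the FP’s year and then passed to the generic function sketched earlier (the helper name and data structures are again illustrative):

```python
def field_specific_di(fp, year, field_papers_by_year, refs_of,
                      citers_of, all_papers, l=5):
    """DI_l with the reference set drawn from a whole field-year.

    field_papers_by_year: dict mapping a year to the set of papers
    published in the field's journal (here: Scientometrics) in that year
    """
    # Union of the cited references of all field papers from the FP's year
    field_refs = set()
    for p in field_papers_by_year.get(year, set()):
        field_refs |= refs_of.get(p, set())
    # Reuse the generic DI_l computation with the field-wide reference set
    return disruption_index(fp, field_refs, refs_of, citers_of,
                            all_papers, l=l)
```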

Table 1 shows the twenty Scientometrics papers with the highest \({DI}_{5}\) values (to base the values on reliable data, we considered only papers with at least 10 cited references). We selected all publications in Scientometrics available in Scopus for the years 1996–2015 (2145 papers with at least 10 cited references). As the values reveal, they are still close to zero, which is similar to the results based on \({DI}_{1}\). However, the comparison of both results (\({DI}_{1}\) and \({DI}_{5}\)) points to different foci: as Bornmann and Tekles (2019) reveal, \({DI}_{1}\) mostly brings out scientometrics papers with a local focus (e.g., issues rooted in countries such as Iberian-American countries, Korea, or Iceland). \({DI}_{5}\), in contrast, favors basic scientometric papers dealing with bibliometric performance measures or the state of the art in scientometrics. For example, Narin and Hamilton (1996) discuss “three different types of bibliometrics—literature bibliometrics, patent bibliometrics, and linkage bibliometric” (p. 293).

Table 1 Scientometrics papers with the highest \({DI}_{5}\) values (calculated based on the field-specific variant of \({DI}_{5}\))

van Raan (1996) “gives an overview of the potentials and limitations of bibliometric methods for the assessment of strengths and weaknesses in research performance, and for monitoring scientific developments” (p. 397). Martin (1996) is a landmark paper in scientometrics discussing how government-funded basic research can be evaluated using different indicators. The paper by MacRoberts and MacRoberts (1996) is one of the core papers in the critical discussion of evaluative citation analyses. Ho (2004), the paper with the largest \({DI}_{5}\), demonstrated a problem of citation analysis by focusing on the reception of an equation published in one paper. The results show that “all of the examined references from the literature review in this study were incorrect for referencing Lagergren’s equation” (p. 175).

Taken as a whole, the results in Table 1 reveal that the field-specific variant of \({DI}_{5}\) may be better able than the original \({DI}_{1}\) to identify disruptive papers that concentrate the attention of researchers within a discipline. This suggests that focusing on the citation links in a specific field (here: scientometrics, as represented by the journal Scientometrics) is a promising approach for improving existing disruption indicators. Further exploration with a variety of datasets, journals, and disciplines is certainly needed to assess this possibility.