When policy-makers prioritize “interdisciplinarity” they most likely mean a form of “synergy,” that is, the creation of additional options generated by interactions among disciplinary bodies of knowledge. In recent papers (e.g., Leydesdorff et al. 2019), we have further developed measures for “interdisciplinarity” and distinguished the measurement of “interdisciplinarity” from that of “synergy” (Leydesdorff and Ivanova, under review). The two concepts complement each other and sometimes overlap in practice, but their theoretical background and operationalization are different.
Synergy is a result of interactions among subsets, whereas interdisciplinarity can also serve as a means to generate synergy. From this perspective, interdisciplinarity is a process characteristic of R&D, whereas synergy is a product characteristic. In university-industry-government relations, for example, the objective is the generation of synergy, and interdisciplinarity may be a means to this end. Other factors, such as teams with researchers from different institutions or countries, or researchers representing various academic generations, can also contribute to interdisciplinarity and synergy.
Interdisciplinarity
Interdisciplinarity can be measured in a variety of ways. On the basis of a literature review, Stirling (2007; cf. Rao 1982) proposed combining three existing measures of diversity—“variety,” “balance,” and “disparity”—into a single measure, as follows (e.g., Rafols and Meyer 2010):
$$\Delta = \mathop \sum \limits_{i,j} (p_{i} p_{j} )^{\alpha } d_{ij}^{\beta }$$
(1)
For the least complex case of α = β = 1, this measure [\(\Delta = \sum_{i,j} p_{i} p_{j} d_{ij}\)] is also called Rao-Stirling (RS) diversity. In Eq. 1 the Simpson index [\(\sum_{i,j} (p_{i} p_{j} )\)] first combines variety and balance, whereas the factor \(d_{ij}\) represents disparity. The combination of the first two factors into the Simpson index is also called “dual concept” diversity.
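As an illustration, RS diversity can be computed directly from a vector of proportions and a disparity matrix. The following sketch of Eq. 1 (the function name and toy data are ours, not from the original studies) sums over all pairs with \(i \ne j\), which is equivalent to the full double sum when the self-disparities \(d_{ii}\) are zero:

```python
def rao_stirling(p, d, alpha=1.0, beta=1.0):
    """Rao-Stirling diversity (Eq. 1): sum over i != j of
    (p_i * p_j)**alpha * d[i][j]**beta."""
    n = len(p)
    return sum((p[i] * p[j]) ** alpha * d[i][j] ** beta
               for i in range(n) for j in range(n) if i != j)

# Toy data: two equally used, maximally disparate categories.
p = [0.5, 0.5]              # proportions (sum to 1)
d = [[0.0, 1.0],            # symmetric disparity matrix, d[i][i] = 0
     [1.0, 0.0]]
delta = rao_stirling(p, d)  # 2 * (0.5 * 0.5) * 1.0 = 0.5
```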
RS is also known as the “integration score” developed by Porter et al. (2006, 2007; cf. Porter and Chubin 1985). More recently, Zhang et al. (2016) have proposed replacing RS with “true” diversity. This measure \(^{2}D^{3}\) increases monotonically with RS as follows:
$$^{2} D^{3} = 1/\left( {1 - \Delta } \right)$$
(2)
The advantage of “true” diversity is that one diversity can be expressed as a percentage of another, thus providing a measure for above- and below-expected values in the evaluation. Note that “true” diversity is not bounded between zero and one. We use \(^{2}D^{3}\) as one of the measures of interdisciplinarity in this study.
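A minimal sketch of Eq. 2 (function name and values are ours), illustrating how one “true” diversity value can be expressed as a percentage of another:

```python
def true_diversity(delta):
    """'True' diversity (Eq. 2): 2D3 = 1 / (1 - Delta)."""
    return 1.0 / (1.0 - delta)

# Because 2D3 is a ratio-scale measure, one value can be
# expressed as a percentage of another:
a = true_diversity(0.5)   # 2.0
b = true_diversity(0.25)  # ~1.333
ratio = a / b             # ~1.5, i.e., 150%
```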
Leydesdorff et al. (2019) proposed measuring “variety” and “balance” independently—that is, not combined into a single number as in the Simpson index—on the basis of Nijssen et al.’s (1998) conclusion that the Gini index is a measure of balance, but not of variety. This allows us to write:
$${\text{DIV}}_{c} = [n_{c} /N] \cdot [1 - {\text{Gini}}] \cdot \left[ {\sum\limits_{\substack{i,j = 1 \\ i \ne j}}^{n_{c}} {d_{ij} } /[n_{c} \cdot (n_{c} - 1)]} \right]$$
(3)
The three components (each between zero and one) are indicated in Eq. 3 with brackets. The right-most factor in this equation is similar to the disparity measure used in RS diversity (Eq. 1), albeit normalized differently. The other two factors represent relative variety, \((n_{c}/N)\), and balance, measured as \((1 - {\text{Gini}})\). Rousseau (2019) further improved DIV into a “true” diversity measure DIV* as follows:
$${\text{DIV}}^{*} = N \cdot {\text{DIV}}$$
(4)
It follows that:
$${\text{DIV}}^{*} = n_{c} \cdot [1 - {\text{Gini}}] \cdot \left[ {\sum\limits_{\substack{i,j = 1 \\ i \ne j}}^{n_{c}} {d_{ij} } /[n_{c} \cdot (n_{c} - 1)]} \right]$$
(5)
DIV* is a “true” measure of diversity defined in terms of variety, balance, and disparity.
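Eqs. 3 and 5 can be sketched as follows. The function names are ours, and the Gini computation shown is one standard formulation among several; DIV (Eq. 3) then follows from DIV* by dividing by the size N of the reference set of categories:

```python
def gini(x):
    """Gini coefficient of a distribution x (0 = perfectly balanced);
    mean absolute difference divided by twice the mean."""
    n = len(x)
    mean = sum(x) / n
    return sum(abs(a - b) for a in x for b in x) / (2 * n * n * mean)

def div_star(counts, d):
    """DIV* (Eq. 5) = n_c * (1 - Gini) * mean pairwise disparity.
    `counts` holds the items per category (n_c >= 2 categories present);
    `d` is an n_c x n_c disparity matrix with d[i][i] == 0.
    DIV (Eq. 3) follows as DIV* / N for a reference set of N categories."""
    nc = len(counts)
    mean_disp = (sum(d[i][j] for i in range(nc) for j in range(nc) if i != j)
                 / (nc * (nc - 1)))
    return nc * (1 - gini(counts)) * mean_disp

# Two perfectly balanced, maximally disparate categories:
value = div_star([10, 10], [[0.0, 1.0], [1.0, 0.0]])  # 2.0
```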
Synergy
Whereas the measurement of diversity is rooted in ecology and economics, the measurement of synergy, in terms of additional options made available when subsets are combined, finds its origin in Shannon’s (1948) information theory and in systems theory. In information theory the sum of the possible, but not yet realized states of a system (the redundancy) and the realized ones (the uncertainty) is by definition equal to the system’s maximum information capacity. Synergy among subroutines or subsystems means that a system contains in total more states than the sum of its parts. When a system grows, for example, the number of both realized and possible states may increase, but without a necessary coupling between the two growth rates (Brooks and Wiley 1986). When the redundancy increases more than the relative uncertainty, synergy is generated as negative entropy.
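The definitional relation between uncertainty, redundancy, and maximum capacity can be illustrated with a toy system (our example, not from the paper under discussion):

```python
from math import log2

def entropy(p):
    """Shannon entropy H = -sum(p_i * log2(p_i)), in bits."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

# A system with four possible states, of which only two are realized,
# with equal probability:
h = entropy([0.5, 0.5])   # realized uncertainty: 1 bit
h_max = log2(4)           # maximum information capacity: 2 bits
redundancy = h_max - h    # not-yet-realized options: 1 bit
# By definition, uncertainty + redundancy equals the maximum capacity:
assert h + redundancy == h_max
```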
This is not the place to explain the indicator in detail; this is elaborated in another paper (Leydesdorff and Ivanova, under review). Here, we apply the measure pragmatically to Judith Bar-Ilan’s œuvre as a test case. Does the application of this indicator to this data provide new insights? Which nodes and links (in terms of bibliographic couplings at the journal level) contribute to synergy in the system? Is synergy concentrated in specific parts of the network?
Synergy can be measured as mutual information among three or more subsystems. While mutual information between two random variables [\(T_{xy} = H_{x} + H_{y} - H_{xy}\)] is necessarily larger than or equal to zero (Theil 1972; Leydesdorff et al. 2017), a third dimension can spuriously correlate with the other two, and thus reduce or add to the uncertainty as a contextual factor. For example, the answers of two parents to questions from their child can sometimes be almost identical. Analogously, mutual information among three or more subsystems can be positive, negative, or zero (e.g., McGill 1954; McGill and Quastler 1955; Yeung 2008).
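Assuming a discrete joint distribution, both the (non-negative) two-way and the (possibly negative) three-way mutual information can be computed as follows; this is our sketch, with the XOR configuration as a textbook example of negative \(T_{123}\):

```python
from math import log2

def H(joint, axes):
    """Shannon entropy (bits) of the marginal of `joint` over `axes`;
    `joint` maps outcome tuples (x, y, z) to probabilities."""
    marginal = {}
    for outcome, p in joint.items():
        key = tuple(outcome[a] for a in axes)
        marginal[key] = marginal.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marginal.values() if p > 0)

def t2(joint):
    """Two-way mutual information T_xy = H_x + H_y - H_xy (>= 0)."""
    return H(joint, (0,)) + H(joint, (1,)) - H(joint, (0, 1))

def t3(joint):
    """Three-way mutual information T_123; may be negative (redundancy)."""
    return (H(joint, (0,)) + H(joint, (1,)) + H(joint, (2,))
            - H(joint, (0, 1)) - H(joint, (0, 2)) - H(joint, (1, 2))
            + H(joint, (0, 1, 2)))

# XOR configuration: z = x ^ y with x, y uniform. Any two variables
# look independent, yet jointly the three are fully determined:
joint = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}
# t2(joint) == 0.0 while t3(joint) == -1.0 bit (redundancy)
```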
The number of possible combinations of three out of n sets is n * (n − 1) * (n − 2)/(2 * 3). For n = 130, this results in (130 * 129 * 128)/6 = 357,760 possible triads. Secondly, there are n * (n − 1)/2 possible links; for n = 130, this amounts to (130 * 129/2 =) 8385 links, each of which can be part of (n − 2 =) 128 triads; some triads generate redundancy, others entropy. Thirdly, each node can be involved in n − 1 = 129 links, some of which are parts of triads which generate redundancy. Not surprisingly, given the focus in our data collection, all nodes and links in this data generate more synergy (redundancy; \(T_{123} < 0\)) than uncertainty (information; \(T_{123} > 0\)).
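These counts can be verified with Python's `math.comb`:

```python
from math import comb

n = 130
triads = comb(n, 3)  # n * (n - 1) * (n - 2) / 6 = 357,760 possible triads
links = comb(n, 2)   # n * (n - 1) / 2 = 8385 possible links
per_node = n - 1     # 129 links per node
```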
By summing the negative and positive values of \(T_{123}\) over all triads in which a paper participates, we can attribute to each paper its respective contributions to the generation of redundancy and entropy. Links similarly partake in triads of bibliographic couplings with positive and negative signs. The core set of links and nodes which contribute to the synergy can also be mapped.
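A minimal sketch of this attribution step, assuming the \(T_{123}\) values of the triads have already been computed (the input format and function name are hypothetical, not those of the original study):

```python
from itertools import combinations

def attribute_synergy(t123):
    """Sum each triad's signed T_123 value over its participating nodes
    and links; `t123` maps a sorted node triple (i, j, k) to the triad's
    three-way mutual information (a hypothetical input format)."""
    node_sum, link_sum = {}, {}
    for triad, t in t123.items():
        for node in triad:
            node_sum[node] = node_sum.get(node, 0.0) + t
        for link in combinations(triad, 2):
            link_sum[link] = link_sum.get(link, 0.0) + t
    return node_sum, link_sum

# Toy example: one triad generating redundancy (T_123 < 0) and one
# generating uncertainty (T_123 > 0):
nodes, links = attribute_synergy({(0, 1, 2): -1.0, (0, 1, 3): 0.5})
```

Nodes and links with the most negative sums are then the core contributors to the synergy in the network.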