1 Introduction

The philosophy of science has been marked by an ever-growing interest in scientific explanations. This interest is especially unsurprising in the philosophy of neuroscience, given the sheer diversity of modelling and explanatory practices in neuroscience (Gold & Roskies, 2008). The philosophical literature on scientific explanation in neuroscience has been dominated by the idea of mechanisms (Bechtel & Richardson, 2010; Craver, 2007; Glennan, 2017). The basic idea can best be captured by the following definition of a minimal mechanism (Glennan, 2017, p. 17):

A mechanism for a phenomenon consists of entities (or parts) whose activities and interactions are organized so as to be responsible for the phenomenon.

Mechanist philosophers often claim that all explanations in neuroscience are ultimately mechanistic in the above sense or, at the very least, that they conform to this definition to varying degrees of completeness: there could be full-fledged mechanisms, partial mechanisms, or mechanistic sketches (Piccinini & Craver, 2011). Furthermore, anything that does not fit this definition, or some degree of completeness thereof, is not considered an explanation at all (Craver, 2016). Diverse explanatory strategies are thereby reduced to a single mechanist formula, which is why we call this set of claims “mechanistic explanatory imperialism” (Kostić, 2022).

Mechanistic imperialism has been challenged by various arguments which show that there are explanations in science in general, and in neuroscience in particular, that do not conform to the mechanistic mould. Among the contenders that have generated the most philosophical literature are dynamical (Chemero & Silberstein, 2008; Favela, 2020, 2021; Gervais, 2015; Stepp et al., 2011; Venturelli, 2016; Verdejo, 2015; Vernazzani, 2019; Weiskopf, 2011) and topological explanations (Kostić, 2018b, 2019a, 2019b, 2020, 2022; Khalifa et al., 2022; Kostić & Khalifa, 2021, 2022).

We acknowledge that there are many other non-mechanistic kinds of explanation across the sciences, e.g., computational (Chirimuuta, 2014), statistical (Walsh, 2014; Walsh et al., 2002), interventionist (Hitchcock & Woodward, 2003; Woodward & Hitchcock, 2003), mathematical and, more generally, non-causal explanations (Lange, 2013), minimal model and optimality explanations (Batterman, 2010; Batterman & Rice, 2014; Rice, 2021), and many others. However, here we focus on dynamical and topological explanations for three reasons: (1) they directly and in depth challenge mechanistic imperialism, especially in neuroscience; (2) these explanations use a relatively distinct repertoire to express explanatory relations, and this repertoire can be traced in the language used in the scientific literature; and (3) our aim in this paper is not to represent the full range of explanatory repertoires in the neurosciences, but to demonstrate that important competitors to mechanistic explanations exist and thrive in scientific practice.

Mechanistic imperialism can be interpreted in two ways. The first is that explanations may look non-mechanistic but can, in principle, always be interpreted as mechanistic by using the epistemic-normative frameworks developed by the new mechanists. On this view, scientists may use non-mechanistic terms in describing their explanatory practices, but these descriptions actually conform to the mechanists’ conception of scientific explanation. The second is a more empirical claim that the repertoire of mechanistic explanations prevails in neuroscientific practice. On the former interpretation, the pervasiveness of one or another kind of scientific explanation is determined solely through conceptual analysis, whereas on the latter, it requires empirical evidence. As this paper investigates explanatory repertoires empirically, i.e., in the language of research papers, it directly addresses the latter, empirical issue.

Empirical claims about the pervasiveness of one or another kind of explanation in neuroscience require empirical evidence, which has so far not been forthcoming. Such evidence about the pervasiveness and uses of “mechanisms”, or of any other kind of explanation in neuroscience, is particularly needed because the examples and case studies used to illustrate philosophers’ claims do not constitute a statistically relevant sample, even taken all together. Since demonstrations of the pervasiveness of different kinds of explanation in the philosophical literature rely on handpicked examples, the risk of confirmation bias is considerable: when looking for white swans, all one finds is that swans are white. The more systematic quantitative and qualitative bibliometric study of a large body of relevant literature that we present in this paper puts such claims into perspective by investigating:

(1) What are typical mechanistic, dynamical, and topological expressions used in neuroscience papers?

(2) What is the preponderance of mechanistic, dynamical, and topological explanations in the neuroscience literature?

(3) How does the preponderance of these explanatory patterns in neuroscience change over time?

In this study, we first defined strings of words to identify explanatory language patterns in a qualitative analysis, and then searched for these strings in a large neuroscience corpus from the Dimensions.ai repository. In a second step, we analysed the distribution of typical language patterns in the corpus to provide comprehensive and empirically grounded insights into the explanatory landscape of neuroscience.

In order to provide a philosophical context for our study, in the next section we characterize each of the three kinds of explanation more precisely. In the interest of space, we skip an overview of the debates between mechanistic imperialists and proponents of dynamical and topological explanations, because the review literature on these debates is abundant (Kostić et al., 2020; Khalifa et al., 2022; Kostić, 2018a, 2019b, 2022).

2 Mechanistic, dynamical and topological explanations

2.1 Mechanistic explanation

According to some of the most prominent mechanist philosophers “Biologists seek mechanisms that produce, underlie, or maintain a phenomenon” (Craver & Darden, 2013, p. 72).

The most influential definition of mechanistic explanation comes from an early paper by Machamer and colleagues (Machamer et al., 2000, p. 3):

Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions.

In this definition, entities in mechanisms could be neurons in a brain that are organized in a certain way, e.g., connected into neural populations that make up brain regions. But they also have to do something: they have to produce or change things through some activity. For example, neurons release neurotransmitters in order to propagate signals through neuronal assemblies. This is where a comparison with everyday notions of mechanism might be useful: a mechanical watch that has stopped ticking is not a mechanism in the above sense, because even though it has all the entities and components necessary for a mechanism, it lacks an activity. The activities that produce change in a mechanism are often linear in time, i.e., organized in sequences in which earlier stages produce later stages. They can also be cyclical, e.g., the Krebs cycle in the metabolism of sugar, in which some chemical compounds leave the mechanism at key junctures, but their residue is used at the next stage to continue the process. Finally, mechanisms can be described as underlying a phenomenon we want to explain. For example, the Hodgkin-Huxley model of the action potential, which explains the basic mechanism of signal propagation between neurons, does not produce the phenomenon; rather, it underlies or implements it (Craver & Darden, 2013, p. 50). All these ideas can best be described by the so-called Craver diagram (Fig. 1).

Fig. 1

Craver diagram. Linear mechanisms are at the bottom; the phenomenon at the top is constituted or implemented by the mechanisms at the bottom, which is represented by dotted lines between the two levels

For the purpose of this study, an exposition of more sophisticated distinctions between mechanisms would be superfluous. The most important lesson to take from this is that, typically, entities in a mechanism are linguistically described with nouns and activities with verbs. In neuroscience, these entities can be neurons or neuronal assemblies, causing phenomena by their activity. An example would be: cell membranes, ion channels, and Na levels (i.e., an explanans consisting of entities or components) produce, generate, or underlie (or another verb expressing causation) action potentials (i.e., a higher-level explanandum).

2.2 Topological explanation

Topological explanations (proper) are a relatively recent development in the sciences, enabled by a seminal paper by Watts and Strogatz (1998) and soon followed by several other key papers in different areas of science (Barabasi & Albert, 1999; Barabási & Oltvai, 2004; Cupal et al., 2000; Stadler & Stadler, 2004).

Neuroscience did not lag behind, and the publication of a highly influential paper by Sporns and colleagues (Sporns et al., 2005) marked the birth of so-called network neuroscience and the origin of topological explanations in neuroscience. In the growing philosophical literature on topological explanations, there is only one account that provides necessary and sufficient conditions for a topological explanation in neuroscience (Kostić 2020). According to this account:

a's being F topologically explains why a is G if and only if:

(T1) a is F (where F is a topological property);

(T2) a is G (where G is a physical property);

(T3) Had a been F’ (rather than F), then a would have been G’ (rather than G);

(T4) a’s being F is an answer to the question “why is a G?”


Networks are collections of nodes and edges, and topological properties are their mathematically quantifiable patterns of connectivity. In this framework, the T1 and T2 conditions simply mean that the same system can have both a physical and a topological property. For example, a brain, denoted as a in the scheme above, can be computationally efficient (i.e., it uses an optimal amount of energy for processing information), which is its physical property G, and, when represented as a network of anatomical connections, it is also a small-world network, which is its topological property F. T1 and T2 thus concern the representation of the system.

T3 in Kostić’s scheme describes a counterfactual dependence between a system’s topological and physical properties. In the example of the brain, T3 tells us that the brain would not have been computationally efficient if it had had a random or a regular topology instead of the small-world topology that it actually has. The T3 condition hence concerns the explanation proper, because it tells us why something is the case.

Finally, the fourth condition provides criteria for using the counterfactual. Such criteria are perspectival, in the sense that they provide a context which makes it intelligible why some empirical property G counterfactually depends on a network connectivity pattern, expressed as the topological property F (Kostić, 2023). Relevant linguistic patterns in topological explanations will therefore be phrases containing nouns that denote topological properties and verbs denoting some form of dependence. In the neuroscience literature, such an explanation would be expressed as a proposition in which a physical phenomenon (e.g., computational efficiency, robustness, or controllability) counterfactually depends on topological properties (e.g., a small-world or scale-free topology, or a connectivity pattern in general).
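To make the idea of a mathematically quantifiable connectivity pattern concrete, the following minimal sketch (ours, and not part of the bibliometric study reported below) uses the Python networkx library to contrast a small-world network with an edge-matched random network; the network size, degree, and rewiring probability are arbitrary illustrative choices.

```python
# Illustrative sketch only: quantifying topological properties with networkx.
# The network size, degree, and rewiring probability are arbitrary choices,
# not values drawn from any neuroscience data set.
import networkx as nx

n, k = 200, 8  # number of nodes and mean degree (illustrative)

small_world = nx.watts_strogatz_graph(n, k, p=0.1, seed=1)
random_net = nx.gnm_random_graph(n, small_world.number_of_edges(), seed=1)

def avg_path_length(g):
    # Restrict to the largest connected component so the metric is defined.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_shortest_path_length(giant)

for name, g in [("small-world", small_world), ("random", random_net)]:
    # High clustering combined with short path lengths is the connectivity
    # pattern (the topological property F) that topological explanations cite
    # for physical properties such as efficient information processing (G).
    print(f"{name:12s} clustering={nx.average_clustering(g):.3f} "
          f"avg. path length={avg_path_length(g):.2f}")
```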

2.3 Dynamical explanation

A dynamical explanation is typically used to explain the evolution of a chaotic system, or changes in such a system over time. The possible states of a system are described as its state space, in which actual changes over time, from one state to another, form a trajectory. By using the differential equations of nonlinear dynamical systems theory, it is possible to quantify these changes over time, which uniquely determine the subsequent states of the system, e.g., in systems becoming synchronized. Since a dynamical explanation focuses on the mathematical properties of a dynamical model, the entities, activities, and microphysical causal details of underlying mechanisms are explanatorily idle (Chemero & Silberstein, 2008; Favela, 2020, 2021; Gervais, 2015; Khalifa et al., 2022; Stepp et al., 2011; Venturelli, 2016; Verdejo, 2015). As such, dynamical explanation is typically used to explain the global behaviour of a system. For example, in neuroscience a dynamical explanation is used to explain why bimanual coordination (synchronous wagging of the same fingers on both hands) is in, or out of, phase. To that effect, a relevant linguistic pattern in dynamical explanations will be a noun denoting a dynamical property and a verb such as “to determine” or “to shape”.
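A standard formulation of this example in the dynamical literature, though not named in the text above, is the Haken-Kelso-Bunz (HKB) equation for the relative phase between the two fingers. The minimal sketch below, with parameter values chosen purely for illustration, shows how the anti-phase coordination pattern loses stability as the control parameter changes.

```python
# Illustrative only: the Haken-Kelso-Bunz (HKB) relative-phase equation,
#   d(phi)/dt = -a*sin(phi) - 2*b*sin(2*phi),
# a standard dynamical model of bimanual finger coordination. Parameter
# values and step sizes are our own illustrative choices.
import numpy as np

def settle(phi0, a, b, dt=0.01, steps=8000):
    """Euler-integrate the HKB equation and return the final relative phase."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi))
    return (phi + np.pi) % (2.0 * np.pi) - np.pi  # wrap into (-pi, pi]

# Start close to anti-phase wagging (relative phase near pi).
for b in (1.0, 0.1):  # large b/a ~ slow movement, small b/a ~ fast movement
    phi_final = settle(phi0=np.pi - 0.3, a=1.0, b=b)
    print(f"b/a = {b:.1f}: relative phase settles near {phi_final:+.2f} rad")

# For b/a > 0.25 anti-phase (phi ~ +/-pi) remains stable; below that threshold
# only in-phase (phi ~ 0) survives: the dynamics of the coupled system
# determine which coordination pattern is observed.
```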

3 Methods and data

In this section, we explain how we were able to detect these three different explanatory patterns in a large body of neuroscience literature. We used basic text mining tools to identify typical word patterns that resemble explanatory language. Our approach had two stages. In the first stage, we used three sets of twenty neuroscience papers each, which were cited as typical examples of mechanistic, dynamical and topological explanations, respectively, in the philosophical literature that discusses these three types of explanations (see appendices 1 and 2). These three sets were used as ‘training sets’ to identify word patterns presumably typical of each of these explanations, to be later tested in the larger corpus of neuroscience literature. We decided not to start with a top-down hypothetical list of word patterns that could be expected to express explanation according to three philosophical accounts of scientific explanation (Fletcher et al., 2021; Bonino et al., 2022; Malaterre, Chartier, and Pulizzotto 2019; Mizrahi & Dickinson, 2022a, 2022b), in order to avoid possible interpretative bias. Instead, our approach was bottom-up, as we started with the actual explanatory language used in neuroscience papers.

The full text of the three training sets was uploaded to the free text mining application Voyant-tools.org. This web-based application provides easy-to-use tools for calculating word frequencies and word co-occurrences, and for quick access to the context of particular words in the text (e.g., fifteen words before and after a word of interest, easily expandable to a larger context if necessary). These tools allowed us to identify meaningful terms and count recurring word patterns, excluding stop-words such as “the”, “a”, or “was”, digits for page numbers or years of publication (1995, 2, 43, etc.), connectives such as “and” or “or”, bibliographical abbreviations such as “et al.”, etc. Among the most frequently occurring words, we identified terms that seemed to refer to elements typical of dynamical, topological, or mechanistic explanations, i.e., explanans, explanandum, or explanatory relation terms between them.

To analyse the most frequent terms, we used stemmed words, e.g., counting “analysing”, “analysis”, “analytic”, etc., as all belonging to the same term “analy*”, in which the asterisk stands for an arbitrary suffix. A first joint inspection indicated that the non-random explanantia terms unique to each of the three types of explanation appeared among the twenty-five most frequent terms in each training set; beyond that point, the terms became paper-specific and were no longer related to explanation. This process revealed some terms that seemed to occur uniquely in one of our training sets, but also terms that occurred frequently in the overlap with one or both of the other sets. Although the philosophical literature presented the neuroscience articles in our training sets as typical of particular explanatory styles, these articles also frequently contain words that are not pertinent to the analysis of explanatory language. Hence, other terms that occurred frequently in one of the training sets seemed merely accidental, such as “mouse” or “visual*” (see Fig. 2). Expressions typical of topological, dynamical, or mechanistic explanations can therefore not be straightforwardly derived from the mere frequency of particular words in such a training set.
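Although mere frequency is not sufficient, the counting and stemming steps just described can be sketched as follows; this is not the Voyant Tools implementation, and the stop-word list, stem prefixes, and folder name are illustrative assumptions.

```python
# Illustrative sketch of the frequency counting and stemming step; this is not
# the Voyant Tools implementation. The stop-word list, stem prefixes, and
# folder name are illustrative assumptions.
import re
from collections import Counter
from pathlib import Path

STOPWORDS = {"the", "a", "an", "and", "or", "was", "were", "of", "in", "to", "et", "al"}

def term_frequencies(folder, top_n=25):
    """Count word frequencies over all .txt files in `folder`, skipping stop-words and digits."""
    counts = Counter()
    for path in Path(folder).glob("*.txt"):
        words = re.findall(r"[a-z]+", path.read_text(encoding="utf-8").lower())
        counts.update(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(top_n)

def collapse_stems(freqs, stems=("analy", "connect", "dynamic", "neur")):
    """Group counts by stem prefix, mimicking wildcard terms such as 'analy*'."""
    grouped = Counter()
    for word, n in freqs:
        stem = next((s for s in stems if word.startswith(s)), None)
        grouped[stem + "*" if stem else word] += n
    return grouped.most_common()

# Hypothetical usage (the folder name is a placeholder):
# print(collapse_stems(term_frequencies("training_set_topological")))
```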

Fig. 2

The 25 most frequent terms in the topological, dynamical, and mechanistic training sets, and their overlaps between the sets

After joint inspection of the terms in their contexts by both authors, it also became clear that the explanandum would not be distinctive of the type of explanation: all three types of explanation often aim at the same explananda terms, such as motor functions or cognition. Rather, our reading of the texts in the training sets showed that explanantia terms co-occurring with explanatory verbs express the most distinctive explanatory relations. For example, in the phrase “dynamics also create completely new behavioural constraints”, the explanans “dynamics” and the explanatory term “create” are typical of dynamical explanatory language, while “behavioural constraints” might conceivably also be explained in mechanistic or topological terms. In identifying typical expressions, we aimed for distinct word patterns typical of specific explanatory schemes, rather than for capturing all explanations.

With this tentative long-list of explanantia terms for each of the training sets, developed in our joint inspection, we individually set out to identify typical word patterns, i.e., phrases containing explanantia in combination with explanatory relation terms, most often verbs (see appendix 3). Our long-list included explanantia terms such as “time”, “non-linear”, and “state” for dynamical papers; “architecture”, “topology”, or “connectivity” for topological ones; and “neuro”, “neural”, and “activity” for mechanistic papers. After identifying phrases that we both independently judged to be characteristic, we discussed each phrase until we reached a consensus about which phrases were characteristic examples of the three explanatory styles. For example, we removed explanatory terms that express a vague relation (e.g., “correlates with…”, “is associated with…”) without a clear explanatory relationship. Some explanantia candidates, such as the terms “time” and “non-linear”, had to be removed because they returned too many phrases that were not explanatory but referred to methodological or technical descriptions.

Our selection is thus based on our joint understanding of what constitutes mechanistic, topological, and dynamical explanations, as specified in the theoretical section above. If one of us raised doubts about whether an explanation was, for example, truly topological, the expression was removed. Although this admittedly involves human judgement, in this way we prioritised clear-cut expressions, at the expense of losing many less explicit ones. Although one of us has taken a position in the philosophical debate on explanations in previous work (Kostić, 2018b, 2019a, 2020, 2022, 2023; Kostić & Khalifa, 2021, 2022), the other author has no intellectual stake in these debates.

In the remaining phrases, we then counted the word distance between the explanans and the explanatory term, i.e., the number of words between the characteristic terms. For example, in “activity controls”, the word distance between “activity” and “controls” is zero. In “activity that generally controls”, the distance between “activity” and “controls” is two, i.e., two words, noted as “activity controls ~ 2”. We agreed to consider only single-digit distances because, in principle, data noise and the possibility of false positives increase with a higher limit on the distance. The actual distances that we found in our training sets are listed in appendix 3.
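The word-distance criterion can be made precise with a simple proximity check; the sketch below is an illustrative re-implementation, not the procedure used in our study, and the example phrases echo those given above.

```python
# Illustrative re-implementation of the word-distance criterion (not the code
# used in the study): does an explanatory verb follow an explanans term with
# at most `max_gap` words in between?
import re

def within_distance(text, explanans, verb, max_gap):
    """True if `verb` follows `explanans` with at most `max_gap` intervening words."""
    words = re.findall(r"\w+", text.lower())
    for i, w in enumerate(words):
        if w.startswith(explanans):
            window = words[i + 1 : i + 2 + max_gap]
            if any(v.startswith(verb) for v in window):
                return True
    return False

print(within_distance("activity controls the gating", "activity", "control", 0))             # True
print(within_distance("activity that generally controls gating", "activity", "control", 2))  # True
print(within_distance("activity that generally controls gating", "activity", "control", 1))  # False
```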

The word patterns were expressed as a complex search string with which to search the larger neuroscience literature via the Dimensions.ai database, which covers an exceptionally large number of research papers (almost 130 million). We limited the search to the period 1990–2021 and to papers labelled as “neurosciences” in the database (i.e., category 1109), totalling 2,199,526 papers. Since Dimensions is so comprehensive, we surmise that such a procedure for selecting a corpus can capture just about all of the relevant literature (at least in English), avoiding random sampling errors. The delineation of research fields in bibliographic databases is generally somewhat ambiguous, but this should not fundamentally affect the results of our analysis. Apart from the size advantage, Dimensions allows for searches in the abstract and full text of articles, to the extent that Dimensions has access to them.

Three complex search strings were composed, one for each type of explanation, containing all combinations of the explanantia terms and explanatory relation terms unique to each type (for the search strings with the most hits in each corpus, see Table 1). The search string added the word distance after each expression and combined all expressions with a Boolean “OR”, e.g., (“contribution of connectivity ~ 2”) OR (“depends on connectivity ~ 1”) OR … In other words, the database would return all articles that contain at least one of the word patterns typical of each of the three kinds of explanation identified in our training sets. When run through the large corpus of over two million papers in the Dimensions database, these search strings returned a different number of papers for each kind of explanation. Table 1 lists the search strings and the number of papers each of them retrieved from Dimensions. A schematic of the method is provided in Fig. 3.
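The sketch below shows how such a composite query could be assembled programmatically; it assumes a Lucene-style proximity syntax, and the phrases shown are only a small fragment of the full lists in Table 1.

```python
# Illustrative sketch of assembling a composite query; it assumes a
# Lucene-style proximity operator ("phrase"~N), and the phrases below are
# only a small fragment of the full lists reported in Table 1.
patterns = [
    ("contribution of connectivity", 2),
    ("depends on connectivity", 1),
    ("topology determines", 3),
]

def build_search_string(phrase_distance_pairs):
    # Each clause allows up to N intervening words; clauses are joined with OR.
    clauses = [f'("{phrase}"~{distance})' for phrase, distance in phrase_distance_pairs]
    return " OR ".join(clauses)

print(build_search_string(patterns))
# ("contribution of connectivity"~2) OR ("depends on connectivity"~1) OR ("topology determines"~3)
```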

Table 1 The actual search strings and the number of papers each of them retrieved when run through the Dimensions.ai database
Fig. 3

Method used to identify papers with typically topological, dynamical, or mechanistic explanatory language

4 The results

The search in Dimensions returned a total of 443,966 papers, of which 94% also had abstracts. Among the search results, the mechanistic set was by far the largest, while the dynamical and topological search strings each returned just over 30,000 results (see Table 2). This may be the result of overly specific search strings for these two latter sets, of less specific mechanistic search strings, or an actual indication of the minority share of dynamical and topological explanations: we do not claim to have captured all papers of each explanatory type, just three characteristic sets. Our results should therefore not be read as an accurate representation of the shares of either of these explanations in the literature, but as indicators of their presence and, as we show below, of their relative development over time.

Table 2 The total number of neuroscience papers in Dimensions (1990–2021), and the number of papers identified through our search strings

The actual number of papers matching our search strings for all three kinds of explanations per year since 1990 is shown in Fig. 4. These absolute numbers are misleading when we attempt to spot trends, as the total number of neuroscience papers also grew significantly over this same period.

Fig. 4

The total number of papers per year, from 1990 to 2020, matching our search strings

The growing number of neuroscience papers in Dimensions since 1990 is shown in Fig. 5.

Fig. 5

The total number of neuroscience papers from 1990 to 2020 in the Dimensions.ai database

As a more meaningful representation of how the three types of explanations develop over time, the share of mechanistic, dynamical, and topological explanations in the total number of neuroscience papers is shown in Fig. 6. Once again, the actual share depends very much on the accuracy of the search strings, which is open to debate. Nevertheless, the trends over time are systematic, suggesting that there is a shift in explanatory language that is more than just an artifact of our search strings. Even though it is small, the share of papers with topological explanatory language starts to grow significantly after 2006. The share of papers with dynamical explanatory language is similarly low, but it has grown steadily since 1990. The share of papers with mechanistic explanatory language grew until about 2002 and then seems to stabilise. The graph also suggests that a large segment of papers (the remaining three quarters of the neuroscience literature) either does not use explanatory terms at all (e.g., it reports descriptive research) or uses explanatory terms not captured by our search strings.

Fig. 6

The ratio of mechanistic, topological and dynamical explanations in the total number of neuroscience papers from 1990–2020

As an additional probe into the discriminatory power of our search strings, we analysed the overlap between the three sets. It has been suggested before (Overton, 2013; Petrovich & Viola, 2022) that the explanatory language used by scientists is not always entirely consistent, and we may hence expect that some papers mix different explanatory terms. Figure 7 presents the number of papers in each set and the various overlaps between the sets. The largest overlap exists with papers with mechanistic explanatory language: about two-thirds of the papers in the dynamical set and two-thirds of the papers in the topological set also contain mechanistic explanatory language. The overlap is smallest between papers in the topological and dynamical sets. However, an interesting trend can be observed if we represent the development of the summated overlaps between the three sets over time (Fig. 8). Whereas the three sets nearly coincided up to about 2006, i.e., they were dominated by mechanistic explanatory language, after 2006 there is a steady trend towards less overlap: papers increasingly use exclusively mechanistic, dynamical, or topological explanatory language.
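Computationally, the overlap analysis amounts to set intersections over the publication identifiers returned by the three searches; the minimal sketch below illustrates the computation with hypothetical identifiers.

```python
# Illustrative sketch of the overlap analysis: the three searches are assumed
# to have been exported as sets of publication identifiers (the IDs here are
# hypothetical placeholders).
mechanistic = {"pub.0001", "pub.0002", "pub.0003", "pub.0004"}
dynamical = {"pub.0002", "pub.0005"}
topological = {"pub.0003", "pub.0005", "pub.0006"}

overlaps = {
    "mechanistic & dynamical": mechanistic & dynamical,
    "mechanistic & topological": mechanistic & topological,
    "dynamical & topological": dynamical & topological,
    "all three": mechanistic & dynamical & topological,
}
for label, shared in overlaps.items():
    print(f"{label}: {len(shared)} papers")

# The summated overlap per year (Fig. 8) follows the same logic once the ID
# sets are grouped by publication year.
```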

Fig. 7

Overlap in search results (number of papers). (Script venn.js by Ben Frederickson, d3.js by Mike Bostock)

Fig. 8

The total, summated overlap between dynamical, topological and mechanistic papers decreases over time

5 Discussion: the explanatory landscape in neuroscience from 1990 to 2021, its trends, and the limits of text mining methodology

Our search strings returned only a limited part of the neuroscience literature, namely about a fifth. This may imply that we either missed a substantial part of the explanatory repertoires, or that a substantial part of the neuroscience literature does not use explanatory expressions (or both). Non-explanatory papers may be descriptive, i.e., review articles or papers that provide new data sets, describe new imaging techniques, or present new tools for data analysis; or they may be technical in nature, i.e., propose new experimental protocols or slight improvements on certain techniques, or in general be concerned with some form of “tinkering in the lab” (Bickle, 2021). Of course, these non-explanatory uses would require further analysis. So, pace the new mechanists’ claims that neuroscience is in the business of discovering mechanisms and ipso facto mechanistic explanations, it may be the case that a large part of neuroscience is in some business other than providing explanations, mechanistic or of any other kind for that matter. Our study could not map out what that other business is, simply because the search strings were developed to identify specific and most typical explanatory linguistic patterns, and they excluded less clear-cut expressions.

Having said that, within the fifth of the neuroscience literature that we analysed, our search strings suggest that mechanistic explanatory language is indeed predominant. Nevertheless, a significant number of dynamical and topological explanation papers exists, and their share slowly grows over time. The growth of topological explanation papers takes off around 2006. The number of papers that use the language of dynamical explanations, on the other hand, shows steady growth without a take-off point since the beginning of our corpus, the year 1990.

Unsurprisingly, the explanatory language is mixed. The topological and dynamical papers use mechanistic language too, which could be an artifact of noise generated by our search strings, imprecise use of terms by neuroscientists, or a combination of multiple forms of explanation used in the same paper; probably it is a bit of each. However, the fact that the overlap of topological, dynamical, and mechanistic language decreases over time, i.e., that they differentiate over time, may also indicate that loose mechanistic language was initially used as a placeholder for a more abstract non-mechanistic explanation, and that it is replaced over time as ideas about dynamical and topological explanations develop and specialize.

The low number of topological explanation papers in our set (ca. 0.5% of neuroscience papers) is to be expected, given that topological explanations entered neuroscience relatively recently, with the seminal paper by Sporns and colleagues (Sporns et al., 2005). To avoid contamination, we were also quite restrictive about the search terms we judged to be specifically indicative of topological explanations. On the other hand, topological explanations use a more specific language in their explanantia, and because of that they are more easily discernible by our search strings. In contrast, mechanistic language is used more loosely (Dupré, 2013; Kostić & Khalifa, 2022; Ross, 2021; Woodward, 2013), and so our (or any) search strings cannot discriminate between genuine and platitudinous mechanistic explanatory language.

Our analysis has several limitations. One limitation is that we did not have access to a larger group of raters to carry out an extensive validation of our search strings in order to estimate false positives and false negatives. Such a validation would, in our estimate, require a separate study, which is beyond the scope of this paper. Our analysis therefore depends on our own assessment of what counts as typical explanatory language, with the potential bias stemming from one author’s previous work on one of the three types of explanation discussed in this paper. Another limitation is built into how searches work in the Dimensions database: search strings produce a hit regardless of whether a string is used once or multiple times in the text, which includes casual as well as systematic use of explanatory language. A full-text analysis with more powerful tools, applied to a sample of neuroscience papers to keep it feasible, could provide more fine-grained results for a subset of papers. Such an analysis could also use natural language processing techniques, e.g., text analyses that distinguish nouns from verbs. This method could perhaps better discern the explanatory language, especially in papers that use mixed language.
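For instance, a dependency parse could retain only those explanans nouns that are grammatical subjects of explanatory verbs; the sketch below illustrates the idea, assuming the spaCy library and its en_core_web_sm English model, with an invented example sentence.

```python
# Illustrative sketch of a part-of-speech refinement: keep only pairs in which
# an explanans noun is the grammatical subject of an explanatory verb. Assumes
# spaCy and its en_core_web_sm English model; the sentence is invented.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Network topology determines the efficiency of information transfer.")

for token in doc:
    if token.pos_ == "VERB":
        subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
        for subj in subjects:
            print(f"explanans candidate: {subj.text!r}  explanatory verb: {token.lemma_!r}")
# Expected (model-dependent): explanans candidate: 'topology'  explanatory verb: 'determine'
```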

Finding ‘pure’ dynamical, mechanistic, and topological language might be possible, but it would require a more precise assessment of every paper. Moreover, given that we are trying to detect different explanations by mapping the explanatory language used by scientists, one could argue that these linguistic differences are a matter of conceptual sloppiness in the neuroscience literature that cannot be reduced to some overarching philosophical explanatory scheme. Nevertheless, these techniques can help put theoretical debates in the philosophy of science into empirical perspective in a systematic way, rather than basing them on hand-picked examples.

6 Conclusion

Explanatory language in neuroscience papers is not exclusively mechanistic. Our analysis has shown that a relatively small but growing share of neuroscience papers uses topological and dynamical terms to explain neural phenomena. We were also able to show that the explanatory repertoires in the neuroscience literature are differentiating over time: the explanatory language appears to become more exclusively either mechanistic, topological, or dynamical. Nevertheless, expressions of different types of explanatory language are regularly mixed in neuroscience papers.

Our study has shown that typical explanatory language can be identified by searching for particular word patterns. This approach could be expanded in several ways. First, similar word patterns could be identified for other types of explanation, such as the statistical language of correlation and association. Second, longer strings with more explanatory word patterns could be used to identify how explanation types are distributed throughout all of neuroscience. Our search strings returned only a fifth of neuroscience papers. Adding further explanatory expressions would likely capture a larger share of the literature, although this would also raise the share of false positives, and the resulting unwieldy data sets might then require smaller literature samples. Third, more refined natural language processing techniques could be used, such as techniques that analyse grammatical structures, e.g., distinguishing verbs from nouns.

We see no principled obstacle to applying our approach to the life sciences in general or, in fact, to any other domain of science. The three types of explanation on which we focused in this paper are also used in other sciences, and repositories such as Dimensions.ai could provide corpora for them as well. Explanatory terms are most likely specific to each research field, but a similar two-step approach could be used, first establishing dominant terms and then specific explanatory patterns for each field. However, in other fields statistical or interpretative patterns may be more prevalent.

In studying the structure and dynamics of physics as a science, philosophers of science focus on its theories, i.e., how theories are formed, interrelated, and change over time. However, given the sheer diversity of explanatory styles in neuroscience, understanding its structure and dynamics seems a rather vexing task (Gold & Roskies, 2008). In this paper we focused on neuroscience and, by mapping out different explanatory strategies in a large body of neuroscience literature, we chose an empirical approach to provide an overview of its structure and dynamics.

By looking at language use in a large corpus of literature, we provide empirical evidence about the explanatory landscape of neuroscience in general, and in that way we avoid the epistemic biases that result from focusing solely on the limited and hand-picked examples typical of the philosophical literature on scientific explanations. Our study demonstrates that the actual explanatory language in neuroscience is diversifying, rather than being exclusively or ever more mechanistic.