1 Introduction: what is the problem?

Scientists often reason about phenomena in natural systems by studying other systems that afford better opportunities for observation and manipulation. These systems are typically referred to as “surrogate systems” and the reasoning they enable is correspondingly referred to as “surrogative reasoning” (Swoyer, 1991; see also Bolinska, 2013; Contessa, 2007; Suárez, 2004). Despite its widespread use and success across the sciences, surrogative reasoning remains a controversial topic in the philosophy of science. At its core, the problem is how to ensure that surrogate systems afford epistemic access to their targets. Arguably, an adequate relation must be established between surrogate and target to secure epistemic access, but the nature of this adequate relation has been the subject of considerable debate in the philosophical literature.

A contentious aspect of this debate is whether this adequacy should be construed as an “objective”, dyadic relation between surrogate and target, based on their intrinsic features, independent of the scientists engaged in surrogative reasoning. These dyadic relations have played a prominent role in philosophical accounts of scientific representation (see e.g., structural similarities or “morphisms” in fn. 21 and references therein). However, they have been criticized as a basis for surrogative reasoning (Bolinska, 2013; Contessa, 2007; Suárez, 2004). As Suárez (2004) claims, these dyadic approaches “aim to reduce the essentially intentional judgements of representation-users to facts about the source [i.e., surrogate] and target objects or systems and their properties” (768).

A pragmatic response is that whether surrogate systems provide epistemic access to their targets depends not merely on their intrinsic features and mappings (dyadic relations) but also on how scientists use these surrogates (i.e., triadic relations). In this sense, the adequate relation between surrogate and target – i.e., one affording epistemic access – is construed as adequate for certain users for certain purposes (see, e.g., Bokulich & Parker, 2021; Parker, 2010, 2020; Currie, 2017). Furthermore, depending on how surrogate systems are used, they may provide different kinds of insights into their targets. Scientists may value these insights differently and pursue one or the other, depending on their interests.

These pragmatic proclivities place “interpretation” center stage in surrogative reasoning: The adequacy of the relation between surrogate and target depends on how scientists construe the surrogate and its relation to the target. In line with these pragmatic views, I will approach surrogative reasoning as an interpretative endeavor in which meaning is bestowed upon a surrogate system by scientists for specific purposes. In this paper, I focus on two main aspects of such meaning, namely: i) what the surrogate is and ii) what it represents. First, scientists engage in interpretative tasks that provide a certain construal of the surrogate system itself, enabling them to inspect it intelligibly and manipulate it purposefully. Second, scientists also engage in interpretative tasks that allow them to project what they learn about the surrogate system onto a chosen target.

In the context of a pragmatic approach, a surrogate system can embody different interpretations concerning what it is and what it represents, especially if used by different scientists for dissimilar purposes. However, it remains unclear how many meanings may coexist in a surrogate system and how they relate to each other in surrogative reasoning conducted by the same group of scientists within a single research project. In this paper, I use the term “capaciousness” to refer to the number of meanings that coexist in a surrogate system and the term “complexity” to refer to the relations among coexisting meanings. I use these terms in a relative sense, i.e., interpretations are more or less capacious and complex relative to other interpretations. In other words, I do not intend to establish an absolute criterion (e.g., number of meanings and relations) to designate interpretations as capacious and complex.

This paper aims to elucidate how capacious and complex interpretations may be for a group of scientists working together within a single research project. The “overall meaning” of a surrogate system is thus understood as the set of meanings bestowed upon it plus their interconnections. To restrict the scope of this paper, my focus will be on a particular form of surrogative reasoning, namely “model explanations”. In this case, scientific models are used as surrogates to infer explanations of target phenomena, reasoning with the resources available in the model.

It is a valuable enterprise to describe how capacious and complex interpretation in models may be, especially to advance more nuanced accounts of scientific explanatory practices using models. Traditionally, scientific explanations have been conceived with one explanandum and one explanans. For instance, consider one of the earliest and most influential contributions to modern philosophical discussions on scientific explanation, namely the deductive-nomological account. According to this view, scientific explanations consist of two parts: i) an explanandum, which is a sentence describing the phenomenon to be explained, and ii) an explanans, which comprises the sentences adduced to account for the phenomenon (Hempel & Oppenheim, 1948, 247). Following the deductive-nomological model, various philosophical accounts of scientific explanation have been proposed (e.g., unification, causal-mechanical, pragmatic, and varieties of mathematical explanation, to name a few).Footnote 1 These accounts introduce different characterizations of explananda and explanantia in scientific explanations while retaining, as Hempel and Oppenheim put it, the “basic pattern” of one explanandum and one explanans.

My methodology involves conducting a case study in which I analyze and discuss modelling and explanatory practices, with a focus on interpretative tasks. The case study revolves around a model of earthquakes developed by Zeev Olami, Hans Jacob S. Feder, and Kim Christensen, known as the OFC model. Given my focus on this case, I do not expect to deliver a general account of interpretation in models for model explanations. Rather, I intend to survey the space of practices that should inform the articulation of such general accounts. Furthermore, I do not intend to evaluate these practices. Instead, I intend to describe them, identify their components, classify them, and deliver a synthetic framework. The case study shows that multiple interpretations can be intricately intertwined in the overall meaning of a model used for explanatory purposes. This leads to model explanations with layers of content, both in their explanantia and explananda. As a result, the “basic pattern” of one-explanandum-one-explanans seems descriptively inadequate for this case study.

The structure of this paper is as follows. In Section 2, I introduce the OFC model and related subjects relevant to the argument. In Section 3, I analyze interpretative tasks embodied in the OFC model, with a focus on two of them, namely conceptualization and denotation. In Section 4, I discuss how these interpretations are used as content for model explanations, arguing that they form layers of content that scientists ponder differentially, depending on their local interests. Finally, in Section 5, I deliver concluding remarks.

2 Case study: the Olami-Feder-Christensen model of earthquakes

The OFC model is an excellent case study to describe the capaciousness and complexity of interpretations in models in the context of explanatory enterprises. The original papers in which the OFC model was introduced describe in detail the influences, motivations, and reasonings that led to the various modelling decisions. The papers are also fairly explicit about how to interpret simulation results as resources for explanations. Furthermore, much has been written about the OFC model in the years following its publication. This material provides varied perspectives and appraisals that make the study of this case richer and more nuanced. Beyond these merits, this case study is also relevant because it helps close two research gaps in the philosophy of science literature. First, it addresses an overlooked scientific discipline in the philosophy of science literature, namely seismology. Second, it tackles a rather neglected research program in which the OFC model has become a landmark, namely self-organized criticality.Footnote 2

The structure of this section is as follows. First, I begin with some preliminaries on earthquakes, self-organized criticality and its seismic expression, namely the “Gutenberg-Richter” law. Second, I briefly discuss a major influence in the OFC model, viz. the Burridge-Knopoff “spring-block” model of earthquakes. Third, I present the OFC model in detail, focusing on its set-up and simulation results.

2.1 Preliminaries on earthquakes, the Gutenberg-Richter law, and self-organized criticality

Earthquakes are complex and heterogeneous geological phenomena. Broadly speaking, they can be described as ground motions induced by elastic waves that propagate across the earth from their respective sources. The sources of earthquakes range widely. For example, seismic sources include volcanic eruptions, landslides, explosions, hydrological processes, waves and tides, and even atmospheric phenomena. However, the source of primary interest to global seismology is shear-faulting, i.e., earthquakes associated with displacements along geological faults (Ammon et al., 2021, 11). In particular, the OFC model focuses on the study of shear-faulting earthquakes.

Two highly influential conceptual models of shear-faulting earthquakes are relevant to this case study.Footnote 3 The first one is the “elastic rebound” model (Reid, 1910). According to this model, earthquakes are sudden displacements of rocks along a fault due to stress induced by elastic strain. Rocks on both sides of a fault are typically affected by external forces, such as those exerted by lithospheric motions. These forces cause deformation or “strain” on the rocks. In the context of this model, the deformation regime is assumed to be primarily elastic.Footnote 4 Thus, as rocks deform, the elastic potential builds up and the strained rocks exert stress on the fault. Frictional forces at the fault resist the motion of the juxtaposed bodies of rock. However, as strain keeps increasing, eventually the stress at the fault exceeds the static frictional forces. At this point, a sudden slide occurs. The rocks snap back into an unstrained position and the accumulated elastic potential energy is thus released. Part of the released energy is consumed by heating and fracturing, but a significant portion radiates as a seismic wave that propagates outward from the rebounding rocks. As long as external forces keep acting on the rocks surrounding the fault, the elastic rebound model leads to cycles of strain accumulation and stress drop (Ammon et al., 2021, 12; Reid, 1910, 16–28).Footnote 5

The second model is the “stick–slip” model (Brace & Byerlee, 1966). The model is named after the well-known phenomenon in engineering of jerky or unstable sliding across a rough surface, as opposed to smooth or stable sliding. As in the elastic rebound model, the stick–slip model also construes earthquakes as ground motions resulting from the release of elastic strain once the induced stress at a fault exceeds frictional forces. However, there are two major differences between Brace and Byerlee’s stick–slip model and Reid’s elastic rebound model. First, in the stick–slip model, the stress drop in an earthquake may only be a small portion of the total stress at the fault. This contrasts with the elastic rebound model according to which rocks snap back to unstrained positions. Second, the stick–slip model assumes that earthquakes occur in pre-existing faults. In contrast, the elastic rebound model may apply to the occurrence of earthquakes as a result of shearing motion along newly formed fractures, which typically leads to the formation of “fault zones” (see fn. 5). In the latter case, the strength of rocks (or cohesion) remains a relevant variable for the occurrence of earthquakes. In contrast, in the case of stick–slip mechanisms, the occurrence of earthquakes is mostly controlled by the nature of frictional forces at the geological fault. Brace and Byerlee (1966) assert that their stick–slip mechanism deserves to be considered, in conjunction with Reid’s mechanism, as one possible explanation for shallow earthquakes (992).

Shear-faulting earthquakes exhibit well-documented patterns of behavior in space and time. One of these patterns is known as the “Gutenberg-Richter (GR) law”. The GR law is an empirical statistical law according to which the number of earthquakes (N) with radiated energy greater than a given value (E), in a given spatial domain and time interval, follows a power-law:

$$N\left({E}_{0}>E\right)\sim {E}^{-B},$$

where B (often referred to as the “B-value”) is a coefficient that varies across seismic contexts, i.e., it is non-universal. While the exponents are non-universal, the power-law behavior is robust across tectonic regions and time scales. Both the robustness and the non-universality of the GR law are target explananda in OFC’s explanatory project.
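To make the statistical form of the GR law concrete, the following sketch illustrates how a B-value could be estimated from a catalogue of radiated energies via a log-log least-squares fit of the complementary cumulative counts. The code is mine, not OFC’s or Gutenberg and Richter’s; the synthetic catalogue and all parameter values are hypothetical.

import numpy as np

# Hypothetical earthquake-energy catalogue: energies drawn from a power-law
# (Pareto) distribution with tail exponent B_true, mimicking GR-like statistics.
rng = np.random.default_rng(0)
B_true, E_min, n_events = 1.0, 1.0, 5000
energies = E_min * (1.0 - rng.random(n_events)) ** (-1.0 / B_true)

# Complementary cumulative counts N(E0 > E) over a grid of energy thresholds.
thresholds = np.logspace(0.0, 2.0, 25)
counts = np.array([(energies > E).sum() for E in thresholds])

# The GR law predicts log N = const - B log E, so B is minus the slope
# of a least-squares fit in log-log coordinates.
mask = counts > 0
slope, _ = np.polyfit(np.log10(thresholds[mask]), np.log10(counts[mask]), 1)
print(f"estimated B-value: {-slope:.2f} (generating value: {B_true})")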

The GR law has been characterized as a particular realization of a more general physical phenomenon known as “self-organized criticality” (SOC).Footnote 6 Broadly speaking, SOC can be characterized as a quasi-equilibrium state into which a certain class of composite systems self-organizes and around which it fluctuates. In this state, interactions between the parts of a system can trigger chain reactions of different sizes, commonly referred to as “avalanches”. A robust feature of this quasi-equilibrium state is that the distribution of avalanches according to their size follows a power-law. This feature is analogous to the power-law behavior of thermodynamical systems at a critical point near phase transitions, hence the name self-organized criticality (Watkins et al., 2016).

The OFC model has been particularly influential in advancing explanations of features of the GR law conceived as a SOC phenomenon. However, to have a fuller appreciation of its contributions, it is crucial to introduce its main influence, viz. the Burridge-Knopoff (BK) model.

2.2 The Burridge-Knopoff “Spring-Block” model of earthquakes

BK’s novel and impactful idea was to use a spring-block system as a surrogate for the study of shear-faulting earthquakes. As BK put it, they imagine the opposite sides of a geological fault as two-dimensional networks of masses interconnected by springs representing the usual elastic elements and coupled by frictional elements (Burridge & Knopoff, 1967, 342). These elastic and frictional elements capture central aspects of influential conceptual models of shear-faulting earthquakes, such as the “elastic rebound” and “stick–slip” models (see discussion in Section 2.1). On the one hand, in line with the elastic rebound model, the sliding events are triggered by an increasing elastic potential embodied in the springs. On the other hand, in line with the stick–slip model, the pulled spring-block system exhibits a jerky sliding motion controlled by frictional forces on the supporting surface.

The BK model is a one-dimensional spring-block system with two distinct implementations. First, there is a “laboratory” implementation: a concrete, material spring-block system built for laboratory experiments. The setting is a linear arrangement of blocks, connected via coil springs, placed on a rough surface, pulled on one end by a motor, and with the other end free (Fig. 1). As the motor pulls the system, the tension on the first spring eventually exceeds the static friction threshold. The first block slides, reducing the tension on the first spring but stretching the second spring, thus increasing its tension. As the pulling continues, eventually all blocks get involved in sliding events – or “shocks” as BK call them – which release elastic potential energy accumulated in the springs.

Fig. 1 Schematic diagram of the BK model in its “laboratory” implementation. Only four of the original eight blocks are depicted. V stands for the velocity of the pulling (adapted from Burridge & Knopoff, 1967)

Second, there is a “numerical” implementation. This implementation amounts to a system of ordinary differential equations which describe the dynamics of each block in a general version of the spring-block system. In this general version, each block is connected to its neighbors via coil springs and to a moving slab via flat springs (Fig. 2). This implementation is called “numerical” because the system of equations is solved numerically via a computer program for discrete time increments.

Fig. 2 Schematic diagram of the BK model in its “numerical” implementation. KL: flat spring elastic coefficient; K1: coil spring elastic coefficient; V: velocity of the pulling slab
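To indicate what such a system of ordinary differential equations can look like, the motion of block j in a uniform one-dimensional chain of this kind can be written schematically as follows. This is a generic sketch using the symbols from the caption above, not BK’s exact 1967 formulation:

$$m\,\ddot{x}_{j}={K}_{1}\left({x}_{j+1}-2{x}_{j}+{x}_{j-1}\right)-{K}_{L}\left({x}_{j}-Vt\right)-F\left(\dot{x}_{j}\right),$$

where \({x}_{j}\) is the displacement of block j, the first term captures the coil springs coupling neighboring blocks, the second term the flat spring connecting the block to the slab moving at velocity V, and \(F\left(\dot{x}_{j}\right)\) is a velocity-dependent friction force. Equations of this form are then integrated numerically in discrete time increments.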

One of the most relevant results of the BK model is that the shocks exhibit a robust power-law distribution in terms of their size. This is consistently observed across different settings in both the laboratory and numerical implementations. Importantly, BK explicitly compare their results to the behavior of real earthquakes and note its resemblance to the GR law (347). Particularly important is that BK use these results to make inferences about real earthquakes, i.e., BK engage in surrogative reasoning. They claim that “[…] if the demonstrations of the laboratory and numerical models are borne out in nature, it would seem likely that the nature of the friction on a fault surface determines the statistical properties of the earthquake shocks that are observed” (370). This is a good example of how surrogative reasoning leads to content for model explanations, as the nature of friction is used to plausibly explain the robustness of the GR law. BK’s results are comparable to those obtained with SOC models, although the SOC program emerged 20 years later. In the next section, I discuss the OFC model, which elaborates on the BK model from within the SOC program.

2.3 The OFC model of earthquakes

The BK model went through several revisions and developments before OFC made their contributions in 1992 (for a review, see Pruessner, 2012, 125–6; Leung et al., 1997, 423). The OFC model was published in 1992 in three papers: Olami et al. (1992), Christensen and Olami (1992a), and Christensen and Olami (1992b). Each one of these papers emphasizes distinct aspects of the OFC model, but they constitute a coherent corpus produced by (roughly) the same research group. The OFC model has become the standard model to discuss earthquakes in the SOC literature (see e.g., Bak, 1996; Leung et al., 1997; Jensen, 1998; Pruessner, 2012). For expository purposes, I describe the OFC model as presented in Olami et al. (1992). Further nuances contributed by Christensen and Olami (1992a, b) are discussed in Section 3.

The OFC model is a cellular automaton that is directly mapped into a two-dimensional version of the BK spring-block model of earthquakes. In this two-dimensional version of the BK model, each block is connected to four orthogonal neighboring blocks via coil springs and the whole array lies upon a rough static plate. Each block is connected via a flat spring to a plate moving at a low constant velocity. The relative motion of the plates forces the flat springs to stretch, which in turn modifies the tensions in the coil springs (Fig. 3). As the moving plate pulls the flat springs, the tensional forces on each block increase. Eventually, the tensional forces acting on a block exceed the static friction threshold, the block slips, and the tension is redistributed among its neighbors.

Fig. 3 The OFC model as a two-dimensional spring-block system. K1, K2 and KL are the elastic coefficients of the springs along the x-axis, y-axis, and z-axis, respectively (adapted from Olami et al., 1992)

The amount of force distributed to the neighbors of a slipping block is a portion of the total force on the slipping block. For simplicity’s sake, I focus on the isotropic case, in which all coil springs have equal elastic coefficients (K1 = K2 = K). In this case, the portion of force distributed to each of the four nearest neighbors of a slipping block (i,j) is:

$$\delta {F}_{i\pm 1,j}=\delta {F}_{i,j\pm 1}=\frac{K}{4K+{K}_{L}}{F}_{i,j}=:\alpha {F}_{i,j},$$

where \(\alpha\) is referred to as the “global elastic parameter”.Footnote 7 Any residual force not distributed to the neighbors is dissipated. As the force acting upon the neighboring blocks increases, it can also reach the threshold for static friction, causing them to slip as well. Thus, this process has the potential to trigger chain reactions of different sizes, commonly referred to as “avalanches” in the SOC literature.

In contrast to the BK model, the OFC model is not implemented as a material spring-block system nor as a physico-mathematical description of such a system in terms of equations of motion. Instead, OFC implement the model as a cellular automaton. Each site in the cellular automaton is a variable \({F}_{i,j}\) that stands for the tensional forces acting upon block (i,j). The mapping of the spring-block system into the cellular automaton is described by the following algorithm:

  • Step 1: Initialize all sites to a random value between 0 and \({F}_{th}\).

  • Step 2: If any \({F}_{i,j}\ge {F}_{th}\), then redistribute the force on \({F}_{i,j}\) to its neighbors according to the rule: \({F}_{nn}\to {F}_{nn}+\alpha {F}_{i,j}\) and \({F}_{i,j}\to 0\), where \({F}_{nn}\) are the forces on the four nearest neighbors. An earthquake is evolving.

  • Step 3: Repeat Step 2 until the earthquake is fully evolved (i.e., all \({F}_{i,j}<{F}_{th}\)). The number of times Step 2 is repeated stands for the size of a model earthquake.

  • Step 4: Search for the site with the highest value \({F}_{max}\), add \({F}_{th}-{F}_{max}\) to all sites (global perturbation), and return to Step 2.
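For concreteness, the following sketch shows one way the algorithm above could be implemented. It is my reconstruction for illustration, not OFC’s original code nor Pruessner’s C implementation; the lattice size, the value of \(\alpha\), and the number of driving steps are hypothetical choices, and the size of a model earthquake is counted as the number of individual topplings, a common convention that closely tracks the repetitions of Step 2.

import numpy as np

def ofc_sweep(F, F_th, alpha):
    """Relax the lattice until all sites are below threshold (Steps 2-3).
    Returns the number of topplings, taken as the size of the model earthquake."""
    L = F.shape[0]
    size = 0
    while True:
        unstable = np.argwhere(F >= F_th)
        if unstable.size == 0:
            return size
        for i, j in unstable:
            f = F[i, j]
            F[i, j] = 0.0
            # Redistribute a fraction alpha of the force to each nearest neighbor;
            # force "leaving" the open boundaries is lost (nonconservative dynamics).
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:
                    F[ni, nj] += alpha * f
            size += 1

def ofc_model(L=32, alpha=0.2, F_th=1.0, n_quakes=2000, seed=0):
    rng = np.random.default_rng(seed)
    F = rng.random((L, L)) * F_th                # Step 1: random initial forces in [0, F_th)
    sizes = []
    for _ in range(n_quakes):
        F += F_th - F.max()                      # Step 4: global perturbation up to threshold
        sizes.append(ofc_sweep(F, F_th, alpha))  # Steps 2-3: let the earthquake evolve
    return sizes

sizes = ofc_model()
print("largest model earthquake (topplings):", max(sizes))

Open boundary conditions are assumed in this sketch: force redistributed “outside” the lattice is simply lost, which adds to the dissipation already built into \(\alpha\) < 0.25.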

As part of its design, the OFC model embodies two features that make it different from most SOC models at the time of its publication. First, it is a continuous cellular automaton. This means that the values attached to the sites are continuous: They can be any real (positive) number. This feature is distinct from most SOC models, which typically take discrete state variables. Second, the OFC model is a nonconservative cellular automaton. This means that not all of the tension accumulated in a slipping block is redistributed among its neighbors; part of the elastic potential energy is dissipated. This is an unusual feature among SOC models, as Christensen and Olami (1992a) submit: “A common feature to most of the [SOC] models was that the local dynamical rules obeyed a conservation law” (ibid: 1829). By definition, the only case in which there is conservation is for \(\alpha\) = 0.25 (each of the four neighbors receives a quarter of the force). But this implies KL = 0 and, to run the simulation, KL must be a positive number. Otherwise, the moving plate does not exert any influence on the blocks. Therefore, the OFC model is nonconservative by design.Footnote 8
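Using the isotropic redistribution rule above, the arithmetic behind this design choice can be made explicit:

$$4\alpha =\frac{4K}{4K+{K}_{L}}, \qquad 1-4\alpha =\frac{{K}_{L}}{4K+{K}_{L}},$$

so the four neighbors jointly receive the entire force (4\(\alpha\) = 1) only if KL = 0, and for any positive KL a fixed fraction KL/(4K + KL) of the force on a slipping block is dissipated at every relaxation.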

OFC claim that their model displays SOC behavior, epitomized by the power-law distribution of avalanches according to their size (1246–7). This behavior is robust in the sense that it remains invariant after various sorts of modifications, e.g., changes to the size of the lattice, the addition of noise, or the consideration of anisotropic conditions.Footnote 9 The power-law behavior also obtains over a wide range of values of α and boundary conditions.Footnote 10 However, the exponents of the power-laws change with different values of α, i.e., they are non-universal. Furthermore, a transition from a power-law to an exponential regime is observed for values of α below 0.05. This suggests that SOC behavior obtains only above a minimal threshold of interaction between components of the system. This concludes the presentation of the case study. In the following section, I proceed to analyze modelling practices embodied in the OFC model, focusing on interpretative tasks.

3 Analysis: interpretative tasks in the OFC model

In this section, I analyze OFC’s modelling practices in terms of interpretative tasks in the OFC model. The purpose of this analysis is to survey the space of interpretative tasks to answer the question “How capacious and complex is interpretation?”. I pursue this question in the context of a single group of scientists bestowing meaning upon a surrogate system. To be clear, this analysis does not intend to provide a normative assessment of the interpretative tasks. I will just take them at face value.Footnote 11 I use the three original papers presenting the OFC model as a unit of analysis and, for simplicity’s sake, I refer to them collectively as the OFC papers. I base most of my analysis on literal reports as expressed in text, graphs, and diagrams.Footnote 12 I also resort to well-informed inferences and commentaries found in the secondary literature. I proceed as follows. First, I give a brief introduction to scientific models and interpretative tasks. I focus on two such tasks, namely conceptualization and denotation. Then, I analyze each task in the context of the OFC model in the following subsections.

3.1 Preliminaries on scientific models and interpretative tasks

In recent decades, the philosophical literature on scientific models has grown large and diverse. Hence, providing a consensus view of scientific models is challenging and often contentious. That said, I adopt a minimal characterization of scientific models for this paper, one that I take to capture broadly accepted and well-established features of scientific models. According to this minimal characterization, a scientific model is an interpreted vehicle used for scientific purposes. This characterization has three main components: i) vehicle; ii) usage for scientific purposes; and iii) interpretation. I proceed to explicate these in turn.

To begin with, a scientific model is founded on a vehicle. In principle, a vehicle can be any object. This includes concrete objects (from laboratory set-ups to living organisms) and abstract ones (from mathematical entities to imagined/fictional settings).Footnote 13 As objects, vehicles instantiate properties which are made intelligible through the lens of a particular conceptualization (cf., “I-instantiated properties” in Frigg & Nguyen, 2016, 228; see “conceptualization” below). For instance, mathematical entities do not instantiate mass or color, but they may instantiate a structure. For this paper, I do not problematize epistemic access to vehicles. That is, I assume that scientists can inspect a vehicle and get to know its instantiated properties.Footnote 14

Scientific models are used for scientific purposes. As Gelfert (2017) submits, models are “functional entities” (8). And they are so in the domain of the sciences. This caveat is introduced to distinguish scientific models from other interpreted vehicles used for non-scientific purposes. This makes the distinction a circumstantial one: There is nothing intrinsic to a vehicle that makes it inherently a scientific model, only its use as such (cf., Callender & Cohen, 2006, 83). Scientific purposes are nevertheless quite diverse. For example, scientific models could be used for more or less accurate descriptions of phenomena, more or less accurate predictions, the generation of more or less sound inferences concerning a phenomenon, and various kinds of explanations, just to name a few. In this paper, I focus on one purpose, namely explanations based on the resources provided by models, i.e., model explanations.

To be used as scientific models, vehicles must undergo an interpretation. That is, vehicles must be bestowed with meaning.Footnote 15 In this paper, interpretation is conceived as the set of tasks that bestow meaning and the ensuing meanings. In other words, the term “interpretation” is used to refer to both the actions and outcomes of meaning-giving practices. My pragmatic approach to surrogative reasoning is implicit here: Meaning is not intrinsic to a vehicle. It is established by the scientists who use the vehicle as a scientific model. In this sense, it is through interpretation that a vehicle acquires a particular potential to be used as a model to advance the attainment of scientific purposes in specific ways. To be sure, this does not mean that vehicles can be subjected to just any interpretation. I consider two limitations to my pragmatic approach. First, vehicles instantiate properties which constrain the legitimate interpretations and uses to which they can be subjected. In other words, vehicles have constraints and affordances (cf., Knuuttila & Voutilainen, 2003, 1487).Footnote 16 Second, acceptable interpretations are typically constrained by intersubjective criteria of the relevant scientific community (e.g., see “explanatory commitments” in fn. 11).

Concerning the meaning of vehicles, I adopt an approach akin to Frigg and Nguyen’s DEKI account of representation (2016, 2020). The DEKI account owes much to Goodman’s theory of symbols (1976; see also Elgin, 1983). Central to this theory are the notions of “representation-of” and “representation-as”. The basic idea is that a vehicle is interpreted (i.e., given meaning) in terms of what it represents and how. Symbolically, if a vehicle X is interpreted as representing a target Y as Z, then both claims “X is a representation of Y” and “X represents Y as Z” (or “X is a Z-representation”) constitute the content of the interpretation of X. There are various components to the DEKI account that fall outside the scope of this paper. My focus here will be on two interpretative tasks that are at the core of the two forms of content characterized above. I refer to these tasks as “denotation” and “conceptualization”.

Beginning with the latter, conceptualization is a process whereby scientists use a concept to represent a vehicle as a model (cf., “conceptual representation” in Faye, 2014, 61–2). Frigg and Nguyen do not use the term “conceptualization”, but they do discuss how a vehicle is interpreted in terms of a certain “Z”, where Z is a concept that may refer to pretty much anything, real or fictional, concrete or abstract, from a “unicorn” to a “protein”. Frigg and Nguyen move on to discuss how a vehicle is interpreted in terms of Z via a bijective function that maps the properties of the vehicle to the properties of Z. To keep the scope of my analysis restricted, I will not inspect how the properties of the vehicle map to Z-properties in the case study. I will only point to the particular Zs that are being chosen as the content of the conceptualization. I submit that there might be several ways to conceptualize the vehicle as a model. That is, a vehicle could be conceptualized as Z1, Z2, Z3, and so on. Different conceptualizations afford different ways of assimilating and construing the vehicle as a model. In this sense, conceptualization, as an interpretative task, is fundamental, as it enables scientists to inspect the vehicle intelligibly and manipulate it purposefully.Footnote 17

Second, denotation amounts to deciding the object of which the vehicle is a representation. As Frigg and Nguyen (2016) claim, denotation is the core of representation: It establishes representation-of (228; see also Goodman, 1976 and Elgin, 1983). Following Goodman (1976), denotation can be characterized as the relation between a label and that which is labelled by it. To be sure, labels are not limited to linguistic entities; scientific models, in particular, can play the role of labels. In the context of surrogative reasoning, the denotatum (that which is labelled) is typically referred to as the target. In this paper, the relevant denotata are target phenomena which play the role of explananda in model explanations.

I follow Bokulich (2018) in claiming that target phenomena must undergo conceptualizations of their own to be used as explananda (not to be confounded with the conceptualization of vehicles). As she submits, scientists do not explain phenomena-in-the-world but rather phenomena-as-represented.Footnote 18 With this, Bokulich does not merely mean the specific representational choices embodied in an explanatory model. She rather means the more basic and important choices in terms of a particular conceptualization of a phenomenon as explanandum, contextualized within a given research program or explanatory project. As she claims: “[R]epresentations are not just involved at the level of the explanans, nor just at the level of the ‘explanatory text’ (e.g., the particular diagram or equation); rather representations play a prior and more fundamental role in our very conceptualization of the explanandum phenomenon itself” (794; my emphasis).Footnote 19 As a corollary, a phenomenon may be conceptualized in multiple ways, thus providing various sorts of contents that may be used for target explananda.Footnote 20

These two interpretative tasks – conceptualization and denotation – are not exhaustive of the various tasks involved in interpreting models for model explanations. For example, I have omitted the “keys” and “imputation” in the DEKI account, or the more ubiquitous “mappings” between vehicle and target discussed in the literature.Footnote 21 I focus on these two tasks because they bestow basic meanings upon a model in terms of what it is and what it represents. In conceptualization, a vehicle becomes represented-by a concept. In denotation, a vehicle becomes a representation-of a target. In more technical terms, in conceptualization, a vehicle is the representandum of the representans concept. In contrast, in denotation, a vehicle becomes a representans for a representandum target.

These meanings provide content that can be used in crafting model explanations. In particular, the conceptualization of a model’s vehicle provides content that can be used as part of the explanans in a model explanation. And denotation provides content that can be used for the target explanandum. As an illustration, if a model’s vehicle is conceptualized as a physical mechanism, scientists will have mechanistic content to craft an explanans for a prospective model explanation. That is, scientists will have content about the model in terms of entities, engaging in physical interactions, organized in a certain way to bring about a phenomenon, which could be used for crafting mechanistic model explanations. In the following sections, I proceed to analyze conceptualization and denotation in the OFC model to demarcate the sorts of contents that will play a role in OFC’s explanatory enterprise.

3.2 Conceptualization of the OFC vehicle

I use the term “OFC vehicle” to refer to the object investigated and manipulated by OFC, regardless of its conceptualization. I argue that the conceptualization of the OFC vehicle is threefold. First, the OFC vehicle is conceptualized as a mathematical entity, namely a cellular automaton. Second, the OFC vehicle is conceptualized as an imagined two-dimensional spring-block system. Third, the OFC vehicle is conceptualized as a computer simulation. In different passages of their papers, OFC’s attitude towards the OFC vehicle emphasizes one or another conceptualization. Furthermore, there are mappings between these conceptualizations, either explicitly acknowledged by OFC or implicitly assumed in their research. This creates a network of meanings in which these three construals, although conceptually distinct, are interconnected in the explanatory practice. I proceed to present and discuss each conceptualization.

To begin with, the OFC vehicle is consistently conceptualized as a mathematical entity, namely a cellular automaton, qualified as “nonconservative” and “continuous”. This conceptualization is regularly expressed throughout the OFC papers. Most notably, the titles of two of these papers construe the OFC vehicle as a cellular automaton: i) “Self-Organized Criticality in a Continuous, Nonconservative Cellular Automaton Modeling Earthquakes” (Olami et al., 1992); and ii) “Scaling, phase transitions, and nonuniversality in a self-organized critical cellular-automaton model” (Christensen & Olami, 1992a). In addition, several passages in the OFC papers express the conceptualization of the OFC vehicle as a cellular automaton. For example: “We introduce a generalized, continuous, nonconservative cellular automaton model that displays SOC” (Olami et al., 1992, 1244).

The conceptualization of the OFC vehicle as a cellular automaton is not exempt from criticism. Strictly speaking, cellular automata are discrete space–time lattices, with discrete state variables (see e.g., Berto & Tagliabue, 2023). As a lattice, the OFC vehicle is discrete in space, but it lacks uniform discrete time steps, and its state variables are continuous. Thus, according to the conventional employment of the term, the OFC vehicle is not a cellular automaton. Pruessner (2012) goes even further and asserts that OFC’s construal of the OFC vehicle as a “continuous cellular automaton” is an oxymoron (127n9). Instead, the term “coupled map lattice” has been proposed as more adequate to refer to the OFC vehicle (Grassberger, 1994, 2436; Pruessner, 2012, 127).

Despite the merits of this criticism, I consider this to be a terminological dispute rather than a substantial conceptual disagreement. In fact, OFC’s explication of their conceptualization reveals relevant overlaps with the concept of “coupled map lattice”. In particular, OFC’s concept of cellular automaton admits continuous state variables. They declare: “We use the term continuous cellular automaton to denote a system [i.e., a lattice] with continuous state variables. The concept of (discrete) cellular automaton is reserved for a system with discrete state variables” (1247; emphasis in original). Thus, I submit that the conceptualization of the OFC vehicle as a continuous and nonconservative cellular automaton is coherent, although unconventionally labelled.Footnote 22

Although prominent, the conceptualization of the OFC vehicle as a cellular automaton does not exhaust OFC’s attitude towards it. The OFC vehicle is also treated as an imagined two-dimensional spring-block system. There are several cues in OFC’s papers that support this reading. For example, this conceptualization is expressed in the title of Christensen and Olami’s paper (1992b) “Variation of the Gutenberg-Richter b Values and Nontrivial Temporal Correlations in a Spring-Block Model for Earthquakes”. In the same paper, in a section called “The Model”, they describe their model as follows: “We consider a two-dimensional version of [BK’s spring-block] model where the fault is represented by a two-dimensional network of blocks interconnected by springs”. In addition, OFC insert sketches of the two-dimensional spring-block system in their papers and describe it as if this were the vehicle with which they were working.

These two conceptualizations are closely related: The cellular automaton maps directly into the imagined two-dimensional spring-block system. This is explicitly acknowledged by OFC: “The [OFC] model is equivalent to a quasistatic two-dimensional version of the Burridge-Knopoff spring-block model of earthquakes” (Olami et al., 1992, 1244) and “[The OFC model] is directly mapped into a two-dimensional version of the famous Burridge-Knopoff spring-block model for earthquakes” (ibid).Footnote 23 I suggest that this mapping enables OFC to swiftly switch from one conceptualization to the other, using them interchangeably. At some points, OFC even combine these conceptualizations, leading to hybrid content. For example, OFC describe the algorithm for the cellular automaton – a mathematical procedure – using terms proper to a physical conceptualization. The first step of the algorithm says, “Initialize all sites to a random value between 0 and Fth”, prescinding from any physical terminology. But the second step of the algorithm says: “If any \({F}_{i,j}\ge {F}_{th}\) then redistribute the force on \({F}_{i,j}\) to its neighbors […]”, thus conceptualizing operations in the cellular automaton as physical interactions in which force is redistributed.

This attitude is reminiscent of the “back and forth” between physical and mathematical approaches in modelling reported by Pincock (2005) and Gelfert (2011). Gelfert (2011) argues that one approach to the derivation of mathematical models is to take it “as a primarily mathematical exercise, which need not lend itself to a physical interpretation at every step”. Quoting Pincock (2005, 70), Gelfert adds that in deriving mathematical models, we typically move back and forth between physical and mathematical attitudes. The physical attitude insists that throughout we are talking about physical systems and magnitudes, while the mathematical attitude views derivation steps as involving only mathematical objects.Footnote 24 I suggest that passages in the OFC papers like the one cited above convey this back-and-forth attitude and reflect a mapping between conceptualizations of the OFC vehicle.Footnote 25 These mappings enable the transference of reasoning from one conceptual domain to the other.

The previous two conceptualizations also share an important property: They construe the OFC vehicle as an abstract entity, whether as a piece of mathematics or an imagined physical object. This circumstance can be controversial. There has been some debate on whether abstract entities can be vehicles of models. In particular, this is problematic for ensuring the manipulability of models. As Knuuttila (2011) suggests, the manipulability of models relies on their concrete material dimension or “representational medium” (269). If Knuuttila is right, then the conceptualization of the OFC vehicle should involve a concrete material dimension, given its obvious manipulability.

The most fitting “material” conceptualization of the OFC vehicle is as a computer simulation: The OFC model is manipulated as a computer program. OFC are not as explicit about this conceptualization as they are about the previous two. But it is implicit whenever they talk about “simulation results” (see e.g., Christensen & Olami, 1992a, 1832–5), or when Christensen and Olami (1992a) express their gratitude to Jens Feder for allowing them to use one of his computers “rather intensively” (1837). The specifics of the employed computer simulation (e.g., code) are left tacit in the original papers, even though the basic algorithm behind it is explicitly described and justified. Pruessner (2012) goes into great detail explicating how the basic algorithm behind the OFC model could be coded as a computer simulation in C (357–90). He also discusses existing variants concerning implemented numerical methods (140).

The coded computer program is designed to map directly into the algorithm conducted with the cellular automaton, as described in Section 2.3. This means that there is a one-to-one function between the elements of the mathematical algorithm and the code of the computer program. This mapping enables the transference of reasoning from one conceptual domain to the other. More explicitly, OFC conduct mathematical reasoning as they develop an algorithm for the evolution of the cellular automaton. And this mathematical reasoning is transferred into the code of a computer program, which captures the elements of the mathematical algorithm in a one-to-one function.Footnote 26

As a synthesis, I suggest that the OFC vehicle can be characterized as an imagined spring-block system, implemented via a mathematical calculus – the cellular automaton – in a computer simulation. This synthesis contains three distinct conceptualizations of the OFC vehicle which map into each other. The mapping between the cellular automaton and the imagined spring-block system is explicitly acknowledged by OFC. The mapping between the algorithm conducted in the cellular automaton and the computer program is implicit in the design of the program. And the mapping between the imagined spring-block system and the computer program is implicit by transitivity through the cellular automaton. These mappings enable OFC to swiftly switch from one conceptualization to the other, at different passages of their papers.

For this reason, I suggest that the conceptualization of the OFC vehicle is best characterized as a network of interconnected meanings. These meanings are interconnected based on one-to-one mappings between their conceptual domains. Emphasis on each of these conceptualizations switches throughout the OFC papers according to distinctive local interests. More explicitly, the conceptualization of the OFC vehicle as an imagined spring-block system enables reasoning in terms of an imagined physical system with physical interactions. The conceptualization of the OFC vehicle as a cellular automaton enables mathematical derivations. And the conceptualization of the OFC vehicle as a computer program enables its purposeful manipulation on a computer. Thus, these three conceptualizations account for the various ways in which the OFC vehicle is conceived and used.

3.3 Denotation (or targets) in the OFC model

The OFC model performs a patent representational function, although a multifaceted one. I argue that the OFC model denotes three distinctively conceptualized targets. The most prominent target system is seismic faults, with the GR law of earthquake occurrence as the corresponding target phenomenon. However, I suggest that two additional target systems should be considered to fully account for the denotational function of the OFC model, namely BK’s spring-block system and nonconservative SOC systems in general. Both targets also embody a particular realization of robust and non-universal power-law distributions. As in my analysis of conceptualizations of the OFC vehicle, I suggest that the denotative function of the OFC model is best conceived as a network of meanings which includes these three targets. Furthermore, the emphasis on each of these targets shifts at different moments in OFC’s papers.

To begin with, there is an explicitly stated intention of modelling earthquakes in seismic faults, expressed throughout OFC’s papers. For example, two titles explicitly refer to the OFC model as a model of earthquakes: “Self-Organized Criticality in a Continuous, Nonconservative Cellular Automaton Modeling Earthquakes” and “Variation of the Gutenberg-Richter b Values and Nontrivial Temporal Correlations in a Spring-Block Model for Earthquakes”. Christensen and Olami (1992b) explicitly state that the OFC model is intended to represent a geological fault (8730). In addition, Olami et al. (1992) claim that their model “predicts” the GR law of earthquakes and “explains” its variance in b values. This evinces that the OFC model is used as a surrogate system for reasoning about shear-faulting earthquakes.

Beyond earthquakes in seismic faults, I suggest that two additional targets complement the denotative function of the OFC model. One of them is BK’s spring-block model. As OFC claim, their model is “directly mapped into” a two-dimensional version of the BK spring-block model (1244). Above, I discussed this two-dimensional version of the BK spring-block model as an imagined object which serves as an abstract conceptualization of the OFC vehicle. But this imagined, two-dimensional spring-block system denotes BK’s actual, one-dimensional spring-block system. In this sense, the OFC model can be described as a model of a model. It may be worth clarifying that the BK model is not necessarily a target of the OFC model. That is, other scientists could use the OFC model to reason about seismic faults without considering the BK model as a target. However, given that this is a case study, my interest is in OFC’s practices as reported in their papers. I submit that there is ample evidence in the OFC papers to support the claim that the OFC model represents the BK model (among other targets). For example, after describing BK’s spring-block model, Olami et al. (1992) claim that they intend to map it into a cellular automaton (1244).

As targets, seismic faults and the BK spring-block system are intimately related: The BK spring-block system was originally used to represent seismic faults. In this sense, by representing the BK model, the OFC model represents seismic faults due to an intended transitivity of representation. Here, the important qualification is “intended”: This transitivity is by no means necessary. OFC intend to study earthquakes by revising and developing an existing model of earthquakes, viz. the BK model. They explicitly express this intention: “[s]ome insight into the complicated dynamics of earthquakes may be derived from simplistic models that contain the essential features of earthquakes. Such a simple model, a spring-block model, was proposed by Burridge and Knopoff (1967)” (Christensen & Olami, 1992b, 8729).Footnote 27

A third target in the OFC model is SOC behavior in nonconservative systems. OFC introduce SOC as a theoretical framework to representationally interpret the OFC model beyond earthquakes. This broader representational scope of the OFC model is expressed in passages such as: “Though the motivation for the [OFC] model is derived from the Burridge-Knopoff spring-block model, it can be regarded as a generic representation of a nonconservative system” (Christensen & Olami, 1992a, 1830; my emphasis). Or “Our model can be considered as a general nonconservative cellular automaton; hence our results have general implications.” (Christensen & Olami, 1992a, 1830; my emphasis). Furthermore, OFC’s interest in earthquakes seems to be partly mediated by their interest in SOC. They recognize that “[t]he dynamics of earthquake faults may provide a physical realization of the recently proposed idea of [SOC]” (Olami et al., 1992, 1244). Or even more strongly, they claim that “[e]arthquakes are probably the most relevant paradigm of self-organized criticality” (Olami et al., 1992, 1244). In this sense, the OFC model transcends the study of earthquakes: It is used more generally to learn about SOC behavior in nonconservative systems.

Each one of these three targets exhibits a particular realization of power-law distributions. In the case of seismic faults, the relevant power-law distribution is that of the size of earthquakes, expressed in the GR law. In the case of BK’s spring-block system, the relevant power-law distribution is that of the size of “shocks” obtained in simulations. And in the case of SOC systems, the relevant power-law distribution is that of the size of avalanches resulting from the interactions of the various components in nonconservative systems. While mathematically analogous, these power-law distributions are conceptually distinct, given the different nature of the events that follow the power-law behavior in each target system, viz., earthquakes, spring-block shocks, and general avalanches.

In sum, the OFC model has three distinct target systems, namely seismic faults, BK’s spring-block system, and nonconservative SOC systems in general. In each target system, the target phenomenon is a particular realization of a power-law distribution of event sizes, in the form of earthquakes, shocks, and avalanches, respectively. Although distinct in principle, these targets map into each other in terms of intended transitivity: BK’s spring-block system is originally used to study earthquakes, and earthquakes are taken as a paradigm of SOC in nonconservative systems. As in my analysis of conceptualizations, emphasis on one or another target switches throughout OFC’s papers according to local interests which are part of a broader explanatory project. In the following section, I proceed to discuss how these interpretations bear upon ensuing model explanations.

4 Discussion: interpretations and model explanations in the OFC case study

In the previous section, I analyzed interpretative tasks that bestow meaning upon the OFC model in terms of what it is and what it represents. More precisely, I distinguished three conceptualizations of the OFC vehicle and three intended targets of the OFC model. In addition, I indicated that the three conceptualizations of the OFC vehicle map into each other. And the three targets also map into each other due to an intended representational transitivity. These mappings enable OFC to swiftly switch between conceptualizations and targets as they engage in surrogative reasoning with their model. To put it succinctly, interpretation in the OFC model is capacious and complex. It is capacious in the sense that it comprehends three conceptualizations of the OFC vehicle and three targets of the OFC model. And it is complex in the sense that these interpretations relate to each other through mappings, thus constituting a network of contents.

In this section, I discuss how these various interpretations shape OFC’s explanatory project and afford content for ensuing model explanations. The basic picture is the following. The three intended targets instantiate phenomena that are distinctively conceptualized as explananda. These explananda are the robust and non-universal power-laws of shear-faulting earthquakes, avalanches in nonconservative systems, and shocks in BK’s spring-block system. OFC have heterogeneous content at their disposal to use as explanantia in model explanations, derived from the three conceptualizations of the OFC vehicle as an imagined two-dimensional spring-block system, a cellular automaton, and a computer simulation. Given how capacious and complex interpretations in the OFC model are, I suggest that the ensuing model explanations are best described as having layers of content. I proceed to discuss this proposal.

4.1 Layered model explanations

Traditionally, the formal structure of an explanation comprehends an explanandum (that which is explained) and an explanans (that which does the explaining) (see “basic pattern” in Section 1). Model explanations are no exception: They have a target explanandum which is explained with resources afforded by a model. As Bokulich (2011) says: “First, and perhaps most importantly, what makes something a model explanation is that the explanans in question makes essential reference to a scientific model” (38).Footnote 28 However, the OFC case prompts some reflection upon further structural features of model explanations.

I suggest that model explanations in the OFC case are best described as emphasizing components within a network of contents. This differential emphasis leads to model explanations with “layers” of content, both in the explanans and the explanandum. I use the term “layers” metaphorically to convey the idea that contents are differentially attended to at any particular moment. One or another layer may play a more prominent explanatory role locally, depending on the epistemic interests that are being attended to at a particular moment of the explanatory practice.Footnote 29 The notion of “local attention” to specific meanings relates to the “overall meaning” as part to whole. As explicated in the introduction, the overall meaning of the OFC model is the set of meanings ascribed to it plus their interconnections. From this overall meaning, there are specific contents that are locally attended to. This means that, at different passages throughout the OFC papers, one or another interpretation is being attended to, discussed, used in specific claims, and eventually exploited for explanatory purposes. (These shifts in local attention are thoroughly illustrated in Sections 3.2 and 3.3). In other words, while the overall meaning is comprehensive, local attention selectively uses partial meanings from the overall meaning.

This proposal is schematically displayed in Fig. 4. The OFC vehicle has three interrelated conceptualizations (C1, C2, and C3). And the OFC model denotes three interrelated targets (T1, T2, and T3). A model explanation locally emphasizes the content derived from one conceptualization of the vehicle (as a resource for the explanans) and the content of one target (as a resource for the explanandum). Still, locally emphasized contents are mapped into other contents derived from other interpretations, momentarily left in the background of what could be called the explananda and explanantia spaces. This gives a “layered” structure to the model explanation.

Fig. 4 Schematic proposal of layered model explanations, based on the OFC case study

I now illustrate layers of content in model explanations from the OFC case study, beginning with the explanans. Consider OFC’s model explanation of the non-universality of the GR law: “[T]he dependence of the power laws on the conservation allows us to explain the wide variances in the Gutenberg-Richter law as a result of the variances of the elastic parameters” (Olami et al., 1992, 1244). In this case, the dependence between power-laws and conservation in the OFC model is used as the explanans. However, this dependence is interpreted and reported in three different ways. First, there is a mathematical dependence between the power-law distribution of iterations in the cellular automaton algorithm and a numerical variable, typified by α. Second, there is a physical dependence between the power-law distribution of shocks in an imagined spring-block system and the ratios of the elastic coefficients of the springs. Third, there is a computational dependence between the power-law distribution of simulation results and the simulation inputs. These are three distinctively conceptualized dependences of power-laws in the OFC model. Accordingly, I propose that this explanans has these three layers of content.
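To make these three readings of one and the same dependence concrete, the following is a minimal, hypothetical sketch of an OFC-style cellular automaton in Python. It is not OFC’s original code; the function name, default parameters, and the simple sequential-sweep update are illustrative assumptions. Read computationally, the output statistics depend on the input alpha; read mathematically, the algorithm’s iteration counts depend on the numerical variable α; read physically, alpha stands in for the ratios of the elastic coefficients of the imagined springs.

```python
import numpy as np

def ofc_avalanche_sizes(L=32, alpha=0.20, f_th=1.0, n_avalanches=5000, seed=0):
    """Illustrative OFC-style automaton: avalanche sizes for a given conservation level alpha.

    alpha = 0.25 is the conservative limit in two dimensions; for alpha < 0.25 a
    fraction of the force removed at each toppling is dissipated.
    """
    rng = np.random.default_rng(seed)
    F = rng.uniform(0.0, f_th, size=(L, L))          # random initial forces on the lattice
    sizes = []
    for _ in range(n_avalanches):
        imax = np.unravel_index(np.argmax(F), F.shape)
        F += f_th - F[imax]                          # uniform drive until the maximal site hits threshold
        F[imax] = f_th                               # guard against floating-point rounding
        size = 0
        unstable = np.argwhere(F >= f_th)
        while unstable.size > 0:                     # relax until no site is at or above threshold
            for i, j in unstable:
                f, F[i, j] = F[i, j], 0.0            # the toppling site resets to zero
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= ni < L and 0 <= nj < L:  # open boundaries: force leaks at the edges
                        F[ni, nj] += alpha * f       # each neighbour receives a fraction alpha of f
                size += 1
            unstable = np.argwhere(F >= f_th)
        sizes.append(size)
    return np.array(sizes)
```

Re-running such a sketch for different values of alpha (e.g., 0.10, 0.20, 0.25) and comparing the slopes of the resulting avalanche-size distributions on a log-log plot is one way to visualize the kind of dependence that OFC report.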

An objection may arise: The literal explanans is stated as the dependence between power-laws and conservation, where the latter is a physical notion. There is no explicit mention of computer interventions or mathematical relations in this explanans. My response to this objection is that the term “conservation” reflects OFC’s local interest in the physical conceptualization of the OFC vehicle as an imagined spring-block system. Local attention to this physical conceptualization is arguably motivated by the fact that the local explanandum is also conceptualized as a physical phenomenon, namely the GR law of earthquakes. However, if we inspect OFC’s modelling practice, we notice that the physical dependence of power-laws on conservation is not physically investigated. That is, OFC do not explore the physical dependence of power-laws on conservation through experiments with an actual spring-block system or by solving physical equations that stand for the dynamics of the spring-block system. Instead, OFC rely on abstract mathematical modelling and computer simulations to directly investigate and explain the non-universality of power-laws. It is via mappings between these conceptualizations and the physical one that an explanans in terms of physical conservation is attained. For this reason, I suggest that this explanans has three layers of content, even though the physical one is explicitly emphasized due to local interests.

Some philosophers might still be unpersuaded: The illustrated explanans appeals solely to physical content, and the fact that other contents enable OFC to attain this explanans is inconsequential for describing its structure (it belongs to the context of discovery). My response to this objection is that the notion of layered model explanations is introduced as a descriptive tool to capture empirical aspects of model-based explanatory practices. Indeed, if taken by itself as an explanatory text, the explanans of the example appeals solely to physical content. The notion of layered model explanations accommodates this observation by submitting that this is the “local” explanans. However, it adds descriptive depth by establishing the interrelations of this local explanans with other contents that are also explanatory within the same explanatory project.

Layers of content in model explanations may also occur in their explananda, and the OFC case illustrates this as well. As reported in Section 3.3, OFC express an intended representational transitivity between three targets: The OFC model represents BK’s spring-block system; BK’s spring-block system represents seismic faults; and seismic faults are a case of nonconservative SOC systems, thus representing them by exemplification [fn. 30]. Because of this, a model explanation may locally focus on one or another target and construe it as an explanandum. However, the other targets constitute layers of denotata that are intentionally connected to the local explanandum. For example, consider once again the model explanation discussed above. The explicit target explanandum is the non-universality of the GR law, which is locally attended to. However, the dependence of power-laws on conservation also explains the non-universality of power-law distributions of shocks in BK’s spring-block system and of avalanches in nonconservative SOC systems more generally. Christensen and Olami (1992a) express this when they state that “We have proved the nonuniversality of the self-organization process for our model” (1837). And later in the same paragraph, they claim that “our basic conclusions seem to also be relevant to other self-organizing systems” (ibid.).

4.2 Potential objections and responses

I foresee some objections to my account of the OFC case and the notion of layered model explanations, and I proceed to address them. First, my description of OFC’s explanatory enterprise as having three distinct explananda might be objected to. Objectors might submit that there is only one explanandum, namely robust and non-universal power-law behavior. The fact that this general explanandum is differently instantiated in three distinct targets is deemed irrelevant: If you explain something for the genus, you collaterally explain it for any of its species. My response to this objection is that it does not reflect OFC’s explanatory practice and hence is not empirically well informed. As shown in Section 3.3, OFC interpret their model as denoting three specific and distinct targets. Based on these targets, the relevant explananda are distinctively conceptualized as robust and non-universal power-laws of shear-faulting earthquakes (i.e., the GR law), avalanches in nonconservative systems (i.e., SOC), and shocks in BK’s spring-block system. Analytical philosophers may prefer to “reconstruct” the OFC case in a way that generalizes these specific explananda. But I submit that this cannot be done without a descriptive loss regarding OFC’s actual explanatory practice.

Second, philosophers of science who accept the existence of three distinct explananda in the OFC case might still resist my proposal that they belong to layered model explanations. These objectors might claim that each explanandum is part of a distinct model explanation and that there is no need to subsume them into layered model explanations. I am not oblivious to the appeal of this objection: Traditionally, an explanation is conceived of as having one explanandum and one explanans. Still, I suggest that the notion of layered model explanations accommodates this concern while better accounting for empirically relevant aspects of OFC’s explanatory practice. On the one hand, the notion of layered model explanations accommodates the existence of three distinct target explananda, each of which may be locally attended to by OFC. In this sense, each explanandum is indeed part of a distinct model explanation, if we distinguish model explanations by what is being “locally” attended to. On the other hand, the notion of layered model explanations better accounts for the interconnections among the various explananda. The point is that, by focusing on one explanandum locally, the others are also being addressed due to the intended representational transitivity of the targets (see Christensen and Olami’s statement at the end of Section 4.1).

Third, philosophers of science might take a similar stance regarding the explanantia: Distinct explanatory strategies are part of distinct model explanations, and there is no need to conflate them into layered model explanations. I suggest that this objection overlooks the reported interconnections among conceptualizations of the OFC vehicle, which afford the content for explanantia. As reported in Section 3.2, conceptualizations of the OFC vehicle map into each other, enabling OFC to switch swiftly among them and, in some passages, even use hybrid content. And, as illustrated in Section 4.1, even if one content is locally attended to, its explanatory force may rely on its mappings to other explanatory contents. To be clear, the notion of layered model explanations is not intended to blur the conceptual distinctions among contents (e.g., physical, mathematical, and computational). It only aims to acknowledge the importance of their interconnections in affording explanantia.

Fourth, philosophers of science who agree on the existence of layers of explanantia may still insist on discussing which layer is, in fact, doing the explanatory work. This could be characterized as a monist or reductionist approach, according to which the various layers of explanantia can be reduced to one. I suggest that this approach fails to acknowledge that interpretations in models used for explanatory purposes can be rather capacious and complex. As I have shown above, there are three kinds of content (physical, mathematical, and computational) derived from three distinct conceptualizations of the OFC vehicle, which map into each other and are alternately attended to. A description based on only one of them would not reflect the explanatory practice. OFC hold these interpretations in a network of meanings and swing back and forth among them. To put it metaphorically, the explanatory practice cannot be accurately described in terms of one of the extreme positions of this swinging attitude. Rather, the explanatory practice unfolds in the swinging itself, taking contents from different interpretations and only locally emphasizing one or the other.

5 Conclusions

The OFC case shows that interpretation in models can be capacious and complex, even in the context of a single explanatory project led by the same group of researchers. The case points towards a manifold picture of interpretation in models, according to which scientific models are construed as networks of interconnected meanings. In the OFC model, these meanings are interconnected through mappings among conceptualizations of the OFC vehicle and through the intended representational transitivity of its targets. These multiple and interrelated interpretations afford content that enters into layered model explanations. Locally, model explanations may emphasize a distinctive content as explanandum and explanans, depending on the epistemic interests being attended to. But this local content maps into other contents that are also used within the same explanatory project, and these constitute further layers in model explanations. The pattern of “layered model explanations” departs from the “basic pattern” proposed by Hempel and Oppenheim, in which explanations have one explanandum and one explanans. I submit that the “layered” pattern affords a more accurate description of the OFC case than the basic pattern, and I conjecture that it may also afford more accurate descriptions of explanatory practices elsewhere.

In closing, it is worth noting that my analysis and discussion of the OFC case seem to push the boundaries of what is meant by explanatory pluralism. Traditional approaches to explanatory pluralism posit that different accounts of explanation are required for describing the explanatory practices of scientists across disciplines and projects (see, e.g., Lipton, 2008; Weber et al., 2013; Mantzavinos, 2016). However, the OFC case shows that explanatory pluralism may also be the right stance for describing a single explanatory enterprise by the same research group, in which various interpretations are intricately intertwined in the overall meaning of a model. I conjecture that capacious and complex interpretations in models used to explain phenomena, together with the layered view of model explanations, are fairly common. Still, the scope of this thesis needs to be tested in other cases, which constitutes a promising avenue for further research.