1 Introduction

Scientific models often contain assumptions that simplify, abstract, or idealize from reality. One issue that arises from this state of affairs concerns the epistemic status of these models. For if models misrepresent and contain assumptions that falsely describe reality, how can we use them to learn about target systems? In particular, models allow for surrogative reasoning (Swoyer 1991). That is, models enable or license their users to draw inferences about their targets. These inferences may serve various purposes and can be about, e.g., the sort of entities that exist in the targets, their properties, causal relations, or the future value of variables.Footnote 1 For instance, seeing that \(X\) causes \(Y\) in the model may enable the inference that \(X\) causes \(Y\) in the target. In other words, modellers build and study models in order to draw conclusions about the things they are surrogates of. Accounts of scientific representation face the challenge of spelling out the conditions under which models may allow surrogative reasoning (Frigg and Nguyen 2020, p. 3 ff.). Why is it that studying a representation helps us to draw conclusions about its target?

One purpose models are used for is to explain phenomena. How idealized models can license our explanatory inferences appears to pose a special challenge insofar as our theories of explanation typically require that explanations be factive (e.g. Craver 2006; Hempel 1965; Strevens 2008; Woodward 2003b). How could models that seemingly misrepresent ever warrant conclusions about true explanations? One influential answer, representationalism, holds that it is because models faithfully represent their targets that they can serve their various purposes (e.g. Bailer-Jones 2009; Giere 2010; Mäki 2011; Morgan 1999; Weisberg 2013). If models represent some aspects of the world, then scientists can use them to explain phenomena of interest. But what relationship must there be between a model and its target for the former to accurately represent the latter? Various accounts of the representation relation have been proposed. So-called structuralists have suggested that accurate representation is based on shared structural features like isomorphism between models and their targets (e.g. French and Ladyman 1999; Da Costa and French 2003; Van Fraassen 1980; Suppe 1977; Suppes 2002). On this view, models represent because they preserve the structure of their targets. Others have instead proposed that the similarity between models and their targets grounds the representation relation (e.g. Giere 1988, 2004; Poznic 2016; Teller 2001; Weisberg 2012, 2013).

The inferentialist account of representation (Suárez 2004, 2015; Suárez and Solé 2006) grew out of dissatisfaction with the structuralist and similarity conceptions (Suárez 2003). Instead of explaining the capacity of models to allow surrogative reasoning in terms of their representing, inferentialism explains the representation relation in terms of models allowing users to draw inferences (Frigg and Nguyen 2017). Thus, the fundamental feature becomes surrogative reasoning itself in place of the representation relation. However, Suárez’s inferentialism is silent on whether models may accurately represent and satisfy the factivity of explanation.

I argue that what I call factive inferentialism (Kuorikoski and Ylikoski 2015; see also Kuorikoski and Lehtinen 2009; Ylikoski 2009; Ylikoski and Aydinonat 2014; Ylikoski and Kuorikoski 2010) does not provide a satisfactory solution to the puzzle of factive model-based explanation. Section 2 introduces factive inferentialism and the claim that it is compatible with factivity and realism. However, I show in Sects. 3 and 4 that correctly answering ‘what-if-things-had-been-different’ questions is plausibly not a sufficient guide for accurate representation, factivity, or realism. Then, in Sect. 5 I argue that a more complete answer would need to specify explicitly how we should interpret model-world mismatches and, more importantly, what sort of properties we should impute to the world. Section 6 concludes.

2 Representation, inferentialism, and explanation

Critics of representationalism point out that it has proven hard to specify notions of similarity and structure that would solve what Frigg and Nguyen (2017, 2020) call the ER-Problem.Footnote 2 The ER-Problem is that of identifying the conditions that make model M an epistemic representation of T. For instance, one way of characterizing the similarity account’s answer to the ER-Problem is that a “scientific model M represents a target T iff M and T are similar” (Frigg and Nguyen 2017, p. 58). However, an adequate account of representation should also allow for the possibility of misrepresentation. Whether something is a representation is a separate issue from whether it is an accurate one. Thus, even if a representation fails to be similar to its target, we still want to say that it is a representation.

One line of defence is to rely on pragmatic factors such as the users’ intentions to specify that a given model is meant to be a representation. On this view, even if a model fails to be similar to its target, it still counts as a representation provided it was intended to be similar. However, introducing pragmatic factors, Frigg and Nguyen (2017) argue, shifts the role of similarity or structure. It is not clear what work similarity or structure do in solving the ER-Problem if intentions are the key factor. Inferentialism aims to solve this problem by reversing the relationship between representation and surrogative reasoning. Instead of viewing the inferential affordances of models as depending on their representing, inferentialism characterizes the representation relation in terms of these inferences (Frigg and Nguyen 2017). Surrogative reasoning itself becomes the fundamental relation instead of representation.

One influential strand of inferentialism is Suárez’s (2004, see also 2015; Suárez and Solé 2006). He argues that we should “adopt from the start a deflationary or minimalist attitude and strategy towards the concept of scientific representation” (2004, p. 770). He denies, among other things, that similarity or isomorphism can be necessary or sufficient conditions for representation (Suárez 2003). Nor does Suárez identify a substantive property relating a model and its target, not even the capacity to draw true conclusions (e.g., about the true value of a variable). According to him, one condition for a representation source system A to represent a target system B is that “A allows competent and informed agents to draw specific inferences regarding B” (2004, p. 773).Footnote 3 The normative standards of inferential correctness “are inferential merely, and do not depend on the truth or otherwise of premises or conclusions” (Suárez 2004, fn. 8, see also 2010, fn. 25). Making correct inferences is only a matter of inferential validity, as in deductive validity, not of soundness. Inferential validity, in turn, is determined by the inferential rules that a given epistemic community establishes. Suárez’s deflationism is pragmatic in that it puts practice at centre stage in the representation relation. For him, it is impossible for a notion of representation in an area of science to differ from how representation is actually regulated: “representation in that area, if it is anything at all, is nothing but that practice” (Suárez 2015, p. 38, emphasis in original).

But even if we assume inferentialism provides a successful answer to the ER-Problem, can pragmatic criteria shed light on all issues pertaining to scientific representation? In particular, there are two questions that it is crucial to separate (Chakravartty 2010; see also Contessa 2007, p. 68; Frigg and Nguyen 2017):

1. What is it to scientifically represent?

2. What makes a scientific representation accurate?

These questions are separate because, as noted above, we need to allow for the possibility of misrepresentation. A representation may be inaccurate, yet still be a representation.Footnote 4 For instance, the Ptolemaic geocentric model is a representation, but an inaccurate one.

Chakravartty (2010) distinguishes between what he calls ‘functional’ and ‘informational’ accounts of representation. Functional accounts focus on the inferential and interpretative practices agents perform using representations of target systems. Therefore, they primarily purport to answer the first question. Informational accounts, on the other hand, focus on the objective relations that obtain between representations and their targets and thus provide solutions to the second question. Functional and informational accounts, Chakravartty argues, should not be opposed to each other. Rather, they are complementary as they illuminate the two sides of the representation coin (see also Contessa 2007).

In Chakravartty’s terms, Suárez’s account is functional since it does not provide any substantive answer regarding representational accuracy. As such, it faces two sorts of challenges. First, critics have objected that it solves the problem of representation at the cost of obscuring what makes surrogative reasoning possible in the first place (Bolinska 2013; Contessa 2007). If the representation relation is itself made up of the inferential affordances of models, we may have made progress on what constitutes the representation relation, but not on accounting for why these inferences are possible. By contrast, representationalism has a ready-made explanation: it is because models represent the world.

The second challenge inferentialism faces comes from accounts of explanation, which typically demand that explanations be factive (e.g. Craver 2006; Hempel 1965; Strevens 2008; Woodward 2003b). Factivity requires that an explanation’s explanans and explanandum be (approximately) true (see also Strevens 2013). Representationalism again seemingly offers a straightforward answer to how models can offer factive explanations: they simply have to represent faithfully what they explain. For instance, a model may explain minimal product differentiation (Hotelling 1929; see Reiss 2012a) just in case the model faithfully represents the phenomenon and its explanatory factors, here transportation costs. Representationalism thus serves friends of the factivity of explanation well and, more generally, friends of scientific realism.Footnote 5 This line of defence, however, is prima facie not available to inferentialism since the normative inferential standards require the truth neither of the premises nor of the conclusions. Without substantive standards of inferential correctness, how do we know whether the representation is accurate? It is thus unclear, to say the least, whether such an account is compatible with the factivity of explanation.

Kuorikoski and Ylikoski (2015; see also Kuorikoski and Lehtinen 2009; Ylikoski 2009; Ylikoski and Aydinonat 2014; Ylikoski and Kuorikoski 2010) claim that their inferentialist account of model-based explanation makes it possible to salvage realism and the factivity of explanation. According to them, the “inferentialist analysis of representation” is all we need, and they claim that it solves the puzzle of model-based explanation introduced above, namely how highly idealized models may provide (factive) explanations by way of being explanatory representations. In other words, their account purportedly answers both questions of representation:

[...] a model (as an external inferential apparatus) represents some real world phenomenon by virtue of the fact that (and to the extent that) some cognitive agent (modeler) can use the apparatus to make correct inferences concerning the phenomenon. If the new inferences made possible by the model include counterfactual what-if inferences, then the model is explanatory and represents some crucial dependencies related to the phenomenon by virtue of facilitating these inferences. (Kuorikoski and Ylikoski 2015, p. 3827)

According to Kuorikoski and Ylikoski, there is thus a tight connection between surrogative reasoning, representation, and explanation.Footnote 6 To distinguish Kuorikoski and Ylikoski’s (see also Kuorikoski and Lehtinen 2009) brand of inferentialism from Suárez’s, I will call it factive inferentialism (FInf henceforth). FInf provides a three-layered account of representation and model-based explanation. First, FInf’s answer to the ER-Problem is that the representation relation is constituted by the inferential affordances of a model. If a model M affords inferences about its target (e.g. about the value of variable \(Y\)), then M represents. In that respect, they follow Suárez. Second, FInf also provides a criterion for accurate representation, namely the correctness of the inferences. The representation relation is constituted by the inferential affordances “and the extent to which these inferences are correct determines how accurate the representation is” (Kuorikoski and Ylikoski 2015, p. 3830). In short, if M affords correct inferences (e.g. it is correct that \(Y\) takes value \(y\)), then M accurately represents. Third, if some of these correct inferences are counterfactual what-if inferences (w-inferences henceforth) that allow us to answer ‘what-if-things-had-been-different’ questions (w-questions henceforth), then that representation—or model—is also explanatory. In brief, if a model M allows its user to make correct w-inferences that answer w-questions, then M accurately represents and explains. In effect, FInf thus makes a distinction between explanatory (the w-inferences) and non-explanatory inferences (all the other ones, e.g. about the value of variables).
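Put schematically (the notation is mine, not Kuorikoski and Ylikoski’s), the three layers amount to the following conditionals:

$$\begin{aligned}&\text {(i) } M \text { affords inferences about } T \Rightarrow M \text { represents } T\\&\text {(ii) the afforded inferences are correct} \Rightarrow M \text { accurately represents } T\\&\text {(iii) some correct inferences are w-inferences answering w-questions} \Rightarrow M \text { explains } T \end{aligned}$$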

FInf, therefore, goes beyond traditional inferentialism by providing criteria for both accurate and explanatory representation. Prima facie, it seems to dispel the worries friends of the factivity of explanation may have with the deflationary character of inferentialism. FInf not only states what constitutes the representation relation, but also provides criteria to assess its correctness. That is why, according to FInf’s proponents, there is nothing mysterious or philosophically puzzling about representation and explanation. A model that provides factive explanations is just a model that affords correct w-inferences.

3 W-inferences and explanatory representation

FInf tells us that if a model affords correct w-inferences, then the model accurately represents and explains. But what is it to afford correct w-inferences? In the following two sections, I raise two sets of issues with taking counterfactual inferential affordances as sufficient for explanatory representation. The first concerns the distinction between explanatory and non-explanatory w-inferences. The second, which I discuss in Sect. 4, concerns what we can justifiably infer from explanatory w-inferences.

One key desideratum for an account of model-based explanation is that we should be able “to explicate what makes the difference between truly explanatory and merely phenomenological models” (Kuorikoski and Ylikoski 2015, p. 3818). Phenomenological models describe the covariance of variables; they ‘save the phenomena’, whereas explanatory models typically provide insight into why the described reality is the way it is.Footnote 7 Models may save the phenomena, yet fail to explain. While it is correct to say that phenomenological models accurately represent some aspects of reality, that representation is not explanatory. Hence, the issue is not that phenomenological models do not represent; it is that they do not provide explanatory representations. We thus need a way to demarcate phenomenological models from explanatory ones. Absent a solution, this problem threatens FInf insofar as FInf is supposed to illuminate in virtue of what models can explain. Even if affording correct w-inferences were sufficient for an account of model-based representation, it would not be sufficient for an account of explanatory representation.

FInf’s solution to this is to specify that not all correct inferential affordances are explanatory, only (a subset of) the counterfactual ones. Counterfactual statements typically have the form ‘Had A been the case, C would have occurred’. Relations of counterfactual dependence are change-relating in that they describe how the explanandum variables would change if the value of the explanans variables were to change. It is in virtue of the information of counterfactual dependence that these relations convey that they are explanatory. Evaluating the truth of counterfactuals requires specifying a semantics. A counterfactual may be true according to one semantics, but false under another one. FInf advocates pluralism about which counterfactual relations of dependence—e.g. causal or otherwise—may support w-inferences (Kuorikoski and Ylikoski 2015; Ylikoski 2013; Ylikoski and Kuorikoski 2010). In the following, I take it that FInf broadly adopts Woodward’s (2003b) account of explanatory counterfactuals.

The main issue I would like to highlight here is that w-inferences simpliciter are not sufficient for explanatory representation. We need additional constraints on what counts as an explanatory w-inference. To illustrate, consider Woodward’s (2003b, p. 197ff.) discussion of the explanation of a simple pendulum’s period by the following relationship, where \(T\) is the period, \(g\) the gravitational acceleration, and \(\ell \) the pendulum’s length.

$$\begin{aligned} T=2 \pi \sqrt{\ell / g} \end{aligned}$$
(1)

Using this relationship, it is possible to make various w-inferences about the pendulum’s period. Had the length \(\ell \) been different, then the pendulum’s period \(T\) would also have been different. As Woodward reminds us, it is also possible to derive the length \(\ell \) from the period \(T\) and the acceleration \(g\) by rearranging the equation.

$$\begin{aligned} \ell =\frac{T^{2} \cdot g}{4 \pi ^{2}} \end{aligned}$$
(2)
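For completeness, the step from (1) to (2) is elementary algebra: squaring both sides of (1) and solving for \(\ell \) gives

$$\begin{aligned} T^{2}=4 \pi ^{2} \frac{\ell }{g} \quad \Longrightarrow \quad \ell =\frac{T^{2} \cdot g}{4 \pi ^{2}} \end{aligned}$$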

There is a sense in which that second equation also supports w-inferences and, crucially, can answer w-questions. For instance, physics students may be asked to derive the length of a pendulum had its period been different. This supposes that there is a sense in which, had the period of a pendulum \(P\) been different, then its length \(\ell \) would also have been different. Woodward acknowledges this:

There is thus some interpretation of the counterfactual

(5.3.3) If the period of P were \(T^*\), then the length would be \(\ell ^*\)

such that (5.3.3) comes out true. (Woodward 2003b, p. 197, emphasis in original)

The general problem Woodward raises with the pendulum example is that not all relations of counterfactual dependence are explanatory and that we thus need to distinguish explanatory from non-explanatory ones. More generally, for him “any generalization describing a correlation (or pattern of association) will be change-relating, for anything that counts as a correlation must tell us how variations in the value of one variable are associated with variations in the value of another” (2003b, p. 246; see also Khalifa et al. 2020; Pincock 2018). In that sense, correlations may support w-inferences in that they relate the value of one variable to that of another.

Of course, Woodward’s point is precisely that despite the counterfactual dependence between the length and the period, the latter is irrelevant for causally explaining the former. Rather, the length explains the period. This is why he proposes to distinguish general change-relating generalizations from properly causal/explanatory ones with the notions of ‘intervention’ and ‘invariance’. Basically, the idea is that if one were to physically intervene on the period, it would not have any consequence on the pendulum’s length; the length is not causally connected to the period. However, if one were to intervene on the pendulum’s length, one could manipulate the period. A generalization is invariant if it correctly describes how the value of a variable would change under a range of interventions.
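The asymmetry can be made concrete with a minimal structural sketch (in Python; the rendering is my own toy illustration, not part of Woodward’s apparatus). The length is exogenous and the period is computed from it; an intervention amounts to setting a variable directly and severing it from whatever equation normally determines it.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def period_from_length(length):
    """Structural equation of the system: T = 2*pi*sqrt(l/g)."""
    return 2 * math.pi * math.sqrt(length / G)

# The model: length is exogenous, period is derived from it.
length = 1.0
period = period_from_length(length)  # roughly 2.01 s

# Intervening on the length propagates downstream: the period changes.
length = 2.0
period = period_from_length(length)  # roughly 2.84 s

# Intervening on the period sets T directly, severing it from its
# structural equation. No equation determines the length from the
# period, so the length is untouched by this intervention.
period = 3.0
assert length == 2.0  # no structural arrow from the period to the length
```

Under interventions on the length, equation (1) correctly describes how the period changes; under interventions on the period, nothing changes except the period itself, which is why (2) fails the interventionist test.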

FInf recognizes the need to distinguish between phenomenological and explanatory models and would accept something like the interventionist test for a large class of w-inferences. However, since FInf is openly pluralist, it cannot simply restrict the relevant counterfactuals to those that are invariant under interventions.Footnote 8 Ylikoski, for instance, suggests that “prediction is a straightforward case of what-if inference: it tells how a system behaves under specific circumstances” (2014, p. 327, emphasis in original, 2009, pp. 101–102). A model that allows its users to predict may therefore afford correct w-inferences, since a modeller can on that basis infer the past (retrodiction) or future (prediction) value of variables. The worry is thus that, even though FInf adheres to substantive explanatory criteria, it might be too liberal.

Indeed, it is a well-known point that a model may be used to predict, yet not explain. This is because the capacity to predict the future value of variables does not require an accurate representation of their explanatory features. As Weisberg observes, maximizing predictive accuracy ensures a model will be useful for generating predictions, “but gives no guarantee that the models will be useful for explaining the behavior of the system” (2007, p. 653). One way of thinking about this general problem is in terms of fidelity criteria and modelling trade-offs (Weisberg 2013; see also Gräbner 2018). Fidelity criteria determine whether a representation is adequate. Depending on the purpose, these criteria may differ; adequacy is thus purpose dependent. Dynamic fidelity concerns the comparison between model output and the target and is especially important for predictive purposes. But there are also representational fidelity criteria. Representational fidelity concerns whether a representation is sufficiently accurate, typically understood in terms of accurately representing a target’s causal structure. For the purpose of explanation, dynamic fidelity is not sufficient; we also need representational fidelity.Footnote 9 Importantly, dynamic and representational fidelity do not necessarily accompany each other. In fact, there are often trade-offs between criteria. For example, forecasting success in economics is often obtained at the cost of not accurately representing the underlying causal structure (Reiss 2012b). Indeed, since causal relations in the economic world are often bound to break, having a non-causal model is often the only way to secure predictive accuracy. Thus, a model may facilitate w-inferences (dynamic fidelity) without being an explanatory representation (representational fidelity). Prediction and explanation do not always go hand in hand.
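To see how dynamic and representational fidelity can come apart, consider a toy example (the code and variables are invented for illustration): a regression of \(y\) on a non-causal correlate \(z\) can score high on dynamic fidelity while failing representational fidelity.

```python
import random

random.seed(1)

# True causal structure: a common cause x drives both z and y;
# z has no causal influence on y.
def sample():
    x = random.gauss(0.0, 1.0)
    z = 2.0 * x + random.gauss(0.0, 0.1)
    y = 3.0 * x + random.gauss(0.0, 0.1)
    return z, y

data = [sample() for _ in range(1000)]

# "Phenomenological" model: least-squares prediction of y from z.
mz = sum(z for z, _ in data) / len(data)
my = sum(y for _, y in data) / len(data)
beta = (sum((z - mz) * (y - my) for z, y in data)
        / sum((z - mz) ** 2 for z, _ in data))  # close to 1.5

# High dynamic fidelity: beta * z tracks y closely because z and y
# share the common cause x. Low representational fidelity: setting z
# by intervention would leave y unchanged, so the regression answers
# no interventionist w-question about z and y.
```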

Therefore, making w-inferences simpliciter cannot be a sufficient condition for the accurate representation of explanatory relations. This would blur the distinction between phenomenological and explanatory models, a distinction FInf wants to uphold. Friends of FInf could retort that not all w-inferences can answer w-questions.Footnote 10 So even if correlations may support w-inferences, they may not allow one to answer w-questions correctly. Put differently, not all w-inferences would be explanatory. This means that the account of explanation itself is doing all the heavy lifting. FInf answers the worries we may have about more deflationary brands of inferentialism (Suárez 2004, 2015; Suárez and Solé 2006) by appending substantive explanatory standards.

My main qualm with this otherwise acceptable reply is that it is not clear whether the explanatory standards FInf follows can by themselves salvage the factivity of explanation and realism. If one can endorse FInf’s broad espousal of a counterfactual theory of explanation without upholding either inferentialism or the claim that explanatory models accurately represent their targets, then it is not the explanatory standards that secure factivity, but something else.

4 W-inferences and accurate representation

The second set of issues concerns the very link between explanatory w-inferences and accurate representation. We have seen that FInf holds that if a model affords correct inferences, then that model accurately represents. If some of these inferences are w-inferences that allow us to answer w-questions, then the model accurately represents and explains. Crucially, FInf maintains that this version of inferentialism accommodates, via accurate representation, both the factivity of explanation and realism. We can understand FInf as making a number of different claims concerning the relationship between the explanatory w-inferences a model \(M\) affords and accurate representation. If \(M\) affords explanatory w-inferences (and thus explains), then...

1. \(M\) accurately represents.

2. \(M\) provides a factive explanation.

3. \(M\) captures reality.

At first sight, it appears that all these propositions are intimately intertwined. If a model provides a factive explanation, then it must accurately represent. And if a model accurately represents, why should we doubt it captures—parts of—reality?

However, the problem is precisely that it is contentious whether we are justified in inferring anything from explanatory w-inferences. There are two main points of contention. The first concerns the link between explanation and accurate representation. Many have claimed that models may explain without accurate representation (e.g. Batterman and Rice 2014; Bokulich 2008, 2011, 2012, 2016; De Regt 2015, 2017; Graham Kennedy 2012; Potochnik 2017). Bokulich, for one, argues that what she calls ‘model explanations’ capture the counterfactual dependencies of a target system despite their fictional representational content. As she illustrates, Bohr’s model of the hydrogen atom does not, according to contemporary quantum mechanics, accurately represent the actual structure of reality. For her, Bohr’s model is not a simple idealized representation of electron orbits. It does not distort reality, but instead introduces fictional entities or processes. Despite its fictional character, Bokulich argues that it explains the atomic spectrum of hydrogen. One reason is that “Bohr’s model is able to correctly answer a number of ‘what-if-things-had-been-different questions’ [...]” (Bokulich 2011, p. 43). The model thus captures a pattern of counterfactual dependence between its fictional entities and spectral phenomena.Footnote 11 Bokulich argues that the key difference between phenomenological and explanatory fictional models is that the latter successfully undergo a “justificatory step”. This step allows us to demarcate which models are relevant for explanation. She notes that “although the range of w-questions that a phenomenological model can answer will typically be more limited, scope alone cannot distinguish between explanatory and phenomenological models” (Bokulich 2012, p. 733). Very briefly, the justificatory step involves (1) establishing a contextual relevance relation, (2) setting a domain of applicability, and (3) translating the model descriptions into correct conclusions about the target.

Bokulich’s account is particularly interesting because it relies, like FInf, on the idea that explanation involves correctly answering w-questions with counterfactual relations of dependence (Woodward 2003b). However, whereas FInf concludes that correct w-inferences imply accurate representation (because they are a subset of correct inferences), Bokulich reaches the opposite verdict. According to her, the fact that a model is a fiction does not prevent it from correctly answering w-questions. Bokulich’s account thus suggests that explanatory w-inferences can coexist with a view that denies accurate representation.Footnote 12

FInf’s proponents could reply that, pace Bokulich, Bohr’s model does accurately represent. Accuracy is a matter of degree, and while the model may accurately represent some aspects of spectral phenomena, it may misrepresent others. In fact, Bokulich herself appears to suggest something along those lines when she says that models explain only when a model’s counterfactual structure is isomorphic to that of its target (Bokulich 2011, p. 39). There are two problems with this reply. First, the general issue is that the quantity and quality of explanatory w-inferences we can make with a model do not depend only on its representational accuracy. According to Ylikoski and Kuorikoski (2010; see also Kuorikoski and Ylikoski 2015), explanatory w-inferences can be assessed along five dimensions: non-sensitivity, precision, factual accuracy, degree of integration, and cognitive salience. Factually more accurate explanations are those that include “fewer falsehoods” (Ylikoski and Kuorikoski 2010, p. 212). Factual accuracy, thus, is crucial for representational accuracy, the factivity of explanation, and realism. Without going into detail, the key point is that there are sometimes trade-offs between these dimensions. For instance, an increase in non-sensitivity (viz., robustness) may require less factual accuracy. And, as Kuorikoski and Ylikoski (2015, p. 3833) also recognize, including fewer falsehoods “might not improve explanatory understanding”. Therefore, the quantity and quality of explanatory w-inferences a model affords will not depend only on its representational accuracy.

FInf’s proponents may respond that even though there is not a perfect mapping between explanatory w-inferences and accurate representation, a representation could not be completely inaccurate and still explain. This may be so, but then the second problem we run into is that it is not clear where that leaves us with respect to the factivity of explanation and realism. For example, even if we accept that answering w-questions with Bohr’s model implies that there is some accurate representation, this appears to be inconsistent with the factivity of explanation. Using the model, one can infer how the emission spectrum of hydrogen would change if the orbits had been different. But, Bokulich reminds us, there are no stationary state orbits. In what sense, then, is that explanation of the emission spectrum supposed to be ‘factive’?

What ‘realism’ this entails is likewise unclear. Woodward himself holds a minimal form of realism, which he calls ‘instrumental realism’ (2003a, see also 2003b, pp. 223–224; Baird 1988; Saatsi 2020). This view is instrumentalist in that what matters is getting the facts about relations of counterfactual dependence right, not the ontology. Models that capture similar patterns of dependence but postulate different entities would, according to Woodward, have the same representational content. Yet the view is also realist because it downplays the significance of the observables/unobservables distinction and holds that scientific explanation goes beyond saving the phenomena (cf. Van Fraassen 1980). This account is “minimal” (Saatsi 2020) insofar as it entails much weaker ontological commitments than standard readings of realism.

Instead, maybe our commitment to realism should be a sort of ‘epistemic realism’ concerning the understanding we obtain with models (Rice 2016, 2021). According to Rice (2021, p. 4099), “pervasively inaccurate descriptions of reality” may afford factive understanding, which essentially involves being able to answer w-questions on the basis of true modal information. Models may thus misrepresent and yet give access to that modal information. Bokulich (2016, pp. 270–271) aptly puts the point by saying that “[t]he key move here is the recognition that scientific understanding requires having true modal information and the ability to draw correct inferences, but that one can achieve this ‘factive understanding’ without having a true or accurate representation”.Footnote 13 Perhaps FInf’s proponents would accept either instrumental or epistemic realism. There is nothing wrong per se with these positions. However, as I discuss in more detail in the next section, neither form of realism seems to answer the problems FInf was trying to solve in the first place.

Nothing I have said constitutes a conclusive blow against FInf. What I wanted to highlight is that we can readily accept that explaining—or understanding—phenomena is fundamentally a matter of making w-inferences that answer w-questions without accepting that these explanations are factive, that the models that ground them accurately represent, or that they capture reality. There is significant variation in the literature concerning what we can justifiably infer from correctly answering w-questions. None of the views presented above is obviously wrong. Hence, nothing seems to follow directly from adopting FInf concerning accurate representation, the factivity of explanation, and realism. We could assert with FInf that it is “conceptually impossible” (Kuorikoski and Ylikoski 2015, p. 3830) for a model to allow us to answer w-questions and not represent accurately, but it is hard to see how this is not just stipulating one’s way out of the puzzle of model-based explanation. At any rate, even if we accept the positive reasons for adopting an inferentialist perspective on representation, we still need substantive criteria to demarcate correct from incorrect inferences.

5 Factive inferentialism: the way forward

The discussion in the two previous sections suggests that the explanatory w-inferences a model affords may be a sufficient condition for accurate representation, factivity, or realism only if we disambiguate in virtue of what these w-inferences are correct in the first place. FInf constitutes progress over more deflationary brands of inferentialism (Suárez 2004, 2015) because it appends substantive explanatory standards. But if these standards do not by themselves imply accurate representation, then they cannot safeguard the factivity of explanation. This also suggests that the key criteria for factive explanation are to be found in the account of representation, not in the explanatory standards. What is the way forward?

One avenue would involve weakening FInf. As it stands, FInf submits that correctly answering w-questions is sufficient for accurate representation, factivity, and realism. A safer reading would be to view it as sufficient only for a sort of ‘epistemic realism’, without any further commitment with respect to how the model captures or represents reality. As we have seen in Sect. 4, Rice (2021) and Bokulich (2016) both argue that we can obtain factive understanding without accurate representation. Here the strategy would be to disconnect cognitive achievements like understanding from other explanatory requirements like factivity. Having modal information that allows us to answer w-questions may be sufficient for understanding, the argument goes, but it may not be sufficient for factive explanation.Footnote 14

This has some prima facie appeal insofar as an inferential conception of understanding is part and parcel of FInf.Footnote 15 Since the epistemic benefit we arguably want to salvage is understanding, severing it from explanation seems to partially resolve FInf’s indeterminacy regarding explanatory representation. We lose little by conceding that answering w-questions via modal information affords understanding, yet sometimes falls short of explanation. Moreover, the idea that understanding can be had without an actual explanation has some support in the literature (Lipton 2009; Gijsbers 2013; Rice 2016; Verreault-Julien 2019).Footnote 16 While this potentially provides a suitable solution to the puzzle of model-based understanding, it does not fully solve the puzzle of model-based factive explanation. The initial motivation was to account for how highly idealized models may accurately represent and provide factive explanations, not understanding.

One advantage inferentialism has over standard representationalism in solving the puzzle is that it can more easily avoid the pitfalls of literalism. Literalism is “the claim that models have to be interpreted as sharing features with their targets in order to be accurate representations of those features” (Frigg and Nguyen 2021, p. 2435). Shared features accounts are those that hold that models accurately represent their targets if and only if they share features with them.Footnote 17 As Frigg and Nguyen remark, similarity and structuralist accounts of representation epitomize this tenet. In principle, inferentialism has no such built-in requirement. If a model affords (correct) inferences, then it (accurately) represents. Inferential correctness becomes the metric of accuracy instead of shared features.

Although FInf avoids the language of shared features, it also invites us to take at face value—i.e., literally—the claim that some parts of the model are true and that it is those parts that are responsible for a model’s explanatory power and realism.Footnote 18 FInf broadly follows what Rice (2019, see also 2018) calls the “decompositional strategy”.Footnote 19 In the context of modelling, the strategy involves decomposing models into various components, some misrepresenting their targets and others accurately representing them. A model’s idealizations—the misrepresenting parts—are not necessarily problematic to the extent that the model also accurately represents the relevant components, for instance a phenomenon’s causes. One benefit of the decompositional strategy is that it makes it possible, in principle, to salvage literalism. Only a model’s relevant parts should be interpreted as sharing features with its target, not the idealizations.

FInf’s endorsement of the decompositional strategy becomes clear in its discussion of the different types of assumptions and their truth. Models contain substantial, Galilean, and tractability assumptions. Tractability assumptions are those literally false assumptions that are nevertheless necessary to derive results. Galilean assumptions (or idealizations) isolate the difference-making features of interest by assuming away the presence of other features. Substantial assumptions are those that represent the explanatory factors of interest. In the Hotelling model, for instance, the assumption that consumers face transportation costs is plausibly substantial, whereas the assumption that they are uniformly distributed along a line plausibly serves tractability. While the substantial assumptions need to be true or realistic, the idealizations or the tractability assumptions may be false.Footnote 20 According to FInf, only the former are epistemically relevant, not the latter: “it is the truthlikeness of the substantial assumptions that ultimately carries the epistemic weight in a model” (Kuorikoski and Lehtinen 2009, p. 127; see also Kuorikoski and Ylikoski 2015, p. 3827ff.).Footnote 21

One reason why FInf presumably frames the issue in terms of the truth of the substantial assumptions is to salvage the realist thesis that science successfully refers to reality (Chakravartty 2007; Psillos 1999). Models that allow us to answer w-questions have true substantial assumptions that (truly) capture reality. However, the key issue is that we cannot make a straight jump from correctly answering w-questions to accurate representation and, even less so, to the realisticness or truth of a model’s assumptions. This is because correctly answering w-questions may not require the truth of substantial assumptions in that literal sense. Provided FInf is serious about its commitment to inferentialism, what we need is a way of working back from the correct inferences to truth and accuracy without having to deny that models have literally false components. We would want to say that models may have these literally false components, yet can successfully explain or accurately represent. This, I submit, requires abandoning literalism.

Frigg and Nguyen’s (2016, 2017, 2018, 2020, 2021) account of scientific representation helps us understand how FInf could fully move past literalism and how it may differ from, e.g., Bokulich’s (2011, 2012) and Suárez’s (2004, 2015) views. According to Frigg and Nguyen (2021, p. 2438), a model M is a scientific representation of target T iff:

1. Denotation: M denotes T.

2. Exemplification: M exemplifies z-properties \(F_{1}, \ldots , F_{n}\).

3. Keying up: A key K associates the set \(\left\{ F_{1}, \ldots , F_{n}\right\} \) with a set of properties \(\left\{ G_{1}, \ldots , G_{n}\right\} \).

4. Imputation: M imputes at least one of the properties \(G_{1}, \ldots , G_{n}\) to T.

One fundamental feature of the DEKI account—for denotation, exemplification, keying up, and imputation—is that it emphasizes the need to interpret, via a key, the properties exemplified by a model in terms of properties imputed to a target. The properties \(F_{1}, \ldots , F_{n}\) a model exemplifies need not all be literally imputed to a target. First, a key translates the F-properties into G-properties. Then, the latter can be imputed to the target.Footnote 22 Crucially, this does not require the truth of any ‘assumption’. Accurate representation is a matter of correctly imputing (or inferring) properties to a target and does not depend on the truth of any part of a model. The keying up step allows for a mismatch between the model itself and its target. Therefore, we can—but do not have to—read models literally.
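A toy sketch may make the keying-up and imputation steps vivid (in Python; the property descriptions and the key below are invented for illustration, loosely inspired by the Bohr case, and are not part of the DEKI account itself):

```python
# F-properties: what the model exemplifies.
exemplified = [
    "electrons travel on classical stationary orbits",
    "orbit radii take only discrete values",
]

# The key K: an interpretation translating exemplified F-properties
# into G-properties; it need not be an identity mapping.
key = {
    "electrons travel on classical stationary orbits":
        "the atom has discrete energy levels",
    "orbit radii take only discrete values":
        "spectral lines depend counterfactually on level spacings",
}

# Imputation: only the keyed-up G-properties are attributed to the
# target. A model-world mismatch at the level of F-properties is thus
# compatible with accurate imputation at the level of G-properties.
imputed = [key[f] for f in exemplified]
```

On such a rendering, accuracy is assessed at the level of the imputed G-properties, which is precisely why literalism about a model’s exemplified properties can be dropped.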

I submit that the reason why we cannot make the jump from correctly answering w-questions to accurate representation, factivity, and realism is that the keying up and imputation steps are underspecified. In fact, we can use these steps to understand the major differences between FInf, Bokulich, and Suárez. Suárez’s inferentialism implies that keying up and imputation are purely pragmatic. Practice determines the keys and what properties are imputed to a target. Therefore, we cannot infer from ‘correct’ inferences that a given model imputes real properties or features to the world.

Bokulich’s case is interesting. In the previous section, I emphasized that, according to her, models that misrepresent may nonetheless allow us to answer w-questions. In particular, her fictional view of models intimates that there can be a considerable model-world mismatch and that properties should not be keyed up in terms of identity. For instance, although Bohr’s model of the atom exemplifies electrons moving in classical stationary states, the key tells us that we should not impute this property as is to the world. Of the keyed up properties, only patterns of counterfactual dependence are imputed to the target, not electron orbits moving in classical fashion. But Bokulich also claims that her account of explanatory fictions “lies within a broadly realist approach to science” (2016, p. 261) and that “explanation and understanding are ‘success terms,’ in that they require getting something right about the way the world is [...]” (2018, p. 796). We can thus interpret her as endorsing the view that science aims at and achieves true imputation of patterns of counterfactual dependence, but not necessarily of the entities and processes that underlie them.Footnote 23

FInf’s implicit proposal, then, resorts to more demanding realist keying up and imputation steps. For model-based explanation, this minimally requires that the imputed properties concerning the explanandum and the explanans be true. What sort of properties need to be imputed will depend on one’s favourite account of explanation.Footnote 24 It may only be patterns of counterfactual dependence, but it may also involve other properties like, for instance, being a law of nature. Crucially, it will also depend on one’s broader realist commitments. One may be comfortable with a form of instrumental realism (e.g. Woodward 2003a) and leave ontology alone. Others may give more weight to getting the facts about reality right. Contrary to Suárez’s account, FInf at least requires inferences to true conclusions about the imputed properties. And Bokulich’s realism is arguably less demanding than FInf’s insofar as hers does not involve imputing to the world many of the properties that we usually associate with realism.

Specifying what different brands of factivity and realism imply in terms of keying up and imputation is beyond the scope of this paper.Footnote 25 My more modest point is that we need an explicit act of interpretation, keying up, and imputation that provides the accuracy, explanatory, and realist standards. Without information about these steps, we cannot assess in what sense the representation is accurate, the explanation factive, and the science realist. And, critically, we cannot judge what to infer from inferential success.

6 Conclusion

The capacity of models to help us learn about the world has traditionally been accounted for in terms of a representation relation. It is because models represent that we can draw inferences about phenomena of interest and explain them. Inferentialism reverses that relationship and claims that surrogative reasoning is the fundamental activity. However, it has trouble showing how models can offer factive explanations. FInf builds on inferentialism and claims it is compatible with the factivity of explanation and realism.

I argued that, by itself, FInf does not solve the puzzle of model-based explanation. Basically, the problem is that answering w-questions entails only minimal commitments with respect to representational accuracy, the factivity of explanation, and realism. I then suggested ways in which FInf could solve these issues, specifically by abandoning literalism and by making explicit the properties FInf proposes to impute from models to targets. This, I believe, would make FInf’s solution to the puzzle of model-based explanation stronger and genuinely distinct.