The remaining argument against Factivity relies on the prominent role of idealisations in providing scientific understanding. According to this argument, a factive conception of understanding cannot account for the essential role of models and theoretical posits that are false because they simplify and abstract from reality.
Just as with my response to the upwards trajectory argument, I will argue that focusing on the recovery of true belief from inaccurate inputs allows us to defuse this argument. My analysis will be broadly complementary to the views of Lawler (forthcoming) and Rice (forthcoming), who argue that the central role of idealisations in scientific theorising is compatible with strictly factive views of understanding insofar as scientists who use them consciously extract true information from them. However, when defusing the upwards trajectory argument, I argued that the recovery of true contents from false theories—and hence the acquisition of understanding—can be compatible with ignorance that one is considering a false theory. This is why, for example, children can recover true contents from theories that they do not realise are false. In a similar vein, I will here argue that, at least in some cases, something similar holds for idealisations: they permit the recovery of true content even if an agent is unaware of the idealised nature of the model or theory. Thus, I will vindicate the extraction view by suggesting that it is a species of a general fact about scientific theorising: the arguments, models, and theories we use provide understanding to the extent that they allow an agent to recover true content from them, whether wittingly or unwittingly.
To begin to respond to the idealisation argument, recall one of the platitudes we observed at the outset—namely, that understanding is an epistemic state that requires a subject. Just as beliefs require a believer, and knowledge a knower, understanding presupposes an agent doing the understanding. This is why, for instance, a textbook ‘provides understanding’ only in the sense that it elicits certain epistemic states in the agents making use of it. Keeping this in mind explains the following rather obvious point: when it comes to thinking about the role of prominent idealisations in scientific practice—such as the ideal gas law, or models in population biology—there can be variation in how much is understood by the individual using the idealised law or model. This is a natural corollary to the truism Elgin earlier relied upon in making the upwards trajectory argument, namely that understanding comes in degrees. Different scientists using idealised laws and models can understand their subject-matter to different degrees.
The reason this matters is that, since understanding is something possessed by epistemic agents, we must interpret the argument from idealisation against Factivity accordingly. In this vein, the argument cannot succeed simply by noting that the idealised models or laws themselves contain falsehoods. Rather, the focus must be on how idealisations elicit understanding in those who use them. And, in order to plausibly undermine Factivity, the argument from idealisation must say: it is not possible to explain the understanding elicited by idealised models and laws only by appealing to true beliefs acquired by those who use them. Put in these terms, so I claim, the argument from idealisation against Factivity will turn out to be less compelling than it initially sounded.
As has been pressed extensively by Lawler (forthcoming) and Rice (forthcoming), when using idealised models and laws, scientists are typically aware that they are idealisations. For example, when using the ideal gas law, scientists are aware that the real-world gas they are theorising about is not composed of molecules which lack extension. The emphasis on this knowledge chimes with comments in Strevens (2012: 456), who suggests that the inessential role played by falsehoods in scientific idealisation is partly explained by the fact that scientists “know the right way to read idealized models”. These points support the following schematic observations: (1) that idealisations, although false, do not invariably elicit false beliefs in those who use them, and (2) that our judgements about understanding vary with what is in fact endorsed by the agent using the particular idealisation. However, while I agree with these schematic observations, I think that it is possible to recover a degree of understanding from an idealised model—just as in the case of scientific education—even if one is unaware of its status as an idealisation.Footnote 34
As the argument from idealisation is primarily driven by examples, with much of the debate turning on the proper interpretation of particular applications of certain models, we should consider how scientists use particular idealisations in context. There are various noted examples of false idealisations or models conferring understanding. The ideal gas law is one, but this example has been widely discussed.Footnote 35 Another type of example, one that has figured in the debate concerning the nature of scientific understanding, is the use of models in population biology. Indeed, these have been used by Rice and Lawler to test the plausibility of factive views of understanding. For example, one closely analysed case is the use of optimality models in biology, such as the attempt to work out the optimal copulation time for male dung flies visiting multiple piles of dung in order to mate.Footnote 36 However, for the sake of any pre-breakfast readers, I shall focus on a different optimality model.Footnote 37 To do so, we will need to talk about crows and whelks.
Crows and Whelks
There is a colony of Northwestern Crows on Mandarte Island in British Columbia. These crows feed on whelks, a type of mollusc that lives in a hard shell. The crows open the whelks by dropping them onto a rocky beach; they only select whelks that are above a certain size; they almost always drop the whelks from a height of around 5 m; and the crows don’t tend to give up if a particular whelk stubbornly refuses to break after a few drops.Footnote 38 Upon hearing these facts, we—or a zoologist—might be interested in better understanding why the crows forage in this way. This is where optimality models can be useful.
Optimality models in biology help us to understand why a given population possesses a particular trait, by showing that the trait in question maximises evolutionary fitness in light of certain constraints. When it comes to foraging, the relevant trade-offs are calorific: what behaviours strike the right balance between energy expenditure and calorie acquisition? The predicament facing our crows is how to achieve the right balance between energy gained from eating denuded molluscs and energy expended in upwards flight. Zach (1978, 1979) provides an optimality model demonstrating that the crows adopt an optimal strategy: focusing on large whelks (which provide more calories and break much more easily than smaller whelks), dropping them from around 5 m (which provides the best trade-off between likelihood of breakage and calories expended), and being persistent in continuously dropping their chosen whelk (because each successive whelk drop is about as likely to succeed as taking a new whelk). Hence, the crows more or less optimise calorific gains when whelk foraging.
Optimality models involve artificial idealisations and simplifications that render them inaccurate with respect to the actual causal mechanisms which led to the evolution of a given trait within a target population.Footnote 39, Footnote 40

With respect to our colony of crows, the model used is simplified in various respects. For example, when working out the calorific expenditure used in flight, the model falsely supposes that all crow flight is horizontal, by adopting the simplifying supposition that the higher calorific costs of ascending flight and the lower calorific costs of descending flight will roughly cancel each other out. So, in order to elide complicated calculations, the model works on the basis of supposing that flying higher uses more calories only because it involves flying for longer. This obviously isn’t entirely accurate: upward flight is more strenuous than horizontal flight, beyond simply extending the period of flight. Moreover, the calorific expenditure of horizontal flight in the model is calculated using a constant base rate; this strips out real-world influences like favourable or adverse wind conditions, or physiological differences that will change the actual level of calorific expenditure for a given crow. This isn’t entirely accurate either—there is no single base rate that accurately captures how many calories every single crow uses when flying over a given time period. In short, the model used does not accurately represent all of the actual causal mechanisms influencing the development of the crows’ foraging behaviour. Rather, it simplifies and omits various factors for theoretical ease. However, such models are clearly useful in helping us better understand the crows’ foraging behaviour; by illustrating the different types of trade-off the crows face, the model helps us grasp why certain behavioural strategies are apt to be selected for.
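To make the trade-off structure vivid, here is a small computational sketch. All numbers in it (whelk calories, handling cost, ascent cost, breakage curve) are hypothetical illustrations of my own, not figures from Zach’s model; the point is only to show how a fixed per-drop handling cost, combined with a height-dependent ascent cost and a breakage probability that rises with height, can generate an interior optimal drop height rather than favouring the lowest or highest possible drop.

```python
import math

# Toy optimality sketch for whelk-dropping. All parameters are
# hypothetical illustrations, not measurements from Zach's model.

def breakage_probability(height_m):
    """Chance a large whelk breaks on a single drop: rises with
    height but saturates towards certainty (hypothetical curve)."""
    return 1 - math.exp(-0.3 * height_m)

def expected_drops(height_m):
    """Expected number of drops before the whelk breaks, assuming the
    crow persists with the same whelk (geometric distribution)."""
    return 1 / breakage_probability(height_m)

def net_energy_gain(height_m, whelk_calories=10.0,
                    handling_cost=2.0, ascent_cost_per_metre=0.5):
    """Calories from eating the whelk minus the expected total cost of
    retrieving, carrying, and ascending with it across all drops."""
    cost_per_drop = handling_cost + ascent_cost_per_metre * height_m
    return whelk_calories - expected_drops(height_m) * cost_per_drop

# Very low drops waste energy on many failed attempts; very high drops
# waste energy on needless ascent. The optimum lies in between.
best_height = max(range(1, 11), key=net_energy_gain)
```

With these made-up parameters the sketch selects an intermediate drop height. The philosophical point is simply that the model’s explanatory work is done by this trade-off structure, which survives the simplifying fictions about horizontal flight and uniform metabolic rates.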
Although the optimality model itself contains false idealisations, we must remember that to determine whether or not Factivity is in trouble we must look at the epistemic state it elicits in those using the model. So, is there any reason to suppose that the understanding elicited by those using this model must contain false beliefs? The answer, I think, is negative. Rather, we can readily explain the understanding elicited by such models by appealing to true beliefs. When we think about what is useful about Zach’s optimality model, I suggest, we find that it is the fact that it justifies us in adopting the following type of beliefs about the Mandarte crow:
(T1) Selecting large whelks is an effective foraging strategy because those contain more calories and are more likely to break than smaller whelks.
(T2) The disposition to drop whelks from ~5 m strikes a good balance between calorific expenditure and likelihood of opening the whelk.
(T3) Persistently dropping the same whelk until it breaks is at least as good a strategy as selecting a new whelk.
(T4) Natural selection will tend to favour crows which optimise calorific gains when foraging over those which are profligate with their energy.
All of these beliefs (at least as generalisations or approximations) are true, and they are central to the increased understanding about optimal foraging strategies gained from the use of the optimality model. (T1)–(T4) are the types of belief that anyone consulting Zach’s model would acquire about the Mandarte crow. So far, this corroborates the idea that we can explain the understanding gained from an idealised model simply by focusing on what is purposively extracted from the model.
Although the model used to demonstrate why these beliefs are well-founded is simplified in various respects, these false simplifications are inessential to the increased understanding the model affords us. Indeed, it is surely possible to recover a degree of understanding from such optimality models even if an observer fails to realise that they are idealised in the respects explained above. Consider the following beliefs about the simplified false aspects of the optimality model:

(F1) The calorific cost of a crow’s flight depends only on its duration, so that ascending flight is no more costly than horizontal flight.

(F2) Every crow expends calories in flight at a single constant base rate, regardless of wind conditions or physiological differences.
These false beliefs are not at the heart of why Zach’s optimality model offers us an increased understanding of crow foraging. Rather, these simplifications are just ways to more conveniently construct the optimality model, which acts as a tool for eliciting better understanding in those who consult it. Suppose that a reader had simply not noticed that the calculations in Zach’s model involved idealised assumptions about horizontal versus vertical flight or the uniform metabolic rate of crows. Indeed, I suspect that many casual readers would not immediately notice this feature of the model. Would this preclude them from understanding why the crows’ chosen foraging strategy optimises calorific gains? I think that it would make very little difference, for they would still recover the relevant true contents from the model, such as those enumerated in (T1)–(T4). In this sense, I suggest that while such an unwitting reader would to some extent misunderstand the nature of the model, they would in fact acquire understanding of the phenomenon it represents. Again, as with my diagnosis of the upward trajectory argument, I suggest that we would only credit such an unwitting reader with understanding to the extent that they acquired true beliefs from the idealisation. As such, any false belief (or, more realistically, agnosticism) regarding (F1)–(F2) would not constitute their understanding, even if they happened to be ignorant in this way.
To sum up: while Elgin is right in claiming that strictly false idealisations are extremely useful in scientific theorising, I have suggested that their usefulness consists in being convenient tools for eliciting true beliefs that facilitate an understanding of their objects. As such, by focusing on how idealisations elicit understanding in those who use them and not just on the content of the idealised law or model itself, we can accept Elgin’s insight about their usefulness while denying that it creates any pressure to accept a non-factive theory of understanding. Notably, idealisations can serve their purpose, at least in some cases and at least to some extent, even if an agent is not aware of their idealised status. Of course, here I have only discussed one such model. As Sullivan and Khalifa (2019: 679) concede in their critique of those who use the ideal gas law to attack Factivity, opponents can still maintain that it will be possible to find further examples that do in fact support non-factive views of understanding. However, while it is correct to say that the debate must be conducted case-by-case, I think that recent work has done enough to put on the defensive those who use the argument from idealisation to undermine strictly factive views of understanding.
The strategy outlined in this paper to defend strictly factive views of understanding against cases in which falsehoods seemingly play an ineliminable role in successful theorising has general application. By clearly distinguishing what the epistemic subject believes from the vehicle (e.g. a book, a model, a string of testimony) that delivers understanding, we can also defend factive views against a further charge: that valorising the importance of true propositions leaves us unable to account for the understanding provided by non-propositional representations which are not truth apt. Moving from scientific to historical understanding, Elgin (2017: 103) asks: “Should we deny that works of art afford historical understanding because they are not verbal?” She answers her own question in the negative as follows:
There is, as far as I can see, no reason to privilege the verbal over other modes of symbolization. And if we do, we exclude not just prints, monuments, and documentary films, but also diagrams, charts, and maps. To restrict historical understanding to that which is captured in a language would be costly. [Elgin 2017: 103].
While I can only provide a thumbnail sketch here, it is easy to see how the non-propositional and therefore not truth-apt content of a representation—like a diagram or map—can be separated from the propositional and therefore truth-apt content that an agent recovers from considering it. On the factive view defended in this paper, we would seek to explain the understanding afforded by such representations by identifying the true propositions recovered by the agent using the representation. Thus, while we can agree with Elgin that non-propositional representations play a crucial role in affording us understanding, we do not need to suppose that this creates any pressure to deny a factive account of understanding.
A further challenge, relating to the idea of non-propositional understanding, is due to Lipton (2009). He claims that we can derive ‘inarticulate’ understanding from using models such as an orrery—for example, that we can come to understand why the planets exhibit a certain type of motion without being able to articulate this understanding in propositional terms. I lack the space to discuss this challenge fully here, but it is worth noting that a strictly factive view of understanding might respond by appealing to the fact that beliefs need not be articulable to count as beliefs. For instance, there are reasons to suppose that women living in more benighted times were capable both of knowing that sexual harassment was wrong and of grasping something of why it was wrong, despite lacking the conceptual framework required to clearly articulate these beliefs.Footnote 41 Similarly, it may be that we can have beliefs that contribute to our understanding of a phenomenon even in the absence of the conceptual framework needed to articulate them. Further work would do well to explore the conditions for attributing beliefs in the absence of articulability, in order to answer Lipton’s challenge within a strictly factive conception of understanding.Footnote 42