Here we don’t give a single account of understanding, and for a good philosophical reason. Understanding, we claim, is not a natural kind; it is an ordinary human concept. It is used almost everywhere, from everyday life to the natural and social sciences, history, theology, the law courts, and the arts, in different loosely related ways to serve different loosely related purposes. We will look instead at many currently popular accounts in the philosophy of science, which we group into three categories that arguably capture reasonable, though different, senses of scientific understanding, to see how empirical adequacy fares with them.
Scientific understanding, as de Regt et al. (2009) have pointed out, is a three-term relation involving a model, explanation, or other vehicle; the target that we want to understand; and an agent, the understander. Although agency is an important aspect of scientific understanding, there seems to be a rough division of labor in philosophy. Epistemology focuses on agency – what characteristics an agent must have and what they must do in order to understand (cf. the recent spate of work in epistemology on ‘grasping’Footnote 2). Philosophy of science has by contrast not been much concerned with what is in the heads of agents, or what they do in understanding something, but rather with the public products of science that can provide understanding, the vehicles of understanding – explanations, theories, models, etc. That is what our discussion will focus on.
One could categorize types of understanding discussed in recent philosophy of science literature according to kinds of vehicles (e.g. theories, models, narratives, and images) or perhaps according to what is to be understood (e.g. a happening, a general phenomenon, a regularity, a domain of happenings, even “the world as a whole”). To explore our claim, we find an alternative categorization – realist, counterfactual, and pragmatic understandings – more useful. As you will notice, these classes are only roughly characterized; nor are they mutually independent, since there are examples that fall into multiple categories. But they should suffice to show that empirical adequacy is not needed for understanding as understanding is conceived on a great many current philosophic accounts. We shall spend the most time on realist understanding because we take it that this is where the case for empirical adequacy is strongest. We shall then turn to other senses of ‘scientific understanding’ that philosophers have developed in which the importance of empirical adequacy does not seem so apparent from the start.
Realist, yes, but realist about what? In the philosophy of science literature ‘realist’ usually has to do with whatever it is that theory presents as responsible for, or (in whatever is one’s preferred sense) ‘explaining’, phenomena, such as ‘underlying’ causes, structures, general principles, or theoretical laws. We shall start there, considering cases where the understanding of some phenomena is provided by a theoretical representation of the laws or causes supposed to be responsible for them, and we shall suppose, since we are dealing here with ‘realist’ understanding, that understanding requires that the representation get it right, or at least right enough about what matters most.
Besides theoretical laws and underlying causes, one can be realist about a good many other things. For instance, suppose one has what might be thought of as a thoroughly empiricist notion of understanding, that for a theory or model to provide understanding, it has to get right the empirical facts that follow from it. Here one is being ‘realist’ about the empirical facts. This kind of realist understanding would of course require empirical adequacy. Many other accounts of understanding look to something that is neither theoretical laws and causes nor just empirical facts. They look to things like the ‘overall picture’, the ‘world as a whole’, the ‘patterns’, the ‘similarities’, or the ‘categories’ and ‘natural kinds’ in nature. One can be realist about these too: there is indeed a forest to be seen, not just the trees; there genuinely are patterns; things genuinely are similar or dissimilar in ways that matter; some category schemes represent natural kinds, not just classifications we impose on the world. Realist understanding in these cases means getting the right overall picture, or representing patterns, similarities or categories as they really are.
We should note that here we neither endorse nor deny realism about any of these. Each has been linked with understanding, and in each case it seems the understanding could be either realist or pragmatist. (We get to the latter at the end of this section.) We take the realist interpretation because it poses the bigger challenge to our views. After all, it seems far less surprising that an erroneous representation of laws or causes or patterns will produce erroneous empirical predictions than that a true one will.
Understanding via vehicles that get the theory approximately right
Here is one widely held view about scientific theories and theoretical models: in order for a theoretical account to give us genuine understanding of a phenomenon, the theory has to get (at least most of) the theoretical facts cited in the account right. Truth about theoretical causes and laws may not be sufficient for understanding – we might demand that a theory or model be visualizable, simple, explanatory, etc. – but, it’s thought, it is necessary.Footnote 3 We do not agree with this thesis, since we embrace the variety of kinds of understanding that we discuss. However, we think this is a significant kind of understanding that many scientists aim for, and they often intend it to be realist understanding. Our point here is that even when the understanding of a phenomenon is via seeing the theoretical laws or causes responsible for it, and even when what we’re aiming for is a kind of realist understanding, the vehicle that provides that understanding need not be empirically adequate.
How is this possible? The short answer is that understanding comes in degrees. A vehicle can be empirically inadequate – hence theoretically not ‘all true’ – but can get right some or many of the important theoretical features of the target and hence afford a degree of (realist) understanding of it.
It may be useful to think in terms of two different kinds of case here. One is the familiar case of ‘idealizing models’, on which there is a lot of literature. Roughly, these get some of the significant theoretical structure of what is to be understood fairly precisely right. The second, which we call ‘rough proximates’, gets significant parts, or perhaps all, of the structure right but only very roughly. In this case, we may think of the vehicle as providing understanding because it stands in stark contrast with what is otherwise available, which is not even roughly right. Sometimes, perhaps, for the purposes for which one wants to understand something, the departure in detail from the right account does not matter. We suspect there is far more to be said about rough proximates, and we separate them from the ‘idealization’ cases to encourage more attention to them. We shall discuss them first, then turn to ‘idealizations’.
Take the Rutherford model of the atom. Today the model is considered to be theoretically grossly inaccurate. According to modern quantum mechanical models, the electron does not revolve around the nucleus in planetary orbits as the Rutherford model pictures. The model is also empirically inadequate: it predicts that the electron will continuously lose energy and spiral into the nucleus, causing the atom to collapse – and of course, atoms don’t collapse, for if they did, matter wouldn’t exist the way it does. Despite these flaws the Rutherford model affords us some degree of (realist) understanding of atomic structure.
As realist understanding would have it, suppose we take correctness of theoretical features as one standard for evaluating the understanding provided by a model and take our current models to be more correct than older ones. Then the Rutherford model was on the path to these more correct models – it was significantly more correct than its predecessor, the plum pudding model, which takes the atom to be a ‘pudding’ of positive charge with electrons embedded in it. The Rutherford model was part of a chain of continually improving models comprising the Bohr model, the Bohr-Sommerfeld model, and the modern cloud model. As Catherine Elgin (2009) points out, understanding comes in degrees. We can think of the Rutherford model as a starting point. After all, it tells us that the positive charges in an atom are concentrated in a central nucleus containing protons and neutrons, and electrons surround it – a feature it shares with even the most modern model of the atom.
There is one clear sense of understanding – the sense of realist understanding – in which a model that gets the theoretical features of a target phenomenon just right, and is hence empirically adequate, can be taken to give us great understanding. In the same sense, one that is false and empirically inadequate can give us some, less-than-perfect (realist) understanding of the target owing to being somewhere in the vicinity of the true theoretical story: the atom is something like what the model says, and the model is better than others that are nowhere close to the theoretically true story. But why settle for partial understanding? For one, it is better than no understanding at all; further, there are many situations in which our aim is ‘some understanding’ – when explaining things to children, for instance, where the correct/true story can be too complex. For example, as Elgin (2009) points out, a child’s understanding of evolution according to which humans descended from apes is better than one according to which humans descended from butterflies. This could be a reason why the Rutherford model still finds a place in school science textbooks.
So models and theories that depart from the full theoretical truth and that may in consequence make wrong predictions about significant empirical factsFootnote 4 can still provide partial realist understanding. So empirical adequacy is not necessary for partial realist understanding. More, it is not even a good clue. It might be presumed that, ceteris paribus, if V1 and V2 are both vehicles of understanding of a phenomenon X, and V1 is more empirically adequate than V2, then V1 provides better partial realist understanding of X than V2.Footnote 5 This would be a mistake. There are a great many models that get right a great number of central empirical predictions, including ones that are deemed central from the point of view of a ‘true’ account, and yet are wide of the mark theoretically. This brings us into the very familiar, much worked-over philosophical territory of underdetermination, unconceived alternatives, and the like, so we will say no more here.
Scientific models often contain idealizations, exaggerations, and omissions of certain features of the target and thus deviate from the true theoretical story, and in consequence can be empirically inadequate. How do idealized models give understanding? There is a large literature on models; since we are concerned narrowly with understanding here, we concentrate on Elgin, who addresses this explicitly. Elgin (2012) calls idealizations, omissions etc. that enhance understanding, ‘felicitous falsehoods’. According to ElginFootnote 6 an idealized model can exemplify – highlight, exhibit, or display – characteristics it shares with the true causes, laws, or mechanisms responsible for the phenomenon it purports to explain. In doing so, Elgin argues, the model provides understanding of, and affords epistemic access to, those features in a way a more accurate model would not because the more accurate model introduces complexities that mask the features we care about. So it can provide more understanding than one that is more accurate but more complicated.
Idealized models of the kind Elgin discusses are likely to be empirically inadequate owing to the several theoretical falsehoods they contain. One nice example comes from economist Rodolfo Manuelli (1986), commenting on the models of Chicago School Nobel Prize winner Edward Prescott:
.... consider the models Prescott surveys ... Most of them are representative agent models. Formally, the models assume a large number of consumers, but they are specialised by assuming also that the consumers are identical. One of the consequences of this specialisation is a very sharp prediction about the volume of trade: it is zero. If explaining observations on the volume of trade is considered essential to an analysis, this prediction is enough to dismiss such models. But if accounting for individual fluctuations beyond the component explained by aggregate fluctuations is not considered essential to understand the effects of business cycles, the abstraction is not unreasonable. A case can even be made that if what matters, in terms of utility, is the behavior of aggregate consumption and leisure, then any model that helps explain movements in the two variables is useful in evaluating alternative policies. This usefulness is independent of the ability of the model to explain other observations. (5)
In this model – as Elgin would (rightly) claim – the effects of the aggregate behavior would be obscured if we took into account individual fluctuations. We gain understanding since the model depicts vividly what is supposed to be the correct mechanism for generating a business cycle, which depends on average behavior, though at the cost of getting woefully wrong some effects that depend on the distribution.
One specific kind of idealization that illustrates our point is what Cartwright (2006) calls “Galilean thought experiments” and Uskali Mäki (1994), ‘isolating models’: models that study what a single one (or small set) of the many causes of an empirical effect in a target setting contributes separately. This kind of vehicle necessarily distorts the setting in which the effect occurs, and the effect it predicts will be different, often dramatically, from the effect that happens. But it nonetheless gets right just what the particular cause in question contributes to the overall effect. Such models provide genuine realist understanding of an element of the theoretical structure responsible for that effect and of how that element contributes.
Understanding via vehicles that get other things that matter right
Here we take up some oft-discussed aspects of ‘unification’ in the philosophy of science literature.
The “overall” picture
Many philosophers of science have urged unification as a source of understanding. Michael Friedman (1974) is one famous example. According to Friedman, the understanding unification provides is global as opposed to local. Unifying explanations may not increase our understanding of independent phenomena, but they increase our understanding of phenomena overall. They do so by giving us a picture of the world ‘as a whole’ not just as a collection of separate parts. He explains: “From the fact that all bodies obey the laws of mechanics it follows that the planets behave as they do, falling bodies behave as they do, and gases behave as they do. …. [W]e have reduced a multiplicity of unexplained, independent phenomena to one”. (15, italics as in original) For Friedman this reduction is the very “essence of scientific explanation”: “A world with fewer independent phenomena is, other things equal, more comprehensible than one with more.” (15) As Friedman pictures it, this kind of understanding by unification requires that the unifying theory be true. It is supposed that it is a fact that all bodies obey the laws of mechanics, a fact that embraces a good many others. In consequence the unifying theory must also be empirically adequate.
But a unifying theory need not state the facts to give us a true picture – the right picture – of the world (or of a particular domain within it) ‘as a whole’, reducing the number of independent phenomena and making the world more comprehensible. We can invoke Cartwright’s (1980) early arguments from The Truth Doesn’t Explain MuchFootnote 7 about what are generally deemed to be our very best unifying theories – the unifying “high” theories in physics – to defend the view that a unifying theory may be as good at this job as can be and yet not be true. It can give us an excellent picture of the world as a whole, so long as we do not then expect to see the details correctly. Cartwright based her claims on the way she saw scientific modeling working in practice. The behavior of the planets does not ‘follow from’ the laws of mechanics. Rather, we derive the details of their behavior. We do so starting from those laws, but in the course of our derivations we distort what the laws say. The corrections are not unmotivated. A great deal of knowledge from other domains, and lots of experienced practice, goes into them. But they are ad hoc from the point of view of mechanics.
Even though, on an account like Cartwright’s, the laws are not true, still they may be the very best and indeed an excellent – and thus ‘the correct’ – way to see ‘as one’ all the disparate phenomena we derive from them. We might liken this to the kind of realism about the choice of laws that many advocates of the Mill-Ramsey-Lewis ‘best system’ account of laws seem to adopt. On the Mill-Ramsey-Lewis account, the laws are the simplest set of claims from which we can derive the widest set of phenomena. Of course one may suspect that there is no ‘best’ system. But many act as if there is and that fundamental physics is on its way to finding it, perhaps even to finding one ‘simple’ system from which all facts can be derived. What we’d like to point out is that this kind of realism about the system – that there is one unique best one – is independent of whether the lower level facts ‘follow from’ the unifying laws, as Friedman pictures it, or we derive them, with distortions, as Cartwright sees it.
So, a theory may give us an understanding of a domain ‘as a whole’ without being true to any of the phenomena in that domain. This can be classed a kind of realist understanding, supposing that we can be right or wrong about what the best picture of the whole is. How does empirical adequacy fare for this kind of understanding? The answer is immediate. The unifying theory that provides the understanding will generally be far less empirically adequate than the lower-level theories it unifies.
Perhaps we should recall at this stage that we are not committed one way or the other to realism about any of the items we discuss. For those who think that there is no right or wrong about the choice of a unifying theory when no such theory is true, it seems the understanding provided by unification would then be, in our system of classification, not ‘realist’ but rather ‘pragmatic’: it makes things more comprehensible to us. But in this sense it seems to need no argument that a theory can at one and the same time improve comprehensibility and diminish empirical adequacy.
The natural classifications for laws
This is the central job of theory according to Pierre Duhem, who thought that successful physics categorizes empirical laws in a way that progressively reflects an underlying ‘natural’ classification – something still reflected in how physics theory is organized today. Consider for example theories in physics with well-known names: Newton’s theory of gravity, Maxwell’s theory of electromagnetism, Einstein’s theory of relativity, quantum gravity, or string theory. Whether these theories are true or not, they organize empirical phenomena under them in a way that allows for subject-specialization in physics and the detailed comprehension that goes with it, which promotes new visions, new practices, and what gets called ‘the growth of knowledge’. Even if one argues that this is a realist understanding – that there is a right way to sort laws together into separate categories (as some take to be Duhem’s viewFootnote 8) – as with the overall picture, when we formulate laws in ways that make them fit into tidy categories, the laws so formulated may be less empirically adequate than if they were formulated just so as to get the empirical phenomena exactly right.
It is sometimes said that theory supplies understanding by revealing the patterns in nature. On this view, in Philip Kitcher’s (1989) words, “Understanding the phenomena is not simply a matter of reducing the "fundamental incomprehensibilities" but of seeing connections, common patterns, in what initially appeared to be different situations.” (pp. 81–82) Often these patterns are taken to be real: there is a fact of the matter about what patterns there are and just what they are like.Footnote 9 In this case the understanding supplied is a realist understanding.
Still, when a theory supplies understanding by correctly showing patterns in the world, just as when it supplies understanding by reducing the number of independent phenomena, or showing the overall picture, or getting the laws placed in the right categories, it may well be less empirically adequate than a theory that aims just for empirical adequacy with no attention to making the patterns visible. The reason can be similar in all these cases. As with Elgin’s “felicitous falsehoods”, often the best way to bring out similarities and differences, or to show overall patterns or how things fit together, is by using a representation that is an average, or blur, or idealization of the real things, one that is not true to any of them. This is widely recognized in the case of seeing the trees as a forest, where it is clear that in seeing them as a forest we both lose and misrepresent a lot of empirical detail. Similarly, we can see and appreciate a pattern even if each individual piece is not entirely accurately represented and departs in various ways from it. And seeing together things that are very much alike is a way of understanding them that is in no way dependent on either the truth or the empirical adequacy of the vehicle that unites them.
Often there is understanding of the world to be had from vehicles not owing to their having any proximity to truth or empirical adequacy – so the vehicles need not be remotely true or empirically adequate. One kind of understanding that fits this bill is counterfactual understanding: understanding that comes from being able to see counterfactual possibilities. (See Lipton (2009) for a fairly detailed account of this kind of understanding.) We will consider counterfactual understanding of three different kinds.
Understanding via vehicles that provide simple make-believe models
One way to provide counterfactual understanding is by constructing simple make-believe, often very diagrammatic, worlds. These are frequently described in highly abstract terms, or, where more concrete terminology is used, the descriptions are meant to carry little of their ordinary content.
Consider Akerlof’s (1970) model of the car market. The model pictures an abstractly described cause – asymmetric information – and a concretely but thinly characterized effect: a big difference in price between new and slightly used cars. In the model, asymmetric information is constituted by the seller of the car having much relevant information about its condition; the buyer, little. As a result, the price a rational buyer will offer for a used car depends on the average quality of used cars on the market; the price that a seller will accept depends on the quality of that particular vehicle. Therefore, no one will sell a used car whose quality is higher than the average across the used car population. Rational buyers, knowing this, will further reduce the price they offer, which causes sellers to withhold even more cars, and so on. Ultimately the market collapses and no used cars are offered for sale.
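The unraveling dynamic just described can be sketched in a few lines of code. What follows is our own illustrative parameterization, not Akerlof’s exact formalism: qualities are assumed uniform on [0, 1], a seller of quality q accepts any price at or above q, and buyers value a car at 1.5 times its quality but know only the average quality of cars currently on offer.

```python
# A minimal sketch of the lemons-market unraveling (illustrative numbers,
# not Akerlof's exact formalism). Qualities are uniform on [0, 1]; a
# seller of quality q accepts any price >= q; buyers value a car at
# 1.5 * q but know only the average quality of cars on offer.

def unravel(steps=30):
    threshold = 1.0  # sellers with quality <= threshold stay in the market
    for _ in range(steps):
        avg_quality = threshold / 2   # mean of uniform [0, threshold]
        price = 1.5 * avg_quality     # rational buyers' offer
        threshold = price             # sellers above the price withdraw
    return threshold

print(unravel())  # shrinks toward 0: in the limit, no used cars are traded
```

Each round the buyers’ offer is three-quarters of the previous participation threshold, so the set of cars on offer shrinks geometrically toward the empty market of the model’s conclusion.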
This is of course a bad prediction about the sale of used cars in the real world. In so far as we think that the difference in knowledge about cars is working in real world cases as it does in the model, but the results are different because the model ignores the other causes at work (e.g., used car salespersons’ care for their reputation), we could place this example in the category of partial realist understanding. There are a few reasons for putting it here instead. The first is that it does give us counterfactual understanding of whether and how asymmetric information affects car sales in a world where no other causes are present. The idea is that if we lived in a world where asymmetric information were the only cause, then the used car market would collapse. This gives us not just (realist) understanding of this alternate world, but also (counterfactual) understanding of our own. For instance, with this model in view a government might successfully implement very strong full-disclosure legislation for car sellers that eliminates asymmetric information. The Akerlof model would then give us very good counterfactual understanding of why that car market works so well, even though it does not depict any mechanism that exists there – this particular market works well in part because the government has eliminated the opportunity for sellers to have better information about their cars than buyers do. Striving for empirical adequacy would hinder the model from illustrating this possibility.
A second reason is that the model is not used primarily to understand car markets but rather to understand what happens, or could happen, or does not happen in a huge variety of quite different situations from asset pricing to the signing of the Magna Carta. This contrasts with the usual examples of Galilean idealization; for instance, the model of how bodies move when gravity alone is at work is generally used to help us understand real world motions. But Akerlof’s model about what would happen to a car market were differences in knowledge of cars’ features and history at work unimpeded is supposed to help us understand why the Magna Carta was signed.
Independent of where the understanding these models provide should be catalogued, it should be clear that models like this will not get better at supplying that kind of understanding just by increasing their empirical adequacy, and for the most part, they would probably get worse at it.
Another example of a model that can be seen as giving counterfactual understanding is economist Thomas Schelling’s (1978) checkerboard model, which gives a story about racial segregation. It is easy to see how neighborhoods would be racially segregated if individuals have strong discriminatory preferences. But this simple scenario does not make us understand how people could be segregated if they prefer mixed neighborhoods. To understand this, Schelling distributes nickels and dimes on a checkerboard. Depending on how many of its neighbors are of the same denomination, a coin is moved to a new location where it is less outnumbered. Schelling found that even when coins are not moved unless 2/3 or more of their neighbors are different, ultimately the coins bunch into neighborhoods all of the same denomination.
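Schelling’s procedure is easy to simulate. The sketch below is a minimal version under our own assumptions about one common setup – a 20×20 wrap-around grid, 10% vacancies, and random relocation of discontented agents – not Schelling’s original hand-worked board, but it exhibits the same bunching.

```python
import random

# A minimal sketch of a Schelling-style checkerboard simulation
# (illustrative setup: 20x20 toroidal grid, 10% vacancies, agents move
# only when fewer than 1/3 of their neighbors are of their own type,
# i.e. when 2/3 or more are different, as in the text).

def similar_fraction(grid, r, c, n):
    """Fraction of occupied neighbors sharing this cell's type."""
    me, same, total = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = (r + dr) % n, (c + dc) % n
            if grid[nr][nc] is not None:
                total += 1
                same += grid[nr][nc] == me
    return same / total if total else 1.0

def run(n=20, vacancy=0.1, tolerance=1/3, sweeps=60, seed=0):
    rng = random.Random(seed)
    cells = [0, 1] * int(n * n * (1 - vacancy) / 2)
    cells += [None] * (n * n - len(cells))
    rng.shuffle(cells)
    grid = [cells[i * n:(i + 1) * n] for i in range(n)]
    for _ in range(sweeps):
        for r in range(n):
            for c in range(n):
                if grid[r][c] is None:
                    continue
                # Move only if fewer than `tolerance` of the occupied
                # neighbors are of this agent's own type.
                if similar_fraction(grid, r, c, n) < tolerance:
                    empties = [(i, j) for i in range(n) for j in range(n)
                               if grid[i][j] is None]
                    i, j = rng.choice(empties)
                    grid[i][j], grid[r][c] = grid[r][c], None
    # Average like-neighbor fraction as a crude segregation index
    occ = [(r, c) for r in range(n) for c in range(n)
           if grid[r][c] is not None]
    return sum(similar_fraction(grid, r, c, n) for r, c in occ) / len(occ)

print(run())  # typically well above the ~0.5 expected of a random mix
```

A random mix gives a like-neighbor fraction near one half; after the simulated moves the index climbs well above that, which is the bunching Schelling observed with his coins.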
This model talks about coins on a checkerboard and predicts how their locations will bunch. It makes no empirical predictions about people and segregation. What it does is far more subtle and interesting. It deals with a possible situation. As Emrah Aydinonat (2007) notes, the model is constructed to provide insight into how certain individual mechanisms that real people may display (i.e. individual tendencies to avoid a minority status) may interact under certain conditions to produce segregation. But that is not what the descriptions in the model stand for.
Do we slip into relativism here? Can any old model that constructs a make-believe world give us counterfactual understanding of some aspects of the real world? No: Schelling’s model makes a plausible conjecture that segregation may be the unintended consequence of even mild discriminatory preferences – and this is based on our familiarity with real people and their preferences. In this way it is consistent with our real world in key respects, like its assumptions about people.
The Schelling model does not tell us that mild discriminatory preferences do result in large segregation, but it opens our minds to a previously unimagined possibility – who’d have thought that such mild racial preferences could lead to complete segregation in a world quite similar to ours? Here then is a case of a celebrated model in economics – celebrated for providing insight into, and what we are labeling ‘counterfactual understanding’ of, racial segregation – where empirical adequacy simply does not figure. The model is empirically sterile with respect to the issues it gives insight into – making no predictions about them at all.
Understanding via vehicles that show up impossibilities: ‘failed’ ideas from the past
Consider again the Rutherford model. In addition to the realist understanding it gives, it also gives us counterfactual understanding. Owing to its empirically incorrect prediction about the inward spiraling of the electron, it shows how an atom couldn’t be: it illustrates a physical impossibility. While the model was proposed by Rutherford in the hope of giving realist understanding, today we can use it for gaining counterfactual understanding: if electrons revolved around the nucleus the way the model says they do, then matter couldn’t exist as we know it.
We’d again like to stress that this plays a particularly significant role in science teaching. The Rutherford model was an important step in the evolution of ideas about atomic structure in the history of science, as it is in a science learner’s progression of ideas about the phenomenon. In Kuhnian jargon, within the Rutherford paradigm – given the results of his gold foil experiment – the model seemed very plausible. So before being introduced to the idea of an accelerated charged particle losing energy, students of science will likely appreciate Rutherford’s model. Once they are introduced to this conflicting idea of the spiraling electron, they will be able to see why an atom simply couldn’t be the way Rutherford thought it was. Similar arguments can be made about gaining understanding from other ‘failed’ ideas such as the luminiferous ether, the geocentric model of the solar system, and so on. The understanding we gain here seems especially significant when we look at the study of science as a study of the evolution of scientific ideas. And it’s worth noting that understanding here squarely depends on empirical inadequacy!
Understanding via vehicles that provide plausible explanatory stories
Empirically inadequate plausible explanatory stories can give us understanding by showing how things can be consistent with facts we insist on, such as accepted theory. Consider the MIT bag model. It describes hadrons (particles like protons and neutrons) as ‘bags’ in which (two or three) quarks are spatially confined, forced by external pressure. This takes into consideration the fact that quarks have never been found in isolation and are hence thought to be spatially confined. With the help of boundary conditions and suitable approximations, the single model parameter (bag pressure) can be adjusted to fit hadronic observables (e.g. mass and charge). Stephan Hartmann (1999, 336) observes that the predictions of the model only very modestly agree with empirical data. By normal empirical standards, the model fares badly. Are quarks really confined the way described in the model? We don’t know – and if empirical adequacy is a guide to truth, then very probably not.
Hartmann asks why physicists entertain the model, despite its empirical shortcomings. His answer is that it provides a “plausible story” by which it enhances our understanding. The bag model is a “narrative told around the formalism of the theory”; it is consistent with the theory; and importantly, it gives a plausible, intuitive, and visualizable picture of a hadron as quarks confined in a bag. Here we also get modal understanding. To the question, ‘How could quarks be spatially confined?’, this model answers, ‘Possibly, as if they were in a bag’. The answer is a good one because it is easily visualizable and because it illustrates a possibility about quark confinement.
A common response is that the model is itself understandable, but that it does not provide understanding of the target unless it is reasonably empirically adequate to it – we understand the model, but not the target. No, Hartmann contends: “[A] qualitative story, which establishes an explanatory link between the fundamental theory and a model, plays an important role in model acceptance” (1999, 15). That’s because the model gives a story that relates to the known mechanisms of quantum chromodynamics, the fundamental theory of this domain.Footnote 10 Not any old model that’s visualizable and intuitively plausible will do the job: although empirically inadequate, the bag model is consistent with much of what the theory says. It may be too bad for realists and empiricists that this model is not empirically adequate, but theoretically there is little reason why quarks couldn’t be confined this way. Here again is a model highlighting a theoretically as well as intuitively plausible possibility.
The final kind of understanding in our catalogue, described by de Regt (2014), comes from putting a theory or model to practical use in manipulation and control, which lines up closely with the other aim of science we discuss in this paper – managing the world. There is understanding to be had of the world via a vehicle that helps us manipulate and control it – call this pragmatic understanding. De Regt associates understanding with the intelligibility of a theory. Intelligibility in this case is pragmatic and contextual: it consists in knowing how to use the theory for prediction and manipulation/control – so understanding, for him, is a skill.
As mentioned earlier, de Regt advances arguments similar to Elgin’s: he criticizes and rejects the realist thesis about understanding. He then shows how, in trying to predict and control (parts of) the world, we employ models and theories that are judged to be false. “Whether or not theories or models can be used for understanding phenomena does not depend on whether they are accurate representations of a reality underlying the phenomena” (2014, 16), he maintains. (He gives many examples of false theories used for domain-specific manipulation and prediction, such as Newtonian mechanics.) Nor, we add, does it depend on whether they are empirically adequate. Although de Regt does not explicitly say much about the empirical adequacy of theories and models for understanding, the arguments he gives against requiring a vehicle of understanding to be true apply equally to requiring it to be empirically adequate. We shall say no more about this here, because we pursue this line about models and theories for use and manipulation in the next section.
Unification can also supply a kind of pragmatic understanding. As Mary Morgan (2010) points out in her work on the travel of facts and techniques from one domain to another, it can be extremely useful to see that the laws grouped together under the same unifying claim are similar in significant ways. It allows us to use similar methods of study, modeling strategies, approximation techniques, and the like, and it suggests analogous predictions to look for from one domain to another. All this generates new concepts, new theories, and new methods; it helps us advance our sciences at both the theoretical and the practical level.
Note that, just as with realist unificatory understanding, a unifying theory can supply pragmatic understanding even while diminishing empirical adequacy. When it comes to borrowing techniques, looking in one domain for predictions analogous to those already established in another, and the like, it is the analogies among the unified sub-theories that matter, not the empirical adequacy of the unifying theory. The unifying theory may be substantially less empirically adequate than the sub-theories it unifies.