In the first part of the paper, I depicted understanding of facts or phenomena as the provisional end result of a process of rearrangement of one’s web of cognitive attitudes, such that the corresponding informational units come to fit into the web in question. When an informational unit p fits into a web W, I suggested, it can be derived from W, it does not clash with already established contents of W, and it is properly allocated within W—relative to the other contents that pertain to the same subject matter. In the second part of the paper, granted this model of understanding, I investigated the role of testimony in providing a hearer with understanding, or in yielding advancement in a hearer’s epistemic standing. The general idea was that a piece of testimonial information generates (advancements in) understanding when it—or, more precisely, its semantic elaboration—yields appropriate rearrangements in a web of cognitive attitudes, rearrangements that result in the information corresponding to the phenomenon to be understood fitting into the web in question. I pointed to two conditions that a piece of testimony (e.g., an explanation) needs to satisfy in order to provide one with understanding and, more generally, to yield genuine advancements in one’s epistemic standing. The piece of information needs to be reasonable, or credible, to the hearer, on the one hand, and it needs to be semantically intelligible to her, on the other.
However, is fitting, so conceived, really enough for understanding? Or is it conceivable that a piece of information fits into a web of cognitive attitudes while genuine understanding (of the corresponding fact or phenomenon) is absent? Suppose a subject belonging to our epistemic community is struggling to understand the so-called apparently retrograde motion of the planets. She struggles to make sense of the fact that some planets suddenly and (for her) unpredictably invert their direction of movement. She runs into another subject who tells her that the phenomenon she is observing is due to the fact that some planets do not simply perform a circular orbit around the earth (deferent); they also perform smaller circular orbits around the deferent itself (epicycle, literally: upon the circle). These planets, hence, appear to her as moving backward while orbiting on the side of the epicycle closer to the earth, if epicycle and deferent move in the same direction—or on the side more distant from the earth, if they move in opposite directions. Suppose that our budding astronomer has no reason to doubt the credibility of the testifier, and that all the above-mentioned conditions are satisfied. The explanation the subject receives is intelligible to her, and it is perfectly reasonable relative to the (extremely poor) background knowledge she has about astronomy. Once the information about epicycles and deferents is incorporated, elaborated and understood, the phenomenon of the retrograde motion starts fitting into her updated and enriched web of cognitive attitudes. That is: the phenomenon is not puzzling for her anymore, and it is to be expected in light of the already established content of her web; moreover, the corresponding informational unit has its place relative to the other items inhabiting her web and pertaining to astronomy. Now, would we grant genuine understanding to the subject in a similar case?
Would we say that she understands the retrograde motion of the planets, and that she understands why certain planets sometimes appear to us as moving backwards? We certainly would not. We would probably rather say that, although the subject is probably experiencing a sense of understanding, and although her sense of understanding might appear to her to be well grounded, or a reliable sign of genuine understanding, she does not really understand. She understands the phenomenon of the retrograde motion of the planets relative to a theory or an explanation that she holds true, but she does not understand the phenomenon in an “objective”, or “real” sense.
Behind this simple example, there is a general worry. The model suggested in this paper might be too subjective, or too internalistic, to do proper justice to understanding. We are working with the idea that understanding of single facts and phenomena involves, or needs to be explicated in terms of, fitting. The example depicted above shows that fitting, in certain conditions, might be enabled even by pieces of testimonial information or explanations that are utterly false, or untenable in the given epistemic circumstances. But intuitively, we want our theory of understanding to rule out the possibility of an utterly false or bad explanation providing one with genuine understanding. This is because we take “understanding” to be a success term, and to denote a certain cognitive achievement (Elgin 2007a, b, p. 33). Understanding needs to be somehow grounded on facts, or must “answer to the facts” in some sense (Elgin 2007a, b, p. 37). It would certainly be a discomforting result if a theory of understanding forced us to conclude that a member of our epistemic community committed to the existence of epicycles and deferents genuinely understands astronomical phenomena.
I mentioned before that not every explanation will provide one with understanding. An explanation, I said, needs to be credible or reasonable for the subject in order to yield substantial advancements in her epistemic standing. This condition, however, does not help much. Not everything that appears to one to be reasonable is reasonable in an objective sense, or in the given epistemic circumstances. Suppose one is working and making judgements of reasonability from within a fairly bad web of cognitive attitudes—one, e.g., that contains mostly false beliefs about the relevant subject matter and many biased standards of justification. Such a web will not be a good basis for telling good from bad explanations, and will typically give rise to a discrepancy between seeming reasonability and actual reasonability. The moral to draw from this is that the model of understanding presented here needs to be strengthened somehow, because we certainly do not want it to reduce to an analysis of what it feels like to have the (maybe subjectively or internalistically justified) impression of understanding. We want understanding, and not the mere sense of understanding, to proliferate, with the aid of testimony, in our epistemic community.
One possibility here would be to embrace a certain measure of factivism. Why not simply add a truth requirement to the picture? Why not simply say that one needs to get things right, at least to a certain extent, in order to genuinely understand? One’s understanding could then be said to improve insofar as the truth-content of one’s web of cognitive attitudes increases or becomes more significant. A piece of testimonial information or an explanation—factivists would say—needs not only to enable fitting, in order to provide one with genuine understanding; it also needs to be true. If the explanation depicts dependence relations, these relations must have counterparts in reality. If entities are postulated, these must exist. If processes are described, these must actually occur, and so on. Intuitively, we said, we would not grant genuine understanding of astronomical phenomena to our budding astronomer who believes in a system of epicycles and deferents. Factivists have a straightforward way to do justice to this intuitive and seemingly unquestionable judgement: the astronomer fails to understand, because she is committed to an explanation that is false, and a false explanation cannot work as an effective source of understanding. End of story.
Factivism, however, might turn out to be more problematic than it seems at first sight.
Elgin (2012, 2017) famously argues that a factive conception of understanding forces us to deny that contemporary science affords or embodies an understanding of the phenomena it seeks to explain and account for. Contemporary science, according to Elgin, is a paradigm of epistemic success. If contemporary science does not afford understanding of its subject matter, probably nothing does. Now, scientists typically deploy representational devices and epistemic mediators that (are known to) misrepresent their intended domain. These epistemic mediators simplify, abstract, and sometimes even distort their subject matter in order to make certain aspects of this subject matter salient. They provide us with understanding of their intended domain not by mirroring it; rather, they create a cognitive environment in which certain features of the domain stand out. Our best contemporary science, e.g., leads us to think of gases as comprised of dimensionless, spherical molecules that exhibit no mutual attraction. As Elgin puts it: “There is no such gas; indeed, if our fundamental theories are even nearly right, there could be no such gas” (Elgin 2017, p. 15). Now, the fact that the ideal gas model departs from reality in certain respects does not seem to obstruct its epistemic functioning. On the contrary, it seems to foster it: the idealized model makes us appreciate how pressure, volume and temperature are related in real gases. By picturing gases as the model suggests, we genuinely understand something of gas-phenomena. The ideal gas model, however, is not simply pragmatically useful. It is not simply a good or reliable instrument to predict gas-phenomena. It has an undeniable epistemic value. Now, if we demand from an “understander” true beliefs and only true beliefs about a subject matter, we are forced to deny that somebody who masters and accepts the ideal gas model genuinely understands gas-phenomena.
But this, according to Elgin, is highly counterintuitive.
Moreover, a factive conception of understanding does not sit well with our practices of ascription of understanding. Understanding is not an all-or-nothing cognitive achievement. Our understanding grows, or improves. It gets better, deeper, more sophisticated over time. The steps leading to (full) understanding, however, might involve simplifications, approximations, and even the incorporation of false items of information. While some falsehoods are certainly detrimental to understanding, some others are not. We would certainly grant (a certain measure of) understanding to a child who has incorporated into his web of cognitive attitudes the information that human beings descended from apes. This information is false, as, according to evolutionary theory, human beings and apes descended from a common ancestor who was not, strictly speaking, an ape. Still, the false information, and the way the child probably rearranged his web of cognitive attitudes to make room for it, signal some understanding of the relevant subject matter. A factive conception of understanding has a hard time explaining why the child is epistemically better off than his classmate, who believes, say, that human beings descended from butterflies or did not evolve at all (Elgin 2007b, p. 8).
These arguments by Elgin have not persuaded everyone. I do not claim that they prove factivism to be false or completely untenable. Still, I believe they succeed in showing that factivism might be problematic, and that the issue of the relation between understanding and truth is far from settled. Thus, I think it is worth exploring the possibility of grounding understanding on facts without appealing (directly) to truth.
Suppose, then, that after a process of rearrangement of our web of cognitive attitudes (maybe yielded by a verbal interaction with another subject) we have reached a point at which a certain informational unit fits into our web. How do we make sure that we are on the right track? How do we make sure, or at least raise the probability, that we are genuinely understanding the corresponding phenomenon, and not just seemingly understanding it? How do we rule out the possibility of being in an epistemically bad scenario, e.g., of experiencing a mere sense of understanding? It seems that the best one can do is to take seriously two constraints: an empirical and a social one.
The empirical constraint tells us that as long as our web of cognitive attitudes shows itself to be a decent and mostly reliable guide for getting along in the world, the epistemically most responsible behavior is to stick to it. If problems arise—predictions fail, expectations are not met, goals are not reached, problems remain unsolved, and the like—we should take this as a sign that something about our already established corpus of beliefs and commitments needs to be revised or rearranged, ideally in a non-ad-hoc manner. But we probably need something more than this. In domains in which what matters is, e.g., retrodiction, or the explanation of past events, there will be no way to test our beliefs and assumptions and to bring them to face the tribunal of immediate experience. Moreover, recent empirical studies on the phenomenon called “illusion of explanatory depth” suggest that we often act successfully within a certain domain not because the beliefs we hold about it are true (or reasonable in the given epistemic circumstances), but simply because the domain is particularly user-friendly. In such cases, the actions we perform are not sufficiently based on the beliefs we hold about the underlying mechanisms, to the effect that the beliefs in question are not really responsible for our success in reaching our goals or in predicting the occurrence of events. If this phenomenon is as widespread as cognitive scientists think it is, practical and empirical success should not, at least not unconditionally, make us too confident that what we believe about the relevant domain is correct, tenable in the given epistemic circumstances, or embodies genuine understanding (see Trout 2002; Ylikoski 2009; Sloman and Fernbach 2017).
Here is where the social constraint comes into play. As Elgin nicely puts it, understanding is not just a matter of being “in suitable relation … to the phenomena …, but also to other members of the epistemic community” (Elgin 2017, p. 121). How do we make sure that we are on the right track, then? The social constraint tells us: use other subjects’ opinions as a yardstick; test what you believe or endorse by comparing it to what other members of your epistemic community think. More specifically: take disagreements with your peers as an indication that you could be wrong, and disagreements with epistemically superior subjects (experts and epistemic authorities) as an indication that you are very probably wrong—as far as their domain of expertise is concerned. And on the other hand: take stable agreements as providing a prima facie and defeasible justification to stick to what you have, until problems or anomalies arise. In order to do justice to the intuition that our astronomer committed to a system of epicycles and deferents fails to understand the retrograde motion of the planets, hence, we do not need truth: her position is untenable, relative to what most of her epistemic community thinks, and relative to objective features of her social-epistemic environment. A factivist, we said, would claim that the reason why she fails to understand certain astronomical phenomena is that her web of cognitive attitudes fails to mirror reality properly. Another possible explanation is that she fails to understand because the social constraint is not fulfilled: her web fails to approximate our webs, and the webs of other members of her epistemic community, in a robust enough way.
Is fitting, conjoined with an empirical and a social constraint, really enough for understanding? In the end, one might argue, experience tells us in a straightforward manner only that we are wrong, not that we are right; and it is actually conceivable that we are deeply mistaken even about matters that are deeply rooted in our shared worldview and that we have agreed upon for a very long time. Still, shaping and improving one’s web of cognitive attitudes while trying to keep these two constraints fulfilled seems to be the best one can do in the epistemic circumstances. Whether the best one can do is good enough for genuine understanding is a question that probably deserves another paper.