Introduction

The standard view in the recent literature in social epistemology has it that while knowledge can, given the right conditions, be transmitted via the testimony of others, understanding is very difficult, or even impossible, to pass on. Linda Zagzebski, for example, writes that “understanding cannot be given to another person at all except in the indirect sense that a good teacher can sometimes recreate the conditions that produce understanding in hopes that the student will acquire it also” (Zagzebski 2009, pp. 145–146). Along similar lines, Alison Hills points out that “if you are attempting to gain knowledge, testimony can serve as a justification for your own belief, but it is not usually a good way of acquiring […] understanding” (Hills 2009, pp. 19–20).

The idea underlying the standard view seems to be that, while the acquisition of testimonial knowledge can be, and very often is, a passive affair, gaining understanding requires significant cognitive work on the part of the hearer. But if most of the work that needs to be done in order to obtain understanding is performed by the hearer herself, it does not really make sense to say that the understanding she gains is “testimonial”—in the sense of being appropriately based or (epistemically) dependent upon testimony. To be sure, different accounts of testimony require different amounts of effort from the hearer’s side, and some accounts are more demanding than others even in cases in which knowledge or simply true belief is at stake. Still, according to the standard view, the cognitive effort required for obtaining understanding is more significant, and probably also qualitatively different, with the effect that talk of “testimonial understanding”, or “second-hand understanding”, is not justified.

It is widely acknowledged in the literature that understanding involves or is even reducible to a certain act of “grasping”. What “grasping” refers to is a matter of dispute. Some claim grasping to involve appreciating, or “seeing how the various parts of a system depend upon one another” (Grimm 2011). “Seeing” here does not seem to be reducible to a form of believing truly or knowing and, therefore, seems to be something that escapes the possibility of being brought to expression with the aid of mere assertions. Many authors defend what we might call an ability-based account of grasping. Catherine Elgin, e.g., suggests that “to grasp a proposition or an account is at least in part to know-how to wield it to further one’s epistemic ends” (Elgin 2017, p. 33, my emphasis). The idea here seems to be that grasping a proposition is a matter of being able to use the information appropriately, as a basis for inference, reasoning and maybe even action (when, as Elgin puts it, one’s ends are cognitive). In a very similar vein, Alison Hills contends that “when you grasp a relationship between two propositions, you have that relationship under your control. You can manipulate it. You have a set of abilities or know-how relevant to it, which you can exercise if you choose” (Hills 2015, p. 663, my emphasis). This ability-based account of understanding seems to ground the skepticism concerning the possibility of understanding-transmission. Grasping, as long as it is conceived in terms of abilities or know-how, is not something that one can pass on to another in the same way as one would pass on isolated pieces of information or items of knowledge. I can give you some advice on how to perform, say, backstroke; I can show you. But you won’t learn as long as you sit by the pool.
You may even listen carefully to my explanations, understand everything I say and trust me blindly, but you will need to do more than believe me if you want to learn how to swim: you need to use the information I gave you—you need to jump in and practice yourself.Footnote 1

Kenneth Boyd (2017) has recently argued that the standard view can be challenged, and that there are at least certain forms of understanding that can actually be transmitted via a testimonial interaction. (For a claim along similar lines, see Grimm 2019.) Boyd works with the assumption that understanding involves two components: an informational and a grasping one. He concedes, or at least takes the possibility seriously, that grasping is to be spelled out in terms of abilities, skills or knowing-how. Nonetheless, he manages to argue very convincingly that understanding can be properly transmitted—at least in those cases in which understanding is “easy” relative to the potential understander, and her background knowledge is such that the “grasping” requirement is automatically satisfied when she processes the informational unit received from the testifier. Adherents of the standard view usually point to an asymmetry between knowledge- and understanding-acquisition on the basis of testimony: while gaining knowledge is (or at least can be and very often is) a straightforward affair, significant cognitive effort on the part of the hearer and potential understander is required when understanding is at stake. Boyd’s arguments show that there are contextual factors we need to consider, and that if we do, we realize that there are cases in which the amount of cognitive work required from the potential understander actually resembles the amount required for gaining testimonial knowledge. Hence, he concludes, nothing stands in the way of talking of testimonial understanding, at least for these particular cases.

Whether or not understanding can be genuinely testimonial is an important question for social epistemology, and Boyd has taken a significant step in the process of persuading us that we should answer it in the affirmative. However, I believe epistemologists should not lose sight of where the heart of the issue actually lies. It is widely acknowledged that knowledge has a social dimension: we are (epistemically) dependent upon one another for most of what we know. But this holds, at least in some measure and in some sense, for understanding as well. We learn from one another’s words, tellings and assertions all the time. Sometimes we make sense of things for ourselves; when for some reason we struggle, we reach out to others who understand better than we do and ask them for (hopefully) understanding-providing explanations. In certain contexts, we even have the rational expectation that other members of our epistemic community—experts, epistemic authorities and the like—will help us in gaining understanding or in making advancements in understanding, by engaging in a verbal interaction with us. Testimony, hence, plays a very important role in the dynamics of proliferation of understanding within the epistemic community. Now, we value understanding at least as much as we value knowledge, maybe even more (Kvanvig 2003; Pritchard 2010). It is fair to say that it is desirable that understanding is augmented within the epistemic community, and that we want understanding to proliferate. Granted this, what can we do in order to maximize the probability that the process of learning and acquiring understanding from one another’s words succeeds? This is, roughly, the question this paper aims at answering.Footnote 2

It might be objected at this point that I plan to deal with a question that lies outside the scope of epistemology. The interesting question, so the objection might go, is whether testimony can work as an epistemic source of understanding, not merely as a source of some kind or other. Nobody would dare to question the fact that testimony can be among the factors that somehow lead to the achievement or improvement of understanding. However, if the relation between testimony and understanding turns out to be not one of epistemic dependence but of a different kind, one might argue, then it is just irrelevant to epistemology.

I beg to disagree. That we gain understanding or make advancements in understanding on the basis of one another’s words, tellings and assertions is obviously not something that needs to be argued for; it is a fact. It is a common, widespread phenomenon that we observe and experience within our epistemic practices all the time. But it is actually more than this. It is also something that we want to happen. It is a phenomenon that we want to have under our control. To achieve this control, we need a theory about the mechanisms and dynamics that are responsible for it. We need to understand why sometimes these dynamics have the desired effect, and sometimes, instead, despite all our efforts and good will, they go awry. This understanding is something that epistemology—i.e. any epistemology concerned with the questions of what understanding is, of what good testimony is, and of how testimony contributes to augmenting and spreading epistemic goods within the epistemic community—can and should contribute to providing. Hence, even if the relation between testimony and understanding turned out to be merely causal, rather than one of genuine epistemic dependence, it would in any case be of paramount importance to investigate the conditions under which the cause will have the desired effect.

By looking at our epistemic practices, we realize two things. First, we see that gaining knowledge via testimony is usually easy, while gaining understanding (cases depicted by Boyd 2017 set aside) is usually hard. A theory of understanding should provide us with an explanation for why this is so. Second, we see that not every piece of testimony and not every testimonial interaction is epistemically worth the same (for everybody). Some epistemic agents are particularly good (and some are particularly bad) at bringing their interlocutors to understanding. But how so? What makes a testifier a good one, or a reliable one, when it comes to generating understanding or promoting advancements in understanding in her hearers? A theory of understanding should have something to say about these questions as well.

In what follows, I suggest a tentative model of understanding that is not ability-based, and that meets these two requirements. In the first part of the paper, I defend the idea that understanding facts or phenomena is a matter of bringing the corresponding informational units to “make sense” relative to, or to “fit” into, one’s already established corpus of beliefs and commitments that bears on the relevant domain or subject matter. In the second part of the paper, I use this model to shed light on the dynamics of gaining understanding, or of making advancements in understanding, on the basis of other subjects’ tellings and assertions. I show that the model has the resources to explain (i) why knowledge and understanding relate differently to testimony, and (ii) why some pieces of testimonial information are better than others at yielding advancements in one’s epistemic standing. In the last part of the paper, I show how this model of understanding could be strengthened to overcome the potential criticism of being excessively subjective.

Understanding: A Tentative Model

What does it mean to understand something? There are many different types of understanding (Baumberger et al. 2016). For the sake of simplicity, I will focus on understanding directed at single facts, events, or phenomena. When does a phenomenon count as understood by a certain subject—at least to a certain extent? The model developed here builds on the following intuition: if a subject fails to understand a phenomenon, this means, roughly, that she cannot make sense of it, i.e., that the corresponding piece of information does not fit with everything else that she has good reason to hold true or to endorse about the world. If, in contrast, a phenomenon is understood—at least to a certain extent—this means that the corresponding piece of information does fit. The notion of “fitting” involved here needs to be sharpened to be helpful. In order to do so, I start by introducing the notion of a web of cognitive attitudes.

The Web of Cognitive Attitudes

I suggest calling the set of informational units believed, accepted or endorsed by an epistemic subject (S) at a certain time (t) her web of cognitive attitudes (W). Some of these informational units will be true, some will be false; some will be true enough to serve a particular epistemic or cognitive aim. Some will amount to knowledge; some will not. Some will be held only momentarily by S and will probably be dropped or forgotten at a time close to t (“The door is open”, “The train is leaving in an hour”); some others will be held more tenaciously (“King Louis XVI of France was beheaded on Place de la Révolution on January 21st, 1793”, “All ravens are black”). Some others will probably inhabit the web in a permanent manner (“Every unmarried man is a bachelor”). The informational units belonging to W will not be isolated, i.e., they will not form a long conjunction. Rather, they will depend upon one another in many different ways. To accommodate this fact, it is useful to conceive of W as a structured set and to represent it as a pair <B, R>, where B is the set of informational units believed, accepted or endorsed by S (let us call this the informational dimension of S’s web) and R is the set of relations holding among the elements of B (let us call this the relational dimension of S’s web).

The elements of set B will be very different in kind: they will be observational or theoretical, stand-alone or belonging to a theory, descriptive or normative, evaluative, law-like, hypothetical, and so on. They will come from different sources: they will be based on perception or inference, or they will be gained through what others told S, i.e., through testimony. Moreover, when they are believed, they will be believed by S with different degrees of confidence. Roughly and informally, one could take the degree of confidence in holding a given proposition to be inversely proportional to the readiness to drop it, or to change the doxastic attitude toward it, in light of new evidence.

The elements of set R will also be of different kinds. There will be logical relations, evidential or justificatory relations, basing relations, and so on. To clarify: an evidential relation re holds between two propositions q and p iff q is evidence for p (i.e. iff it is rational for S to believe that p in light of q). A basing relation rb holds between q and p iff q is the reason or ground for which S holds p (i.e. iff q is the reason or among the reasons why S has a certain doxastic attitude towards p). Note that a relation holding among propositions in W might be basing without being evidential (and vice versa): S can be aware of evidence q for p, and nevertheless come to believe p on grounds radically different from q. However, we assume that if p is properly based on q, then q also has a justifying force toward p (q also counts as evidence for p)—because proper basing requires not only that q be the reason for which S holds p but also that S have a true and rational belief to the effect that q supports, or speaks for, p. Like the elements of B, the elements of R will also come from different sources (S can come to believe that q always follows from p by inferring it inductively from what she observes, or because another subject S* told her so). Additionally, S will be more or less certain of their holding (S can believe that q always follows from p with different degrees of confidence).
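As a purely illustrative aside, the pair <B, R> can be pictured as a small typed graph. The following Python sketch is my own toy rendering under stated assumptions (a dictionary of confidence degrees for B, typed triples for R); none of the names or representation choices are part of the paper’s argument:

```python
from dataclasses import dataclass, field

# Toy rendering of a web of cognitive attitudes W = <B, R>.
# Representation choices here are illustrative assumptions only.

@dataclass
class Web:
    beliefs: dict = field(default_factory=dict)   # B: proposition -> confidence in [0, 1]
    relations: set = field(default_factory=set)   # R: (kind, q, p) triples

    def add_belief(self, p, confidence=1.0):
        # Higher confidence = less readiness to drop p given new evidence
        self.beliefs[p] = confidence

    def relate(self, kind, q, p):
        # kind = "evidential": q is evidence for p
        # kind = "basing":     q is the ground on which S holds p
        self.relations.add((kind, q, p))

    def is_isolated(self, p):
        # A unit is isolated iff no relation in R touches it
        return not any(p in (q, r) for _, q, r in self.relations)

w = Web()
w.add_belief("All ravens are black", 0.9)
w.add_belief("This bird is a raven", 0.99)
w.add_belief("The train leaves in an hour", 0.6)
w.relate("evidential", "This bird is a raven", "This bird is black")
print(w.is_isolated("The train leaves in an hour"))  # True: no relation touches it
```

On this rendering, an isolated unit is simply one that no relation in R connects to anything else; the typed triples keep evidential and basing relations distinct, as the text requires.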

Understanding as Fitting into

Keeping this notion of web of cognitive attitudes in mind, one can address the question of what understanding amounts to. The very general idea underlying the present model amounts to the following: understanding must be explicated in terms of fitting. If something fails to be understood, this means, roughly, that it does not fit; if something, in contrast, is understood, this means that it does fit. It is now clear where exactly this fitting occurs: at the level of one’s web of cognitive attitudes (W). What is also clear is what exactly is brought to fit into the web, when understanding is gained: single informational units depicting worldly events or phenomena. In order to avoid explicating obscurum per obscurius, let me try to spell out the notion of “fitting” here involved in more detail.

Suppose that what S is trying to understand is a certain phenomenon P that she has observed or detected at a certain moment in time. What happens at the level of S’s web of cognitive attitudes W when S passes from not having understanding of P to actually having it? It seems that when S fails to understand P, and is aware that she does not, at least one of the following is the case (or a combination of these):

(i) the corresponding informational unit p cannot be derived from W (to the effect that P was not to be expected, in light of W);

(ii) an inconsistency arises from the conjunction between p and W;

(iii) p is isolated in W.

In case (i), S could not see P coming. This will typically result in a phenomenological sense of surprise, when P is observed or detected. In case (ii), the informational unit p clashes, in some measure, with other things S already has reason to believe or endorse about reality. This will typically result in a phenomenological sense of puzzlement. Note that I am working with the assumption that S is directly observing or has directly detected P; this means that I am assuming that S has excellent or even compelling perception-based reasons to believe that P is, actually, the case. In such a scenario, S’s sense of puzzlement seems to derive from a conflict at the level of her evidence: what S already believes or endorses relative to the domain in question pulls towards not-p; but S’s perceptual apparatus (which, we assume, she has reasons to believe is working properly) forces her to incorporate the information that p.Footnote 3 In case (iii), p’s isolation will typically result in p being “epistemically inert” in S’s web. That is, S will not be able to use p as a basis for inference, reasoning, action, generation of new information, and so on. Now, when P passes from not being understood to being understood, S’s web is modified and updated in such a way that the following turns out to be the case:

(i)* p turns out to be derivable from W (and, therefore, P turns out to be expected, in light of W);

(ii)* the inconsistency arising from the conjunction between p and W is eliminated;

(iii)* p is properly connected to other elements of W.

What happens in the transition from (i) to (i)*? From a phenomenological perspective, what we have is a transition from surprise to expectation. At the level of S’s web, we have a process of appropriate expansion—sometimes only at the relational level, sometimes both at the relational and at the informational one. When an expansion occurs only at the relational level, this means that W already had the resources to derive p, but S was not yet able to perform the derivation. When instead both the informational and the relational level of the web are expanded, S gains new information about the domain that P pertains to and, additionally, she becomes able to derive p on the basis of the new contents. What happens in the transition from (ii) to (ii)*? Here, what occurs is a transition from contradiction to consistency. In this case, S’s web will not be modified in terms of expansion; rather, it will be subjected to an appropriate revision: an already established content (and, ideally, the assumptions that led to it) will be dropped to “make room” for p.Footnote 4 Finally, what happens in the transition from (iii) to (iii)*? Here, what occurs is a transition from isolation to connection, as a result of an expansion of the web either only at the relational level, or both at the informational and at the relational level.
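The transitions just described can likewise be pictured, again purely as an illustrative aside, as operations on a toy web of beliefs and support relations. The function below is a hypothetical sketch of my own (all names are assumptions, not the author’s): expansion adds units and relations so that p becomes derivable and connected, while revision drops a conflicting unit to make room for p.

```python
# Illustrative sketch only: the transitions (i)->(i)*, (ii)->(ii)* and
# (iii)->(iii)* rendered as toy operations on a web W = <B, R>.

def incorporate(beliefs, relations, p, support=(), conflicts_with=None):
    """Update a toy web so that the new informational unit p 'fits'.

    beliefs:        set of propositions (the informational dimension B)
    relations:      set of (q, p) pairs, read as 'q supports p' (dimension R)
    support:        propositions from which p should become derivable
                    (expansion, transitions (i) -> (i)* and (iii) -> (iii)*)
    conflicts_with: a proposition to drop to restore consistency
                    (revision, transition (ii) -> (ii)*)
    """
    if conflicts_with is not None:
        beliefs.discard(conflicts_with)   # revision: make room for p
    beliefs.add(p)                        # expansion at the informational level
    for q in support:
        beliefs.add(q)
        relations.add((q, p))             # expansion at the relational level
    # p is no longer isolated iff at least one relation touches it
    return any(p in pair for pair in relations)

B = {"All ravens are black"}
R = set()
connected = incorporate(B, R, "This bird is black",
                        support=["This bird is a raven", "All ravens are black"])
print(connected)  # True: p is now supported and connected, hence not isolated
```

Nothing in this sketch captures proper position assignment, which, as argued below, involves more than mere derivability; it only makes vivid the difference between expanding a web and revising it.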

One might be tempted here to argue that proper connection can actually be reduced to derivability. Isn’t it the case that once a piece of information is derivable from a system, a certain relation is established, so that the information is actually pulled out of its isolation? However, it seems that a certain informational unit p can be derivable from a web W without being properly connected to other elements of W. This is because proper connection seems to involve more than mere derivability: if p is properly connected to other elements of W, this means that p has a proper place relative to all, or at least relative to most, of the information pertaining to the subject matter or domain the corresponding phenomenon belongs to. To illustrate the difference between derivability and proper position assignment, take p to be the information standing for the fact (P) that the King of France was beheaded in 1793. Derivability has to do with an awareness of causally relevant factors, i.e. of the chain of events that, according to our best available reconstruction of the historical period in question, led to P; proper position assignment, on the other hand, has to do with an embedding of the corresponding information into an overall framework—say, into the framework of the French Revolution or of modern European history. (This distinction is meant to do justice to the fact that we would probably not say, intuitively, that one (fully) understands the event of the beheading of the King of France if one were unaware of the consequences of the event in question.)

Now, if the informational unit p corresponding to a phenomenon P is derivable from a web W, is consistent with W, and is properly connected to other elements inhabiting W, then p fits into W. Granted this notion of fitting, I suggest the following explication of understanding (for the case of understanding being directed at a single phenomenon P):

UP:

Understanding P is the cognitive state that a subject S reaches as a result of the overall rearrangement of her web of cognitive attitudes W to the effect that P fits into W (for the above specified notion of fitting).Footnote 5

Grasping?

Philosophers sympathetic to the ability-based conception of understanding will probably argue that this model has a flaw, as it does not account for the “grasping” feature of understanding, nor does it provide us with an explanation of why understanding seems to be usually associated with certain (inferential and reasoning) skills. However, it might be argued that the model does have the resources to account for both these aspects of understanding.

So far, I have suggested conceiving of a web of cognitive attitudes as consisting of the set of informational units believed, accepted or endorsed by a certain subject at a certain moment in time, on the one hand, and of the set of relations holding among the units in question, on the other. Another way of stating this would be: the web consists of the informational units that a subject believes, accepts or endorses, and of those relations holding among informational units that the subject is aware of. One might hence rephrase “p has a position in W” as “S is aware of the position of p in W”. Now, one might suppose here that such access to or awareness of the relational aspect of one’s web will ground, or yield, certain cognitive (inferential and reasoning) skills. If, for example, S is aware of p and q being bound together in her W by a certain dependence relation (so that, for example, p logically entails q), S will be able, among other things, to infer q deductively, given p. If p best explains q, S will be able, among other things, to formulate the hypothesis that p (i.e. to infer p abductively) in light of q.

What about “grasping”? On the basis of the model suggested, I contend that grasping a fact means, at least in part, being able to properly embed the corresponding informational unit into one’s web of cognitive attitudes. If one fails to grasp a fact, the corresponding informational unit clashes, in some measure, with the subject’s already established framework of beliefs and commitments, or is isolated within that framework. When grasping instead succeeds, the informational unit in question gets properly allocated within the web, and it comes to have a proper position and to “make sense” relative to everything else the subject has reason to believe about reality.

Learning from Others

On the basis of this model of understanding, I have gained the resources to tackle the following questions: what happens when understanding is achieved on the basis of a verbal interaction with others? When does a piece of testimony successfully yield understanding, or contribute to the achievement of understanding, once it is incorporated in one’s web of cognitive attitudes?

Testimony and Fitting

In the previous section, I depicted understanding (of single facts or phenomena) as the provisional end point of a process of rearrangement of one’s web of cognitive attitudes, to the effect that a certain informational unit gets properly allocated in the system (or in relevant subsets of it). More specifically, I described the process of acquiring understanding as a transition from a state in which a piece of information does not fit into a web of cognitive attitudes to a state in which the information does fit. Granted this picture, I suggest that a piece of testimonial information yields understanding, or at least contributes to the achievement of understanding, in that it activates or triggers a process of adjustment and rearrangement within the hearer’s web of cognitive attitudes, as a result of which the informational unit depicting the fact or phenomenon to be understood comes to fit into the web in question. I pointed out that there are at least three possible scenarios in which a certain informational unit p fails to fit into a web W: namely, when p (i) cannot be derived from W, (ii) clashes with some already established contents of W, or (iii) is isolated in W, i.e. is not properly connected to other elements of W. A piece of testimonial information contributes to one’s understanding, hence, in that it enables one to modify one’s web so as to remedy shortcomings of this kind.

Suppose you are confronted with a fact or phenomenon P that is in some sense problematic, in light of your already established corpus of beliefs and commitments. Let me consider the case in which you were not expecting P. Granted your already established beliefs and commitments, and given your best take on the matter, the occurrence of P surprises you; you were not able to see P coming. The way I described this situation at the level of one’s web of cognitive attitudes is in terms of impossibility of derivation: your web does not have the resources necessary and sufficient to enable you to derive the corresponding informational unit p. There are actually two possible subcases here. In the first subcase, your web would actually have the resources to derive p, but for some reason you are unable to perform the derivation—this might happen, e.g., when your web contains the right informational units, but these are not mutually arranged in the right way. In this particular case, the shortcoming of your system will be merely at the relational or structural level, while everything will work well at the informational level. In the second subcase, you are also missing the information needed to derive p. In this case, the shortcoming of your system concerns not just its structure, but also its content. How do you typically remedy this kind of situation? Intuitively, I said, via an appropriate expansion of your web. In the first subcase, via an expansion at the relational or structural level; in the second subcase, via an expansion at both the relational and the informational level. Now, a piece of testimony will contribute to your understanding in that it will provide you with appropriate information about how your web needs to be updated, i.e., enriched or expanded, in order to make p derivable from it and in order to render P expected in light of it.
Typically, pieces of testimonial information playing this role will have the form of explanations and will provide you with an answer to a why-question (implicit or explicit). If the explanation is an adequate one, relative to your particular epistemic situation, it will make you appreciate the shortcomings of your system and it will provide you with the information that you need in order to “fill the gaps” that need to be filled—either at the informational level, or both at the informational and at the structural level.

If the analysis suggested above is along the right lines, fitting, and hence understanding, involves more than mere derivability. We may be able to derive a certain piece of information from our web without being able to assign to this information its proper place within our overall system of beliefs and commitments, or proper subsets of it. Understanding the event of the beheading of the King of France in 1793, e.g., involves more than an awareness of causally relevant factors; it involves the ability to embed the corresponding informational unit into an overall theoretical framework (and, one might suppose, the wider the framework, the better the understanding). How can embedding in this sense, or proper position assignment, be fostered with the aid of testimony? In order to foster understanding of the fact that the King of France was beheaded in 1793, e.g., a history teacher might tell her students something like: “If King Louis XVI had been more flexible and more open to the ideas of the Enlightenment, France would probably be a parliamentary monarchy today, like the United Kingdom actually became after 1689”. If understood properly, and on the basis of the right background knowledge, an utterance of this kind will tend to have positive epistemic effects and will contribute to the proper position assignment of the informational unit in question. The positive epistemic effects are due to the fact that the utterance highlights causally relevant factors of the event in question, and points to its short- and long-term consequences. Interestingly, the relevant information about the causal history of the event and its consequences is conveyed somewhat indirectly: via providing access to a counterfactual scenario, on the one hand, and by suggesting a comparison between two apparently similar and temporally close revolutionary movements, on the other.
The teacher does not tell her students directly that it was because the King of France was fiercely bound to a strictly hierarchical worldview and to an absolutist conception of power that the National Constituent Assembly decided to break with his authority completely, and to do away with royal power altogether. Instead, she tells them something about how things could have gone differently. She tells them something about the conditions under which the assembly could or would likely have agreed upon turning France into a parliamentary monarchy. This might be a way of highlighting how crucial the King’s attitude was in determining the course of events. Had his attitude been different, things would probably have developed differently. (For an enlightening analysis of the relation between understanding and possibility, see Grimm 2017.)

What about the consistency requirement? Sometimes we fail to understand something because we have some kind of rational obligation to incorporate a piece of information that clashes with, i.e. is in some way incompatible with, our already established system of beliefs and commitments. I described above the case of a “puzzling” phenomenon. You observe something (P) that explicitly contradicts your expectations. Or, you are confronted with a piece of information (p) coming from an authoritative source that clashes with your already established domain-relative beliefs or reasons. In similar cases, I said, the sense of puzzlement and the failure in understanding are probably due to a tension at the level of your evidence. Your own beliefs about how authoritative the source of the information is tell you that it would be irrational for you not to incorporate the item; still, the information does not fit into your system: it makes it inconsistent in light of what you already believe about the relevant domain. How ought you to deal with cases of this kind? I mentioned before that understanding requires contradictions to be eliminated. But in order to (re)establish the consistency of your web, you will need to modify it and to operate appropriate revisions within it. Take the case of the puzzling information coming from a source you judge to be authoritative. You will need to remove the source of the puzzlement—either by operating a revision at the level of your domain-relative reasons, or by questioning your own beliefs about the authoritative status of the source. What might be the role of the testimony of others in this process, i.e. in the process of (re)establishing the consistency of your system? Testimony, as standardly conceived, yields expansions and enrichments of your web, so no piece of testimony will yield revisions directly, by itself.
(Although, one might point out, a piece of testimony could provide you with an undercutting defeater precisely for the belief that needs to be dropped in order to reestablish consistency.) Still, it is conceivable that some verbal interactions with other epistemic agents will play an important role (i) in making you aware of the fact that there is an inconsistency in your web; (ii) in making you aware of the reasons why the inconsistency arises; and (iii) in guiding you in the process of deciding which content is worth retaining and which content, instead, you should let go of.
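The contrast between mere expansion and revision can be put schematically. The following toy sketch is my own illustration, not a worked-out logic of belief revision: propositions are opaque strings, and incompatibility is stipulated by an explicit list of conflict pairs. It shows why incorporating a clashing item by expansion leaves the web inconsistent, while revision restores consistency by first dropping whatever clashes:

```python
# Toy model: a "web" is a set of proposition labels; "conflicts" lists
# pairs of propositions stipulated to be mutually incompatible.

def consistent(web, conflicts):
    """A web is consistent if it contains no stipulated conflict pair."""
    return not any((p, q) in conflicts or (q, p) in conflicts
                   for p in web for q in web)

def expand(web, p):
    """Expansion: simply add the new item (what testimony standardly yields)."""
    return web | {p}

def revise(web, p, conflicts):
    """Revision: drop whatever clashes with the new item, then add it."""
    clashing = {q for q in web
                if (p, q) in conflicts or (q, p) in conflicts}
    return (web - clashing) | {p}

web = {"Louis XVI held absolute power",
       "an absolute monarch cannot be put on trial"}
new_item = "the King was sentenced to death by a regular trial"
conflicts = {(new_item, "an absolute monarch cannot be put on trial")}

expanded = expand(web, new_item)            # leaves the web inconsistent
revised = revise(web, new_item, conflicts)  # restores consistency
```

Which item to drop (the domain-relative belief, or the belief in the source’s authority) is of course the substantive epistemic question; the sketch only separates the two operations.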

Telling Good from Bad Testimony

I mentioned in the introduction that there are at least two facts about our epistemic practices that a theory of understanding should be able to account for. First, a theory of understanding should provide us with an explanation for the fact that gaining knowledge via testimony can be, and very often is, an easy and straightforward process, while gaining understanding from others is usually hard. How is it that one can pile up explanations, maybe even true or reasonable ones, and still fail to bring one’s interlocutors to understand? The model presented in this paper has the resources to explain this fact, because it depicts understanding (i) as having a holistic component and (ii) as a process sometimes involving not just expansions and enrichments of one’s system, but appropriate revisions and rearrangements. Knowledge can be a “local” matter (think of cases in which you fully defer to an epistemic authority you know for a fact to be infallible); understanding can never, or at least only very rarely, be local in the way knowledge can, and the process of acquiring understanding always activates a broad area of your web of cognitive attitudes.

The second fact that a theory of understanding should be able to account for is that not every piece of testimony has the same epistemic worth (for everybody). Some testimonial interactions are better than others for the sake of gaining understanding. Some testifiers are particularly good (and some are particularly bad) at bringing their interlocutors to understand. But why? What makes a piece of testimony a good basis for gaining understanding, and what makes a testifier a good one, or a reliable one, as a source of understanding? These are highly relevant questions in the current epistemological landscape, especially if we consider that it is desirable that understanding be augmented within the epistemic community and shared among subjects. Let me tackle these questions indirectly. Let me start from clear cases of “bad” testimony, i.e. of a testifier failing to provide her interlocutor with understanding or, more precisely, failing to yield substantial advancements in her interlocutor’s epistemic standing.

Suppose you are trying to understand a historical fact, e.g., the fact that King Louis XVI of France was beheaded in 1793 (call this information that p). Suppose you learn from your history teacher that the King was beheaded because he was sentenced to death by means of a regular trial (call this information that q). You trust your teacher, and hence incorporate that q into your web and establish a relation of (explanatory) dependence between q and p. Suppose that among your already established corpus of beliefs and commitments there is the quite firmly rooted conviction that Louis XVI’s power was absolute. Now, a monarch with absolute power is, for you, the source of law and justice and cannot be subjected to it like any other French citizen. The testimonial information that q certainly enables you to derive the information that p from your system, but it probably does not contribute much to your understanding of the event corresponding to p and of the domain this event belongs to (say, the French Revolution). This seems to be because the incorporation of q somehow shakes the already established balance of your web, and this seems prima facie incompatible with a substantial advancement of your epistemic standing.

Now consider a different case (inspired by Schurz and Lambert 1994). You find out that a very good friend of yours is in the hospital in life-threatening condition. You are surprised, and therefore ask a common friend what happened. You are told that the person tried to kill herself. Suppose you know the person in question quite well—or, at least, you think you do—and you would have described her as a happy, joyful person. The explanation you receive certainly contributes essentially to the derivation of the initial fact, but, all things considered, it does not have very positive epistemic results. Given your already established corpus of beliefs and commitments, the explanation is not credible to you, or is at least problematic. It brings to your attention a fact that does not fit into your web. You have gained the resources to understand why your friend is in life-threatening condition, but there is now a new, different fact about the situation that you fail to understand. In order to improve your epistemic standing, you need more information and a longer story, one that also enables you to start a process of revision and rearrangement of your system. Suppose, e.g., you find out that your friend was taking an anti-depressant drug. You have now gained a defeater for your belief that your friend was happy. You probably now have some reason to think that although she looked happy to you, very probably she was not. Once you realize that her joyful mood was not authentic, you know which belief you need to drop in order to make the fact pointed out by the explanation fit into your web.

Now consider a well-known case depicted by Pritchard (2010). Imagine a fireman telling your 5-year-old daughter that the house burnt down (p) because of faulty wiring (q). Suppose that this statement is true and that the child forms the beliefs that q, and that q brought about p, on the basis of the fireman’s telling. Does she understand why the house burnt down? She probably does not. Pritchard tells us that the reason she does not is that she fails to appreciate (to “grasp”) how faulty wiring may cause a fire, so she cannot “see” the relation between the two events. This may be right. But there is actually a much simpler explanation for this fact: the child does not understand the fireman’s utterance from a purely semantic point of view. She does not have any access to the truth conditions of q. She has no idea of what would be the case if faulty wiring were present.

De Regt and Gijsbers (2016) depict a very similar case. Imagine that an angel appears to Newton in a dream and tells him something along the following lines: “Gravity is not a force. It is an effect of the curvature of space–time caused by heavy bodies”. Newton, who is a religious man, does not hesitate to take the words of the angel at face value. (We take it that he believes himself to be dealing with an infallible source.) However, does he improve his understanding of gravity in doing so? He does not. As De Regt and Gijsbers correctly point out, he does not seem to have the conceptual and mathematical tools necessary to grasp what space–time is. He is not able to do anything with the information the angel is providing him. The item he incorporates is epistemically completely inert within his web. In addition, there seems to be a further reason why Newton does not make any advancement in his epistemic standing: the item he incorporates into his system does not cohere well with everything else that he holds true about the world. Rather, it explicitly contradicts what he has (very) good reason to believe. By incorporating the piece of testimonial information, the beliefs and commitments comprising his worldview lose their balance, and this seems prima facie to be hardly compatible with epistemic advancement and with an achievement in terms of understanding.

The moral to draw from these cases is that whether or not a piece of testimonial information turns out to be epistemically valuable for you depends, at least, upon the already established content of your web of cognitive attitudes, and upon your ability to process it from a purely semantic point of view. In general, a piece of testimonial information enabling an advancement in your epistemic standing (i) needs to be at least credible to you, in light of your best take on the matter, and not to clash with your already established domain-relative beliefs, commitments or reasons, and (ii) needs to be intelligible to you, at least from a semantic perspective. What makes a testifier a good one, then, for the sake of bringing her interlocutors to understand? Given these two conditions, good testifiers are those agents who have a high “sensitivity” to the content of their interlocutors’ webs of cognitive attitudes, and who are able and willing to adjust their tellings accordingly. Moreover, good testifiers are those who will try to make sure that the language they use to communicate with their interlocutors is a shared one, so as to ensure that their hearers will interpret their words in the (approximately) right way by assigning to them the (approximately) correct meaning and extension.

Limits of the Model: Is Fitting Enough?

In the first part of the paper, I depicted understanding of facts or phenomena as the provisional end result of a process of rearrangement of one’s web of cognitive attitudes, to the effect that the corresponding informational units are brought to fit into the web in question. When an informational unit p fits into a web W, I suggested, it can be derived from W, it does not clash with already established contents of W, and it is properly allocated within W—relative to the other contents that pertain to the same subject matter. In the second part of the paper, granted this model of understanding, I investigated the role of testimony in providing a hearer with understanding, or in yielding advancements in a hearer’s epistemic standing. The general idea was that a piece of testimonial information generates (advancements in) understanding when it—or, more precisely, its semantic elaboration—yields appropriate rearrangements in a web of cognitive attitudes, such that the information corresponding to the phenomenon to be understood comes to fit into the web in question. I pointed to two conditions that a piece of testimony (like, e.g., an explanation) needs to satisfy in order to provide one with understanding and, more generally, to yield genuine advancements in one’s epistemic standing. The piece of information needs to be reasonable, or credible to the hearer, on the one hand, and it needs to be semantically intelligible to her, on the other.

However, is fitting, so conceived, really enough for understanding? Or is it conceivable that a piece of information fits into a web of cognitive attitudes while genuine understanding (of the corresponding fact or phenomenon) is absent? Suppose a subject belonging to our epistemic community is struggling to understand the so-called apparently retrograde motion of the planets. She struggles to make sense of the fact that some planets suddenly and (for her) unpredictably invert their direction of movement. She runs into another subject who tells her that the phenomenon she is observing is due to the fact that some planets do not simply perform a circular orbit around the earth (the deferent); they also perform smaller circular orbits around the deferent itself (the epicycle, literally: above the circle). These planets hence appear to her to move backward while orbiting on the side of the epicycle closer to (more distant from) the earth—given that the epicycle and the deferent have the same (the opposite) direction of movement. Suppose that our budding astronomer has no reason to doubt the credibility of the testifier, and that all the above-mentioned conditions are satisfied. The explanation the subject receives is intelligible to her, and it is perfectly reasonable relative to the (extremely poor) background knowledge she has about astronomy. Once the information about epicycles and deferents is incorporated, elaborated and understood, the phenomenon of retrograde motion starts fitting into her updated and enriched web of cognitive attitudes. That is: the phenomenon is no longer puzzling for her, and it is to be expected in light of the already established content of her web; moreover, the corresponding informational unit has its place relative to the other items inhabiting her web and pertaining to astronomy. Now, would we grant genuine understanding to the subject in such a case? Would we say that she understands the retrograde motion of the planets, and that she understands why certain planets sometimes appear to us to move backwards? We certainly would not. We would probably rather say that, although the subject is probably experiencing a sense of understanding, and although her sense of understanding might appear to her to be well grounded, a reliable sign of genuine understanding, she does not really understand. She understands the phenomenon of the retrograde motion of the planets relative to a theory or an explanation that she holds true, but she does not understand the phenomenon in an “objective”, or “real”, sense.
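It is worth noting that the epicycle-deferent story really does deliver the phenomenon: apparent retrograde intervals fall out of the geometry. The following numerical sketch (my own illustration; all radii and angular speeds are arbitrary choices, not historical Ptolemaic parameters) tracks the apparent longitude, as seen from the earth, of a planet riding an epicycle, and shows that its apparent motion is sometimes reversed even though both circular motions are uniform:

```python
import math

def apparent_longitude(t, R=10.0, r=3.0, w_def=1.0, w_epi=8.0):
    """Apparent longitude (earth at the origin) of a planet on an epicycle
    of radius r whose center rides a deferent of radius R; both circles
    rotate in the same direction at constant angular speeds."""
    x = R * math.cos(w_def * t) + r * math.cos(w_epi * t)
    y = R * math.sin(w_def * t) + r * math.sin(w_epi * t)
    return math.atan2(y, x)

# Sample the longitude, then unwrap it so jumps across +/-pi are removed.
ts = [i * 0.001 for i in range(7000)]
raw = [apparent_longitude(t) for t in ts]
unwrapped = [raw[0]]
for prev, cur in zip(raw, raw[1:]):
    d = cur - prev
    if d > math.pi:
        d -= 2 * math.pi
    elif d < -math.pi:
        d += 2 * math.pi
    unwrapped.append(unwrapped[-1] + d)

# Negative steps are intervals of apparent retrograde motion: they occur
# when the planet is on the inner side of the epicycle, where the epicycle
# speed (r * w_epi) exceeds the deferent speed (R * w_def).
diffs = [b - a for a, b in zip(unwrapped, unwrapped[1:])]
retrograde_steps = sum(1 for d in diffs if d < 0)
```

The point made in the text stands either way: the model renders the phenomenon expected, and hence fitting, regardless of whether epicycles exist.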

Behind this simple example lies a general worry. The model suggested in this paper might be too subjective, or too internalistic, to do proper justice to understanding. We are working with the idea that understanding of single facts and phenomena involves, or needs to be explicated in terms of, fitting. The example depicted above shows that fitting, under certain conditions, might be enabled even by pieces of testimonial information or explanations that are utterly false, or untenable in the given epistemic circumstances. But intuitively, we want our theory of understanding to rule out the possibility of an utterly false or bad explanation providing one with genuine understanding. This is because we take “understanding” to be a success term, and to denote a certain cognitive achievement (Elgin 2007a, b, p. 33). Understanding needs to be somehow grounded in facts, or must “answer to the facts” in some sense (Elgin 2007a, b, p. 37). It would certainly be a discomforting result if a theory of understanding forced us to the conclusion that a member of our epistemic community committed to the existence of epicycles and deferents genuinely understands astronomical phenomena.

I mentioned before that not every explanation will provide one with understanding. An explanation, I said, needs to be credible or reasonable for the subject in order to yield substantial advancements in her epistemic standing. This condition, however, does not help much. Not everything that appears to one to be reasonable is reasonable in an objective sense, or in the given epistemic circumstances. Suppose one is working and making judgements of reasonability from within a fairly bad web of cognitive attitudes—one, e.g., that contains mostly false beliefs about the relevant subject matter and many biased standards of justification. Such a web will not be a good basis for telling good from bad explanations and will typically give rise to a discrepancy between seeming reasonability and actual reasonability. The moral to draw from this is that the model of understanding presented here needs to be strengthened somehow, because we certainly do not want it to reduce to an analysis of what it feels like to have the (maybe subjectively or internalistically justified) impression of understanding. We want understanding, and not the mere sense of understanding, to proliferate, with the aid of testimony, in our epistemic community.

One possibility here would be to embrace a certain measure of factivism. Why not simply add a truth requirement to the picture? Why not simply say that one needs to get things right, at least to a certain extent, in order to genuinely understand? One’s understanding could then be said to improve insofar as the truth-content of one’s web of cognitive attitudes increases or becomes more significant. A piece of testimonial information or an explanation—factivists would say—needs not only to enable fitting, in order to provide one with genuine understanding; it also needs to be true. If the explanation depicts dependence relations, these relations must have counterparts in reality. If entities are postulated, these must exist. If processes are described, these must actually occur, and so on. Intuitively, we said, we would not grant genuine understanding of astronomical phenomena to our budding astronomer who believes in a system of epicycles and deferents. Factivists have a straightforward way to do justice to this intuitive and seemingly unquestionable judgement: the astronomer fails to understand because she is committed to an explanation that is false, and a false explanation cannot work as an effective source of understanding. End of story.

Factivism, however, might turn out to be more problematic than it seems at first sight.

Elgin (2012, 2017) famously argues that a factive conception of understanding forces us to deny that contemporary science affords or embodies an understanding of the phenomena it seeks to explain and account for. Contemporary science, according to Elgin, is a paradigm of epistemic success. If contemporary science does not afford understanding of its subject matter, probably nothing does. Now, scientists typically deploy representational devices and epistemic mediators that (are known to) misrepresent their intended domain. These epistemic mediators simplify, abstract, and sometimes even distort their subject matter in order to make certain aspects of it salient. They provide us with understanding of their intended domain not by mirroring it; rather, they create a cognitive environment in which certain features of the domain stand out. Our best contemporary science, e.g., leads us to think of gases as comprised of dimensionless, spherical molecules that exhibit no mutual attraction. As Elgin puts it: “There is no such gas; indeed, if our fundamental theories are even nearly right, there could be no such gas” (Elgin 2017, p. 15). Now, the fact that the ideal gas model departs from reality in certain respects does not seem to obstruct its epistemic functioning. On the contrary, it seems to foster it: the idealized model makes us appreciate how pressure, volume and temperature are related in real gases. By picturing gases as the model suggests we think of them, we genuinely understand something about gas-phenomena. The ideal gas model, however, is not simply pragmatically useful. It is not simply a good or reliable instrument for predicting gas-phenomena. It has an undeniable epistemic value. Now, if we demand from an “understander” true beliefs and only true beliefs about a subject matter, we are forced to deny that somebody who masters and accepts the ideal gas model genuinely understands gas-phenomena. But this, according to Elgin, is highly counterintuitive.
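Elgin’s example can be made concrete with the law the model licenses. The ideal gas law, PV = nRT, is strictly true of no actual gas, yet evaluating it yields approximately correct predictions for real gases under ordinary conditions. The sketch below (standard textbook values, used for illustration only) computes the pressure of one mole of an ideal gas at standard temperature and molar volume:

```python
def ideal_gas_pressure(n_mol, t_kelvin, v_m3, r_gas=8.314):
    """Pressure (Pa) predicted by the ideal gas law, P = nRT / V."""
    return n_mol * r_gas * t_kelvin / v_m3

# One mole at 273.15 K occupying the standard molar volume of 22.4 L:
p = ideal_gas_pressure(1.0, 273.15, 0.0224)
# p comes out close to one atmosphere (101,325 Pa), as real gases do too.
```

The falsehood of the underlying picture (dimensionless, non-interacting molecules) does nothing to undermine the correctness, within limits, of the relation the model makes salient; this is precisely Elgin’s point.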

Moreover, a factive conception of understanding does not sit well with our practices of ascribing understanding. Understanding is not an all-or-nothing cognitive achievement. Our understanding grows, or improves. It gets better, deeper, more sophisticated over time. The steps leading to (full) understanding, however, might involve simplifications, approximations, and even the incorporation of false items of information. While some falsehoods are certainly detrimental to understanding, some others are not. We would certainly grant (a certain measure of) understanding to a child who has incorporated into his web of cognitive attitudes the information that human beings descended from apes. This information is false, as, according to evolutionary theory, human beings and apes descended from a common ancestor who was not, strictly speaking, an ape. Still, the false information, and the way the child probably rearranged his web of cognitive attitudes to make room for it, signal some understanding of the relevant subject matter. A factive conception of understanding has a hard time explaining why the child is epistemically better off than his classmate who believes, say, that human beings descended from butterflies or did not evolve at all (Elgin 2007b, p. 8).

These arguments by Elgin have not persuaded everyone. I do not claim that they prove factivism to be false or completely untenable. Still, I believe they succeed in showing that factivism might be problematic, and that the issue of the relation between understanding and truth is far from settled. Thus, I think it is worth exploring the possibility of grounding understanding in facts without appealing (directly) to truth.

Suppose, then, that after a process of rearrangement of our web of cognitive attitudes (maybe yielded by a verbal interaction with another subject) we have reached a point at which a certain informational unit fits into our web. How do we make sure that we are on the right track? How do we make sure, or at least raise the probability, that we genuinely understand the corresponding phenomenon, and do not just seem to? How do we rule out the possibility of being in an epistemically bad scenario, e.g., of experiencing a mere sense of understanding? It seems that the best one can do is to take seriously two constraints: an empirical one and a social one.

The empirical constraint tells us that as long as our web of cognitive attitudes shows itself to be a decent and, most of the time, reliable guide for getting along in the world, the epistemically most responsible behavior is to stick to it. If problems arise—predictions fail, expectations are not met, goals are not reached, problems remain unsolved, and the like—we should take it as a sign that something about our already established corpus of beliefs and commitments needs to be revised or rearranged, ideally in a non-ad-hoc manner. But we probably need something more than this. In domains in which what matters is retrodiction, e.g., or the explanation of past events, there will be no way to test our beliefs and assumptions and to bring them to face the tribunal of immediate experience. Moreover, recent empirical studies on the phenomenon called the “illusion of explanatory depth” suggest that we often act successfully within a certain domain not because the beliefs we hold about it are true (or reasonable in the given epistemic circumstances), but simply because the domain is particularly user-friendly. In such cases, the actions we perform are not sufficiently based on the beliefs we hold about the underlying mechanisms, to the effect that the beliefs in question are not really responsible for our success in reaching our goals or in predicting the occurrence of events. If this phenomenon is as widespread as cognitive scientists think it is, practical and empirical success should not, at least not unconditionally, make us too confident that what we believe about the relevant domain is correct, tenable in the given epistemic circumstances, or embodies genuine understanding (see Trout 2002; Ylikoski 2009; Sloman and Fernbach 2017).

Here is where the social constraint comes into play. As Elgin nicely puts it, understanding is not just a matter of being “in suitable relation … to the phenomena …, but also to other members of the epistemic community” (Elgin 2017, p. 121). How do we make sure that we are on the right track, then? The social constraint tells us: use other subjects’ opinions as a yardstick; test what you believe or endorse by comparing it to what other members of your epistemic community think. More specifically: take disagreements with your peers as an indication that you could be wrong, and disagreements with epistemically superior subjects (experts and epistemic authorities) as an indication that you are very probably wrong—as far as their domain of expertise is concerned. And on the other hand: take stable agreements as providing a prima facie and defeasible justification to stick to what you have, until problems or anomalies arise. In order to do justice to the intuition that our astronomer committed to a system of epicycles and deferents fails to understand the retrograde motion of the planets, then, we do not need truth: her position is untenable relative to what most of her epistemic community thinks, and relative to objective features of her social-epistemic environment. A factivist, we said, would claim that the reason she fails to understand certain astronomical phenomena is that her web of cognitive attitudes fails to mirror reality properly. Another possible explanation is that she fails to understand because the social constraint is not fulfilled: her web fails to approximate our webs, and the webs of other members of her epistemic community, in a robust enough way.

Is fitting, conjoined with an empirical and a social constraint, really enough for understanding? In the end, one might argue, experience tells us in a straightforward manner only that we are wrong, not that we are right; and it is actually conceivable that we are deeply mistaken even about matters that are deeply rooted in our shared worldview and that we have agreed upon for a very long time. Still, shaping and improving one’s web of cognitive attitudes while trying to keep these two constraints fulfilled seems to be the best one can do in the epistemic circumstances. Whether the best one can do is good enough for genuine understanding or not, is a question that probably deserves another paper.