1 A change of perspective

Mature science is one of humanity’s most stunning epistemic achievements. But what makes mature science epistemically excellent? Why is mature science a successful epistemic enterprise? In multiple papers over the last three decades (e.g., Elgin 1991, 2004, 2006, 2007, 2009, 2011, 2012) and in her book True Enough (2017), Catherine Z. Elgin has argued that our traditional epistemology has a hard time answering these questions. Traditional epistemology is veritistic: it is committed to the claim that something is epistemically valuable either because it is true, or because it is somehow truth-conducive. Traditional epistemology takes truth to be the main, or even the only, standard of epistemic acceptability. It is also knowledge-oriented, taking knowledge to be the aim we (ought to) strive for in our epistemic and cognitive endeavors. Yet the representational systems deployed in science, Elgin contends, even the best and most successful systems humans have developed, can hardly be conceived as repositories of knowledge. Typically, they are not true—sometimes not even partially. Sometimes they do not even purport to be true (think of idealized models). Sometimes they cannot be true, as they are not propositional and therefore not truth-apt (think of three-dimensional models, diagrams, or graphs). It thus seems as if science does not work by providing a truthful and accurate depiction of the world. It neither reproduces reality faithfully nor aims at doing so. Rather, it works by making highly complex domains of reality tractable. It works by simplifying, streamlining, idealizing, distorting. It constantly produces and deploys representational systems that are inaccurate—sometimes in the sense that they are only selectively correct, sometimes in the sense that they deliberately misrepresent their intended subject matter. Yet the inaccuracy of these representational systems is not a bug.
When they succeed in providing an epistemic community with insight into the phenomena—sometimes despite, sometimes even because of, their inaccuracy—their inaccuracy is desirable. Elgin calls such inaccurate yet enlightening representational systems “felicitous falsehoods”. Felicitous falsehoods, she says, are not simply tools that scientists deploy and exploit to get closer to the true description of the way the world is. They are not stepping-stones without a role to play in the final edifice of science. If there ever will be such a thing as the final edifice of science, felicitous falsehoods will certainly be a constitutive part of it. There simply would be no edifice without them. There is no way for them to be left behind.

Epistemology, Elgin maintains, cannot shut its eyes to these facts about our scientific practice. Our epistemology must explain what makes science a successful epistemic enterprise, and what makes the products of our best science epistemically valuable. This, according to Elgin, requires a quite radical reconfiguration of the epistemic territory. First and foremost, she contends, epistemology should weaken its commitment to truth and truth-conduciveness. This is necessary if we want to explain what it is about felicitous falsehoods that makes them epistemically valuable, and what it is about scientific methods that makes them epistemically successful. Moreover, epistemology should widen its focus. It should acknowledge that besides knowledge, there are other epistemic phenomena worthy of investigation and other epistemic aims worthy of pursuit.

This gives rise to a variety of far-reaching questions. Once knowledge is dethroned, which other epistemic goals should epistemology acknowledge? Which aims should guide us in our epistemic endeavors? Once truth and truth-conduciveness are sidelined, how are we to avoid “epistemically bad” scenarios? How are we to guarantee that what results from our epistemic efforts will answer to the facts in the appropriate way? Elgin addresses these questions by developing a holistic epistemology whose backbone is the notion of understanding. What makes our best science epistemically excellent? Elgin’s answer is this: the fact that it succeeds in providing us with an understanding of natural phenomena.

2 Understanding and the facts

As Elgin conceives it, understanding is “grounded in fact” and is “duly responsive to reason or evidence” (Elgin 2017, 44; see also Elgin 2007, 39); however, as it can be conveyed by symbols that are strictly speaking false or not truth-apt, it is a non-factive cognitive state. This means that understanding, in Elgin’s sense, is neither reducible to, nor typically involves, holding true a host of mutually dependent propositions that mirror the facts. An “understander’s” system of thought typically contains informational units that partially depart from the way things (allegedly) are. Only such a liberal, flexible, and non-factive conception of understanding, Elgin maintains, can “accommodate the deliverances of science” (Elgin 2017, 38).

Seven authors in the topical collection challenge Elgin on this point. They all agree with her that epistemology should accommodate the cognitive and epistemic contribution of science. They agree that we need an epistemology that does not deny that our current best science achieves genuine understanding of the phenomena it accounts for. But they all suggest ways to retain truth as the central epistemic value and to reaffirm truth (or truth-conduciveness) as the standard of epistemic acceptability.

Drawing on Kvanvig (2003), Gordon (2019) defends a moderately factive view of understanding, according to which, for an epistemic subject S to genuinely understand a subject matter or domain of reality D, S’s understanding must contain true central beliefs about D, yet can contain false peripheral beliefs about D. (Gordon’s paper actually touches on multiple important aspects of Elgin’s work, but I’ll focus on the factivity problem here.) Elgin’s conception of understanding requires that an epistemic agent who understands a subject matter be able to effectively use the information bearing on this subject matter, e.g., as a basis for reasoning, (non-trivial) inference, and prediction. Gordon deploys these ideas of Elgin’s to specify a criterion of “centrality” for beliefs: a belief will be more or less central in one’s understanding of a subject matter to the extent that that belief is significant in view of the agent’s ability to reason, draw (non-trivial) inferences, and make predictions relative to the subject matter in question. Elgin contends that a factive conception of understanding has a hard time doing justice to the achievements of science. Gordon challenges this claim and argues that a moderately factive view of understanding performs at least as well as a non-factive view in explaining what about scientific products makes them epistemically valuable.

Rice (2019) defends the view that, contra Elgin, the primary goal of science is factive understanding. This goal, he claims, is sometimes accomplished by deploying, investigating, and manipulating models and theories that are pervasively inaccurate representations of their target systems. Rice points out that, in attempting to spell out the relation between understanding and truth, there are two levels that must be kept distinct: the level of the accuracy of the representational systems themselves (the theories and models), and the level of the accuracy of the body of scientific understanding that is derived from these representational systems. In order to settle the question whether understanding is factive, we ought to consider the nature of the information that scientists are in a position to extract from inaccurate models and theories. In the case of genuine scientific understanding, Rice claims, what is extracted is true modal information about the system under investigation, i.e., true information about relationships of counterfactual dependence and independence among various observable and unobservable features of the system.

Lawler (2019) opts for a similar strategy. She also reminds us that, in assessing the relation between understanding and truth, two questions must be distinguished and answered separately: how a certain falsehood relates to the phenomena; and how, and to what extent, this falsehood figures in the content of one’s understanding of the phenomena. With this distinction in the background, Lawler develops a positive factive view of scientific understanding, which she labels the “extraction view”. By embracing the extraction view, she claims, we can retain Elgin’s idea that some falsehoods are epistemically valuable while recognizing that this value is merely instrumental. To clarify this point, Lawler appeals to a Wittgensteinian metaphor: we might need a ladder to get to the top of the wall that we are climbing, but we do not need to take the ladder with us once the top is reached. We thus need felicitous falsehoods in the process of pursuing understanding and in coming to understand, but once understanding is reached, we can leave them behind. They do not need to be part of the content of our understanding.

Nawar (2019) argues against Elgin’s non-factivist conception of understanding along slightly different lines. Rice and Lawler open their papers by conceding to Elgin that many representational systems deployed in our current best science are not even partially true, and go on to show that even false representational systems can work as sources of accurate information about reality. In contrast, Nawar does not concede this point. He argues that many of Elgin’s felicitous falsehoods could in principle be reconstructed by factivists as partially adequate descriptions of their intended domains. He goes on to argue that even if Elgin were right that such a reconstruction is impossible, not everything would be lost for factivism. Elgin herself points out that scientists need not believe but merely accept felicitous falsehoods. Factivists, Nawar contends, could use the belief/acceptance distinction to their own advantage and claim that indispensable felicitous falsehoods ought to be accepted for instrumental reasons—namely, for the sake of obtaining true beliefs about the relevant subject matter.

Le Bihan (2019) agrees with Lawler and Nawar that the best way to account for the epistemic value of felicitous falsehoods in Elgin’s sense is in terms of instrumental value. Felicitous falsehoods are not epistemically valuable per se; their epistemic value rather depends crucially on their capacity to provide epistemic access to true information about the target system. Le Bihan does not see this interpretation of felicitous falsehoods as clashing with Elgin’s view, however. On the contrary, she argues that this is actually the best way to read Elgin and to interpret Elgin’s claim that felicitous falsehoods typically create a cognitive environment in which certain relevant features of the target system stand out. Elgin explicitly offers her view as an alternative to factivism and veritism. Le Bihan argues that, if her reading of Elgin is correct, Elgin’s project is more akin to factivism than Elgin herself is ready to acknowledge.

Warenski (2019) also thinks that there is a way to reconcile Elgin’s view with veritism. The way to achieve this reconciliation, she contends, is to appropriately modify our conception of veritism and to construe it more liberally, so that it acknowledges a rich array of truth-oriented values. The fact that science deploys inaccurate representations does not force us to relax our commitment to truth, Warenski claims; it simply tells us that there are many different ways in which we value truth in our theorizing. As an alternative to veritism, Elgin offers a holistic picture on which epistemic norms are justified by the fact that they are the norms that suitably idealized, responsible epistemic agents would endorse. Warenski claims that this picture is flawed, and that Elgin’s holism faces an objection that is best answered by veritism.

Frigg and Nguyen (2019) defend veritism along different lines. They agree with Elgin that epistemology faces a challenge: the claim that truth is required for epistemic acceptability clashes with the observation that our best current science delivers products that are false, if interpreted as realistic representations of their target domains. This challenge, these authors claim, is best represented as a paradox consisting of three individually plausible but jointly inconsistent claims: (i) truth is required for epistemic acceptability (veritism); (ii) the claims of science should be interpreted literally (literalism); (iii) the products of science are mostly literally false and yet epistemically acceptable. Elgin’s recommendation for escaping this paradox is to relax epistemology’s commitment to truth, and thus to reject (i), veritism. Frigg and Nguyen argue that this is not the only way out of the paradox. One could also choose to retain veritism and reject (ii), literalism. This alternative way, they claim, should be welcomed by Elgin, as it seems to be implicit in her own theory of representation.

3 Understanding, acceptance, and acceptability

The kind of understanding that Elgin is primarily concerned with is not directed towards single, isolated facts. Rather, it is directed towards objects that display a certain measure of complexity—such as a topic, a domain, or a subject matter. Elgin takes this kind of understanding (which the recent literature calls “objectual”) to be typically embodied not in single, isolated propositions—but in a comprehensive theory, a system of thought, or an account, i.e., in a “constellation of mutually supportive commitments that bear on a topic” (Elgin 2017, 12). It should be noted here that an account, in Elgin’s view, incorporates not only informational units that are meant to represent or describe relevant aspects of the target domain. An account also contains various other kinds of commitments, for example concerning the values and principles that the particular epistemic agent acknowledges; the methods, rules and definitions that she recognizes as valid; the goals that guide her actions and investigations; and so on.

Intuitively, to understand a topic or a domain via or on the basis of an account involves some sort of endorsement of the informational units constitutive of that account. Against the current mainstream view in epistemology, Elgin does not take this endorsement to be a form of belief or conviction. She contends that we should rather think of the endorsement involved in understanding as a form of acceptance. Drawing on Cohen (1992), she claims that acceptance differs, e.g., from conviction in that it may be reasonably directed towards contents that one takes, or “feels”, not to be literally true; it is under the epistemic agent’s voluntary control; and it is action-oriented. Acceptance in Elgin’s sense involves both the willingness and the ability to take a certain consideration or cluster of considerations “as a premise, a basis for action, or … as an epistemic norm or rule of inference, when one’s ends are cognitive” (Elgin 2017, 19).

Of course, not every account that an epistemic agent accepts or is willing to accept will provide the agent with understanding of the topic it bears on. Accepting the reports of the Pythia’s prophecies will not typically be enough to understand the outcomes of the Peloponnesian War. Accepting Thucydides’ reconstruction might be. Astrology does not provide an epistemic community with understanding of the supermoon. Astronomy probably does. Why? Elgin makes us aware that understanding is embodied in accounts that are not only accepted, but also worthy of being accepted, in the given epistemic circumstances. So, to provide an epistemic agent or a community of epistemic agents with understanding, an account must be tenable, or acceptable. What does it take for an account to be tenable? What are the criteria for an account’s acceptability? In True Enough (64, emphasis added), Elgin tells us the following:

An account is tenable just in case it is, or is rationally reconstructable as, a result of a process of adjudication that brings a collection of initially tenable commitments into reflective equilibrium.

What does an account in reflective equilibrium in Elgin’s sense look like? First, it is logically consistent. “Consistency is mandatory”—Elgin writes—“for the admission of jointly inconsistent claims would subvert the epistemic enterprise” (Elgin 1996, 103). Yet logical consistency is not enough for reflective equilibrium. We probably would not say that a scattered collection of informational units with no bearing on one another whatsoever is in reflective equilibrium, even if it fulfilled a logical consistency requirement. For an account to be in reflective equilibrium, its constituent elements must form a coherent whole; i.e., they must hang together and be mutually supportive (Elgin 2017, 72 and 76).

Three papers in the topical collection address Elgin’s conception of reflective equilibrium, although from different angles.

Baumberger and Brun (2020) use Elgin’s idea as a starting point to develop their own theory of reflective equilibrium. They start with an in-depth analysis of Elgin’s notion of an “account”. They argue that in order to adequately characterize reflective equilibrium as a target state, it is necessary to draw clear distinctions with respect to the elements that comprise an account and that are involved in such a state. More specifically, they argue that it is necessary to clearly distinguish between an agent’s commitments to the propositions about the subject matter at hand, the epistemic goals the agent tries to do justice to, and the theory (or theories) the agent develops about the subject matter. On the basis of these distinctions, Baumberger and Brun suggest four conditions that an account in reflective equilibrium must meet. They finish by critically assessing Elgin’s conception of the relation between understanding and reflective equilibrium. In a nutshell, Elgin’s conception can be summarized thus: acceptable accounts afford understanding of their topics; an account is acceptable if and only if it is in reflective equilibrium; hence: “an understanding of a topic consists in accepting an account in reflective equilibrium” (Elgin 2017, 3–4, emphasis added). Baumberger and Brun argue that at least in those cases in which, for the sake of understanding a domain, an epistemic agent deploys an epistemic mediator such as a theory, understanding cannot be fully characterized by appealing to reflective equilibrium alone. Besides reflective equilibrium, understanding in such cases seems to require fulfilling an ability condition (the epistemic agent must be able to use the theory in question appropriately) and an external rightness condition (the theory must objectively answer to the facts).

Dellsén (2019) focuses on the notion of acceptability. In the first part of his paper, Dellsén argues that the standard, probabilistic account of justification typically deployed to assess the appropriateness of belief is inadequate for assessing the appropriateness of the endorsement involved in understanding. More precisely, he contends that it is possible for an epistemic agent to reasonably accept an account without being probabilistically justified in believing it. In the second part of the paper, Dellsén develops his own probability-based model of acceptability, which ends up resembling Elgin’s conception of reflective equilibrium in significant respects. An important aspect of Elgin’s theory is that reflective equilibrium has a relational, or comparative dimension. Whether an account is in reflective equilibrium cannot be settled by considering the internal features of the account alone. What one ought to consider is also how the account relates to its competitors, i.e., how strong the account is in relation to every other available account that has been formulated on the same subject matter. Elgin writes:

Reflective equilibrium requires that [one’s] epistemic commitments are mutually supportive and that they constitute an account that is at least as reasonable as any available alternative in the epistemic circumstances. (Elgin 2017, 98–99)

Dellsén’s analysis strongly reinforces this idea of Elgin’s. Within his model, too, acceptability cannot be settled in absolute terms, i.e., by considering how probable an account is, taken by itself. What one ought to consider is how much more probable the account in question is in comparison to its rivals.

Jäger and Malfatti (2020) focus instead on the social-epistemic aspects of Elgin’s conception of reflective equilibrium. Elgin makes it very clear that the process of seeking reflective equilibrium by adjusting and improving one’s system of thought should not be conceived as a solipsistic enterprise. In approaching reflective equilibrium, we crucially depend on one another’s competence and expertise. “Rather than relying exclusively on considerations in my ken”—Elgin writes—“I draw on the expertise of others, and they in turn draw on mine” (1996, 114; see also 2017, 112). Jäger and Malfatti build on these ideas and develop them further. They start by noticing that, when for some reason we struggle to preserve, approach, or improve reflective equilibrium in our systems of thought, we seek the advice of suitable “epistemic authorities”. Epistemic authorities, Jäger and Malfatti claim, are not simply those agents who are epistemically better positioned than we are; they are agents who can help us solve the problems, and iron out the cognitive dissonances, that obstruct us in balancing our system of thought. Jäger and Malfatti distinguish and analyze a variety of possible ways in which an epistemic agent might run into trouble in her attempt to achieve reflective equilibrium, and show why the interaction with epistemic authorities is particularly fruitful in this context. If one accepts Elgin’s proposal that an agent’s achieving or approaching reflective equilibrium in her noetic profile provides her with understanding, Jäger and Malfatti argue, it follows that an important role of epistemic authorities is that of fostering the advancement of understanding in their interlocutors. If one accepts that understanding is not reducible to acquiring true beliefs or even knowledge, this thesis departs from mainstream accounts of epistemic authority, on which epistemic authorities essentially transmit (individual) doxastic attitudes to their interlocutors.
In the last part of their paper, the authors argue that to reliably promote understanding in their interlocutors’ noetic systems, epistemic authorities must possess the social-epistemic virtue of “epistemic empathy”.

4 Fallibilism, knowledge, and understanding

Acknowledging our epistemic vulnerability, i.e., admitting that despite our evidence and despite our best efforts we might be mistaken, “seems an entirely appropriate confession of intellectual humility” (Elgin 2017, 292). As intellectually humble epistemic agents, we thus ought to be fallibilists. But what does fallibilism amount to? And what is it exactly that a humble epistemic agent should be fallibilist about?

Thinking of fallibilism in terms of knowledge, Elgin reminds us, is highly problematic. David Lewis famously argued that:

If you claim that S knows that P, and yet you grant that S cannot eliminate a certain possibility in which not–P, it certainly seems as if you have granted that S does not know after all that P. (Lewis 1996, 549)

In a similar vein, Elgin writes that “‘I know that p, but I might be wrong that p’ has the air of giving one’s word in one breath and taking it back in the next” (Elgin 2017, 296). Elgin’s major worry in relation to a fallibilist stance on one’s knowledge, however, is not that it is logically inconsistent; rather, her worry is that fallibilism about knowledge seems to prescribe incompatible courses of action. If an epistemic agent knows that p, she will rely on p. She will feel entitled to use p as a basis for inference, reasoning, and action—no matter how high the stakes are. She will not hesitate to spread the information that p across her epistemic community. She will testify that p and invite others to trust her on p’s truth. She will assure her interlocutors that p is true and invite them to rely on its truth as she does. Yet if the epistemic agent cannot rule out the possibility of being wrong about p, her situation is radically different. She will not rely on p in every situation. She will adjust her epistemic behavior depending on the circumstances. For example, if the stakes are particularly high, she will probably refrain from telling others that p. She might even responsibly decide to share her doubts about whether p with her interlocutors. “The problem”—Elgin claims—“is that she cannot do both” (Elgin 2017, 297). One course of action excludes the other. A fallibilist stance on knowledge thus has the unwelcome consequence of leading us to a sort of practical paralysis.

Giving up fallibilism about knowledge—i.e., believing that once a certain threshold on the justification required for knowledge is reached, one cannot be wrong—does not seem a desirable option either. Worries about intellectual arrogance aside, infallibilist epistemic agents are confronted with what is known as Kripke’s paradox (Kripke 2011, 33–43; see Elgin’s reconstruction in 2017, 297). The problem the paradox calls our attention to is the following: if by claiming that an epistemic agent knows that p we imply that there is not even the slightest chance that she is wrong about p, it would seem natural to expect an agent who knows that p to disregard any evidence she comes across which speaks against p (i.e., it would be natural to expect an agent who knows that p to be dogmatic). Closing one’s eyes to new evidence, however, strikes us as epistemically irresponsible; it is not something we would expect any rational epistemic agent to do.

How do we escape this impasse? Elgin’s suggestion is to think of fallibilism in terms of understanding. If understanding rather than knowledge is our target, neither the possibility of error nor actual error scares us. On the contrary, we learn from our mistakes. Sometimes we even need to make mistakes in order to learn. Sometimes we are lucky enough to make particularly significant mistakes that not only make us realize that we are wrong, but that trigger new questions about the topic we are investigating and put us on the right track in attempting to answer them (Elgin 2017, 306). Acknowledging that our provisional understanding of a topic is or might be flawed or mistaken does not paralyze the epistemic agent; rather, it may prompt an improvement of her epistemic standing. “Rather than being a weakness”—Elgin writes—“our vulnerability to error is a strength” (Elgin 2017, 309).

Hetherington (2019) shares with Elgin the conviction that epistemology should “cast its net more widely into the epistemic waters” so as to catch a variety of neglected epistemic phenomena worthy of attention; however, he is generally more optimistic than Elgin about what a theory of knowledge can achieve. Switching our epistemological focus from knowledge to understanding, Hetherington suggests, is not the only way to kill two birds with one stone—that is, to make good sense of fallibilism and to avoid Kripke’s puzzle. In his contribution to the topical collection, he suggests the possibility of what he calls “open knowledge”. Open knowledge is a particular form that knowing can take, a distinctive category of knowledge constituted by the presence of a specific self-questioning attitude. For an epistemic agent who is in a state of open knowledge it would be possible and unproblematic to claim to know that p, even while asking whether she might be mistaken as to p (more precisely, even while asking whether her use of the evidence on the basis of which she believes that p, or the justificatory means by which she came to believe that p, has guaranteed her forming a true belief about whether p). An agent in a state of open knowledge that p is thus self-reflective and ready for self-correction. She is open to new evidence about p. She is willing to inquire and learn more and to improve her epistemic standing relative to p. “Closed knowledge”, on the other hand, lacks attitudes of this kind. Kripke’s argument, Hetherington concludes, strikes us as problematic only as long as we neglect the distinction between open knowledge and closed knowledge.

We saw that errors, according to Elgin, are constitutive of the process of improving understanding. But how exactly do we learn from our mistakes? How do we make advancements in understanding on the basis of or thanks to our mistakes? Morales Carbonell (2019) tackles these questions. In True Enough, Elgin argues that not only appreciating actual mistakes, but also acknowledging the possibility of being mistaken about something, can be epistemically fruitful. She writes: “In taking the possibility of error seriously, we treat it as itself worthy of attention. We put ourselves in a position to identify the potential fault lines in our currently accepted account” (Elgin 2017, 299). Carbonell builds on this idea and distinguishes two different kinds of strategy that an epistemic agent might employ to deal with mistakes and to use mistakes as springboards for epistemic amelioration. What he calls ex post strategies are strategies of repair, which tell the agent what she ought to do once an error has occurred. What he calls ex ante strategies, on the other hand, are forward-looking and apply either before the agent has objectively fallen into error or before the agent has realized that she has. In True Enough, Elgin also helps us appreciate that epistemically significant mistakes in a certain domain are possible only for those agents who already enjoy a certain measure of understanding of the domain in question. We thus have to be competent enough to be able to make mistakes that prompt advancements in our epistemic standing (Elgin 2017, 301). Now, competence is typically construed as an ability to succeed in doing something, or as an ability to get things right. Carbonell’s analysis shows that, if Elgin is right in claiming that competence can be displayed in getting things wrong in epistemically fruitful ways, the standard conception of competence should probably be revised.

5 General topics in social epistemology

Suppose that I disagree about whether p is the case with an epistemic agent whom I reckon to be (and who we assume actually is) exactly as good epistemically as I am. We have equivalent cognitive skills, and we are in the same evidential situation. How should I react? Should I remain steadfast, i.e., stick to my position, or should I conciliate, i.e., change my mind (to some extent) about p? These questions have attracted enormous recent attention among epistemologists. Against the backdrop of this debate, the following concerns lurk: how is it possible for two epistemic peers to reasonably disagree about something? How can it be that two epistemic agents who are literally in the same cognitive and evidential situation arrive at divergent conclusions? Elgin (2018) provides us with a tentative explanation for this puzzling fact: epistemic peers, she contends, are not epistemic clones. Two agents who are epistemic peers can reasonably disagree in the event that, despite having the same reasoning abilities and being in the same evidential situation, they make different use of their epistemic resources. She writes:

Although it is stipulated that the peers have the same evidence, background assumptions, reasoning abilities, and epistemic motivation, there is no reason to assume that they use them in the same way. If they do not, they may arrive at different verdicts. (2018, 15)

Elgin goes on to argue that not all disagreements should be conceived as problems to be solved and put behind us. Some disagreements can be beneficial within an epistemic community: it can be epistemically fruitful to conduct our epistemic explorations in an environment in which there are dissenting voices and in which there is a multiplicity of viewpoints worthy of being entertained.

In his contribution to the topical collection, Lougheed (2019) challenges Elgin’s conception of epistemic peerhood. He claims that differences in the way agents make use of their epistemic resources (e.g., in the way they weight the evidence, in the way they assign salience to the same items of information, in the kind of reasoning style they favor, and the like) can justify reasonable disagreements; these differences can explain why two parties who are in a very similar epistemic situation can disagree. But this is only because such differences signal that the two parties are not actually epistemic peers after all. It is highly improbable, Lougheed contends, that two agents with identical cognitive abilities but who typically use their epistemic resources differently (one, say, favors analogical arguments; the other trusts only inferences to the best explanation) will end up with a similar (let alone identical) truth-tracking record. But this is something that epistemic peerhood, as standardly conceived, seems to require. Lougheed goes on to suggest his own account of epistemic peerhood, which is allegedly immune to some problems affecting Elgin’s. In the last part of the paper, he defends and explores Elgin’s claim that disagreements can be epistemically beneficial for inquiry and complements Elgin’s view by showing what role such a claim could play in an argument defending non-conciliatory (i.e., steadfast) views of peer disagreement.

Kraay (2019) reminds us that Elgin herself seems to favor a certain form of steadfastness. She writes:

When the reasons favoring each side of a dispute are sparse or exceedingly delicate, or the evidence is equivocal, or each side can solve important common problems that the other cannot, it may be better for the epistemic community that both positions continue to be accepted. … Each group then can draw on a different range of commitments for premises in their reasoning and as a basis for their actions. By developing their positions, they put them to the test. (Elgin 2010, 67–68)

So in the face of certain disagreements, we should not conciliate; by giving up our position too easily, we might miss opportunities for fruitful confrontation and thus for epistemic amelioration. In his paper, Kraay offers an in-depth reconstruction of Elgin’s take on the matter and argues that the kind of steadfastness she defends should be conceived as “community-oriented”: an epistemic agent is entitled to stick to her guns in the face of peer disagreement, provided that the disagreement is potentially beneficial for the epistemic community she belongs to. This suggestion, Kraay notes, faces many challenges; yet none, he argues, is decisive. After uncovering seven potential objections to Elgin’s position, he shows how each could be addressed.

6 Concluding remarks

This topical collection offers no more than a glimpse of the richness and complexity of Elgin’s work. Many terrains remain to be explored. One as yet unexplored topic, for example, is Elgin’s notion of exemplification. Exemplification plays a pivotal role in Elgin’s conception of understanding. Elgin claims that representational systems do not need to be true either to be epistemically valuable or to be effective sources of understanding of reality. Epistemically valuable representational systems ought to be true enough, and true enough representational systems are those that connect us to reality by exemplifying features they share with the phenomena. If Elgin is right about this—i.e., if True Enough is true enough—the line between the arts and the sciences turns out to be much more blurred than standardly assumed. A scientific representational system enables cognitive access to reality and affords an understanding of its topic in much the same way a work of art does. These are quite revolutionary claims that will no doubt trigger and nurture extensive philosophical discussion. Elgin’s work has only just begun to inspire, intrigue, and enlighten thinkers across philosophical disciplines.