1 Introduction

That we may rely on our knowledge in action seems like a platitude. According to a prominent view, this platitude explains why knowledge is distinctively valuable: knowledge can guide action in a way that mere true belief cannot (Meno 97a–c; Williamson, 2000, pp. 78–79, 101–102; Hawthorne & Stanley, 2008, p. 590; Hyman, 2010).Footnote 1 Knowledge talk also plays a prominent role in our assessment of practical reasoning: we criticize those who don’t rely on knowledge in practical reasoning, and can defend our own practical reasoning by pointing out that we only relied on knowledge. Various authors have argued that this is best explained by our practical reasoning being governed by a knowledge norm:Footnote 2

  • Knowledge norm for practical reasoning (KPR) One may rely on a proposition p in practical reasoning iff one knows that p.Footnote 3

Given the platitudinousness of the right-to-left direction of KPR, it comes as a surprise that it faces a substantial challenge from counterexamples: when much hangs on whether we know, relying on our knowledge seems to license irrational action.Footnote 4 Consider the following case:

  • Jellybean You are participating in a psychological study intended to measure the effect of stress on memory. The researcher asks you questions about Roman history—a subject with which you are well acquainted. For every correct answer you give, the researcher will reward you with a jelly bean; for every incorrect answer, you are punished by an extremely painful electric shock. There is neither reward nor punishment for failing to give an answer. The first question is: when was Julius Caesar born? You are confident, though not absolutely certain, that the answer is 100 BC. You also know that, given that Caesar was born in 100 BC, the best thing to do is to provide this answer (i.e. this course of action will have the best consequences—you will be one jelly bean richer!). (Reed, 2010, pp. 228–229)

Jellybean and similar cases have been proposed as counterexamples to the right-to-left direction of KPR. Reed tells us that in Jellybean, you confidently know that Julius Caesar was born in 100 BC. However, the reward for getting the answer right is meager and the punishment for getting it wrong severe. Intuitively, risking a shock by acting on your confident but not certain knowledge does not seem like the rational course of action. Generally speaking, if stakes are high, the action licensed by relying on what we know is sometimes not the rational one to perform.Footnote 5 This shows, critics claim, that we may not always rely on our knowledge in practical reasoning and hence that KPR must be false.

One response to counterexamples like this is to embrace Impurism about knowledge and deny that one knows the relevant proposition in Jellybean. According to Impurism, whether a subject’s true belief qualifies as knowledge partly depends on the subject’s practical interests (Roeber, 2018). However, many reject the idea that whether one knows depends on what is at stake. Those who want to stick to epistemological orthodoxy hence need to opt for another response to high-stakes cases.

The following observation suggests a way of moving forward: High-stakes cases, in which one knows but in which it is intuitively not rational to act in some salient way, do not constitute counterexamples to KPR by themselves.Footnote 6 The reason for this is that KPR only tells us when we may rely on propositions in practical reasoning, not how to act in the light of the propositions we may rely on. To assess whether high-stakes cases like Jellybean constitute a challenge to KPR, there needs to be a view in place to determine which actions are rational for us to perform in the light of the propositions we may rely on. In other words, proponents of KPR need a supplementary knowledge-based decision theory to tell us what is rational for us to do, given what we know. If, for instance, such a view were to tell us that answering that Caesar was born in 100 BC is the rational thing to do in Jellybean, then cases like Jellybean would indeed be (indirect) counterexamples to KPR. Hence, the prospects of defending KPR very much depend on there being an independently plausible knowledge-based decision theory that delivers the correct verdict about high-stakes cases like Jellybean.

One could of course deny that there is any relation between what we may rely on in practical reasoning and what is rational for us to do. While Jellybean would be immediately taken care of, I think this response is unattractive. First, it is puzzling: why think there is a norm for practical reasoning in the first place if compliance with this norm has no downstream effect on rational action? This response denies the plausible view that the rationality of practical reasoning is linked to the rationality of action. As Schulz (2017, p. 463) aptly puts it, “[i]f decisions should be based on what one knows, knowledge should play a fundamental role in decision theory.” Second, this response undermines any prospects of providing a unified knowledge-first view of practical rationality, a project many proponents of KPR are attracted to (Goldschmidt, forthcoming, p. 4). Indeed, it is natural to think of KPR and knowledge-based decision theory as complementary constraints on ex post rational action. Knowledge-based decision theory is concerned with the question of what is ex ante rational for us to do. Plausibly, an action is ex post rational only if it was ex ante rational for one to perform. KPR is concerned with the rationality of practical reasoning. Plausibly, an action is ex post rational only if it is based on rational practical reasoning.Footnote 7 Putting this together, we get the following, knowledge-based view: An action is ex post rational only if it was ex ante rational for one to perform, given what one knows (knowledge-based decision theory), and performed by relying on one’s knowledge (KPR). While I think that the resulting unified view is theoretically attractive and worth pursuing, I want to note that knowledge-based decision theory has also been proposed and motivated independently of KPR (Moss, 2018; Dutant, forthcoming; Goldschmidt, forthcoming). For those not wanting to commit to KPR, my paper can be read more narrowly as advancing two novel knowledge-based decision theories that, in contrast to extant proposals, provide us with the correct verdict about what is rational to do in high-stakes cases.Footnote 8

In this paper, I will first argue that extant proposals either face a substantial objection or result in high-stakes cases being counterexamples to KPR. I will then propose and compare two novel knowledge-based decision theories that avoid these problems, get the verdict right about cases like Jellybean, provide us with a view about when to simplify our practical reasoning and ultimately vindicate the platitude that we may rely on what we know.

The paper is structured as follows. In Sect. 2, I will reject the kind of knowledge-based decision theory that is commonly assumed on behalf of KPR. According to this view, which I will call Basic Infallibilism, an action is rational if it maximizes expected utility, conditional on the totality of what one knows. Basic Infallibilism assigns probability 1 to our knowledge and thus gives no probabilistic weight to the dire outcome in Jellybean. Hence, it tells us that answering maximizes expected utility in Jellybean, which is clearly at odds with our intuitions. In Sect. 3, I will discuss what I call Sophisticated Infallibilism (Williamson, 2005a; Schulz, 2017, 2021b), the view according to which, if stakes are high, we should rely and conditionalize only on an epistemically particularly safe subset of our knowledge. While Sophisticated Infallibilism makes the right predictions about Jellybean, I will show in Sect. 4 that it faces a substantial objection.

After having made my negative case against extant proposals, I will propose two novel knowledge-based decision theories. In Sect. 5, I will lay the groundwork for the view I ultimately favour. I will argue that our ordinary notion of reliance is much more fallibilist than has hitherto been acknowledged, an insight which has important ramifications for how knowledge connects to rational action. In Sect. 6, I will then employ a generalized form of conditionalization to develop a knowledge-based decision theory that implements these lessons. The resulting proposal, which I call Flexible Fallibilism, not only makes the right predictions in Jellybean but, as I will show in Sect. 7, also provides us with a novel knowledge-based view about how to simplify our practical reasoning. In Sect. 8, I will compare Flexible Fallibilism to yet another proposal, which I call Dual Infallibilism, that combines two recent claims by Jackson (2019a) and Moss (2013, 2018). While Dual Infallibilism also makes the right predictions about Jellybean and offers a theory of how to simplify our practical reasoning, I will argue that Flexible Fallibilism has various advantages over it and is thus preferable.

2 Basic infallibilism

To defend KPR, one has to offer a supplementary decision theory that is knowledge-based. A decision theory is knowledge-based if one’s knowledge plays a crucial role in determining what is rational for one to do. In what follows, I will focus on the epistemic role knowledge might play in this regard.

The knowledge-based decision theory that is sympathetically discussed by Hawthorne and Stanley (2008) and commonly assumed in the literature says that an action is rational for one to perform if it maximizes expected utility, conditional on what one knows.Footnote 9 On this picture, knowledge plays a role in determining what is rational for one to do because the probability function that we use to calculate expected utilities is conditionalized on one’s knowledge. Since known propositions receive probability 1 if conditionalized on the totality of one’s knowledge K (i.e., if \(K\subseteq p\), then \(P(p|K)=1\)), this view is most naturally classified as a form of infallibilism about knowledge (Brown, 2011, p. 159). Let’s call this version of knowledge-based decision theory Basic Infallibilism.Footnote 10

The problem is that Basic Infallibilism leads right into trouble for KPR. Consider the following decision matrix that represents our choice in Jellybean between answering (A) and not answering (\({\sim } A\)), where p is the proposition that Caesar was born in 100 BC:

$$\begin{array}{c|cc}
 & p & {\sim } p \\
\hline
A & \text {Win a jellybean (1)} & \text {Electric shock } (-1000) \\
{\sim } A & 0 & 0 \\
\end{array}$$

If Caesar was not born in 100 BC (\({\sim } p\)), answering leads to an outcome with substantial disutility, an extremely painful electric shock (\(-1000\)). Yet, according to Basic Infallibilism, its disutility should be ignored in determining what is rational for us to do. After all, if one knows that p, \({\sim } p\) receives a conditional probability of 0 (i.e. \(P({\sim } p|K)=0\)), so outcomes associated with \({\sim } p\) do not receive any probabilistic weight at all in the expected utility calculation. According to Basic Infallibilism, answering would maximize expected utility and thus be the rational thing to do.Footnote 11 Yet, we clearly have the intuition in Jellybean that risking an extremely painful electric shock is not something that is rational for us to do. If stakes are high, we shouldn’t ignore the possibility that what seems like an ordinary piece of knowledge could be false after all.Footnote 12
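To make this verdict explicit, here is a minimal sketch of the expected utility calculation Basic Infallibilism recommends, with utilities taken from the matrix above (the variable names are mine):

```python
# Basic Infallibilism in Jellybean: conditionalizing on one's total knowledge K
# assigns P(p|K) = 1 to the known proposition p ("Caesar was born in 100 BC").
p_given_K = 1.0

# Utilities from the decision matrix above.
EU_answer = p_given_K * 1 + (1 - p_given_K) * (-1000)  # shock gets zero weight
EU_not_answer = 0

print(EU_answer, EU_not_answer)  # 1.0 0 -> answering comes out as "rational"
```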

The argument now proceeds as follows: if we are committed to Basic Infallibilism, but this leads to the implausible consequences just noted, then we have ample reason to reject KPR and to give up on the platitude that knowledge is sufficient for permissible reliance. To avoid this, we should reject Basic Infallibilism and look elsewhere for a supplementary knowledge-based decision theory.

Before discussing more promising proposals, it is instructive to investigate our intuitions about high-stakes cases like Jellybean more closely. Williamson (2005c, pp. 480–481) discusses a high-stakes case where you get to decide whether to bet on \({\sim }q\), the negation of a complex logical falsehood q: you stand to win a carrot if q is false and to be horribly tortured to death if q is true. He writes that

few reasonable humans would accept the bet, even if they had worked out the truth-table. The penalty for a small computational error is just too high. Reasonable humans have cognitive habits for managing their own fallibility which the probability calculus makes no attempt to reflect. (my emphasis)

I think Williamson is exactly right in tracing our intuitions about high-stakes cases to our fallibilist habits.Footnote 13 Humans often err, and yet we have to act. We are painfully aware of our own fallibility and have developed cognitive habits that allow us to price in our fallibility when making decisions. These cognitive habits are plausibly the source of our intuitions in high-stakes cases. When a lot hangs on whether we know, relying on what we take ourselves to know with an infallibilist spirit is rightfully not something we are comfortable doing, at least not without double-checking, second thoughts or further assurances.

I think our fallibilist intuitions affect (and should affect) how we think about what is rational for us to do. Whether an action is rational should reflect whether much can be lost if the agent is incorrect about the things she takes herself to know. A desideratum for any knowledge-based decision theory then is that it needs to take human fallibility into account.Footnote 14

3 Sophisticated infallibilism

In what follows, I will discuss a more refined infallibilist proposal, which I will call Sophisticated Infallibilism, that performs better when it comes to satisfying the desideratum from the last section.Footnote 15 I will first introduce the view in general and then note two variants afterwards.

Generally put, Sophisticated Infallibilism says that in high-stakes cases, we should not rely and conditionalize on the totality of what we know, but only on a certain epistemically privileged subset of our knowledge. If stakes are high (such as in Jellybean), we should rely and conditionalize on the subset of epistemically particularly secure knowledge to protect the goods at stake. Depending on one’s favourite theory of epistemic justification, only knowledge that is particularly well warranted, well supported by one’s evidence, reliable, sensitive or safe—surpassing what is needed to qualify as mere knowledge—is particularly secure in this sense.Footnote 16 If stakes are low, by contrast, our ordinary, moderately secure knowledge is fit to be relied and conditionalized on.

Let’s suppose that we can broadly divide what is at stake into different levels (Schulz, 2017, p. 469). So, for instance, if a carrot is at stake, we are at the lowest level of stakes. If one’s house, job, physical well-being or the life of a loved one is at stake (ibid.), we typically will be at a high level of stakes. Depending on one’s utility function, the different levels in between can be populated accordingly. We are now ready to formulate a version of KPR that captures the commitments of Sophisticated Infallibilism:

  • KPRSI One may rely on a proposition p in practical reasoning involving nth-level stakes iff one knows that p with nth-level of epistemic security.

Here’s how Sophisticated Infallibilism deals with Jellybean: Since Jellybean is a high-stakes decision situation, we may only rely and conditionalize on the subset of knowledge that comes with a correspondingly high level of epistemic security. Sophisticated infallibilists assume that particularly secure knowledge will only be rather sparsely available. Much of our ordinary knowledge will not be so secure and thus not part of the privileged subset of knowledge that is conditionalized on in high-stakes cases. Hence, conditional on what we do know with the needed level of security, the proposition that Caesar was born in 100 BC will plausibly receive a probability of less than 1. But now, conversely, the outcomes associated with its falsity receive probabilistic weight in one’s expected utility calculation. Since one of these outcomes, the outcome of answering wrongly if Caesar was not born in 100 BC, is decidedly terrible, giving it some probabilistic weight would be enough to make not answering the action that maximizes expected utility and thus the rational thing to do. Since Sophisticated Infallibilism gives probabilistic weight to the possibility that the proposition one takes oneself to know could be false, it takes into account our fallibilist intuitions about Jellybean. Relying and conditionalizing only on particularly secure knowledge is what those with “cognitive habits for managing their own fallibility” can be expected to do if stakes are high.

With Sophisticated Infallibilism in general introduced, let’s look at some more specific proposals to fill in the blanks. First, what subset of knowledge is epistemically particularly secure? According to Williamson (2005a, p. 232) and Schulz (2017, pp. 467–468), the knowledge that is particularly secure is our higher-order knowledge. Why should we think that higher-order knowledge is epistemically particularly secure? One answer is that higher-order knowledge is safer than first-order knowledge (Schulz, 2017, p. 467).Footnote 17 In his more recent proposal, Schulz (2021b, p. 8081) directly cashes out his candidate for particularly secure knowledge—knowledge of higher strength—in terms of safety: one piece of knowledge is stronger than another piece of knowledge in this sense if it is safer. On both proposals, the subset of knowledge sophisticated infallibilists recommend relying and conditionalizing on in high-stakes cases is epistemically particularly secure.

Which version of Sophisticated Infallibilism should we prefer? There have been many objections to cashing out Sophisticated Infallibilism in terms of higher-order knowledge.Footnote 18 To note just one worry, higher-order Sophisticated Infallibilism tells us that we have to engage in higher-order reasoning to acquire higher-order knowledge (Gerken, 2011, p. 539; Gao, 2019a, pp. 99–101). But it is quite puzzling why this is something we ought to do. Higher-order reasoning does not increase first-order safety and if a belief is first-order safe enough for higher-order knowledge, why not rely on it directly instead of investing cognitive resources to acquire the corresponding higher-order knowledge? These and other worries have led Schulz (2021b) to cash out Sophisticated Infallibilism directly in terms of knowledge of higher strength.

In what follows, I will argue that there is a structural objection to Sophisticated Infallibilism that can’t be avoided by opting for knowledge of higher strength. The objection shows that we should reject Sophisticated Infallibilism, despite its correct verdicts in high-stakes cases.

4 The indiscriminate security objection

The problem is that Sophisticated Infallibilism’s demand for epistemic security is too indiscriminate, resulting in intuitively irrational choices and implausible predictions about how stakes matter for rational action. Consider the following case.

  • Camping Trip Ronda and her young daughter Daphne are going camping. Their backpacks were already packed by Ronda last night. Since she was quite tired after work, she just about knows that she packed all the things needed for a great trip. By contrast, after a full night’s sleep, she confidently knows that the cat feeder is turned on, which takes care of their beloved house cat by dispensing water and food while they are gone. Ronda has to decide between two camping spots: the first camping spot C1 is surrounded by beautiful scenery and involves a few hours’ hike to get there. The second camping spot C2 is much less beautiful, but only 20 min away from their home. Ronda goes through her mental checklist: “I packed some ground coffee (\(l_{1}\)), Daphne’s plush-deer (\(l_{2}\)), ..., (\(l_{100}\)). The cat feeder is also turned on (\(h_{1}\)).” She then contemplates her choice: The scenery is much more beautiful at C1 than at C2 and the hike is going to be fun. Going to C2 would allow her to go to her yearly dental appointment and, on her way back, check on the cat and pick up missing things. However, she confidently knows that the feeder is turned on and the appointment can easily be postponed. Although she less confidently knows that she packed all things yesterday, missing some of them would not affect their trip significantly. Ronda concludes her reasoning by deciding to go to C1.

Let’s note a few things about the case: First, stakes are intuitively high since their cat’s life is at stake in Ronda’s decision: if they go to C1 and the cat feeder is not turned on, then their cat will die as a result. Second, note that not all propositions contribute equally to what is at stake: while being wrong about \(h_{1}\) would have a significant negative impact on the outcomes of her choice, being wrong about some of the propositions \(l_{1}\), \(l_{2}\), ..., \(l_{100}\) would only have a slight negative impact. For instance, if Ronda was mistaken about knowing \(l_{1}\) or \(l_{2}\), then she might be tired without coffee in the morning and Daphne would be cranky without her plush-deer. Yet she would at some point be awake even without coffee and Daphne’s mood would surely improve given the chance of seeing a real deer. Third, Ronda’s decision to go to C1 is intuitively clearly rational, given the beautiful scenery and the fun hike to get there. Furthermore, she confidently knows that (\(h_{1}\)) the cat is well taken care of. While she less confidently knows that \(l_{1}\), \(l_{2}\), ..., \(l_{100}\), she can reasonably expect to know most of \(l_{1}\), \(l_{2}\), ..., \(l_{100}\). Perhaps she is in fact mistaken in taking herself to know that she packed one thing or another, but that would not significantly affect their trip.

To see what Sophisticated Infallibilism would say about the case, the utility assignments have to be specified. Let’s suppose that in C1, the falsity of \(h_{1}\)—resulting in the death of their cat—contributes -5000 utility to what is at stake. In C2, the falsity of \(h_{1}\) contributes 0 utility: after her dental appointment, Ronda will stop by their home anyway and check on the cat. In both C1 and C2, \(h_{1}\) being true upholds the status quo and thus contributes 0 utility. Let’s suppose further that in C1, the truth of each of \(l_1\), ..., \(l_{100}\) contributes 15 utility, whereas the falsity of each of \(l_1\), ..., \(l_{100}\) contributes -20 utility to the overall outcome (adding a mild annoyance on top of the thing missing). In C2, the truth of each of \(l_{1}, \dots , l_{100}\) only contributes 10 utility, pricing in the lack of beautiful scenery (after all, most things are more enjoyable with a nice view!). However, they also only contribute -5 utility if false: Ronda can bring them along while stopping by home, with the remaining downside being that she has to find them first.

To specify the epistemic characteristics of the case, let’s suppose that Ronda has only ordinary, level-1 knowledge that \(l_{1}, l_{2}, \dots , l_{100}\), reflecting that she was fairly tired when she packed, but level-2 knowledge that \(h_{1}\), reflecting that she turned on the cat feeder with fresh eyes in the morning.

What does Sophisticated Infallibilism tell us about Ronda’s decision in Camping Trip? Since her decision involves goods—the life of her cat—that demand a high level of epistemic security (Schulz, 2017, pp. 469–470), Sophisticated Infallibilism tells us that she should make her decision from the perspective of a correspondingly secure level of knowledge. Let’s suppose that this is her level-3 knowledge.Footnote 19

Let’s assume that given her level-3 knowledge, the probability of \(l_1\), ..., \(l_{100}\) is .7 each and the probability of \(h_{1}\) is .99. While not much hangs on this, the probability distribution strikes me as plausible: Ronda was tired when acquiring knowledge of \(l_1\), ..., \(l_{100}\), so taking on the perspective of a more secure level of knowledge will not include propositions that just about meet the standard for level-1 knowledge. However, they will still be assessed as moderately probable from this more secure perspective. By contrast, she acquired her knowledge that \(h_{1}\) after a full night’s sleep, which in the absence of defeating conditions is a very reliable (but not infallible) belief-forming method that produces fairly secure knowledge. That \(h_{1}\) is still highly likely, given the more demanding perspective of level-3 knowledge, is thus natural.

The problem now is that Sophisticated Infallibilism fails to vindicate our intuitions about Camping Trip. As we noted earlier, it is clearly intuitively rational for Ronda to opt for C1. Sophisticated Infallibilism makes the opposite prediction. The view tells us that, given Ronda’s level-3 knowledge, it is not rational for her to go to C1:

EU(C1)\(_{3}=100\times (.7\times 15 + .3 \times (-20)) + .99 \times 0 + .01 \times (-5000)= 400\)

EU(C2)\(_{3}=100\times (.7\times 10 + .3 \times (-5)) + .99 \times 0 + .01 \times 0= 550\)

EU(C2)\(_{3}>\) EU(C1)\(_{3}\)
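For readers who want to check the arithmetic, the following sketch reproduces these expected utilities from the stipulated probabilities and utilities:

```python
# Sophisticated Infallibilism in Camping Trip, from the level-3 perspective:
# P = .7 for each of the 100 packed items l1..l100, P = .99 for the feeder h1.
p_l, p_h = 0.7, 0.99

# C1: items worth 15 if packed, -20 if missing; losing the cat costs -5000.
EU_C1 = 100 * (p_l * 15 + (1 - p_l) * (-20)) + p_h * 0 + (1 - p_h) * (-5000)

# C2: items worth 10 if packed, -5 if missing; h1 contributes 0 either way.
EU_C2 = 100 * (p_l * 10 + (1 - p_l) * (-5)) + p_h * 0 + (1 - p_h) * 0

print(round(EU_C1), round(EU_C2))  # 400 550 -> C2 wins, against intuition
```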

According to Sophisticated Infallibilism, it is rational for her to go to the camping spot C2 close to their home, which is intuitively implausible.

Note that we should not let our intuition that it is rational for Ronda to go to C1 mislead us into thinking that she perhaps does in fact have level-3 knowledge that \(h_{1}\), conditional on which \({\sim }h_{1}\) receives probability 0. Rather, it is intuitively rational for her to “tolerate a very small probability of a fairly big loss for a moderate gain” (Schulz, 2017, p. 478). We regularly do so, too, when we go about our day driving cars, crossing streets and eating food prepared by others. This does not mean that we have secure knowledge that doing so can’t go wrong, but rather that it is sufficiently likely that it won’t.

We can trace the source of this problem to Sophisticated Infallibilism’s demand for indiscriminately high security in high-stakes cases for all decision-relevant propositions, whether or not this security is needed for each of them. Sophisticated Infallibilism says that one’s overall epistemic perspective on all decision-relevant propositions should take on a level of secure knowledge that matches what is at stake. However, this demand neglects important differences between these propositions: not all propositions that matter for one’s decision need this level of epistemic security. If we look at Camping Trip, for instance, each of \(l_1\), ..., \(l_{100}\) contributes very little to the overall potential outcome of the decision, so it is doubtful that Ronda needs to treat them individually with the same level of epistemic security that is rightfully afforded to \(h_{1}\). After all, the latter is much more closely tied to the potential outcome that makes Camping Trip a high-stakes case.

The problem is that Sophisticated Infallibilism cannot recognize these differences. Instead, if stakes are high, it advises us to indiscriminately assess all propositions from a highly secure perspective, leading to intuitively irrational choices. Put succinctly, it’s like moving to a new apartment and packing all of one’s belongings in bubble wrap because a single vase needs to be protected like this.

The problem of indiscriminate security affects Sophisticated Infallibilism on a structural level. Hence, proponents of KPR should be wary of turning to any variant of Sophisticated Infallibilism for a defence of their view.

5 How to rely on what you know

In what follows, I will argue that there is a reason the proposals discussed so far have failed to serve as plausible knowledge-based decision theories: there is a commonly held, unquestioned assumption in the literature about the central notion of KPR, the notion of reliance, that is inadequate and has downstream effects on how we think about knowledge determining rational action. The assumption is that once we may rely on a proposition in practical reasoning, we may rely on it with an infallibilist stance. In what follows, I will argue that our ordinary notion of reliance is much more fallibilist than commonly assumed. This, in turn, has important ramifications for the kind of knowledge-based decision theory we should adopt.

It is a commonly held assumption about the notion of reliance that once we may rely on a proposition in practical reasoning, we may rely on it with an infallibilist stance. Let’s call this way of relying on things infallible reliance. Illustrative for this way of thinking about reliance is Fantl and McGrath’s statement that once you know a proposition, “you can take it for granted, assume it’s true, count on it, take it to the bank, and book it” (2012, p. 441, their emphasis). That many accept this assumption about the notion of reliance can most easily be seen by considering what is commonly accepted to follow from permissible reliance. In the debate about norms of practical reasoning, many hold that if one may infallibly rely on a proposition p in practical reasoning, one ought to choose the action that maximizes expected utility, conditional on p (see, e.g., Fantl & McGrath, 2002, pp. 76–78, 2007, p. 559; Hawthorne & Stanley, 2008, p. 580f; Ross & Schroeder, 2014, p. 261; Locke, 2014, p. 86; Beddor 2021, p. 194).Footnote 20 Recall that by conditionalizing a known proposition on what we know, we assign probability 1 to said proposition and probability 0 to its negation. If relying on p is spelled out by conditionalization, then relying on p means that we don’t assign any probabilistic weight to—and thus ignore—the possibility, however remote, that p might be false. Hence, the infallibilist stance.

For Basic Infallibilism, the problem with infallible reliance arises directly. For Sophisticated Infallibilism, infallible reliance leads to the demand for indiscriminate security for each and every proposition we rely on. As Camping Trip shows, this is not tenable either.

In what follows, I will show that we should challenge the assumption that our notion of reliance is so rigidly infallibilist. Indeed, I will argue that our ordinary notion of reliance is much more fallibilist than has hitherto been assumed.

Let’s start out by making some observations about our ordinary notion of reliance.Footnote 21 There is something that proponents of infallible reliance get right: in many ordinary decision situations, we do infallibly rely on things. Consider the following case that exemplifies this:

  • Watering Paul enjoys having plants on his balcony and relies on a moisture gauge to take proper care of them. The gauge is an instrument with no backup system. Since Paul is quite bad at estimating by other means how much water the plants need, he solely relies on the gauge’s reading for watering the plants. Giving them the correct amount of water is not a high stakes situation (for him, anyway), so he’s fine with treating the propositions acquired by reading the gauge as certain.

The example illustrates our way of relying when stakes are low. Whether we rely on restaurant opening times we read online, the weather report we saw on TV, the promise a usually reliable friend gave us or the reading of the moisture gauge of our plants, we rely on them with infallibilist conviction—if not much hangs on it. Importantly, doing so seems entirely appropriate.

My claim now is that those who accept an infallibilist conception of reliance are mistaken about how we rely on things when stakes are high. To see this, consider the following case:

  • Low Visibility Christine is trying to land her old Cessna on a remote runway during a foggy night. Since visibility is low, she has to rely on various instruments to successfully land her plane. For instance, she constantly has an eye on her altimeter, which measures the altitude of the plane. Yet, she is aware that instruments can and do fail, especially in old airplanes, so she is alert to any signs of malfunctioning. As a backup, she has a second altimeter on her phone that she can consult if the main altimeter breaks down.

In Low Visibility, Christine relies on the built-in altimeter to land her plane. Importantly, however, she does not rely on the built-in altimeter the way Paul relies on his moisture gauge. In contrast to Paul, Christine relies on the built-in altimeter in a more fallibilist-minded way: in doing so, she stays alert to the possibility that the altimeter might malfunction and that she might have to consult the backup altimeter on her phone. Let’s call the way Christine relies on her built-in altimeter fallible reliance. Generally put, when we fallibly rely on things, we rely on them in a more fallibilist-minded way, i.e., by remaining sensitive to the possibility that the things we rely on might be undependable. Again, importantly, that Christine relies on the primary altimeter only in this way seems entirely appropriate.

Here are two quick objections to this way of describing the case. First, couldn’t one say instead that Christine infallibly relies on the combined system of altimeters? This description is inadequate. To see this, suppose the built-in altimeter indeed malfunctions and Christine has to use the backup altimeter on her phone. In that case, we would say that Christine stopped relying on the built-in altimeter and started relying on her backup altimeter to assess the altitude. Naturally, one only relies on things in \(\varphi \)-ing if one lets oneself be guided by these things in \(\varphi \)-ing. This is clearly true of the built-in altimeter, as Christine lets herself be guided by it while landing the plane. The backup altimeter, by contrast, is true to its name: something that could guide her if she had to rely on it in a case of malfunction.

However, one might now object, if Christine starts to rely on her backup altimeter, doesn’t she just stop infallibly relying on the built-in altimeter? Again, I think this description is inadequate, with the main worry being that her relying seems importantly different from typical cases of infallible reliance: when we infallibly rely on p, we have psychologically closed the matter of whether p and would be blindsided if it turned out that \({\sim }p\). For instance, someone who is diagnosed with cancer will be caught off guard in this sense, realizing that they can’t take their health for granted any more. Christine, by contrast, shows low need for closure (Nagel, 2008) and an awareness that altimeters in old planes can in principle break, indicating fallible reliance. If her altimeter broke, she would not be caught off guard: she would initially be concerned but would pull out her phone with the backup altimeter right after.

There are plenty of cases like Low Visibility. A young athlete fallibly relies on her knowledge of having a clean health record when taking out disability insurance. An elderly man fallibly relies on his knowledge that he can walk the round trip to the park while still carrying an umbrella to lean on if need be. An engaged couple fallibly relies on their knowledge of their unshakable love for each other in signing a prenuptial agreement. In all of these cases, we rely on our knowledge in a way that is sensitive to the possibility that we might fail to know after all.

Some might initially feel somewhat uneasy about the notion of fallible reliance. Doesn’t it sound odd for Christine to say, “I rely on the reading of the altimeter, but it is possible that the reading is false”? While I think there is a tension to be felt, we shouldn’t worry about it. First, the felt tension is a familiar one: fallibilism about knowledge licenses similarly odd-sounding concessive knowledge attributions (CKAs), such as “I know that p but possibly \({\sim } p\)”. Yet, many accept fallibilism, because they think these oddities can plausibly be explained away in one way or another (Stanley, 2005; Dougherty & Rysiew, 2009, 2011; Worsnip, 2015)Footnote 22 or because they prefer it over worse alternatives (Lewis, 1996, p. 550). Hence, those who accept fallibilism about knowledge should not hesitate to embrace fallibilism about reliance and related notions. A second point worth making is that certain concessive reliance attributions sound just fine. Consider, for instance, “We can rely on prices being stable for deciding on the budget for the next quarter, even though it is of course possible that oil prices may fall dramatically”. Or consider: “We have presented considerable evidence indicating the existence of quarks. While we acknowledge that there is always a chance that these results will be overturned, we will, in the design of further experiments, rely on there being quarks.”Footnote 23 If some concessive reliance attributions sound fine, we might wonder how dependable a feeling of uneasiness about the notion of fallible reliance really is.

I argued that our fallibilist habits of managing our own fallibility affect how we rely on our knowledge. If stakes are high, we rely on our knowledge but remain sensitive to the possibility that we might fail to know after all. If stakes are low, we can make our lives easier by relying on our knowledge with an infallibilist stance and treating it as certain, because it is certain enough for our purposes. If this is right, then our notion of reliance is not a rigid, infallibilist notion, as has hitherto been assumed, but a flexible one: depending on what is at stake, reliance will manifest differently.Footnote 24

Let’s call the resulting view about how we can rely on our knowledge Flexible Fallibilism. A version of KPR in the spirit of Flexible Fallibilism goes as follows:

  • KPRFF If one knows that p, one may (1) infallibly rely on p in low-stakes practical reasoning and (2) fallibly rely on p in high-stakes practical reasoning.

KPRFF is a precisification of KPR. It captures the idea that our fallibilist habits affect both how we rely on what we know and when different ways of relying are appropriate. Naturally, this should have ramifications for the role knowledge plays in determining what is rational for us to do. In what follows, I will offer a novel knowledge-based decision theory that implements this idea.

6 How to act on what you know

My proposal employs a generalized form of conditionalization, so-called Jeffrey conditionalization (Jeffrey, 1983, ch. 11). Jeffrey conditionalization is typically conceived as a diachronic rule that we should use to learn from uncertain evidence. In what follows, this standard conception will be used to formally introduce Jeffrey conditionalization. I will then reinterpret the latter for my purposes.

Jeffrey conditionalization allows us to conditionalize on a weighted partition of mutually exclusive and jointly exhaustive propositions \(E=\{(E_{1},\mu _{1}), (E_{2},\mu _{2}),\ldots ,(E_{n},\mu _{n})\}\). The weights \(\mu _{1},~\ldots ,~\mu _{n}\) sum up to 1 and codify one’s uncertainty about the propositions in E. Formally:

  • Jeffrey Conditionalization \(P_{J}(\cdot )=\sum \limits _{i=1}^{n} \mu _{i}P(\cdot |E_{i}).\)
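To illustrate the rule, here is a minimal sketch of Jeffrey conditionalization over a finite set of worlds; the prior and the partition are toy placeholders of my own:

```python
# Jeffrey conditionalization: P_J(q) = sum_i mu_i * P(q|E_i).
# prior maps worlds to probabilities; partition pairs each cell E_i (a set
# of worlds) with its weight mu_i; a proposition q is a set of worlds.
def jeffrey(prior, partition, q):
    total = 0.0
    for cell, mu in partition:
        p_cell = sum(prior[w] for w in cell)
        if p_cell > 0:
            p_q_in_cell = sum(prior[w] for w in cell & q)  # P(q and E_i)
            total += mu * (p_q_in_cell / p_cell)
    return total

prior = {"w1": 0.8, "w2": 0.2}              # w1: p is true, w2: p is false
partition = [({"w1"}, 0.9), ({"w2"}, 0.1)]  # weights mu_p = .9, mu_~p = .1
print(jeffrey(prior, partition, {"w1"}))    # 0.9
```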

In what follows, I will not use the standard conception of Jeffrey conditionalization as a diachronic learning rule. Rather, the key idea will be to employ Jeffrey conditionalization as a synchronic rule that measures how our way of relying contributes to determining what is rational for us to do.Footnote 25 Having specified this, I can now explicitly state my proposal for a flexibly fallibilist knowledge-based decision theory.

  • KDTFF An action A is rational for a subject S to perform iff A maximizes expected utility, Jeffrey-conditional on S’s knowledge.

To see how KDTFF works, let’s suppose we rely on our knowledge that p. If stakes are low, I suggested that we can agree with infallibilists in that we may infallibly rely on p and ignore the possibility that \({\sim } p\). To determine what is rational to do, we have to consider the partition that consists of a piece of knowledge p and its negation: \(\{p, {\sim }p\}\). To capture the idea that we can ignore \({\sim } p\) in decision-making, we should choose the weights accordingly: \(\mu _{p}=1\) and \(\mu _{{\sim } p}=0\), giving us the weighted partition \(L=\{(p,1), ({\sim } p,0)\}\). Note, however, that it is a well-known feature of Jeffrey conditionalization that it yields standard conditionalization if we assign weights like this.Footnote 26 Hence, Flexible Fallibilism agrees with Basic Infallibilism that if stakes are low, we should simply conditionalize on our knowledge.

Where Flexible Fallibilism disagrees with Basic Infallibilism is when stakes are high. I argued that when much is at stake, we only fallibly rely on our knowledge, i.e., we rely on it while being sensitive to the possibility that we might fail to know after all. When it comes to determining what is rational to do, we should again choose weights that correspond to this stance towards our knowledge. One way to do so, generally speaking, is to include a non-negative parameter \(\Delta \) in our weighting that reflects our fallibilist stance towards our knowledge: \(\mu _{p}=1-\Delta \) and \(\mu _{{\sim } p}=0+\Delta \), resulting in the following weighted partition: \(H=\{(p,1-\Delta ), ({\sim } p,0+\Delta )\}\).

How should we assign the weights when stakes are high? One salient answer is to look at the epistemic probability of the proposition that one knows that p. This probability will typically be less than 1, reflecting the fallibilist idea that knowing is compatible with an epistemic chance that p (and thus Kp) is false.Footnote 27 Weighing our knowledge that p with the epistemic probability of Kp naturally fits the idea that if stakes are high, our fallibilist habits raise awareness of our own fallibility and incorporate our assessment of our fallibility into how strongly we should let p determine what is rational for us to do.

Before elaborating the conception of epistemic probabilities at play here, let me first illustrate how my proposal works. Let’s assume that the epistemic probability of the proposition (Kp) that you know Caesar was born in 100 BC is .9, reflecting that it is epistemically highly likely that in Jellybean you know that p. Correspondingly, the parameter \(\Delta \) is set at .1, capturing your remaining sensitivity to the possibility that you might after all not know that p. The resulting weights are as follows: \(\mu _{p}=.9\); \(\mu _{{\sim } p}=.1\). Choosing the weights like this is a natural option for the case at hand, since the impact of one’s knowledge on what is rational for one to do is just as high as is warranted by one’s epistemic position towards one’s knowing that p. It corresponds to the notion that just because we know, we should not ignore that we only fallibly do so when we determine what is rational for us to do.

With the resulting weighted partition being \(J=\{(p,.9), ({\sim } p,.1)\}\), a quick calculation reveals that not answering (\({\sim }A\)) maximizes expected utility, Jeffrey-conditional on one’s knowledge p, and is hence the rational thing to do.

EUFF\((A)=(\mu _{p}\times P(p|p)+\mu _{{\sim } p} \times P(p|{\sim } p))\times 1 + (\mu _{p}\times P({\sim } p|p)+\mu _{{\sim } p}\times P({\sim } p|{\sim } p))\times (-1000)=-99.1\)

EUFF\(({\sim }A)=0\)

EUFF\(({\sim }A)>\) EUFF(A).
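Spelled out in code, the calculation runs as follows (a sketch using the weights assumed above):

```python
# Flexible Fallibilism in Jellybean with weights mu_p = .9, mu_~p = .1.
# Conditional on p the shock is impossible; conditional on ~p it is certain,
# so the Jeffrey-conditional probability of p is .9 * 1 + .1 * 0 = .9.
mu_p, mu_not_p = 0.9, 0.1
P_J_p = mu_p * 1 + mu_not_p * 0

EU_answer = P_J_p * 1 + (1 - P_J_p) * (-1000)
EU_not_answer = 0
print(round(EU_answer, 1), EU_not_answer)  # -99.1 0 -> not answering is rational
```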

This is the intuitive verdict. Furthermore, one can (fallibly) rely on one’s knowledge in deciding that not answering is rational. Thus, according to Flexible Fallibilism, Jellybean is not a counterexample to the view that knowledge is sufficient for permissible reliance. Since the respective subjects are in a high-stakes case and fallibly rely on their knowledge, they simply keep in mind the small epistemic chance that they might fail to know after all and act accordingly.

Having illustrated how my proposal works, let me now specify the notion of epistemic probability at work. While there are many conceptions of epistemic probability available,Footnote 28 the employed conception of epistemic probability needs to be suitably knowledge-based. If knowledge did not play a role in determining epistemic probabilities, then one might worry that the resulting view could not properly be classified as a knowledge-based decision theory.Footnote 29 Fortunately, however, a fitting and independently plausible knowledge-based conception of epistemic probability is available. On this conception, we arrive at the epistemic probability of a proposition by conditionalizing it on a selected set of known propositions.Footnote 30 This set is typically taken to include certain basic types of knowledge (e.g., perceptual, introspective and evidential knowledge, etc.) and background knowledge (e.g., about the reliability of our cognitive faculties or instruments). The set is taken to exclude inductive or inferential knowledge that “goes beyond our evidence” (Goodman & Salow, 2018, p. 90) and that is plausibly particularly susceptible to being scrutinized by our fallibilist habits.Footnote 31

Let’s apply this conception to Low Visibility and Jellybean. Suppose that, in the former case, Christine sees that the altimeter displays an altitude of y and gains perceptual knowledge that the altimeter displays an altitude of y. How epistemically likely should she consider it that she has inductive knowledge that the altitude is y? Conditional on her perceptual knowledge and her background knowledge that the altimeters in her plane are fairly reliable, she considers it epistemically highly likely, though not certain, that she knows that the altitude is y. By fallibly relying on her knowledge, she rationally stays alert and keeps her phone altimeter close by. Likewise, in Jellybean, you are confident, but not certain, that Caesar was born in 100 BC and your background knowledge includes that you very occasionally have misremembered specific historic dates. Conditionalizing on this knowledge, it is epistemically highly likely that you in fact know that Caesar was born in 100 BC. Weighing your knowledge that p accordingly in your practical reasoning, you rationally decide to not give an answer, given the meager rewards and risk of an extremely painful shock. This knowledge-based conception of epistemic probabilities thus provides us with a natural and intuitively plausible description of these cases.

The proposed account also delivers an answer to the question concerning what is at stake in rational decision-making.Footnote 32 As Schulz (2017, pp. 470–471) rightly notes, outcomes need to be in one’s epistemic ken to contribute to what’s at stake. They are in one’s epistemic ken on my view iff they are an epistemically possible consequence of an available action, i.e., iff they receive a probability of >0, conditional on the selected set of knowledge characterized above and the action being performed.

The resulting decision theory is knowledge-based through and through. If stakes are low, we give the known propositions a weight of 1, resulting in conditionalizing on them to determine what is rational for one to do. If stakes are high, I argued that we should weigh the known propositions by the epistemic probability that one knows them, which is in turn determined by the selected set of knowledge characterized above.

Now I am also in a position to give a fuller characterization of fallible reliance. By fallibly relying on p, we are sensitive in practical reasoning to the question of how epistemically likely it is that we know that p. This, in turn, lessens the impact of p in our practical reasoning, corresponding to its reduced impact on determining what is rational for us to do, as specified by KDTFF. We can further clarify this by putting it in terms of reasons. Suppose p is a reason for \(\varphi \)-ing. If S fallibly relies on p in her practical reasoning, then S should take p to speak slightly less in favor of \(\varphi \)-ing than if she had infallibly relied on p. Depending on one’s decision situation, these general characteristics of fallible reliance can manifest in various ways. In Low Visibility, fallible reliance will manifest as an increased alertness by Christine towards signs that the altimeter reading she relies on is false, leading her to reason that she should keep a backup altimeter at hand. In Jellybean, fallible reliance will manifest as a hesitation to answer, giving expression to one’s registering of the lessened (and ultimately insufficient) support that knowing that Caesar was born in 100 BC provides for this possible course of action. By being sensitive in one’s practical reasoning to the possibility that one may fail to know after all, one weighs one’s knowledge accordingly and concludes that one had better not answer, arriving at what is rational for one to do.

I argued that one salient way to set the weights in Jeffrey-conditionalizing on known propositions is to use the epistemic probabilities that one knows the respective propositions. In the next section, I will give a general account of how to weigh one’s knowledge. I will identify two factors relevant in weight-setting, which I call the total stakes factor and the stake proportionality factor. The resulting account not only makes the correct predictions in the cases of interest, but has the independent virtue of providing us with a knowledge-based explanation of when to simplify our decision-making.

7 Weight setting

First, as already suggested, one important factor is the totality of what is at stake. Assuming that we can divide what is at stake into different levels (Schulz, 2017, p. 469), our setting of the relevant weights should plausibly be responsive to what level of stakes we are dealing with in decision-making. If stakes are low, assigning \(\mu _{K}=1\) to what one knows and \(\mu _{{\sim } K}=0\) to what one does not know is adequate. It is a harmless way of simplifying one’s decision-making that saves cognitive resources and allows one to get on with one’s life. If we face middling stakes, we might think that we should not disregard the possibility that what we know is false, but still simplify somewhat. Suppose that the epistemic probability of your decision-relevant knowledge that p is .87963 and the probability of \({\sim } p\) is .12037 accordingly. One way to simplify your reasoning without neglecting the possibility that what you take yourself to know might turn out to be false would be to use \(\mu _{K}=.88\) for p and \(\mu _{{\sim }K}=.12\) for \({\sim } p\). Finally, if stakes are high, we might consider using the exact epistemic probabilities that one knows the relevant propositions as weights, matching the impact of our knowledge on rational action to its epistemic standing. Thus, depending on the level of the stakes, we should choose weights that allow as much simplification in decision-making as is permissible for what is at stake. According to one plausible view, then, the range of weights for one’s knowledge goes from 1 to the epistemic probability of whatever one knows: if stakes are low, we may fully simplify. If stakes are high, we should match the impact of our knowledge on what is rational to do with its epistemic probability. Intermediate levels of stakes, in turn, allow for various grades of simplification in between. Flexible Fallibilism thus yields a natural view about the simplification of practical reasoning: simplify just as much as is warranted by what is at stake.
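The following sketch is merely one way to implement this gradable simplification; the discrete stakes levels and the rounding rule for middling stakes are illustrative assumptions, not part of the official view:

```python
# One illustrative weight-setting rule: the higher the stakes level, the more
# closely the weight on a known proposition p tracks the epistemic probability
# that one knows p. The levels and the rounding scheme are assumptions.
def knowledge_weight(prob_knows_p, stakes_level, max_level=3):
    if stakes_level == 0:              # low stakes: full simplification
        return 1.0
    if stakes_level >= max_level:      # high stakes: use the exact probability
        return prob_knows_p
    return round(prob_knows_p, 1 + stakes_level)  # middling: round coarsely

print(knowledge_weight(0.87963, 0))  # 1.0
print(knowledge_weight(0.87963, 1))  # 0.88 (the middling-stakes example above)
print(knowledge_weight(0.87963, 3))  # 0.87963
```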

The second factor that should affect the weights is stake proportionality. How we should set the weights also depends on how much the truth-value of a decision-relevant proposition contributes to a given outcome. Recall how in Camping Trip, the truth-values of the propositions \(l_1\), ..., \(l_{100}\) each have only little impact on what is at stake. Intuitively, Ronda may rely on each of these propositions infallibly. By contrast, Ronda should only fallibly rely on her knowledge of \(h_{1}\), the proposition on which proportionally a lot hangs when it comes to what is at stake. It is now easy to see that the indiscriminate security problem does not arise for Flexible Fallibilism: Ronda should Jeffrey-conditionalize on \(l_1\), ..., \(l_{100}\) with weights \(\mu _{K}=1\) and \(\mu _{{\sim } K}=0\), yielding standard conditionalization, but Jeffrey-conditionalize on \(h_{1}\) with a weight that matches the epistemic probability of her knowledge of \(h_{1}\), reflecting the proposition’s proportional contribution to what is at stake.Footnote 33
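Carrying out the resulting calculation shows that Flexible Fallibilism delivers the intuitive verdict in Camping Trip (the .99 weight for \(h_{1}\) is an assumed value for the epistemic probability of Ronda’s knowledge of \(h_{1}\)):

```python
# Flexible Fallibilism in Camping Trip: infallible reliance on l1..l100
# (weight 1), fallible reliance on h1 (weight .99, an assumed epistemic
# probability that Ronda knows h1).
mu_l, mu_h = 1.0, 0.99

EU_C1 = 100 * (mu_l * 15 + (1 - mu_l) * (-20)) + mu_h * 0 + (1 - mu_h) * (-5000)
EU_C2 = 100 * (mu_l * 10 + (1 - mu_l) * (-5)) + 0  # h1 contributes 0 in C2

print(round(EU_C1), round(EU_C2))  # 1450 1000 -> C1 is rational, as intuited
```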

How do these two factors interact? For high stakes decisions that concern only two ways the world might be—such as Jellybean—the stake proportionality factor simply collapses into the total stakes factor. When it comes to high stakes decisions that involve more than two ways the world might be—such as in Camping Trip—the stake proportionality factor acts as a corrective to the total stakes factor. The total stakes factor can be understood as reflecting our general fallibilist habits that initially manifest if stakes are high, whereas the stake proportionality factor fine-adjusts the manifestation of these habits by retracting the fallibilist stance towards those propositions that do not contribute significantly to what is at stake.

It is worth pointing out that the picture just presented suggests that pragmatic factors can affect the weights needed for Jeffrey-conditionalizing on our knowledge. The result is a mild, non-threatening form of pragmatic encroachment when it comes to the epistemic input that contributes to determining what is rational for us to do.Footnote 34 Importantly, the epistemic domain remains entirely unaffected. On my view, pragmatic considerations affect only the relation between knowledge and action, but not knowledge itself.

We thus arrive at a full characterization of Flexible Fallibilism, a knowledge-based view of practical reasoning and decision-making. It encompasses two complementary components:

  • KPRFF If one knows that p, one may (1) infallibly rely on p in low-stakes practical reasoning and (2) fallibly rely on p in high-stakes practical reasoning.

  • KDTFF An action A is rational for a subject S to perform iff A maximizes expected utility, Jeffrey-conditional on S’s knowledge.

KPRFF is a precisification of KPR, telling us how we should rely on what we know, given what’s at stake. KDTFF is the supplementary knowledge-based decision theory that determines what is rational for us to do, given what we may rely on and how we may rely on it. Taken together, we get a view about the simplification of practical reasoning: how we may rely on our knowledge (mirroring the impact our knowledge has on determining what is rational for us to do) corresponds to how much is at stake, as specified by the total stakes factor and the stake proportionality factor, yielding various stages of simplification.

In the next section, I will compare Flexible Fallibilism to another proposal, inspired by views recently advanced by Jackson (2019a) and Moss (2013, 2018). I will show that while this further proposal gives us the right results in high-stakes cases and tells us when to simplify our practical reasoning, Flexible Fallibilism has various advantages over it and is thus preferable.

8 Can we do (even) better with credal knowledge?

In the final section, I will briefly compare Flexible Fallibilism to a further proposal for defending the view that knowledge is sufficient for permissible reliance in practical reasoning. This further proposal combines two claims from Jackson (2019a) and Moss (2013, 2018). Jackson’s (2019a) claim is that when stakes are high, we do not rely on our beliefs, but on our credences. This, on her view, explains our intuitions in high-stakes cases and also yields a view of when we simplify our reasoning. Moss’s (2013, 2018) claim is that credences can constitute knowledge.

The backdrop of the first component of the proposal is a dualist metaphysics of beliefs and credences (‘dualism’, for short). One main motivating idea for dualism is that beliefs and credences seem to play different roles in practical reasoning (Jackson, 2019a, pp. 516–517).Footnote 35 By relying on our beliefs, we take the believed propositions for granted and ignore small error possibilities, thereby simplifying our reasoning and reducing our cognitive load.Footnote 36 By contrast, when we rely on our credences, we don’t ignore these small possibilities any more, resulting in higher accuracy but increased cognitive costs. Jackson now argues that when stakes are low, we want to be cognitively efficient and thus simplify our reasoning by relying on our beliefs. However, if stakes are high, such as in Jellybean, we should expend some cognitive resources and rely on our credences instead, giving some weight to the possibility that Caesar being born in 100 BC might be false after all (Jackson, 2019a, p. 521).

The second component is due to Moss (2013, 2018), who has recently argued that our credences can have epistemic qualities similar to those of our beliefs, and thus can likewise constitute knowledge (I will call this kind of knowledge ‘credal knowledge’, in contrast to ordinary ‘outright’ knowledge).

A version of KPR that combines these two components goes as follows:

  • KPRDI  (1) One may rely on a proposition p in low-stakes practical reasoning if one knows that p;  (2) One may rely on one’s credence in p in high-stakes practical reasoning if one’s credence in p constitutes credal knowledge.

Since the view says that we may infallibly rely on the two kinds of knowledge, depending on stakes, I will call it Dual Infallibilism.

Again, a corresponding knowledge-based decision theory can be developed. According to it, different kinds of knowledge are relevant for determining expected utility, depending on what is at stake: if stakes are low, then an action is rational for one to perform if it maximizes expected utility, conditional on the totality of what one knows. However, if stakes are high, we use the credences that qualify as credal knowledge as the input to our expected utility calculation. Let’s call a credence function that qualifies as knowledge \(C_{K}\), and call the result of an expected utility calculation that uses \(C_{K}\) as its epistemic input one’s \(C_{K}\)-expected utility. If stakes are high, Dual Infallibilism tells us that an action is rational for one to perform if it maximizes \(C_{K}\)-expected utility.
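Schematically, and with the notation again being mine rather than an official part of the proposal, letting s range over the relevant states:

\[ EU_{C_{K}}(A) \;=\; \sum_{s} C_{K}(s) \cdot U(A, s) \]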

It is easy to see that Dual Infallibilism gives us the right results in Jellybean. Let’s assume that you have a .9 credence in p (the proposition that Caesar was born in 100 BC), corresponding to the description that you are confident, though not certain, that p. You are well acquainted with Roman history, so this credence plausibly constitutes credal knowledge (i.e., \(C_{K}(p)=.9\)). Since stakes are high in Jellybean, Dual Infallibilism tells us that you may infallibly rely on your credal knowledge about p. Accordingly, to determine what is rational for you to do, we need to calculate your \(C_{K}\)-expected utilities. Since \(C_{K}({\sim }p)=.1\), the possibility that p is false, and the severely negative outcome associated with it, receive non-zero probabilistic weight; because the shock is so bad, even this small weight suffices for the expected disutility of answering to outweigh the prospect of a jelly bean. Hence, not answering maximizes \(C_{K}\)-expected utility, which is the intuitively correct prediction.Footnote 37
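For concreteness, suppose that winning a jelly bean has utility 1, receiving the severe shock has utility \(-100\), and not answering has utility 0 (these values are chosen purely for illustration; the case description fixes only their rough ordering). Then:

\[ EU_{C_{K}}(\text{answer}) = .9 \cdot 1 + .1 \cdot (-100) = -9.1 \;<\; 0 = EU_{C_{K}}(\text{don't answer}) \]

The same verdict results for any assignment on which the shock is sufficiently bad relative to the jelly bean, i.e., whenever \(U(\text{shock}) < -9 \cdot U(\text{jelly bean})\).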

A further selling point of Dual Infallibilism is that it comes with a theory of when to simplify practical reasoning. In these respects, Dual Infallibilism is on a par with Flexible Fallibilism. However, it faces various problems. As I will argue in what follows, Dual Infallibilism is both less flexible and epistemically more demanding than Flexible Fallibilism. While these problems are not fatal, they suggest that Flexible Fallibilism is the preferable choice.

The first problem concerns the account of simplification provided by Dual Infallibilism, which recognizes only two levels of stakes: when stakes are low, it tells us to simplify by relying on our outright knowledge; when stakes are high, it tells us to rely on our credal knowledge. But what about intermediate stakes, where we may plausibly simplify a bit? The worry is that Dual Infallibilism does not allow for continuous simplification and thus provides no plausible treatment of intermediate stakes. By contrast, Flexible Fallibilism allows for continuous simplification: depending on what is at stake, the weights can be set so as to yield the appropriate degree of simplification.Footnote 38
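One natural way to picture this, though it is my rendering rather than an official component of the view: the weight on the error possibility can be set anywhere in the interval

\[ \mu_{{\sim} p} \in \left[\, 0, \; Pr({\sim} p) \,\right], \]

where \(Pr\) is the relevant epistemic probability. Setting \(\mu_{{\sim} p} = 0\) yields full simplification (infallible reliance), setting \(\mu_{{\sim} p} = Pr({\sim} p)\) yields no simplification at all (fully fallible reliance), and intermediate values yield intermediate degrees of simplification, matching intermediate stakes.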

As an anonymous referee has noted, the Dual Infallibilist could argue that one’s credences could likewise be subject to continuous simplification. However, those committed to a dualist metaphysics of belief should resist this move. After all, one motivating thought for the dualist picture is that credences and beliefs play different roles in our cognition. If dualists were to allow the continuous simplification of credences, perhaps sometimes even treating as certain propositions in which one has a credence of just above .5, the line between the functional roles of credences and beliefs would be significantly blurred. This, however, would undermine a key motivation for adopting the dualist picture in the first place (Jackson, 2019b). Furthermore, there are plausibly principled barriers to credences being the right kind of attitude for simplification (Palmira, 2023).

The second problem concerns the epistemic demandingness of Dual Infallibilism. To cover both low- and high-stakes decision-making, both outright and credal knowledge have to be available. However, as I will show below, this availability is not trivial: there are cases in which one has outright knowledge but lacks the corresponding credal knowledge. Dual Infallibilism thus either makes it harder to rely on one’s knowledge or makes the wrong predictions about high-stakes decision-making.

Consider the following case to illustrate this worry.

  • Gettierized Jellybean Let everything be as above in Jellybean. Suppose that in addition to the original researcher (let’s call her “Hi”), a second researcher (let’s call him “Lo”) comes in and also asks you questions about Roman history. When it comes to Lo’s line of questioning, the conditions are slightly different: you win a jelly bean if you answer correctly, but you receive a barely noticeable electric shock if you answer incorrectly. While sipping on a cup of coffee, you contemplate what is rational for you to do. Given that you think that Dual Infallibilism is true, you decide to rely on your credal knowledge to determine whether it is rational to answer Hi, and to rely on your outright knowledge to determine whether it is rational to answer Lo. Since you take yourself to have confident, but not certain, knowledge that p, you infer that your corresponding high confidence in p constitutes credal knowledge. However, unbeknownst to you, Hi has mixed a substance into your coffee that affects this kind of introspective inference: in most cases, inferring your confidence from your knowledge has the side effect of slightly lowering your confidence. Yet, you have been lucky: although your confidence could easily have been lowered by the inference, you are in one of the rare cases where this doesn’t happen.

Does your credence constitute credal knowledge? No: it could easily have failed to correspond to how likely p is.Footnote 39 Hence, your credence is Gettierized and thus fails to constitute credal knowledge. However, you still know outright that Caesar was born in 100 BC, since the fact that your confidence could easily have been slightly lower does not affect your outright knowledge. As Gettierized Jellybean shows, then, knowing that p does not guarantee credal knowledge about p.

Since you lack credal knowledge about p, how should you determine what is rational for you to do, according to Dual Infallibilism? Naturally, you should consider what you do know; the next best fallback position is thus your outright knowledge. However, if you (infallibly) rely on your outright knowledge that p and conditionalize on it to determine what is rational for you to do, we again get the unintuitive result that it is rational to answer Hi’s question. As an anonymous referee has pointed out, another response would be to simply acknowledge that there might be unfortunate cases in which one lacks the needed credal knowledge. Of course, such cases can arise for Flexible Fallibilism, too, e.g., when one’s outright beliefs fail to constitute knowledge because they are false or Gettierized. However, Flexible Fallibilism is less susceptible to scenarios of this kind, given that it involves only outright knowledge, not outright and credal knowledge.

By contrast, Flexible Fallibilism handles Gettierized Jellybean well. Since you know that p, you rely on p to determine both whether to answer Hi and whether to answer Lo. When determining whether it is rational to answer Lo, you infallibly rely on what you know and set your weights accordingly (\(\mu _{p}=1; \mu _{{\sim } p}=0\)). When determining whether it is rational to answer Hi, you fallibly rely on what you know and set your weights more carefully, say, by aligning them with the epistemic probability of p (\(\mu _{p}=.9; \mu _{{\sim } p}=.1\)). For both decisions, knowledge that p is sufficient; no credal knowledge is needed.
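Reusing the illustrative utilities from above (jelly bean: 1; severe shock: \(-100\); not answering: 0), and supposing, again purely for illustration, that Lo’s barely noticeable shock has utility \(-2\):

\[ EU(\text{answer Lo}) = 1 \cdot 1 + 0 \cdot (-2) = 1 > 0, \qquad EU(\text{answer Hi}) = .9 \cdot 1 + .1 \cdot (-100) = -9.1 < 0. \]

So it is rational to answer Lo but not Hi, and outright knowledge that p sufficed for both verdicts.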

So, the second problem for Dual Infallibilism is that it is more demanding: more things have to go epistemically well in order for you to be able to rely on knowledge in both high- and low-stakes decision-making. Furthermore, if things don’t go well, as in Gettierized Jellybean, the most plausible fallback position fails to vindicate our fallibilist intuitions.Footnote 40

While Dual Infallibilism and Flexible Fallibilism are on a par when it comes to getting the verdict right about cases like Jellybean, I have argued that Flexible Fallibilism has two advantages over Dual Infallibilism and is thus preferable: first, its theory of simplification accommodates continuous differences in stakes; second, it is less demanding when it comes to the availability of one’s knowledge.

Finally, zooming out, one might wonder how Flexible Fallibilism compares to a Bayesian view, according to which our (rational) credences, not our knowledge, play the central epistemic role in decision theory. While some have argued that knowledge-based decision theory has advantages over Bayesian decision theory (see, e.g., Greco, 2013), others have been skeptical (see, e.g., Schiffer, 2007, and Fassio & Gao, 2021). I think this is a worthwhile topic for future research (Heil, MS), but it is beyond the scope of this paper. Here, I have argued for the conditional claim that if knowledge plays a role in our practical reasoning and decision-making, then Flexible Fallibilism is the most promising way to capture that role.

9 Conclusion

In this paper, I defended the view that knowledge is sufficient for permissible reliance against the challenge from high stakes. After pointing out the need for a knowledge-based decision theory, I argued that extant proposals face the indiscriminate security problem. I then explored two novel proposals, Flexible Fallibilism and Dual Infallibilism, and argued that we should opt for the former: Flexible Fallibilism vindicates the platitude that we may rely on what we know, but reminds us to be attentive to our fallibility when we do.Footnote 41