1 Introduction

Much work in the philosophy of action in the last few decades has focused on the elucidation and justification of a series of purported norms of practical rationality that concern the presence or absence of intention in light of belief, and that demand a kind of structural coherence in the psychology of an agent. Some examples, roughly formulated, include the following.Footnote 1 It is commonly thought that rationality requires of an agent A that:

  • Intention Detachment: A does not [intend that (p if q), believe that q, believe that (were she not to intend that p, then because of that not p), and not intend that p].

  • Intention-Belief Consistency: A does not [intend that p and believe that not p].

  • Intention Consistency: A does not [intend that p, intend that q, and believe that (p and q are inconsistent—in the sense that, were it to be the case that p, then because of that not q, or vice versa)].Footnote 2

  • Means-End Coherence: A does not [intend that p, believe that (intending that q is a means implied by p—in the sense that, were she not to intend that q, then because of that not p), and not intend that q].Footnote 3

Interest in these norms has been fueled in part by a move away from a simple belief-desire model of human agency and its focus on the maximization of expected utility as the sole principle of practical rationality, towards a more complex model that incorporates intentions as genuine attitudes, characterized by their own distinctive functional features and rational demands.

Let me call purported norms of practical rationality that demand structural coherence as such, independently of any other concern, “requirements of coherence,” and proponents of them “coherentists” about practical rationality. Moreover, let me call the specific set of requirements just mentioned requirements of “Intention-Belief coherence” (IB-coherence), and proponents of them “Intention-Belief coherentists” (IB-coherentists).Footnote 4

The coherentist project has not gone unchallenged. Different theorists have argued that it is a mistake to think that rationality requires structural coherence as such. They have different reasons for thinking this. For example, some people believe that practical rationality consists solely of maximizing expected utility.Footnote 5 Others think that it consists solely of responding correctly to the reasons you have, or, alternatively, to the reasons you believe yourself to have, or that are somehow made available by your evidence.Footnote 6 Because of this, they believe that whether or not an agent is coherent is, at best, only indirectly related to the question whether she is rational. What really matters, rationally speaking, is maximizing expected utility, or responding correctly to (believed/available) reasons. I will call proponents of this idea “myth theorists” about coherence requirements of practical rationality.

Here, I will present a series of examples that show that, indeed, the requirements mentioned above are not genuine requirements of rationality.

The reason is simple: the listed requirements concern the presence or absence of intention in light of all-out belief. Rational agents like us, however, do not, and in fact should not, always form or revise (or, as I will also put it, ‘regulate’) their intentions in light of what they all-out believe. When such agents do not regulate their intentions on the basis of what they all-out believe, then breach of these requirements need not imply irrationality.

Here, I assume that a requirement is a genuine requirement of rationality only if, necessarily, breach of it implies irrationality.Footnote 7 That is, I assume that genuine requirements of rationality impose a kind of “strict liability” with respect to rationality. Following Broome (2013), let me call this property “rational strict liability”:

  • Rational strict liability: requirement R imposes rational strict liability iff, necessarily, if R requires of A that p, and it is not the case that p, then A is irrational.

In what follows, I will present a series of examples where agents are in breach of each of the listed requirements and yet are, intuitively, not irrational. Under the assumption that genuine requirements of rationality impose rational strict liability, this would show that these are not genuine requirements of rationality.Footnote 8

This would make it seem like I am arguing in favor of a myth theory. But, in fact, I think coherentists should embrace this conclusion. This doesn’t mean coherentists should become myth theorists. It simply means that they need a more nuanced picture of what practical rationality requires of an agent when her intentions are not regulated in light of her all-out beliefs.

I will proceed as follows. In the next section, I will mention the assumptions about the nature of belief on which I will be relying. In Sect. 3, I present counterexamples to each of the aforementioned requirements. In Sect. 4, I explain what lessons I think we should draw from these cases. In Sect. 5, I consider the special case of another requirement of coherence that also concerns the presence of intention in light of belief, but that seems to be surprisingly immune to the problem that plagues all the other requirements considered. This is Enkrasia, which demands (roughly) that you intend to do what you believe you ought to do. I will explain why I think Enkrasia is in this sense immune.

2 Belief and certainty

The cases I will present depend on the following relatively uncontroversial ideas, which I will assume for the purposes of this paper. The first idea is that it doesn’t make rational sense to be willing to bet on the truth of a proposition p, at any odds, if one is not certain that p.Footnote 9 This is because, if one is uncertain that p, then there will be bets where the expected utility of taking them would be outweighed by the expected utility of rejecting them. Suppose I am .9 confident that p. Then it would be irrational for me to take any bet with odds worse than 1:9 on the truth of p. I take this idea to be uncontroversial.
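To make the arithmetic explicit (a minimal sketch, with the payoff and stake figures stipulated purely for illustration): if my credence in p is c, then a bet that pays w if p and costs s if not p has expected utility

$$\mathrm{EU} = c \cdot w - (1 - c) \cdot s,$$

which is non-negative only when s/w ≤ c/(1 − c). With c = .9 this threshold is 9, so the break-even odds are 1:9 (.9 × 1 − .1 × 9 = 0), and any worse odds, say 1:10, yield a negative expected utility (.9 × 1 − .1 × 10 = −.1). Only certainty, c = 1, makes every odds ratio acceptable.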

The second idea is that belief—by which I mean “all-out belief”—does not imply certainty. That is, one can believe that p without being certain that p. This idea is not completely uncontroversial.Footnote 10 But it does represent the consensus and is, for familiar reasons, quite burdensome to reject.Footnote 11 Doing so implies denying that people believe most of the things we take them to believe, since people are not certain of most of the things we take them to believe. It also implies denying that people rationally believe most of what we take them to rationally believe, since we think that epistemic justification for certainty that p would require evidence that rules out any possibility that not p, whereas we allow that one could be epistemically justified in believing that p even when one’s evidence does not completely rule out the possibility that not p.

I think these reasons are enough to reject the idea that belief implies certainty. If, however, one accepts certain familiar Bayesian principles, then there would be two further (though obviously related) reasons to reject this view. First, the view would imply that one should be willing to accept any bet on the truth of one’s beliefs, regardless of the odds. After all, to be certain that p is to disregard any possibility that not p and, consequently, any undesirable outcome on the contingency that not p. So, if one believes that p, one should be willing to stake everything one has, however valuable, against any possible payoff, however insignificant, on the truth of p. Second, it would imply that it would never be rationally permissible to revise one’s beliefs: that, once one comes to believe that p, there is no piece of evidence one could ever come to be aware of, no matter the kind or amount, that could rationally allow one to change one’s mind on that issue. This is because, given a credence of 1 that p, there is no way to update by conditionalization to a credence of less than 1 on that proposition.Footnote 12 For these reasons, I assume from now on that belief does not imply certainty.Footnote 13
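The second of these reasons can be made fully explicit with a line of standard probability theory: if cr(p) = 1, then cr(p ∧ E) = cr(E) for any evidence proposition E with cr(E) > 0, so updating by conditionalization gives

$$\mathrm{cr}(p \mid E) = \frac{\mathrm{cr}(p \wedge E)}{\mathrm{cr}(E)} = \frac{\mathrm{cr}(E)}{\mathrm{cr}(E)} = 1.$$

No evidence one could conditionalize on, however strong, would bring the credence below 1.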

Together, these two ideas strongly suggest that there can be cases where it would make no rational sense to deliberate, and so to regulate one’s intentions, on the basis of one’s beliefs.Footnote 14 This is because deliberating on the basis of a belief (by which I mean: evaluating the relevant alternatives conditional on the truth of the belief) is, in a natural sense, a way of betting on its truth. But if belief does not imply certainty, and sufficiently unfavorable odds can make it irrational to bet on the truth of a proposition if one is not certain of its truth, then this suggests that there can be cases where it would be irrational to deliberate—and so to regulate one’s intentions—on the basis of what one believes.Footnote 15

We are familiar with such cases. Oftentimes what we do and rationally ought to do is deliberate on the basis of the probabilities we assign to the relevant contingencies. Other times, what we do and ought to do is deliberate on the basis of propositions we, for whatever reason, accept in a certain context, regardless of whether we believe them or not.Footnote 16

Consider: I believe I am blood-type O-positive. I am not completely certain, but I clearly remember different doctors telling me so. Now contrast two deliberating contexts in which I am asked about my blood type, where I have the option of either simply answering that I am O-positive based on my belief, or of taking a blood test, offered by the person asking the question, that would provide the sure answer but cost me $10. In one context, the question is asked by a student for the purposes of gathering statistical information. In the other, the question is asked by a paramedic who is about to give me a blood transfusion. Since nothing too bad would happen if I say to the student that I am O-positive in the contingency that I am not, whereas something terribly bad would happen if I say to the paramedic that I am O-positive in case I am not, the expected utility of deliberating on the basis of my belief is greater than that of deliberating on the basis of the relevant probabilities in the survey scenario, but lower in the transfusion scenario. If this is so, then, plausibly, in the one case I would and should deliberate on the basis of my belief and simply say that I am O-positive without taking the test, whereas in the other I would and should deliberate on the basis of the probabilities I assign to the relevant contingencies and take the test instead.

Obviously, conditional on my being O-positive, answering without taking the test is in both scenarios the dominant strategy, because it costs me $10 to take the test and nothing not to take it. But it would be foolish of me, in the transfusion case, to evaluate the relevant alternatives conditional on the proposition that I am O-positive, given that I am not sure that I am and that so much is at stake. This doesn't show that I don't really believe I am O-positive. It just shows that there are cases where what is at stake might make it too risky for me to deliberate on the basis of propositions of which I am not sure (or, in any case, not sure enough).
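The point can be rendered numerically (a minimal sketch; every figure is stipulated purely for illustration). Suppose my credence that I am O-positive is .95, taking the test costs the equivalent of 10 units of utility, answering wrongly to the student costs 1 unit, and answering wrongly to the paramedic costs 10,000 units. Then:

$$\begin{aligned} \text{Survey:} \quad & \mathrm{EU}(\text{answer}) = .95(0) + .05(-1) = -.05 > \mathrm{EU}(\text{test}) = -10;\\ \text{Transfusion:} \quad & \mathrm{EU}(\text{answer}) = .95(0) + .05(-10{,}000) = -500 < \mathrm{EU}(\text{test}) = -10. \end{aligned}$$

Answering from my belief maximizes expected utility in the survey context, and taking the test maximizes it in the transfusion context, even though, conditional on the proposition that I am O-positive, answering dominates in both (0 > −10). The dominance reasoning goes wrong precisely because it conditions away the .05 possibility that carries all the risk.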

Naturally, the question of whether my unwillingness in such a scenario to deliberate on the basis of that proposition shows that I do not really believe it depends on what we say all-out belief is, and there are different possibilities here. As far as I can tell, though, there is no plausible theory of belief according to which such unwillingness implies lack of belief. Since the issue is important for my purposes, let me briefly consider some of those possibilities:

One possibility is that all-out belief just is certainty.Footnote 17 Since I am assuming that belief does not even imply certainty, I shall ignore this view.

Another possibility is that belief is a contextually invariable degree of confidence. Suppose we say the degree is anything above .9. Then we could stipulate that I am .91 confident that I am O-positive. Still, given the odds, I wouldn’t and shouldn’t deliberate under the assumption that I am, even though, by stipulation, I do believe I am.

Some people have suggested instead that belief is a contextually variable degree of confidence, where the degree corresponds, precisely, to how confident you would need to be of a proposition to be willing to deliberate, in a given context, under the assumption of its truth.Footnote 18 By stipulation, then, in the transfusion case I do not believe that I am O-positive, since I am not confident enough to deliberate under this assumption.

Now, it is not my aim here to argue against this view in any detail, but let me just briefly say why I think we should reject it. The most basic reason is that it conflicts with a plausible and firmly established view of the nature of belief and its overall role in rational agency. One way to formulate the issue is to point to the familiar idea that belief somehow ‘aims’ at truth.Footnote 19 Needless to say, it is difficult to specify precisely what this amounts to, and I won’t try to do so here, but it involves ideas roughly along the lines that belief is, at the functional level, ‘regulated’—that is, ‘formed, revised and extinguished’, to use Velleman’s (2000) phrase—in response to evidence of truth, and that it is, at the normative level, assessable in relation to truth, both objectively (so that a belief is correct iff it is true) and subjectively (so that a belief is rational or warranted to the extent that it is appropriately regulated in accordance with standards that would be, somehow, conducive to truth). The view under consideration conflicts with this familiar picture because, under it, belief comes to be regulated and assessable by truth-independent factors, like what the odds are in a given deliberative situation, or even by such outlandish factors as whether you have had breakfast or not, since such factors might affect your willingness to take risks.

This is a radical departure from our ordinary concept of belief, and it has some rather unpalatable implications. Just to illustrate: I presume you believe you have a head. I also presume that you are extremely confident of this. Under the present view, I could deprive you of this belief by offering you a bet at sufficiently unfavorable odds on this proposition. And this regardless of the fact that you would (we can assume) remain as confident and as thoroughly convinced as ever that you do, in fact, have a head, and that your evidence for this has not changed one bit and is as conclusive as ever. I presume you also believe you have hands. If so, given the deliberative questions you face in the situation in which I offer the bet, you believe you have hands but you do not believe you have a head, even though (we can assume) you are equally confident that you have a head as that you have hands; and even though you recognize that the evidence for, and the probability of, the two propositions is exactly the same; and even though—and this point is crucial—the attitudes you have towards these two propositions play exactly the same kind of functional role in your overall psychology (e.g., you are disposed to regulate them in response to the same kind of evidence, to deliberate on their basis relative to exactly the same odds, stakes, deliberative questions, etc.). I find this incredible. Of course, implications like these do not prove that this view is false. But they do show it to be rather costly, and far enough removed from our ordinary concept of a belief that I think we would do well to avoid it unless the reasons for it are compelling.Footnote 20 As far as I can tell, however, there aren’t very good reasons to adopt it.Footnote 21 Because of this, I will from now on ignore it.

Another possibility, or family of possibilities, is to say that all-out belief is a kind of intentional disposition. Here are some options:

One is that a belief that p is a non-defeasible disposition to deliberate on the basis of p. Since this disposition is non-defeasible, it would survive any odds whatsoever, and so (given the first idea above) this view entails that all-out belief implies certainty, and can be ignored for the same reasons as before.

One might say instead that belief that p is a defeasible disposition to deliberate on the basis of p. This view avoids the previous problem. However, under most plausible ways of specifying what the defeating factors might be, the stakes or odds of a given deliberative context will most certainly figure among them (as is actually the case among the views of this kind on offer).Footnote 22 But if this is so, then, once again, I might be defeasibly disposed to deliberate under the assumption that I am O-positive, and yet be unwilling to deliberate on the basis of this belief in the transfusion case, because my disposition is defeated by the odds in that context.

Another option is to say that all-out belief is a linguistic disposition to assert the proposition believed. Mark Kaplan gives us a specific version of this idea. According to him: “You count as believing P just if, were your sole aim to assert the truth (as it pertains to P), and your only options were to assert that P, assert that ~ P or make neither assertion, you would prefer to assert that P.” (Kaplan 1996, p.109) Well, we could stipulate that this is true in the transfusion case. If my sole aim were to assert the truth, and my only options were to assert that I am O-positive, that I am not, or to say nothing at all, I would prefer to assert that I am O-positive. Still, I might rationally refuse to rely on this belief and choose to take the blood test instead.

No doubt there are other views concerning the nature of all-out belief. But I know of no plausible view on which my unwillingness to answer the paramedic without taking the test implies that I do not believe I am O-positive. So I will assume from now on that it does not.

From these ideas, however, it follows—I think in a rather straightforward manner—that the requirements mentioned above do not impose rational strict liability and so are not genuine requirements of rationality.

The cases I go on to present are meant to illustrate this point. But the reason is rather simple and follows straightforwardly from what has been said so far: it is plausible to think that one dimension along which an agent’s practical rationality can be assessed concerns the way in which her intentions relate to the space of possibilities that she regards as open for the purposes of deciding what to do, or, more generally stated, for the purposes of adopting, and maintaining, practical ends or goals (“for practical purposes,” as I will put it from now on). I think this is an idea that is driving the coherentists, and I think it is a plausible idea. However, behind all the requirements stated above is the assumption that such a space is always carved up by the agent’s beliefs, so that if an agent believes that p, then the possibility that not p is, for practical purposes, simply disregarded as an open possibility. But this is not true. For sure, beliefs oftentimes determine the space of possibilities that an agent regards as open, and so takes seriously, in practical deliberation. But sometimes, depending on what the odds are in a given deliberative context, they do not.Footnote 23 When they do not, it is implausible to think that they provide the framework in relation to which her practical rationality is to be assessed. The following cases try to illustrate this simple point.

3 Counter-examples to requirements of IB-coherence

3.1 Intention Detachment

According to Intention Detachment, rationality requires, roughly, that you do not intend to do something in case some condition obtains, believe that such condition obtains, believe that you will not do that thing unless you intend to do it, and not intend to do it.

Now consider the following case. I believe my dog has rabies. I am not yet sure. But I do believe he has it. I have already sent blood samples to the vet, and I am waiting to get official confirmation. Because of the risks of having a dog with rabies, and the suffering the dog himself would experience, I intend to kill him if he has rabies. Obviously, I love my dog, and I do not want to kill him. Given what is at stake, however, it seems plausible that I will not, and in fact should not, deliberate on the basis of my belief that he has rabies. I will, and should, rely on the probabilities I assign to his having rabies or not. Because, in my deliberation, I am relying not on my all-out belief but on my credences, and I still think there is a chance that my dog does not have rabies, I do not yet intend to kill him. But I do intend to kill him if he has rabies. Schematically, I intend that (p if q) and I believe that q (and also that, were I not to intend that p, then because of that not p; I will omit mentioning this last belief from now on). Yet, I don’t intend that p. That I don’t yet unconditionally intend to kill him makes a huge difference in my behaviour. I am not yet about to kill him, and I will not be until I am sure (or in any case sure enough) that he does have rabies.

So I fail to detach the consequent-intention from my conditional intention and my belief that the antecedent obtains. If the requirement of Intention Detachment imposes rational strict liability, I am guilty of practical irrationality. But, intuitively, I am guilty of no such thing. I am doing the most sensible thing I could do, given the situation. So the requirement of Intention Detachment does not impose rational strict liability and is not a genuine requirement of rationality.

One may want to object to this case by denying that the agent in question has the relevant belief or the relevant intention, so let me consider these objections in turn.

First, some people react to this example by claiming that it is implausible to think that, in a case like this one, I really believe that my dog has rabies. Instead, they claim that what I really believe is that he most likely has rabies (or something to that effect). Now, I do not want to deny that in such a scenario I may very well have this belief. But believing that my dog most likely has rabies doesn’t prevent me from believing that he has rabies. We still need a reason to doubt that I have this belief. Some people say that the reason is that, in such a case, I believe the dog might not have rabies (or something to that effect), and that, if I believe this, then I don’t really believe he has it. Again, though: the belief that the dog might not have rabies doesn’t preclude the belief that he does. We still need a reason to think I cannot have this belief. Still others may want to claim that the fact that I am not, in that situation, willing to deliberate on the basis of the proposition that the dog has rabies is proof that I do not believe this. I have already explained why I reject this idea. In the end, I see no reason that would allow one to reject the case by simply denying that I could have this belief.

The second way in which one may want to reject this case is to say that it is implausible to think that, in such a scenario, I would really have the intention to kill the dog if he has rabies. Some people feel tempted to say that what I really intend is to kill the dog if (and perhaps only if) I am sure he has it, or maybe if (and only if) he has rabies and I am sure he does. Once again, I do not deny that it may be true of me that I have these intentions. Just as before, though, what I do want to deny is that having any of these intentions would in any way preclude my intending to kill the dog if he has rabies. It seems to me there are two states of affairs I am mainly concerned to avoid: the first consists in my not killing the dog in the contingency that he has rabies, the other in my killing the dog in the contingency that he does not. My being determined to avoid these states of affairs is my intending, on the one hand, to kill the dog if he has rabies, and, on the other, not to kill him if he does not. It is because I have these intentions that I would form the further intentions mentioned before. But those intentions are subsidiary: rather than precluding my intention to kill the dog if he has rabies, they are formed precisely because I intend to kill the dog if he has rabies and not to kill him if he does not. So, again, I see no reason that would allow one to reject the case by simply denying that I could have the relevant intention.

For these reasons, I believe this case shows that the requirement of Intention Detachment does not impose rational strict liability and is not a genuine requirement of rationality.Footnote 24

3.2 Intention-Belief Consistency

Intention-Belief Consistency, roughly, requires that you do not intend to do what you believe you will not do. Some people think that this is in fact impossible.Footnote 25 I will not try to argue in favor of this possibility.Footnote 26 I assume it is possible. The question I want to ask is whether, assuming this is possible, it would necessarily be irrational. My suggestion is that it would not.

Consider: A fisherman, out at sea, hears by radio that a storm has formed and will soon hit him. He believes that the only way he has of making it out alive is reaching the shore before the storm reaches him. He is an experienced seaman, however, and, given the evidence, he correctly forms the belief that he will not be able to reach the shore in time. He is not completely certain of this. He knows there is a slight chance that he will make it back. But he is too experienced not to form the right conclusion in the face of the evidence. And the right conclusion is that the storm will hit him before he reaches the shore. Now, since he knows making it back is the only way he has of surviving, and since he thinks there is a slight chance of making it back, it is plausible to think that the fisherman would, and in fact should, form the intention to make it back.

The fisherman, then, would intend to do what he (justifiably) believes he will not do. If the requirement of Intention-Belief Consistency imposes rational strict liability, he is guilty of practical irrationality. But he is guilty of no such thing. He is doing the most sensible thing he could do, given the situation. So Intention-Belief Consistency does not impose rational strict liability and is not a genuine requirement of rationality.

Some people will balk at this suggestion. They will insist that what makes sense for the fisherman to do in that situation is to try to make it back, and so perhaps, at most, to intend to try to make it back, but not to actually intend to make it back. This, they think, would be irrational. They will agree that it makes sense for the fisherman to aim to make it back, to adopt the goal or end to make it back, and to try as hard as he can to achieve it. But he better not actually intend it, because that—they think—would be irrational.

Just to be clear, then: nobody doubts that it would make perfect rational sense for the fisherman to make it his goal to reach the shore, and to try as hard as he can to do so. In fact, nobody doubts that it would make perfect rational sense for the fisherman to come to be disposed to think and act in the following manner:

  • first, the goal of making it back is a settled, cross-temporally and counter-factually stable objective for him. (So, the agent represents this goal, monitors his progress towards it, and adjusts his behaviour so as to track it. Moreover, absent new reasons to reconsider, he will neither keep deliberating whether to pursue it, nor abandon it; he will stick to it as a settled object of pursuit);

  • second, he is disposed to filter out as possible further objects of pursuit—as possible further ends—any state of affairs he believes to be inconsistent with his goal of making it back. (So, for example, if he believes that, were he to spend some time trying to catch more fish before heading back, then because of that he would not make it back, then he is disposed not to adopt the goal of spending any more time trying to catch more fish);

  • third, he is disposed to figure out, and then to intend to take, any means he believes to be appropriate (implied, most desirable on balance, etc.) for his goal of making it back. (So, for example, if he believes that, were he not to start the motor right away, then because of that he would not make it back, he is disposed to intend to start the motor right away).

Notice, however, that these are, precisely, the dispositions that prominent philosophers of action tell us are characteristic of intention.Footnote 27 In other words, no one would deny that it makes perfect rational sense for the fisherman to come to be disposed towards the goal of making it back just as if he intended to make it back. But if this is so, then it would be obtuse to insist that it doesn’t make rational sense for him to intend to make it back. If it makes rational sense for the fisherman to come to be disposed to think and act, and to actually think and act, exactly as if he intended to make it back, then it makes rational sense for him to intend to make it back.Footnote 28

Put differently: intentions, I assume, are realized by a certain cluster of dispositions. Such a cluster—action theorists tell us—involves dispositions to track the relevant end e and monitor progress towards it; not to deliberate further whether to pursue it; and not to drop it in the absence of reasons to reconsider. It includes a disposition not to adopt ends believed to be inconsistent with e.Footnote 29 And it involves a disposition to figure out and intend to take means believed to be appropriate to e. This dispositional profile realizes an intention that e.Footnote 30 This means that, if it isn’t irrational for the fisherman to come to be so disposed towards the end of making it back, it isn’t irrational for him to intend to make it back, since being disposed in that way realizes an intention to make it back.Footnote 31 It isn’t irrational for the fisherman to be so disposed. So Intention-Belief Consistency does not impose rational strict liability and is not a genuine requirement of rationality.

3.3 Intention Consistency

Intention Consistency requires, roughly, that you do not intend each of two ends you believe to be inconsistent.

Now consider: a doctor believes that substance X is the only way of treating life-threatening condition C, but she believes that a dose of X of, or above, .5 ml would be lethal. A patient comes in with a critical case of C. The doctor believes that if the patient doesn’t receive any treatment, she will die; that she has such an advanced case of C that no dose below .5 ml would be enough to save her; and that there is a very slight chance that a dose at or above .5 ml could save her. Supposing the doctor believes there are no further reasons (ethical, financial, legal, etc.) not to administer such a dose, I think the doctor would, and should, intend to administer it. That is, given the present odds, she would not, and in fact should not, deliberate on the basis of her belief that the dose is lethal. She would, and should, rely instead on the probabilities she assigns to its being lethal or not. And on the basis of the probability she assigns to the patient surviving conditional on receiving that dose (extremely slight) versus the probability she assigns to the patient surviving conditional on not receiving it (none), the reasonable course of action is to (intend to) administer that dose. But then it will be true of her that she intends to save the patient, all-out believes that giving her the dose will kill her, and yet intends to give her the dose.Footnote 32 So she intends each of two ends she believes to be inconsistent. If the requirement of Intention Consistency imposes rational strict liability, then she is guilty of practical irrationality. But she is guilty of no such thing. She is doing the most sensible thing she could do in the situation. So the requirement of Intention Consistency does not impose rational strict liability and is not a genuine requirement of rationality.
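The arithmetic behind the doctor’s deliberation can be sketched as follows (figures stipulated purely for illustration). Suppose her credence that a dose at or above .5 ml would be lethal is .98, so that she assigns a .02 chance of the patient surviving given that dose, against no chance of survival given no treatment or any smaller dose. Valuing the patient’s survival at 1 and her death at 0,

$$\mathrm{EU}(\text{administer}) = .02 \times 1 + .98 \times 0 = .02 > \mathrm{EU}(\text{withhold}) = 0.$$

On her credences, administering the dose is the uniquely reasonable option; deliberating instead from her all-out belief that the dose is lethal could never represent it as a way of saving the patient.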

3.4 Means-End Coherence

Means-End Coherence requires, roughly, that you intend the means you believe to be implied by your ends.

Now consider: the zoo veterinarian intends that a newborn gorilla, Coco, survives. But Coco has a deficient organ. The vet thinks that the only thing that would save him is an organ transplant. But it so happens that the only possible donor is Coco’s sibling, Mocha. The vet believes that taking the organ from Mocha would severely harm her. It wouldn’t kill her, but it would certainly harm her. Since the decision whether to perform the transplant rests solely with the vet, she all-out believes that, were she not to intend that Coco receives the transplant from Mocha, then Coco would not survive. At the same time, she believes there is something else that could potentially save Coco: some kind of new, experimental treatment that promises to repair the relevant organ, but that hasn’t been implemented with much success so far. Still, she thinks there is a very slight chance this could work. Suppose the vet has full authority to decide the issue, and let us put aside any possible ethical concerns. In this situation, I think she can intend that Coco survives and not intend that he receives the organ from Mocha. She might instead intend that he receives the experimental treatment. So this is a situation where she intends that Coco survives, all-out believes that, were she not to intend that he receives the transplant from Mocha, he would not survive, and yet does not intend that he receives that transplant. So she does not intend the means the intending of which she believes to be implied by her end. If the requirement of Means-End Coherence imposes rational strict liability, she is guilty of practical irrationality. But she is guilty of no such thing. She is doing the most sensible thing she could do, given the situation. So the requirement of Means-End Coherence does not impose rational strict liability and is not a genuine requirement of rationality.

4 What conclusions should we draw?

I think coherentists should accept the previous cases. This does not mean that they should become myth theorists. It simply means that they need a more nuanced picture of when it is that all-out belief is relevant to determining an agent’s practical rationality. What the previous cases show is the rather intuitive idea that the requirements of coherence that concern the presence or absence of intention in light of all-out belief are relevant to determining an agent’s practical rationality only in cases where the agent regulates her intentions in light of the relevant all-out beliefs. They lose relevance when the agent regulates her intentions in light of the probabilities she assigns to the relevant contingencies, or in light of propositions she is, for whatever reason, accepting in a given context. This fact needs to be recognized by coherentists and reflected in their formulation of the relevant requirements.

Now, it may seem that coherentists should simply include in such formulations the condition that the agent deliberates in light of her relevant all-out beliefs. To take the example of Intention Consistency, it may seem that coherentists should simply say that rationality requires of an agent A that:

  • Intention Consistency Revised: [if A believes that, and deliberates under the assumption that, (p and q are inconsistent), then A does not (intend that p and intend that q)].

Once we formulate the requirement in this way, however, I think it becomes clear that including the condition that the agent believes the relevant proposition is misleading. After all, as I’ve argued, she could be rational while intending that p and intending that q despite believing that (p and q are inconsistent) if this proposition did not, for whatever reason, play the role of carving up the space of possibilities that she regards as open for practical purposes. And, although I haven’t argued for this idea here, it isn’t implausible to think that she would be irrational (holding the intentions fixed) despite not believing this if that proposition did, nevertheless, and for whatever reason, play such a role. The crucial question, then, is whether that proposition plays the role of carving up the space of possibilities that she regards as open in her deliberation, not whether she believes it or not.

Perhaps, then, coherentists should simply say something roughly along the lines that rationality requires of an agent A that:

  • Intention Consistency Revised 2: [if A deliberates under the assumption that (p and q are inconsistent), then A does not (intend that p and intend that q)].Footnote 33

Needless to say, this is still preliminary. Much more needs to be spelled out before we arrive at a satisfactory formulation of this requirement.Footnote 34 I will not attempt to do so here. In any case, I hope the suggestion is clear: coherentists should acknowledge the fact that beliefs do not always determine which possibilities we regard as open for practical purposes, and so do not always provide the framework in relation to which our practical rationality is to be assessed.

Naturally, coherentists who accept the cases I presented will still affirm, and myth theorists will still deny, that the appropriately re-formulated requirements are genuine.

Likewise, coherentists who accept these cases may still hold that requirements that are just like those of IB-coherence, except that they are formulated in terms of certainty rather than all-out belief, are genuine requirements of rationality. So, for example, coherentists who accept my case against Intention-Belief Consistency may still perfectly well claim that rationality requires of an agent A that:

  • Intention-Certainty Consistency: A does not [intend that p and be certain that not p].

Nothing I have said in this paper challenges this (or any similarly modified) requirement. Moreover, if certainty always carves up the space of possibilities that an agent regards as open for practical purposes—so that, necessarily, if an agent is certain that p, then she deliberates under the assumption that p—then such requirements would be immune to the worries presented above.

Coherentists have also suggested that rationality requires of an agent A that:

  • Intention Non-Contradiction: A does not [intend that p and intend that not p].

This is also a requirement of coherence, but it does not concern the presence or absence of intention in light of belief, so nothing I have said puts any pressure on it.

There is one more requirement I would like to consider, because, although it also concerns the presence of intention in light of belief, it appears to be, in an interesting sense, immune to the considerations presented so far. This is the requirement of Enkrasia. I turn to this issue now.

5 The case of Enkrasia

It is commonly thought that rationality requires of an agent A that:

  • Enkrasia: A does not [believe that (she ought that p), believe that (not p unless she intends that p), and not intend that p].

Enkrasia is a curious requirement in the landscape of rationality. To some people, it seems to enjoin something somehow different from simple coherence.Footnote 35 Be this as it may, if it is not a requirement of coherence proper, it is somewhere in the neighborhood of coherence, and it also concerns the presence or absence of intention in light of all-out belief. It would seem, then, that the kinds of considerations presented above should apply to it as well. That is, it would seem that, if an agent all-out believes, but is not certain, that she ought to do something (holding fixed her belief that she will not do it unless she intends to do it, which from now on I will assume), then there could be cases where it would not be irrational for her not to intend to do it.

I myself had expected this, but I have come to see that expectation as a mistake. To see why, let me consider a purported counterexample to Enkrasia, presented by Ralph Wedgwood (2013), that is supposed to exploit this very issue. It goes like this: you face two options, A and B, such that you must choose one and only one of them. You have a very high degree of confidence that you ought to do A, but you cannot rule out the possibility that you ought to do B instead. So you all-out believe, but are not sure, that you ought to do A. However, you are certain that, in the contingency that you are mistaken about what you ought to do, doing A would be catastrophic (it would involve, say, the destruction of the world), whereas in the contingency that you are right, doing B would only be slightly worse than doing A, and not really much of a problem. “In this case—Wedgwood says—it seems possible for you to be rational, to have beliefs of this sort, and simultaneously to intend to do not A, but B instead.” (p. 491)

This, however, is not a counterexample to Enkrasia. This is because there are different senses of ‘ought,’ and the ought that figures in the cited belief is not the ought that figures in the requirement. The ought of Enkrasia is not what is sometimes called the ‘objective’ ought of full information. It is the so-called ‘subjective’, ‘practical’, or ‘deliberative’ ought. Since we oftentimes deliberate without full information, these oughts can, and oftentimes do, diverge. As different theorists have argued, however, it isn’t necessarily irrational not to intend to do what you believe you ought objectively to do.Footnote 36 This is a point that defenders of the requirement of Enkrasia recognize.Footnote 37 So this cannot be the ought they have in mind when they defend Enkrasia.

As Broome (2013, pp. 24–25) emphasizes, the ought of Enkrasia is identified, precisely, as the ought of which it is true that, if you believe you ought, in that sense, to φ, then you are irrational if you do not intend to φ. This is not the ought that figures in the belief that you ought to A in Wedgwood’s example.

Wedgwood tells us you believe you ought to A. Well, perhaps you believe that you ought, objectively, to A; but given that you believe there are huge risks associated with A-ing in case you are mistaken, and only negligible costs associated with B-ing in case you are not, this objective ought is not the ought that would guide your action if you were rational. Since it is not the action-guiding ought of deliberation, you don’t believe that you ought to A in the sense that figures in Enkrasia, and this is not a counterexample to it. In fact, if you are rational, besides believing that you ought, objectively, to A, you will believe that you ought, in light of the information you do have, to B (after all, if you A you risk destroying the whole world!). This, then, is the ought that, if you are rational, guides your action. It is the practical or deliberative ought. This is the ought of Enkrasia.

Now, there is an interesting question of what exactly accounts for this peculiarity of Enkrasia. I suspect the reason is that—at least as far as I can see—every reason that would count against doing what would be best conditional on the truth of the belief that you ought—in the relevant sense—to do something, works by way of counting as a reason to doubt its truth, and so against holding such a belief in the first place. This is not the case with all the previous requirements. With respect to them, there may be plenty of reasons not to do what would be best conditional on the truth of the relevant belief that do not, in any way, constitute evidence against its truth. Such reasons are normally considerations that point to the risks of ignoring the possibility that the belief might be false, without putting into question its truth. For example, that the fisherman’s life is at stake counts against his ignoring the possibility of making it back, without constituting evidence that his belief that he will not make it back is false. The same is true of all the other requirements.

In the case of Enkrasia, though, considerations that point to the risks of doing what would be best conditional on the truth of the belief that you ought—in the relevant sense—to do something, work—as far as I can see—by way of pointing to the possible bad features or consequences of doing that thing. But such considerations, in turn, constitute evidence against the truth of the belief, because what you ought to do is in part determined by the risks and possible bad features or consequences of doing it. To return to Wedgwood’s example: the consideration that, if you A in the contingency of being mistaken about what you ought to do, you will destroy the whole world, counts as a reason to doubt that you really ought—in the relevant sense—to A. It counts as evidence that you ought, instead, to take the much less risky option of doing B. This feature is unique to Enkrasia, and—as far as I can see—it is why Enkrasia isn’t affected by the considerations presented above.Footnote 38

6 Conclusion

It is natural to think that rationality requires that we somehow regulate our intentions in relation to the space of possibilities that we regard as open for practical purposes. Coherentists are plausibly right to think this. But it is a mistake to assume that such a space is always carved up by the agent’s beliefs. Coherentists have been wrong to assume this. Because of this, (most of) the requirements of intention in light of belief that they have tried to elucidate and justify are not genuine requirements of rationality. Still, it might perfectly well be the case that, for each of those purported requirements, there is a corresponding genuine requirement that is formulated not in terms of what the agent believes, but in terms of the propositions that carve up the space of possibilities that she regards as open for practical purposes. Such requirements would still count as requirements of coherence.

The cases I presented depend on a view about the nature of all-out belief that is supported by what I take to be two extremely plausible theses: The first is that one shouldn’t be willing to bet at any odds on the truth of p if one is not certain that p; the second is that belief does not imply certainty. The view they support is that one shouldn’t be willing to deliberate, in every context and regardless of the odds, on the basis of one’s (relevant) beliefs.

All the cases I presented depend on this view about all-out belief. Now, although I offered some of the reasons why I think this view is extremely plausible, I did not exactly try to give a proper defense of it. So if one is thoroughly convinced that the requirements of intention in light of belief defended by IB-coherentists are genuine, one might perfectly well use my own cases to reject the view of all-out belief on which they depend. As they say, one philosopher’s modus ponens is another philosopher’s modus tollens. So holding these requirements as fixed theoretical points would provide a reason for a radically revisionist view of the nature of belief. I imagine some people will welcome this result. I myself think we should stick to the traditional view of belief, and instead revise our understanding of exactly how, and when, rationality requires that we regulate our intentions in light of our beliefs.