A Puzzle About Desire
The following four assumptions plausibly describe the ideal rational agent. (1) She knows what her beliefs are. (2) She desires to believe only truths. (3) Whenever she desires that P → Q and knows that P, she desires that Q. (4) She does not both desire that P and desire that ~P, for any P. Although the assumptions are plausible, they have an implausible consequence. They imply that the ideal rational agent does not believe and desire contradictory propositions. She neither desires the world to be any different than she thinks it is, nor thinks it is any different than she desires it to be. The problem of preserving our intuitions about desire, without embracing the implausible conclusion, is what I call “the Wishful Thinking Puzzle.” In this paper, I examine how this puzzle arises, and I argue that it is surprisingly difficult to solve. Even the decision theoretic conception of desire is not immune to the puzzle. One approach, the contrastive conception of desire, does avoid the puzzle without being ad hoc, but it remains too inchoate to win our full confidence.
Awareness. If S believes that P, then S knows that S believes that P.
Epistemic Responsibility. For any proposition P that S considers, S desires: that she believes that P only if P is true. Symbolically: D(B(P) → P)
Conative Consequence. If S desires the conditional, P → Q, and she knows that P, then she desires that Q.
Consistency. S does not desire that P and also desire that ~P.
S is “ideal” in two ways. First, she does not suffer the sort of limitations that prevent real agents from satisfying Awareness. Second, she constitutes a normative ideal. When we fall short of the standard set by (1)–(4), we often see that as a failure in need of correction. We aim to emulate S as closely as possible. At least, it is plausible that we do.
There is a serious problem with the model of rationality (1)–(4) provide. Although each principle is plausible, together they have a very implausible consequence. They imply that S never desires that ~P while believing that P. So, if (1)–(4) correctly describe our ideal of rationality, it is irrational to believe a proposition while desiring its contradictory. That consequence is unacceptable. Thus we have a puzzle: How can we reconcile the intuitions behind (1)–(4) with the further intuition that it is not irrational to desire ~P while believing P? I call this the Wishful Thinking Puzzle, and it proves surprisingly difficult to solve.
For example, a likely move in response to the puzzle is to give up the idea of full desire all things considered. Perhaps we could replace it with a conception of desire that is a matter of more and less, rather than all or nothing. As I argue below, this move does not work. The most familiar, gradational conception of desire comes from decision theory, but decision theory faces its own version of the Wishful Thinking Puzzle that is independent of (1)–(4).
Here is the outline of this paper. First, I show how (1)–(4) lead to the unacceptable conclusion that it is irrational to desire that ~P while believing that P. Then I consider the plausibility of (1)–(4) themselves, for perhaps we could evade the puzzle by rejecting one of those principles. None of the most likely objections to the principles is a clear success, though, so I consider a few other approaches to the puzzle: appealing to the notion of “direction of fit,” construing (1)–(4) as “ceteris paribus” norms rather than absolute principles, and replacing the notion of full desire all things considered with the decision theoretic conception of desire. None of those approaches solves the puzzle.
Finally, I consider the prospects for a “contrastive” conception of desire. On that view, to desire a proposition is to prefer the way things would be if it were true to the way they would be if it were false. That conception of desire has the virtue of solving both versions of the Wishful Thinking Puzzle without being ad hoc. However, the contrastive conception of desire is too inchoate to be declared the definitive solution to the puzzle. Ultimately, the right conclusion to draw is just that the Wishful Thinking Puzzle is a genuine and difficult puzzle, and the contrastive conception of desire is a promising response to it.
2 The Wishful Thinking Puzzle
(5) B(P) Assumption, for reductio
(6) D(~P) Assumption, for reductio
(7) D(B(P) → P) Epistemic Responsibility
(8) K(B(P)) 5, Awareness
(9) D(P) 7, 8, Conative Consequence
(10) D(P) & D(~P) 6, 9, Conjunction
(11) ~(D(P) & D(~P)) Consistency
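The argument can also be checked mechanically. Below is a minimal Python sketch in which propositions are strings, the agent's attitudes are sets of propositions, and the principles are hard-coded as closure rules; the function names and string encoding are my own illustration, not part of the paper's formal apparatus.

```python
# Mechanical check of the Wishful Thinking Argument.
# Propositions are strings; 'B(P)' encodes "S believes that P".

def neg(p):
    """Syntactic negation: neg('P') == '~P', neg('~P') == 'P'."""
    return p[1:] if p.startswith('~') else '~' + p

def cond(p, q):
    """The conditional proposition p -> q."""
    return f"({p} -> {q})"

def run_argument(p):
    beliefs = {p}                      # Suppose S believes that P ...
    desires = {neg(p)}                 # ... and desires that ~P (for reductio).

    # Awareness: S knows what her beliefs are.
    knowledge = {f"B({b})" for b in beliefs}

    # Epistemic Responsibility: for each considered P, S desires B(P) -> P.
    desires.add(cond(f"B({p})", p))

    # Conative Consequence: desiring (A -> C) while knowing A yields D(C).
    if cond(f"B({p})", p) in desires and f"B({p})" in knowledge:
        desires.add(p)

    # Consistency check: is any proposition desired along with its negation?
    return {q for q in desires if neg(q) in desires}

print(sorted(run_argument('P')))  # ['P', '~P']: Consistency is violated
```

Running the check confirms that the closure rules force the agent who believes P while desiring ~P into desiring both P and ~P.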
Imagine Sarah, an ideally rational Yankees fan, and suppose for reductio that she believes her team is losing and desires that they not be. By Epistemic Responsibility, she desires that she believe they are losing only if they are losing. By Awareness, she knows she believes they are losing. So, lest her belief that they are losing be false, she desires that the Yankees be losing. But then Sarah would violate Consistency, for she would desire that the Yankees be losing and also desire that they not be losing. Thus Sarah does not both believe they are losing and desire that they not be losing.
How Sarah manages this is anybody’s guess. She might be a Buddhist free from desire or a Pyrrhonist free from belief. She might be a Stoic whose beliefs guide her desires, or a wishful thinker whose desires guide her beliefs. Or maybe she sometimes follows one of those courses and sometimes follows another. However she does it, Sarah does not believe a proposition and desire its contradictory, if she satisfies (1)–(4).
If (1)–(4) characterize the rational ideal, then it is irrational to believe something while desiring its denial. (1)–(4) seem to characterize the rational ideal, but it is not irrational to believe something while desiring its denial. The Wishful Thinking Puzzle is the puzzle of reconciling the intuitions behind (1)–(4) with the rationality of desiring what you believe to be false. It is not an easy puzzle to solve.
3 The Plausibility of the Principles
The most natural response to the Wishful Thinking Puzzle is to object to one of the principles driving the Wishful Thinking Argument. If any of (1)–(4) can be shown to be false, on independent grounds, then we can solve the puzzle by dismissing the Wishful Thinking Argument as depending on a false presupposition. Each principle, though, is plausible in its own right, and it is hard to find independent justification for rejecting any of them. Let us consider the principles one by one, starting with Awareness.
Awareness, it must be granted, is false of actual agents. If the ideal rational agent has unconscious beliefs, then it is false of her as well. All the same, dropping Awareness would not solve the Wishful Thinking Puzzle, for the Wishful Thinking Argument does not require the principle in its full generality. We could run another version of the argument, without Awareness, to show that S never desires that ~P while knowingly believing that P. That conclusion is no more welcome, and no less puzzling, than the conclusion of the original Wishful Thinking Argument.
The principle of Consistency is plausible because we are considering only full desire, all things considered. To desire something fully, all things considered, is to desire it wholeheartedly upon reflection on all relevant considerations. The intuitive plausibility of Consistency stems from the fact that it seems irrational to desire both P and ~P wholeheartedly, in full awareness that they are incompatible with one another. If you are rational and have a certain pro attitude toward P, while having that same pro attitude toward ~P, it seems that the attitude must not be full desire or desire all things considered.
Conative Consequence encapsulates the idea that desiring a conditional is, in part, being disposed to desire its consequent when one knows its antecedent is satisfied. It has precedent in the literature on conditional desires (e.g., Goldstein 1992; Bradley 1999), and it follows from other plausible assumptions. In particular, it is plausible that the ideal rational agent would desire whatever is obviously necessary and sufficient to satisfy one of her desires. If she desires P → Q and knows that P, then Q is obviously necessary and sufficient to satisfy her desire. So, just as Conative Consequence says, she would desire that Q.
Not everyone accepts Conative Consequence. Moore (1994) has argued against the principle. According to Moore, the principle is false because the belief (or knowledge) that P and the desire that P → Q can directly motivate Q-promoting action, without the mediation of an additional desire that Q. This argument is inconclusive. It draws a conclusion about the rational commitments of one who believes that P and desires that P → Q from premises about the causal powers of the belief that P and the desire that P → Q. Even if the belief that P and the desire that P → Q can motivate action directly, it does not follow that one who believes that P and desires that P → Q is not thereby committed to desiring that Q. Moore’s objection does not reveal Conative Consequence to be implausible, and so it does not provide a clear solution to the Wishful Thinking Puzzle.
One might attempt to reject Conative Consequence on the grounds of supposed counterexamples. Suppose I desire that you file for bankruptcy only if you cannot pay your bills, and I know that you have filed for bankruptcy. Does rationality really require me to desire that you not be able to pay your bills? And does this not show that Conative Consequence is implausible?
It does not. Suppose I desire at time t, fully and all things considered, that you file for bankruptcy only if you cannot pay your bills. At some later time, t*, I learn that you are filing for bankruptcy. Rationality does not require me to start desiring that you be unable to pay your bills. Rather, consistently with Conative Consequence, I could abandon my earlier desire that you file for bankruptcy only if you cannot pay your bills. What Conative Consequence requires is only that I not simultaneously: (a) desire that you file for bankruptcy only if you are unable to pay your bills, (b) know that you are filing for bankruptcy, and (c) not desire that you be unable to pay your bills. The principle does not say anything about what adjustments one must make to fulfill that requirement.
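The wide-scope character of this requirement can be made vivid with a trivial sketch: rationality prohibits one combination of attitudes, without dictating which attitude to revise. The function below is my own illustration, with the three conditions (a)–(c) from the bankruptcy case represented as booleans.

```python
# Conative Consequence read as a wide-scope requirement: what is forbidden
# is the combination (a) & (b) & not-(c), not any particular attitude.

def violates_conative_consequence(desires_conditional, knows_antecedent,
                                  desires_consequent):
    """True iff S simultaneously (a) desires the conditional,
    (b) knows its antecedent, and (c') fails to desire its consequent."""
    return desires_conditional and knows_antecedent and not desires_consequent

# At t: I desire the conditional but don't yet know you're filing.
print(violates_conative_consequence(True, False, False))   # False: OK

# At t*, option 1: keep the conditional desire, add the consequent desire.
print(violates_conative_consequence(True, True, True))     # False: OK

# At t*, option 2: abandon the conditional desire instead.
print(violates_conative_consequence(False, True, False))   # False: OK

# The only prohibited combination:
print(violates_conative_consequence(True, True, False))    # True: irrational
```

As the last line shows, only the simultaneous combination of (a), (b), and the absence of (c) is ruled out; either revision at t* satisfies the principle.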
It is also worth mentioning that a person’s unwillingness to desire the consequent of a conditional, in the event that its antecedent is true, is prima facie evidence that she does not desire that conditional fully and all things considered after all. I might desire fully and all things considered that you not file for bankruptcy at all, but that is not the same as desiring that you file for bankruptcy only if you cannot pay your bills. (Assumptions (1) through (4) do not imply that a person’s desires are closed under logical implication. So, they do not require a person who desires ~P also to desire P → Q.) If I would not desire that you be unable to pay your bills, were I to know you had filed for bankruptcy, it is reasonable to conclude that my pro attitude toward your filing only if you cannot pay your bills is something less than full desire all things considered after all.
My point here is not to show that Conative Consequence is definitely true. Rather, my point is to show that it is plausible. It captures a reasonable intuition about the way certain rational desires and knowledge interact. The principle itself has found endorsement in the literature on conditional desires, it withstands Moore’s criticism, and the most obvious putative counterexamples are inconclusive at best. I take that to be sufficient evidence of the claim’s plausibility.
We are left with Epistemic Responsibility. That principle is also plausible, for it is natural to think of truth (or knowledge, which entails truth) as the aim of belief. Bernard Williams famously argued that an attitude not aimed at truth simply could not be the attitude of belief (Williams 1973). A rational person wants her beliefs to be true. I assume ideal rationality requires that to be a full desire, all things considered. Anything less would be either a limitation on one’s rationality or a limitation on one’s commitment to being rational. The ideal rational agent suffers no such limitations. She is fully rational, and she is fully committed to being rational.
Epistemic Responsibility does not say that the ideal rational agent desires to believe all truths. Nor does it say that the ideal rational agent has a standing desire that all her beliefs be true, a desire for infallibility. Its claim is much weaker. It says that the ideal rational agent desires, of each proposition she in fact considers, that she believes it only if it is true. Thus one could satisfy Epistemic Responsibility while acknowledging that there are truths not worth knowing, truths best not known, and possibly even truths we would be better off disbelieving than believing.
The logical weakness of Epistemic Responsibility also makes it compatible with the possibility that ideal rational agents choose among belief-forming policies of varying levels of reliability.1 The ideal rational agent might knowingly choose an error prone policy that promises to deliver significant truths over an infallible policy that will deliver only trivialities. That would not contradict Epistemic Responsibility, because the principle requires only that, when the agent considers a proposition, she desires to believe that proposition only if it is true. Such an attitude is compatible with acknowledging the fallibility of one’s methods for fixing belief.2
The philosophical precedent for Epistemic Responsibility is broad. In addition to Williams’s claim that belief is necessarily aimed at truth, there is Timothy Williamson’s suggestion that knowledge is the norm of belief (2000, pp. 255–256). According to Williamson, a person ought to believe only what she knows. Knowledge entails truth. So, on Williamson’s view, one ought to believe only what is true. The ideal rational agent would care about the norm of belief, and she would desire to conform to it. That is, she would desire to believe only what is true. So, if Williamson is right, the ideal rational agent would satisfy Epistemic Responsibility.
In the second edition of Theory of Knowledge, Roderick Chisholm claims it is our basic intellectual obligation to do our best to ensure that, for any proposition we consider, we believe it if and only if it is true (Chisholm 1977, p. 16). As Chisholm points out, this means that “(1) [the ideal rational agent] should try his best to bring it about that if a proposition is true then he believe it; and (2) he should try to bring it about that if that proposition is false, then he not believe it” (1977, p. 15). To do the latter is to try to bring it about that one believes a proposition only if it is true.3 The ideal rational agent desires to fulfill her basic intellectual obligation. So, if Chisholm is right, she satisfies Epistemic Responsibility.
Ernest Sosa (2001) considers the question what our rational concern for truth might come to. One way of understanding it, he says, is as the desire for safe belief, the desire of each proposition one considers that one would believe it only if it were true. If one would believe P only if P were true, then one does believe P only if P is true. (False beliefs are not safe.) Safety entails B(P) → P, and so it is plausible that one who desires safety would also satisfy Epistemic Responsibility.
As Sosa rightly points out, desiring that you would believe P only if P were true does not entail desiring P, even given that you already believe that P. It does not entail desiring, of each proposition you believe, that that same proposition be true. To get from the desire for safety (or from Epistemic Responsibility) and the belief that P to the desire that P, we need to apply Conative Consequence and Awareness as well.
Several philosophers have recently turned their attention directly to the value of truth (e.g., Lynch 2004; Blackburn 2005; Williams 1973, 2002; Kornblith 1993; Wrenn 2010). Often, they argue that truth has a special role in a rational person’s cognitive life. The rational person adopts epistemic practices aimed at leading her to believe what is true and not what is false. If this widely held view is correct, the rational person deeply desires that her beliefs be true. She satisfies Epistemic Responsibility.
Despite its precedent and plausibility, I have found that others often think Epistemic Responsibility must be to blame for the Wishful Thinking Puzzle. It is thus worth considering the objections they have raised to the principle.
Rational people do desire, in general, to believe only truths, but that is different from desiring-true every proposition one believes. For example, I believe that there is now injustice in the world, but I do not desire it true that there now be injustice in the world. Epistemic Responsibility is incorrect because it says we desire-true every proposition we believe, when it should say we desire to believe only truths.

The objection fails for two reasons.
First, it presupposes that believing something, such as that there is injustice in the world, gives one no reason to desire that thing.4 That is not exactly correct. I believe there is injustice in the world, and I want to have only true beliefs. I could reason as follows: If there were no injustice in the world, I would have a false belief. But I want not to have false beliefs. So, I have a reason to desire that there be injustice in the world, lest I have a false belief.
Of course, I also have reasons to desire that there be no injustice. That is why the Wishful Thinking Puzzle is puzzling. Rationality seems to pull one in incompatible directions—toward desiring that there be injustice (because false beliefs are bad) and also toward desiring that there be no injustice (because injustice is bad).
The second reason the objection fails is this: Epistemic Responsibility simply does not say that S desires-true every proposition S believes. It says S desires-true a bundle of propositions with the form ‘B(P) → P’, where P is a proposition S considers. But D(B(P) → P) does not imply B(P) → D(P). To get from the former to the latter, we need to apply Awareness and Conative Consequence.5 So, the objection does not show that Epistemic Responsibility is wrong. If it shows anything, it shows that the conjunction of (1), (2) and (3) is implausible. We knew that already; that conjunction implies that S violates Consistency if she desires ~P while believing P.
This objection, then, amounts to claiming that we should drop Epistemic Responsibility because it is implicated in the Wishful Thinking Argument. That would be an ad hoc maneuver, not a solution to the puzzle. A solution would show that we have independent reason for rejecting one or more of the principles.6
(12) (∀P)D(B(P) → P)
(13) D(∀P)(B(P) → P)
So, one might object, the Wishful Thinking Argument depends on (12) when (13) is the right expression of our intuitions.
This is a nice move, but it is not entirely successful. Given one more plausible assumption, (13) implies (12). In that case, though, expressing Epistemic Responsibility in the form of (13) does not block the Wishful Thinking Argument after all.
(14) S’s full desires all things considered are closed under implication.
(15) S’s full desires all things considered are closed under universal instantiation.
(16) S’s full desires all things considered are closed under the instantiation of universally quantified conditionals.
To see why (16) is plausible, suppose I have a certain pro attitude A toward the proposition that all my students do well, but I do not have that attitude toward the proposition that Mr. Ornery, whom I despise, does well if he is one of my students. Then, it seems, A is something less than full desire, all things considered. If it were full desire, all things considered, then I should not only have it toward the proposition that all my students do well, but toward the proposition that Mr. Ornery does well if he is one of my students. So, it seems, desiring the universally quantified conditional (‘All my students do well’, in this case) commits me to desiring its instance (‘Mr. Ornery does well if he is one of my students’, in this case).
To get around the Wishful Thinking Argument by replacing Epistemic Responsibility with (13), we must also reject (16), but (16) is also plausible. If we were to reject it, we would need to do so on grounds other than the Wishful Thinking Puzzle, on pain of ad hoc-ery.
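For concreteness, here is how the implication runs, on my reconstruction: (16) licenses moving from a desired universally quantified conditional to a desire for each considered instance.

```latex
% (13): S desires the single quantified proposition
\mathrm{D}\bigl(\forall P\,(\mathrm{B}(P) \to P)\bigr)
% By (16), for any proposition Q that S considers:
\quad\Rightarrow\quad \mathrm{D}\bigl(\mathrm{B}(Q) \to Q\bigr)
% Since Q was arbitrary among the propositions S considers, this is (12):
\quad\Rightarrow\quad (\forall P)\,\mathrm{D}\bigl(\mathrm{B}(P) \to P\bigr)
```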
Perhaps because of its additional logical strength, (13) is also somewhat less plausible than (12) or Epistemic Responsibility. (13) would require the ideal rational agent to have a general desire for infallibility, rather than the various specific desires, of each proposition she considers, that she believe it only if it is true. The ideal rational agent might not desire infallibility, and so (13) might require too much of her.
Even when we are careful to construe Epistemic Responsibility along the lines of (12), and we are careful to note its restriction to propositions S considers, one still might think it is too strong. Perhaps there is some proposition that S is better off believing whether or not it is true, and perhaps S knows that to be so.7 In that case, perhaps the ideally rational S would violate Epistemic Responsibility and not desire that she believes the proposition only if it is true.
Such cases are, of necessity, exceptional for two reasons. First, to disrupt the Wishful Thinking Argument, it is not enough that S knows she would be better off believing that P regardless of whether P is true. It must also be the case that S desires that ~P.8 That is, this must be a case in which S desires, fully and all things considered, this: to believe that P while ~P is the case. This is a very odd desire (and one that, of necessity, one could not know to be fulfilled). Second, the cases are exceptional because they involve holding one’s beliefs answerable to something other than the way the world is. If the aim of belief is to represent the world accurately, then these must be isolated exceptions to the general rule.
More to the point, the Wishful Thinking Puzzle remains even if we allow for exceptions to Epistemic Responsibility. Despite the existence of the exceptions, in the typical case, S desires, of a proposition she considers, that she believes it only if it is true. Thus the typical case is one in which the Wishful Thinking Argument works and it is irrational for S to believe that P while desiring that ~P. The result that it is typically irrational to believe that P while desiring that ~P is no less puzzling or troublesome than the result that it is always irrational.9
Epistemic Responsibility and similar principles have also been attacked by philosophers who think we should not aim to believe truths. On their view, our aim should be more modest. We should aim at what it is reasonable for us to believe or what would be justified. This view also has precedent. In the third edition of Theory of Knowledge, Chisholm revises his earlier view. He says there that our basic intellectual obligation is to endeavor to believe only what is reasonable (1989, p. 1). Richard Rorty (1998), Larry Laudan (1984) and Bas van Fraassen (1980) have all argued that true belief, over and above reasonable or justified belief, is not a rational goal.
It would be too lengthy a digression to address all the issues bearing on whether we ought to aim for truth over and above reasonability in our beliefs.10 I will mention just two general points here. First, the view that we ought to aim for reasonability rather than truth is controversial and unpopular, as even its advocates acknowledge. So, even if it is debatable whether Epistemic Responsibility is true, the principle is still plausible, and that is all that is necessary for the Wishful Thinking Puzzle to be puzzling. Second, there are good reasons to reject the view that we should aim for reasonability rather than truth. One is just that the concept of “reasonable” or “justified” belief is already bound up with the idea of probable truth. Even Laurence BonJour and Alvin Goldman, whose views are otherwise very different, agree on this much: whatever accounts for the justification of a belief must make it likely that the belief is true, and that is a conceptual fact about the justification of belief. So, even if we should aim for reasonability, it is hard to see how we could do that except by aiming for truth.
The most obvious strategy for solving the Wishful Thinking Puzzle would be to show, independently of the puzzle itself, that (1), (2), (3), or (4) is false. The principles are all plausible, though, and they survive the most likely objections to them. That is part of why the puzzle is puzzling; it is not clear that the most obvious strategy for solving it will work. So, it is worth considering other approaches.
One other approach is to give up the notion of full desire all things considered in favor of a gradational conception of desire, thereby giving up on (2)–(4) as well. I consider that move in Sect. 6. First, though, I consider a couple of moves that preserve the idea of full desire all things considered: the appeal to “direction of fit” and the “ceteris paribus” interpretation of the principles.
4 Direction of Fit
Taking a cue from Elizabeth Anscombe (1957), some philosophers apply a metaphor of “fitting” to our beliefs and desires.11 Our beliefs, they say, have “mind to world direction of fit;” we adjust (or should adjust) our beliefs to reflect how things stand in the world. Our desires, in contrast, have “world to mind direction of fit;” we adjust (or should adjust) the world to reflect our desires. This difference in direction of fit is supposed to account for the difference between beliefs and desires with the same contents.
One might think this metaphor holds the key to the Wishful Thinking Puzzle. After all, the Wishful Thinking Argument seems to contradict the principle that beliefs and desires have opposite directions of fit. Its conclusion is that one’s beliefs and desires should fit one another. According to the Wishful Thinking Argument, one who desires ~P while believing P should either change what she believes or change what she desires. To change what she believes would be to indulge in wishful thinking. To change what she desires would be to impose an incorrect, belief-like direction of fit on her desires, obliterating the distinction between belief and desire. Either way, the Wishful Thinking Argument seems to require us to impose the wrong directions of fit on our attitudes.
Even if this tells us that the Wishful Thinking Argument goes wrong, it does not tell us where. The principles on which the Wishful Thinking Argument depends are all compatible with the idea that beliefs and desires have opposite directions of fit. Epistemic Responsibility, the only principle that actually ascribes a belief or desire to S, says that S wants her beliefs to fit the world. It seems to codify the mind to world direction of fit for belief, not to contradict it. Step (9) is the only place in the argument where it is inferred that S has a certain desire. Conative Consequence licenses that inference, but Conative Consequence does not apply the wrong direction of fit to desires. Rather, it just specifies part of what it means to desire a conditional fully, all things considered. Similarly, Consistency just expresses a formal constraint on full desires, all things considered. It does not apply the wrong direction of fit either. Thus, the Wishful Thinking Argument has an interesting feature. Although its conclusion violates the principle that belief and desire have opposite directions of fit, no step in the argument depends on misconstruing the direction of fit of either attitude.
The direction of fit metaphor might be a useful, pre-theoretic way to characterize the difference between belief and desire,12 but it does not solve the Wishful Thinking Puzzle. To solve the puzzle, the metaphor would have to give us a way to block the Wishful Thinking Argument, but it does not. Instead of telling us how to avoid the unacceptable conclusion of the Wishful Thinking Argument, the metaphor only tells us something about why the conclusion is unacceptable. The “ceteris paribus” response, in contrast, does involve an effort to block the argument.
5 The Ceteris Paribus Response
One might think of (1)–(4) not as absolute requirements of rationality, but merely “ceteris paribus” requirements. Such requirements hold in normal circumstances, other things being equal, but they also allow for exceptions in abnormal circumstances, when other things are not equal. It is no objection to a set of ceteris paribus requirements that they sometimes conflict, so long as the conflicts arise only in exceptional circumstances.
This suggests a way of solving the Wishful Thinking Puzzle, which I call the “ceteris paribus response.” According to this response, at least one of (1)–(4) is a ceteris paribus requirement that does not apply in the circumstances the Wishful Thinking Argument presupposes. So, the Wishful Thinking Argument is invalid, and the Wishful Thinking Puzzle does not arise. The argument goes wrong at whatever step invokes a principle that does not apply in the envisioned circumstances.
The “ceteris paribus” response does not work. To solve the Wishful Thinking Puzzle, it is not enough to proclaim that some of the principles (1)–(4) are ceteris paribus norms. We also need an explanation of precisely what is abnormal about the situation assumed in the Wishful Thinking Argument, such that the ordinary rules of rationality do not apply. We need an explanation of what “other things” are not “equal” in that situation. We also need to know exactly which principle is to be suspended under those circumstances. In the absence of such explanations, the ceteris paribus response is less a solution to the puzzle than an ad hoc refusal to admit that it arises.
The ceteris paribus response fails because nothing is unusual about the situation the Wishful Thinking Argument envisions. The argument assumes only that S believes P and desires ~P. That is not abnormal, exceptional or unusual. We frequently desire things we believe to be false. Often, we desire things because we think they are false. According to the ceteris paribus response, though, the case in which one believes P while desiring ~P is not only unusual, but so unusual that the ordinary rules of rationality do not apply. That claim seems no less bizarre than the Wishful Thinking Argument’s conclusion itself. A solution to the puzzle will have to come from a different direction.13
6 Decision Theory
One might suppose that the notion of full desire all things considered is to blame for the Wishful Thinking Puzzle. Maybe there is no such attitude. Most of our desires are matters of more and less, not all or nothing. And even if there is such an attitude as full desire all things considered, maybe we are wrong to formulate Epistemic Responsibility in terms of it. Once we allow for degrees of desire, (2)–(4) look much less plausible, and so the puzzle might seem to evaporate. One could even see the puzzle as a reductio ad absurdum of the idea that there are full desires all things considered.
I will confine my discussion here to the most familiar notions of graded desire, which come from decision theory. If we adopt a decision theoretic conception of desire, we do not escape the Wishful Thinking Puzzle. There is a decision theoretic version of the puzzle.
Decision theory presupposes that the rational agent has preferences among possible gambles, and those preferences satisfy constraints that make it possible to define both a value function over the possible outcomes of the gambles and a credence function over propositions. I will denote the value function ‘v()’, and I will use ‘C()’ for the credence function. The credence function is a probability function, and it is taken to measure the subjective probabilities or degrees of belief the agent assigns to each proposition. Because it is a probability function, we can define C(A | B), the agent’s conditional credence for A given B, as the ratio C(A & B)/C(B).
Where the Si are the possible states of the world, the expected value of a proposition A is:

Σi v(Si) C(Si | A)

Its expected utility weights the states by the agent’s credences in the members Kj of a partition of dependency hypotheses (Lewis 1981):

Σj Σi C(Kj) v(Si) C(Si | A & Kj)
We can then define evidential and causal notions of desirability, along with corresponding degrees of desire:

(19) dese(P) = Σi v(Si) C(Si | P)

(20) desc(P) = Σj Σi C(Kj) v(Si) C(Si | P & Kj)

(21) De(P) = dese(P) − dese(~P)

(22) Dc(P) = desc(P) − desc(~P).16
The decision theoretic Wishful Thinking Puzzle arises because dese(~P), desc(~P), De(~P), and Dc(~P) are all undefined for an agent who fully believes that P. To fully believe that P is to assign it a credence of 1, so C(~P) = 0. But when C(~P) = 0, the conditional probabilities in (19) and (20) are fractions whose denominators are 0. So, if the decision theoretic agent fully believes that P, there is no degree to which she desires that ~P.17
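The breakdown is easy to see numerically. Here is a toy sketch of the evidential definitions dese and De; the three states and all the numbers are illustrative assumptions of mine, not values from the paper. Propositions are modeled as sets of states.

```python
# Toy model of evidential desirability and degree of desire, and of why
# both become undefined under full belief. States S1-S3 are mutually
# exclusive and jointly exhaustive; a proposition is a set of states.

STATES = ("S1", "S2", "S3")
VALUE = {"S1": 10.0, "S2": 2.0, "S3": -5.0}    # v(Si), made-up numbers

def C(prop, cr):
    """Credence in a proposition: the total credence of its states."""
    return sum(cr[s] for s in prop)

def C_given(a, b, cr):
    """Conditional credence C(A | B) = C(A & B) / C(B)."""
    if C(b, cr) == 0:
        raise ZeroDivisionError("C(A | B) is undefined when C(B) = 0")
    return C(set(a) & set(b), cr) / C(b, cr)

def des_e(prop, cr):
    """Evidential desirability: sum over i of v(Si) C(Si | prop)."""
    return sum(VALUE[s] * C_given({s}, prop, cr) for s in STATES)

def D_e(prop, cr):
    """Degree of desire: des_e(P) minus des_e(~P)."""
    return des_e(prop, cr) - des_e(set(STATES) - set(prop), cr)

P = {"S1", "S2"}

# With some doubt about P, everything is well defined.
uncertain = {"S1": 0.5, "S2": 0.3, "S3": 0.2}    # C(P) = 0.8
print(D_e(P, uncertain))                          # positive: the agent desires P

# With full belief that P, C(~P) = 0 and des_e(~P) has no value at all.
full_belief = {"S1": 0.6, "S2": 0.4, "S3": 0.0}   # C(P) = 1
try:
    des_e({"S3"}, full_belief)                    # this is des_e(~P)
except ZeroDivisionError as err:
    print(err)
```

The division by C(~P) in the conditional credences is exactly the denominator of 0 described above.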
This feature of decision theory is not entirely unknown. The usual response is to ignore it. In a decision problem, the desirability of a proposition with 0 subjective probability is never relevant. David Lewis (1981) dismisses the problem as a curiosity one should never allow to arise, for he thinks it is “rash” to assign a contingent proposition a credence of 0 or 1.
Lewis’s response is common, but far too cavalier. To give a proposition a credence less than 1 is to believe it less than fully. It is to entertain some doubt about the proposition, however small. Maybe it is true that we should fully believe very few propositions, but some propositions do merit our full belief. Consider the proposition that I am now suffering agonizing pain. There are certain clear cases in which it is rational, not rash, for me to believe fully that I am suffering agonizing pain. They are cases in which I neither have nor ought to have any doubt that I am then suffering agonizing pain, and they are also cases in which it is perfectly reasonable for me to desire that I not then be suffering agonizing pain.
On the standard, decision theoretic account of desire, the Wishful Thinking Puzzle might arise only rarely, but it does arise and it is a problem. That account makes it impossible for a rational person to desire not to be suffering agony unless she also entertains some doubt as to whether she is actually suffering it. The move to decision theory does not solve our problem. To the contrary, it reintroduces the problem in a new form.
7 Counterfactual Decision Theory and the Contrastive Conception of Desire
The contrastive conception of desire provides solutions to both versions of the Wishful Thinking Puzzle. It is based on an insight that is already apparent in (21) and (22)’s definitions of desire. The desire that P is indistinguishable from the preference that P rather than ~P. According to standard decision theory, to prefer that P rather than ~P is to find the way the world is if P more valuable (in terms of either expected value or expected utility) than the way it is if ~P. That makes it impossible to desire that ~P while fully believing that P. The contrastive conception of desire applies a different account of what it means to prefer P rather than ~P.
Suppose I know I am now being tortured, and I desire that I not now be tortured. This is not because I think the world is a better place if I am not being tortured. It is because I think the world would be better if I were not being tortured. To a first approximation, contrastivism is the claim that desiring P is preferring the way the world would be if P were true to the way it would be if P were false.
‘The way the world would be’ is a notoriously context-sensitive expression. In some contexts, possibilities are relevant that are irrelevant in other contexts. Consequently, “the way the world would be” can vary from one context to another. Sometimes, the variation is striking.
Consider the way the world would be if kangaroos lacked tails. What would kangaroos be like? In the context of taxidermy, the answer might well be that tailless kangaroos would be shaped nearly the same as actual kangaroos. They would just be missing their tails and a little less expensive to mount. In the context of evolutionary biology, it is far less clear what tailless kangaroos would look like. They would not be shaped much like actual kangaroos, for such critters would have untenable body mechanics. An evolutionary trajectory resulting in tailless kangaroos might well result in kangaroos that look more like actual koalas (which are nearly tailless marsupials) than like actual kangaroos. There is no single, context-invariant answer to the question, “What would kangaroos look like if they did not have tails?”
Which possibilities are relevant, and so how the world would be if various things were the case, is a contextual parameter that varies from conversation to conversation. It can also shift in the span of a single conversation (Lewis 1979, 1996). It is a consequence of contrastivism that one’s desires can vary, across and within contexts, as a function of which possibilities are contextually relevant. A taxidermist’s desire that kangaroos lack tails might well have a different content from a biologist’s desire. They are attitudes toward different contrasts.
Suppose Bill has looked over the dessert menu and decided that he wants the Black Forest torte for dessert. The relevant possibilities are that he have no dessert, that he have the torte, and that he have something else. The torte possibility is his favorite. After Bill announces his decision, Amanda asks if anyone wants to go to the restaurant next door for their excellent cheesecake. Her question makes a possibility relevant that had been irrelevant before. Bill must now decide between having the torte here and having the cheesecake next door. His previous desire for the torte does not commit him to desiring it now. He might well find that he no longer desires the torte, because he finds the cheesecake possibility preferable to it.
‘S desires that P’ is true at context C if and only if S prefers the P-possibilities relevant in C to the ~P-possibilities relevant in C.
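Bill’s case can be rendered as a small computation. This sketch is my own; in particular, comparing the best relevant possibility on each side is a crude stand-in for “prefers,” and the dessert values are invented for illustration.

```python
# Toy rendering of the contrastive truth condition: 'S desires that P' is
# true at a context iff S prefers the relevant P-possibilities to the
# relevant ~P-possibilities. "Prefers" is crudely modeled as a comparison
# of the best option on each side; all values are made up.

VALUE = {"no dessert": 0, "torte": 8, "other dessert": 5,
         "cheesecake next door": 10}

def desires(P_possibilities, context):
    """True iff the best relevant P-possibility beats the best relevant ~P one."""
    relevant_P = [w for w in context if w in P_possibilities]
    relevant_not_P = [w for w in context if w not in P_possibilities]
    if not relevant_P or not relevant_not_P:
        return None    # no contrast available in this context
    return max(VALUE[w] for w in relevant_P) > max(VALUE[w] for w in relevant_not_P)

torte = {"torte"}
before = ["no dessert", "torte", "other dessert"]
after_amanda = before + ["cheesecake next door"]

print(desires(torte, before))         # True: the torte beats its rivals
print(desires(torte, after_amanda))   # False: the cheesecake possibility wins
```

Nothing about Bill’s values changes between the two calls; only the set of contextually relevant possibilities does, and that alone flips the truth value of the desire attribution.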
First consider the decision theoretic version of the puzzle. According to contrastivism, to desire that P is not to prefer how things are if P rather than ~P, but to prefer how things would be if P rather than ~P. That means we should alter the definitions of expected value and expected utility to apply a counterfactual notion of conditional probability, rather than the more familiar notion applied in (19) and (20). Robert Stalnaker’s (1970) account of counterfactual conditional probability will work nicely here.
Stalnaker’s two-place credence function, written here as C(A ! B) and read as the credence that A would be the case if B were, satisfies the following constraints:

(a) C(A ! B) ≥ 0
(b) C(A ! A) = 1
(c) If C(~A ! A) ≠ 1, then C(~A ! C) = 1 − C(A ! C)
(d) If C(A ! B) = C(B ! A) = 1, then C(C ! A) = C(C ! B)
(e) C(A & B ! C) = C(B & A ! C)
(f) C(A & B ! C) = C(A ! C) ∙ C(B ! A & C)
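One way to get a feel for constraints (a)–(f) is to note that the ordinary ratio C(A & B)/C(B) satisfies all of them wherever it is defined; Stalnaker’s function can then be viewed as extending that behavior to antecedents with credence 0. The brute-force check below, over a four-world toy model of my own construction, verifies this with exact rational arithmetic.

```python
import itertools
from fractions import Fraction

# A four-world model with strictly positive credences, so every ratio
# C(A & B)/C(B) with nonempty B is defined. The check confirms that the
# ratio satisfies constraints (a)-(f) wherever it is defined.
WORLDS = (0, 1, 2, 3)
cr = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 8)}
FULL = frozenset(WORLDS)

def C(prop):
    return sum((cr[w] for w in prop), Fraction(0))

def cond(a, b):
    """Stand-in for C(A ! B) when C(B) > 0: the ratio C(A & B) / C(B)."""
    return C(a & b) / C(b)

props = [frozenset(s) for r in range(len(WORLDS) + 1)
         for s in itertools.combinations(WORLDS, r)]
nonnull = [p for p in props if C(p) > 0]

for B in nonnull:                                         # (a) and (b)
    assert cond(B, B) == 1
    for A in props:
        assert cond(A, B) >= 0

for A in nonnull:                                         # (c)
    if cond(FULL - A, A) != 1:
        for X in nonnull:
            assert cond(FULL - A, X) == 1 - cond(A, X)

for A in nonnull:                                         # (d)
    for B in nonnull:
        if cond(A, B) == 1 and cond(B, A) == 1:
            for X in props:
                assert cond(X, A) == cond(X, B)

for A in props:                                           # (e) and (f)
    for B in props:
        for X in nonnull:
            assert cond(A & B, X) == cond(B & A, X)
            if C(A & X) > 0:
                assert cond(A & B, X) == cond(A, X) * cond(B, A & X)

print("(a)-(f) hold wherever the ratio is defined")
```

What the ratio cannot do, and what Stalnaker’s axiomatization permits, is assign values such as C(A ! B) when C(B) = 0; that is the feature the next definitions exploit.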
Replacing the ordinary conditional credences in (19)–(22) with their counterfactual analogs yields:

dese+(P) = Σi v(Si) C(Si ! P)

desc+(P) = Σj Σi C(Kj) v(Si) C(Si ! P & Kj)

De+(P) = dese+(P) − dese+(~P)

Dc+(P) = desc+(P) − desc+(~P).
The Wishful Thinking Puzzle does not arise for counterfactual decision theory. De+(~P) and Dc+(~P) are well defined even when C(~P) = 0. Because counterfactual decision theory is a conservative extension of standard decision theory, it preserves whatever correct intuitions standard decision theory expresses. So, the move to counterfactual decision theory, which follows naturally from the contrastive conception of desire, solves the decision theoretic Wishful Thinking Puzzle. It shows how to avoid the unacceptable conclusion without sacrificing what is good in standard decision theory.
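The structural point, that De+(~P) stays defined even at C(~P) = 0, can be sketched with simple imaging over a hand-picked similarity ordering. Imaging is not Stalnaker’s own construction and does not validate all of (a)–(f) in general, but it suffices to show how counterfactual credences can be nontrivial exactly where ratio credences are undefined. The worlds, values, and selection function below are all my own illustrative assumptions.

```python
# Minimal sketch of counterfactual desirability via imaging. "Worlds"
# double as the states Si. C(A ! B) is computed by shifting each world's
# credence to its nearest B-world under a hand-picked selection function.

VALUE = {"tortured": -100.0, "free": 50.0, "jailed": 0.0}
cr = {"tortured": 1.0, "free": 0.0, "jailed": 0.0}    # full belief: C(P) = 1

def nearest(w, B):
    """The B-world most similar to w. Only the cases this example needs
    are filled in; Stalnaker-style, a B-world is nearest to itself."""
    if w in B:
        return w
    return "free" if "free" in B else next(iter(B))

def C_cf(A, B):
    """C(A ! B): credence that A would hold if B were, via imaging."""
    return sum(p for w, p in cr.items() if nearest(w, B) in A)

def des_plus(prop):
    """Counterfactual desirability: sum over i of v(Si) C(Si ! prop)."""
    return sum(VALUE[s] * C_cf({s}, prop) for s in VALUE)

P = {"tortured"}
not_P = {"free", "jailed"}

print(des_plus(P))                    # -100.0: how things are, given P
print(des_plus(not_P))                # 50.0: defined even though C(~P) = 0
print(des_plus(not_P) - des_plus(P))  # 150.0: De+(~P) > 0, so she desires ~P
```

With certainty that she is being tortured, the ratio-based dese(~P) would divide by 0, but the imaged credences locate the nearest non-torture world and yield a positive degree of desire for ~P.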
Contrastivism also solves the Wishful Thinking Puzzle for full desire all things considered. Given contrastivism, principles (2), (3) and (4) need to be reformulated to take the context-sensitivity of desire into account. The required adjustments undermine the Wishful Thinking Argument.
(29) Contextual Consistency: For any context C, ‘S desires that P’ and ‘S desires that ~P’ are not both true at C.19

(30) Contextual Conative Consequence: For any context C such that there are no relevant ~P-possibilities, if S knows that P and ‘S desires that P → Q’ is true at C, then ‘S desires that Q’ is true at C.21

(31) Contextual Epistemic Responsibility: For any proposition P that S considers and context C that is fixed with respect to P but variable with respect to B(~P), ‘S desires ~B(~P)’ is true at C.
There may be contexts that are fixed with respect to P but where the costs of not believing ~P would be enormous. Let P be the proposition that you are a handless brain in a vat, artificially stimulated to have illusory experiences, and suppose that C is a context that is fixed with respect to P. Maybe you would be much better off believing that ~P, even if it is fixed that P in all the relevant possibilities, and maybe we can know that a priori. If that is so, then it seems wrong to think that rationality requires you to desire not to believe ~P. Some further restrictions on (31) are thus needed for it to be plausible, but I will ignore them for now.
With (2)–(4) replaced by (29)–(31), we can ask whether the Wishful Thinking Argument still goes through. It does not. Assume that S believes P but, relative to some context C, S desires ~P. Because desiring ~P means preferring the relevant ~P-possibilities to the relevant P-possibilities, there must be some relevant P-possibilities in C. Unlike (2), Contextual Epistemic Responsibility does not say that D(B(P) → P) in any context. It says that D(~B(~P)) in contexts that are fixed with respect to P but variable with respect to B(~P). So, there are two reasons the appeal to Contextual Epistemic Responsibility is not available at (7) of the Wishful Thinking Argument. First, the principle applies only in contexts that are fixed with respect to P, and the assumption that S desires ~P requires a context that is variable with respect to P. Second, even if the principle applied, it would not license the claim that D(B(P) → P); it would license only D(~B(~P)).
The Wishful Thinking Argument runs into additional problems if we replace (2)–(4) with (29)–(31). Consider step (9), where it is inferred that S desires that P because S desires to have only true beliefs and S believes that P. For this inference to be plausible at all, we must suppose we are in a context that is fixed with respect to B(P) and variable with respect to P. It is in such contexts that the relevant alternative to truly believing that P is falsely believing that P. Such a context is almost certainly not the same as the context of (6), where it is assumed that S desires ~P. Ordinarily, when a person desires that ~P, the context is not fixed with respect to her believing that P. But if the contexts of (6) and (9) are different, there is no violation of Contextual Consistency. That principle requires only that one not desire P and ~P in a single context. There is nothing wrong with desiring contrary propositions in different contexts.
If we replace (2)–(4) with their contrastivist alternatives, the Wishful Thinking Argument is no longer valid. So, the contrastive conception of desire gives us a way to solve the Wishful Thinking Puzzle for full desire all things considered. We can preserve the intuitions behind (1)–(4) by embracing the corresponding contrastivist principles, but doing so does not lead to the conclusion that it is irrational to believe and desire contradictory propositions.
Contrastivism solves the decision theoretic Wishful Thinking Puzzle by embracing counterfactual decision theory. It solves the puzzle for full desire all things considered by exploiting the context sensitivity of desire. Both moves arise from a single insight: desiring that P is a matter of preferring the way things would be if P to the way they would be if ~P.
It is also worth noting that counterfactual decision theory can accommodate the context-sensitivity of desire. C(A ! B) is meant as a measure of how likely the agent thinks A would be if B were the case. That depends entirely on which B-possibilities are relevant, and so we can expect it to vary from one context to another. Moreover, although traditional decision theory tends to assume that a single agent’s values and credences are unchanged from one decision problem to the next, this assumption is not essential to solving any particular decision problem. In practice, values and credences are determined on the basis of one’s preferences among outcomes in particular problems, and one’s preferences among outcomes are determined by one’s preferences among various possible gambles (see Gauker 2005). Ultimately, one’s desires depend on what one’s attitudes are to possible gambles. They depend on what alternatives are contextually relevant.
It is easy to see counterfactual decision theory as an elaboration or formalization of the contrastive conception of desire. In particular, the decision theoretic apparatus allows us to answer the question of what it means to prefer “the relevant P-possibilities” to “the relevant ~P-possibilities.” The desirability of the relevant P-possibilities is either dese+(P) or desc+(P), and to prefer them to the relevant ~P-possibilities is just for De+(P) or Dc+(P) to be positive.
Before the contrastive conception of desire can be fully satisfactory, more issues need to be settled. An especially pressing question is whether the alternatives that matter to the truth value of ‘S desires that P’ are those that are relevant to S or those that are relevant to the person who makes the desire attribution. Also, even though the contrastive conception of desire has independent motivation (from the truism that one desires P if and only if one prefers the way things would be if P to the way they would be if ~P), I know of no direct argument for the view. The contrastive conception avoids the Wishful Thinking Puzzle, and it does so without being ad hoc, but it would be a mistake to think that is the last word on the puzzle. The contrastive conception of desire might well be wrong.
My conclusion, then, is that the Wishful Thinking Puzzle is a real puzzle about rational desire. Some of the more obvious strategies for solving it do not work. At least one approach does work, but it is too underdeveloped to win our full confidence. I want the puzzle to have a clear and obvious solution, but I do not think it does. I hope that is not irrational.
I thank an anonymous reviewer for mentioning this possibility to me.
Suppose someone has to choose between two cars, one roomier but less reliable than the other, and she chooses the roomier car in full knowledge that it will break down more often. Still it is rational, every time she drives the car, for her to desire it not to break down. By the same token, it is rational for a person who has chosen a fallible method of inquiry to desire, each time she uses it, that it lead to true belief, even in full knowledge that it sometimes will not.
~P → ~B(P) is logically equivalent to B(P) → P.
Christian Piller (2009) objects to Epistemic Responsibility partly on the grounds that believing P does not give us a reason to desire P. He applies (independently of me) the same reasoning as the Wishful Thinking Argument to try to show that desiring B(P) → P would commit us to treating the belief that something bad is going to happen to us as a reason for wanting that bad thing to happen (pp. 197–198), and he thinks that would be absurd. Piller neglects the fact that false beliefs are also bad, though. If I think I’m going to have a mild, brief headache next Wednesday, then either I am going to have the headache or I have a false belief. Either way, something bad happens. I have some reason to want the headache to occur, and it is a separate question whether I have more reason to want it to occur than to want it not to occur. Even if I would prefer the false belief to the headache, this is a case of preferring the lesser of two evils, not a case of preferring the neutral to the bad. See also n. 5 for discussion of another point Piller raises against the idea that Epistemic Responsibility characterizes part of our rational interest in truth, and see the Appendix for a discussion of Piller’s alternative solution to the Wishful Thinking Puzzle.
Piller (2009) contends that desiring B(P) → P encapsulates a dogmatic desire to be right about what one already believes rather than a rational interest in matching one’s beliefs to the world (pp. 198–199). That is, he conflates desiring B(P) → P with desiring-true every proposition one believes. He thinks the conflation is legitimate, though, because the reasoning of the Wishful Thinking Argument takes us from B(P), through Epistemic Responsibility, to D(P). But the Wishful Thinking Argument also depends on Conative Consequence. It thus gives us no more reason for doubting Epistemic Responsibility than it gives us for doubting Conative Consequence, since Epistemic Responsibility does not imply B(P) → D(P) unless we also take Conative Consequence (and Awareness) for granted. When we consider Epistemic Responsibility on its own, we see that it does not say we desire, of every proposition we believe, that it be true. Instead, it says that we desire, of each proposition we consider, that we believe it only if it is true.
This point applies to both of Piller’s objections to the idea that our interest in truth includes the desire that B(P) → P. Those objections rely on the reasoning of the Wishful Thinking Argument, and so they do not provide independent grounds for rejecting Epistemic Responsibility rather than Conative Consequence. Piller does give some independent motivation for his version of Conative Consequence, but that is not a reason to reject Epistemic Responsibility, because equally independent motivation is also available for Epistemic Responsibility. If the reasons to accept Conative Consequence are reasons to reject Epistemic Responsibility, then the reasons to accept Epistemic Responsibility are reasons to reject Conative Consequence, and the stalemate remains.
Sosa (2001) considers the example that his deceased parents loved him, and he argues that it might be rational not to desire to believe that proposition only if it is true.
This is because the Wishful Thinking Argument addresses the case in which one believes that P while desiring that ~P. If there are exceptions to Epistemic Responsibility, they are relevant to the Wishful Thinking Argument only if they arise in cases where one desires ~P. But the easiest cases to think of in which one might desire to believe that P regardless of whether P are also cases in which one would desire that P.
An anonymous reviewer has pointed out that the Wishful Thinking Argument appears to need a further assumption to the effect that Epistemic Responsibility is not overridden by other practical or epistemic considerations. But it is plausible that Epistemic Responsibility is not overridden in the ordinary case, and so the argument would apply in ordinary cases even if not in full generality.
See, for example, Smith (1994).
See Sobel and Copp (2001) for an excellent discussion of the metaphor’s limitations.
Piller (2009) seems to think we desire B(P) → P only in cases when we find B(P) & ~P to be less desirable than P. I think this is a step in the right direction, because it involves understanding our desires in terms of our preferences. Nevertheless, Piller’s move does not help the ceteris paribus response. His move depends on the idea that B(P) is a reason to desire P only when B(P) & ~P is worse than P. Thus it depends on his claim that desiring B(P) → P requires treating B(P) as a reason to desire P. But Piller’s argument for that claim is a version of the Wishful Thinking Argument. So, the only reason he gives for thinking that Epistemic Responsibility does not always apply is that it is implicated in the Wishful Thinking Argument. That reason is unhelpful when the question at issue is how to solve the Wishful Thinking Puzzle.
I take this way of distinguishing causal and evidential decision theory from Lewis (1981). The use of ‘expected value’ and ‘expected utility’ as labels for the different functions is due to him.
Jeffrey’s (1965) definition of desirability is (19); (20) is its causalist analog.
Here is why it is better. Suppose we identify dese(P) with the degree to which one desires that P. If dese(P) > 0 and dese(P) = dese(~P), then the agent qualifies as desiring that P and also qualifies as being indifferent to P (because she neither prefers P to ~P nor prefers ~P to P). The attitude of desiring that P, however, should be incompatible with the attitude of being indifferent whether P.
It makes no difference whether we take it to be false or meaningless that the agent desires ~P. In either case, it is not true that the agent desires ~P, and that is puzzling enough. Believing that P should not automatically make it impossible to desire ~P.
Stalnaker does this by showing that C(A ! B)/(1 − C(A ! B)) represents the odds at which the agent would have accepted a bet that A, had B been known.
If rational preferences are transitive and irreflexive, this principle follows from contrastivism. Suppose there were a context, C, relative to which S desires P and desires ~P. By contrastivism and the transitivity of preference, S would have to prefer P to P. But rational preferences are irreflexive, so there can be no such context.
For example, I might desire that, if you commit any murders at all, you commit them gently, and I might know that you are committing some murders. Relative to a context that includes relevant possibilities in which you commit no murders, though, I might not desire that you commit gentle murders, but instead desire that you commit no murders at all. I should only be committed to desiring that you commit gentle murders relative to contexts in which the possibility that you commit no murders is off the table.
The requirement that S know that P is strictly unnecessary. If no ~P-possibilities are relevant, then (given contrastivism) there is no difference between desiring P → Q and desiring Q. Part of the intuitive force of Conative Consequence probably comes from the intuition that knowing that P automatically renders the possibility that ~P irrelevant. If there are cases in which one knows that P but some ~P possibilities are relevant, then Contextual Conative Consequence does not apply. That is part of why (30) enables us to avoid the Wishful Thinking Argument.
Then why not just avoid the puzzle from the start by replacing Epistemic Responsibility’s D(B(P) → P) with ~P → D(~B(P))? The proposed replacement principle is implausible. It says that the ideal rational agent desires, of each proposition that is actually false, that she not believe it—regardless of whether she is aware that the proposition is false or not. Contextual Epistemic Responsibility does not have that problem, though, given the contrastive conception of desire. It says that the ideal agent prefers the way things would be if P and she didn’t believe ~P to the way things would be if P and she believed ~P. This captures the intuition that one desires not to believe what is actually false without implying that one desires of each actual falsehood that one does not believe it.
This paper benefitted tremendously from discussions and comments with a number of people. I owe special thanks to Torin Alter, Robert Barnard, Neil Manson, Stuart Rachels, Mark Scala, Michelle Wrenn, and several anonymous reviewers.
- Anscombe, G. E. M. (1957). Intention. Oxford, UK: Blackwell.
- Blackburn, S. (2005). Truth: A guide. Oxford, UK: Oxford University Press.
- Chisholm, R. (1977). Theory of knowledge (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
- Chisholm, R. (1989). Theory of knowledge (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
- Gauker, C. (2005). Conditionals in context. Cambridge, MA: MIT Press.
- Jeffrey, R. (1965). The logic of decision. New York: McGraw-Hill.
- Laudan, L. (1984). Science and values. Berkeley, CA: University of California Press.
- Lewis, D. (1979). Scorekeeping in a language game. Journal of Philosophical Logic, 8(1), 339–359.
- Lewis, D. (1981). Causal decision theory. Australasian Journal of Philosophy, 59(1), 5–30.
- Lewis, D. (1996). Elusive knowledge. Australasian Journal of Philosophy, 74(4), 549–567.
- Lynch, M. P. (2004). True to life: Why truth matters. Cambridge, MA: MIT Press.
- Piller, C. (2009). Desiring the truth and nothing but the truth. Noûs, 43(2), 193–213.
- Smith, M. (1994). The moral problem. Oxford, UK: Blackwell.
- Sobel, D., & Copp, D. (2001). Against direction of fit accounts of belief and desire. Analysis, 61(1), 44–53.
- Sosa, E. (2001). For the love of truth? In A. Fairweather & L. Zagzebski (Eds.), Virtue epistemology: Essays on epistemic virtue and responsibility (pp. 49–62). New York: Oxford University Press.
- Stalnaker, R. (1970). Probability and conditionals. Philosophy of Science, 37(1), 64–80.
- Williams, B. (1978). Descartes: The project of pure enquiry. Atlantic Highlands, NJ: Humanities Press.
- Williams, B. (2002). Truth and truthfulness: An essay in genealogy. Princeton, NJ: Princeton University Press.
- Williamson, T. (2000). Knowledge and its limits. Oxford, UK: Oxford University Press.
- Wrenn, C. (2005). Pragmatism, truth and inquiry. Contemporary Pragmatism, 2(1), 95–113.
- Wrenn, C. (2010). Truth is not instrumentally valuable. In C. Wright & N. Pedersen (Eds.), New waves in truth. New York: Palgrave Macmillan.