
Parity, prospects, and predominance


Abstract

Let’s say that you regard two things as on a par when you don’t prefer one to the other and aren’t indifferent between them. What does rationality require of you when choosing between risky options whose outcomes you regard as on a par? According to Prospectism, you are required to choose the option with the best prospects, where an option’s prospects are given by a probability-distribution over its potential outcomes. In this paper, I argue that Prospectism violates a dominance principle—which I call The Principle of Predominance—because it sometimes requires you to do something that’s no better than the alternatives and might (or even likely will) be worse. I argue that this undermines the strongest argument that’s been given in favor of Prospectism.


Notes

  1. I’ve put ‘dominance’ in scare quotes because, unlike the more traditional dominance-relations familiar from standard decision and game theory, the notion here isn’t asymmetric: as we’ll see, it’s possible for two options to “dominate” each other (and every option trivially “dominates” itself). This might strike you as a misuse of the term. Hence, the scare quotes.

  2. Here is why you are unable to place a single, absolute value on any of these outcomes. Suppose, to the contrary, that you could. You assign the number \(r\in \mathbb {R}\) to A: \(u(A) = r.\) Because you don’t prefer A to B, the number you assign to B, u(B), cannot be less than r. Because you don’t prefer B to A, u(B) also cannot be greater than r. Therefore, \(u(B) = r\). And, because you prefer \(A^+\) to A, it must be that \(u\left( A^+\right) >r\). But, because you don’t prefer \(A^+\) to B, it cannot be the case that \(u\left( A^+\right) >r\). And that’s a contradiction.

  3. Every partial ordering can be represented, in the manner described, by a set of complete orderings. The converse, however, doesn’t hold: there are sets of complete orderings that cannot be faithfully represented by a partial ordering. Here’s an example. Suppose you are deciding between three dessert options: an apple pie (A), a bowl of blueberries (B), and a cantaloupe cake (C). And, at least as far as desserts are concerned, you only care about two things: how healthy the dessert is, and how delicious it is. Suppose that A is the most delicious, B is the least delicious, and C is just slightly more delicious than B; and suppose that B is the healthiest option, A is the least healthy option, and C is just slightly healthier than A. Consequently, in terms of your all-things-considered preferences, none of the three options stand in any of the traditional preference-relations to any of the others. But, in such a case, we might want to represent your motivational-state with a set of complete orderings which includes orderings that rank C ahead of A and C ahead of B, but doesn’t include any orderings that rank C ahead of both A and B. In other words, there are no admissible ways of evaluating your options, resolving your concern for health and your concern for deliciousness, according to which C is the dessert that is most desirable to you. [See (Levi 1985, 2008) for a discussion of cases with this structure.] This distinction won’t matter for our purposes, however.
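
    Here is a minimal sketch of that structure. The numeric health and deliciousness scores, and the use of weighted sums to stand in for the admissible ways of resolving your two concerns, are illustrative assumptions of mine, not the paper’s: some admissible evaluations rank C above A, others rank C above B, but none ranks C above both.

    # A toy model of the dessert case above (scores and weighted-sum model are
    # illustrative assumptions, not taken from the paper).
    health = {"A": 2, "B": 10, "C": 3}       # B healthiest; C slightly healthier than A
    delicious = {"A": 10, "B": 2, "C": 3}    # A most delicious; C slightly more delicious than B

    def score(dessert: str, w: float) -> float:
        """Overall desirability under one admissible weighting of the two concerns."""
        return w * health[dessert] + (1 - w) * delicious[dessert]

    c_above_a = c_above_b = c_above_both = False
    for i in range(1, 100):                  # sweep admissible weightings w in (0, 1)
        w = i / 100
        beats_a = score("C", w) > score("A", w)
        beats_b = score("C", w) > score("B", w)
        c_above_a = c_above_a or beats_a
        c_above_b = c_above_b or beats_b
        c_above_both = c_above_both or (beats_a and beats_b)

    print(c_above_a, c_above_b, c_above_both)    # prints: True True False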

  4. In addition to Hare’s Prospectism, there are a number of views that have a very similar structure. See, for example: I.J. Good’s Quantizationism (Good 1952); Isaac Levi’s V-admissibility (Levi 1986, 2008); Amartya Sen’s Intersection Maximization (Sen 2004); and (Weirich 2004). Among economists, views of this general nature are nearly the only game in town. See, for example, Dubra et al. (2004), Evren and Ok (2011), Galaabaatar and Karni (2013), and Ok et al. (2012). Some of these views differ from the others in some important respects. But these differences won’t matter for our purposes because each of the views recommends taking the Larger box over the Regular one. There are also a number of decision theories designed to handle similar cases that arise not because of incomplete preferences but because of imprecise (or unsharp) credences: for example, Susanna Rinard’s Moderate (Rinard 2015); Weatherson’s Caprice (Weatherson 2008); and (Joyce 2010).

  5. Here’s why. \(u^*({take\, Larger }) > u^*({take \,Regular })\) iff \(u^*({take\,Larger }) - u^*({take\,Regular })>0\). Because every utility-function in your set ranks \(A^+\) ahead of A and ranks \(B^+\) ahead of B, \(u^*\left( A^+\right) >u^*\left( A\right)\) and \(u^*\left( B^+\right) >u^*\left( B\right)\).

    So, \(u^*\left( A^+\right) - u^*\left( A\right) + u^*\left( B^+\right) - u^*\left( B\right) >0\).

    So, \(\frac{1}{2}\bigl (u^*\left( A^+\right) - u^*\left( A\right) \bigr ) + \frac{1}{2}\bigl (u^*\left( B^+\right) - u^*\left( B\right) \bigr ) > 0\).

    Thus, \(\frac{1}{2}\bigl (u^*\left( A^+\right) + u^*\left( B^+ \right) \bigr ) - \frac{1}{2}\bigl (u^*\left( A\right) + u^*\left( B\right) \bigr ) > 0\). And, because taking the Larger box is a fifty-fifty prospect over \(A^+\) and \(B^+\) while taking the Regular box is a fifty-fifty prospect over A and B, the left-hand side is just \(u^*({take\,Larger }) - u^*({take\,Regular })\). So \(u^*({take\,Larger }) > u^*({take\,Regular })\).
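
    A quick numerical check of this derivation, with made-up utility assignments standing in for the utility-functions in your set (the fifty-fifty structure of the two prospects is assumed, as above):

    # Whenever a utility assignment ranks A+ above A and B+ above B, it gives the
    # fifty-fifty prospect over {A+, B+} (take Larger) a higher expected utility
    # than the fifty-fifty prospect over {A, B} (take Regular).
    import random

    random.seed(0)
    for _ in range(10_000):
        u_a, u_b = random.uniform(-10, 10), random.uniform(-10, 10)
        u_a_plus = u_a + random.uniform(0.01, 5.0)    # sweetening: u*(A+) > u*(A)
        u_b_plus = u_b + random.uniform(0.01, 5.0)    # sweetening: u*(B+) > u*(B)
        assert 0.5 * (u_a_plus + u_b_plus) > 0.5 * (u_a + u_b)
    print("take Larger beats take Regular on every sampled utility-function")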

  6. The point that Prospectism violates such a principle is made in Bales et al. (2014), who call the principle Competitiveness. Rabinowicz (2016) makes a very similar point, but calls the principle “complementary dominance.” And, although not explicitly put in terms of “dominance,” Hare (2010), too, makes this point.

  7. This defense is very closely related to the argument that Bales et al. (2014) offer for Competitiveness. They appeal to an analogy: Competitiveness is the basis of a principle (which they call Strong Competitiveness) that is the analogue of the principle of weak dominance, a traditional dominance principle familiar from decision and game theory. Recall: an option \(\phi\) weakly dominates option \(\psi\) if, for every way the world might be, \(\phi\)’s outcome is as good as or better than \(\psi\)’s and there’s some way the world might be according to which \(\phi\)’s outcome is better than \(\psi\)’s. If \(\phi\) weakly dominates all other available options, then you rationally ought to \(\phi\). That’s The Principle of Weak Dominance. Bales et al. (2014) take this to be “[o]ne of the least controversial principles of rational choice [\(\dots\)]” (pg. 459) and, if we replace “as good as or better than” in the principle of weak dominance with “not worse than,” we arrive at what Bales et al. (2014) call Strong Competitiveness, which they consider to be at least as plausible as The Principle of Weak Dominance (pg. 460). Because Competitiveness says “if an option is competitive, then it is rationally permissible to take it,” Bales et al. (2014) consider it to be a “simpler, and even more compelling, principle” than Strong Competitiveness (pg. 460).

    There are a number of problems with this argument. First, Strong Competitiveness, in addition to being logically stronger than The Principle of Weak Dominance, is false (see fn. 18). So the former certainly isn’t at least as plausible as the latter. Second, Strong Competitiveness and Competitiveness are logically independent—even if the former were true, the latter needn’t be—so it’s not clear why we should find Competitiveness “even more compelling.”

  8. Or at least it is if it’s understood to apply only to cases in which your options are suitably independent of the states of the world. Bales et al. (2014) restrict their discussion of dominance principles to cases in which your options and the states are probabilistically independent. I think that’s overkill: causal independence is enough. But, for the time being, I will follow their lead.

  9. Here’s why. If \(\phi\) is at least as good as all the other options, then, for all \(S\in \mathbf {S}\) and all available options \(\psi\), \((\phi \wedge S)\) is weakly preferred to \((\psi \wedge S)\). If none of the other options are at least as good as \(\phi\), then, for each of the other available options \(\psi\), there will be some state \(S^*\) such that \((\psi \wedge S^*)\not \succeq (\phi \wedge S^*)\). And because \(\phi\) is at least as good as \(\psi\), \((\phi \wedge S^*)\succeq (\psi \wedge S^*)\). So, it must be that \((\phi \wedge S^*)\succ (\psi \wedge S^*)\). Therefore, for all states S and every other available option \(\psi\), you weakly prefer \((\phi \wedge S)\) to \((\psi \wedge S)\) and there is, for each of those alternatives, some state \(S^*\) such that you strictly prefer \((\phi \wedge S^*)\) to \((\psi \wedge S^*)\). In other words, \(\phi\) weakly dominates all other available options.

  10. Here’s why. If \(\phi\) is no worse than the other options, then, for all states S and all other options \(\psi\), \((\phi \wedge S) \not \prec (\psi \wedge S)\). And if none of the other options are no worse than \(\phi\), then, for each \(\psi\), there is some state \(S^*\) such that \((\psi \wedge S^*)\prec (\phi \wedge S^*)\). And so \(\phi\) predominates over all other options.

  11. Schoenfield (2014) calls this principle Link because it links facts about expected value to what is known about value. Link is less general than the principle presented here—it’s stated in terms of two available options, and it’s restricted to “cases in which considerations of value are the only ones that are relevant”—but these are superficial differences that won’t matter for our purposes.

  12. Schoenfield (2014) says “if Link is rejected, expected value theory cannot play the role that it was intended to play: namely, providing agents with limited information guidance concerning how to make choices in circumstances in which value-based considerations are all that matter.” (pg. 268). But, of course, this isn’t literally true. Prospectism rejects Link and, yet, it does provide guidance to agents with limited information. The issue is whether the guidance it provides is correct. Schoenfield (2014) thinks it’s not—the view, in virtue of violating the second clause of Link, “is imposing requirements that transcend what we actually care about: the achievement of value” (pg. 268)—but, as we’ll see, this isn’t obvious.

  13. The Newcomb Problem was first discussed in print by Nozick (1969), who attributes it to the physicist William Newcomb. Here’s the case. Before you, there are two boxes: an opaque box, which contains either a million dollars or nothing; and a transparent box, which contains a thousand dollars. You have the option either to take only the opaque box (One-Box) or to take both the opaque and the transparent box (Two-Box). Here’s the catch. Whether the opaque box contains the million dollars or nothing has been determined by the prediction of a super-reliable predictor. If the predictor predicted that you’d One-Box, she put a million dollars in the opaque box; if she predicted that you’d Two-Box, she put nothing in the opaque box.

  14. In fact, if we assume your preferences can be represented with a utility-function, causal decision theory entails Known Value-Relations. And, if we define the actual value of an option \(\phi\), \(V_@(\phi )\), to be the utility you assign to the outcome that would result were you to \(\phi\), then the causal expected utility of \(\phi\) equals your best estimate of \(\phi\)’s actual value: that is, \(U(\phi ) = \sum _{v} Cr\left( V_@(\phi ) = v \right) \cdot v\).
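
    For a purely illustrative instance of that identity (the numbers are mine, not the paper’s): suppose you are certain that \(V_@(\phi )\) is either 10 or 0, with \(Cr\left( V_@(\phi ) = 10\right) = 0.3\) and \(Cr\left( V_@(\phi ) = 0\right) = 0.7\). Then your best estimate of \(\phi\)’s actual value is \(U(\phi ) = 0.3\cdot 10 + 0.7\cdot 0 = 3\).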

  15. This is one of the arguments Hare offers in favor of Prospectism’s verdict in cases like Vacation Boxes (see Hare 2010, 2013). The argument is criticized by Bales et al. (2014) and Schoenfield (2014), largely on the grounds that it fails to appreciate the potentially complicated ways in which reasons can interact [e.g., “Reasons interact in complex ways and they don’t always add up as one might expect them to.” (Schoenfield 2014, pg. 273)]. All parties want to accept that rationality requires you to do what you have the most reason to do, but I think, if you want to resist Hare’s argument, this is untenable. Presenting the argument for why, though, would take us too far afield.

  16. Hare (2013, pg. 51) defends Prospectism against arguments like Schoenfield (2014)’s along these lines. He gives an explanation for why these arguments might seem attractive even though they are, in his opinion, unsound: sometimes, when we learn that \(\phi\) is no worse than \(\psi\), it ceases to be true that we have a reason to \(\psi\) and no reason to \(\phi\). Much depends on exactly what we know about the value-relation that holds between our options. For example, if you know that \(\phi\) is no worse than \(\psi\) because you know that \(\phi\) is actually better than \(\psi\), then you’ll have a reason to \(\phi\) rather than \(\psi\) (and, presumably, a decisive reason at that). If you know that \(\phi\) is no worse than \(\psi\) because you know that \(\phi\) and \(\psi\) are actually equally good, then you won’t have more reason to \(\psi\) than to \(\phi\) (either because you won’t have any reason to do one rather than the other, or because you will have a reason to \(\phi\) that perfectly balances your reasons to \(\psi\)). But if you know that \(\phi\) is no worse than \(\psi\) because you know that \(\phi\) and \(\psi\) are actually on a par, you might, as in Vacation Boxes, have a reason to \(\psi\) and no reason to \(\phi\). As I’ll argue in the next section, however, this defense isn’t entirely adequate.

  17. If you do not value money linearly—if, for example, receiving 85¢ is less than five-sixths as good as receiving \(\$1\)—we’ll have to pick a smaller value for \(\$\epsilon\). In particular, we should pick an amount so that \(u\left( \$1-\epsilon \right) >\frac{5}{6}\cdot u\left( \$1\right)\). If you’re fairly risk-seeking—see Buchak (2013) for a way of modeling agents who are genuinely risk-averse and genuinely risk-seeking—then, again, we’ll have to pick a smaller value for \(\$\epsilon\). In particular, making use of Buchak (2013)’s Risk-Weighted Expected Utility Theory, where \(0\le r_x(p)=p^x\le 1\) is a risk-function representing your attitude to risk, \(L^-\) will have better prospects than M if \(r\left( \frac{5}{12}\right) +r\left( \frac{11}{12}\right) -r\left( \frac{1}{2}\right) \le \frac{u\left( \$1-\epsilon \right) }{u\left( \$1 \right) }\). In order for M to come out ahead of \(L^-\) when \(\epsilon \le\) 15¢, you’d need to be more than just slightly risk-seeking. If you’re enough of a risk-seeker, might there be no value of \(\$\epsilon\) small enough for Prospectism to recommend choosing \(L^-\) over M? Yes. You could be such a risk-seeker that you’re disposed to avoid sure-things at all costs. This is an extreme—and arguably irrational—way to be, though, so it should provide only cold comfort for the Prospectist.
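
    To get a feel for how risk-seeking you would have to be, here is a minimal sketch of the computation (assuming, purely for illustration, linear utility in money, \(\epsilon =\) 15¢, and exponents \(x<1\) standing in for risk-seeking attitudes; none of these modeling choices come from the paper):

    # Checks the condition stated above: L- has better prospects than M iff
    #   r(5/12) + r(11/12) - r(1/2) <= u($1 - eps) / u($1).
    # Assumptions of mine: linear utility in money, eps = 15 cents, and
    # risk-functions r_x(p) = p**x with x < 1 standing in for risk-seeking.

    def lhs(x: float) -> float:
        """r(5/12) + r(11/12) - r(1/2) with r(p) = p**x."""
        return (5 / 12) ** x + (11 / 12) ** x - (1 / 2) ** x

    RHS = 0.85 / 1.00    # u($1 - eps) / u($1), with linear utility and eps = 15 cents

    for x in (1.0, 0.95, 0.90, 0.85, 0.80):
        verdict = "L- has better prospects" if lhs(x) <= RHS else "M has better prospects"
        print(f"x = {x:.2f}: lhs = {lhs(x):.4f} vs rhs = {RHS:.2f} -> {verdict}")

    # Only around x = 0.80 (more than mild risk-seeking, on this way of modeling
    # it) does M come out ahead.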

  18. It might be tempting to think that, if an option predominates over all others, then you’re rationally required to take it. (This is entailed by—and, when there are only two options at play, equivalent to—what Bales et al. (2014) call Strong Competitiveness.) But resist the temptation, because that principle is surely false. For example, it would require you to \(\phi\) rather than \(\psi\)—even if you think it’s overwhelmingly likely that the two are on a par—so long as there is some chance, no matter how small, that \(\phi\) might be ever-so-slightly better than \(\psi\). That strikes me as implausible. But even more seriously: imagine a case in which there are more than two options, each of which is predominated by one of the others. In such a case, this principle will say, when considering the options pair by pair, that you are rationally required to choose in a way that will result in cyclic—and, hence, strongly money-pumpable—choice behavior. I think this is a decisive reason to reject Strong Competitiveness. (There is, it might be objected, a similar money-pump worry for the weaker principle endorsed in the main text: the same cases will be ones in which, according to that principle, it’s not impermissible to be money-pumped. This is a much less serious problem, though, because, rather than forcing you into a sub-optimal outcome, the principle merely fails to prevent you from stumbling into one. In the former (more serious) case, there’s nothing you can do to satisfy what’s rationally required of you; in the latter (less serious) case, there is: it’s rationally permissible for you to turn down any or all of the trades. Moreover, so long as your preferences are incomplete, you are already vulnerable to money-pumps of this sort: e.g., it’s permissible to trade \(A^+\) for B, it’s permissible to trade B for A, but \(A^+\) is better than A. So, these weaker money-pumps aren’t reason enough to reject The Principle of Predominance, whereas the stronger ones are reason to reject Strong Competitiveness. Thanks to an anonymous referee for helpful discussion on these points.)

  19. One might object that, because \(L^-\) comes with a sure-thing 85¢ and (per the assumptions we made earlier about your preferences) you’d rather have a sure-thing 85¢ than a five-sixths chance at getting \(\$1\), this isn’t really a consideration that speaks in favor of taking M over \(L^-\) after all. To bring out the thought behind this objection, consider the following example. Suppose that if you \(\phi\) you’ll get a dollar and that if you \(\psi\) you’ll get two. Is the fact that you’ll get a dollar if you \(\phi\) and that you won’t get a dollar if you \(\psi\) a reason to take \(\phi\) over \(\psi\)? It’s unclear. You might think: no, because getting two dollars entails getting one, and so “I’ll get a dollar” doesn’t properly distinguish between the two options. But what about “I’ll get exactly one dollar”? That does distinguish between the two. But you might think that’s not a reason either; it doesn’t speak in favor of taking \(\phi\), given that getting exactly one dollar isn’t a good thing when compared to getting two. And, similarly, a five-sixths chance at a dollar is not a good thing when compared to a sure-thing 85¢. Alternatively, you might think that “I’ll get (exactly) a dollar” is a reason to \(\phi\) rather than \(\psi\); it’s just that, in this case, that reason is clearly outweighed by the fact that you’ll get two dollars if you take \(\psi\). I think that’s the right thing to say, and that’s what we should say about Pay or Roll: the fact that you’re likely to get a dollar if you take M (and that you won’t get a dollar if you take \(L^-\)) is a reason to choose M over \(L^-\), even if it’s a reason that’s ultimately outweighed by others.

  20. Here, like before, I am making some assumptions about your preferences. If those assumptions don’t hold, we can change the example so that they do.

  21. Here’s one example. Consider a view that endorses Weak Link: it agrees with Prospectism in cases like Vacation Boxes (you’re required to take L over R) but disagrees with Prospectism in cases like Pay or Roll (it’s permissible to take either option). A view like this offers counterintuitive recommendations concerning cases of probabilistic sweetening. Imagine a variant of Vacation Boxes in which L has been sweetened with \(\$2\). You have the opportunity, before choosing, to sweeten R either with a dollar (\(R^{+\$1}\)) or with a lottery ticket that pays out a million dollars on the very slim chance that it wins (\(R^{+\ell }\)). The chance of the ticket winning is so low that you prefer the dollar, all else equal. Because \(R^{+\$1}\) never does better than L and you have no reason to take it, you should prefer L to \(R^{+\$1}\). However, because \(R^{+\ell }\) never does worse than L and might (with very small probability) be better, you shouldn’t prefer L to \(R^{+\ell }\). This is, at the very least, odd: sweetening R with the sweetener you prefer appears to leave it worse off than sweetening it with the one you don’t! Moreover, the preferences you’re required to have are intransitive: you should prefer L to \(R^{+\$1}\), you should prefer \(R^{+\$1}\) to \(R^{+\ell }\), but you shouldn’t prefer L to \(R^{+\ell }\). Offhand, this seems like a good reason to reject this view in favor of one that disagrees with Prospectism’s recommendation in Vacation Boxes as well. However, those views have problems of their own. A full discussion of the strengths and weaknesses of the alternative views, though, is outside the scope of this paper.

References

  • Bales, A., Cohen, D., & Handfield, T. (2014). Decision theory of agents with incomplete preferences. Australasian Journal of Philosophy, 92(3), 453–470.

  • Buchak, L. (2013). Risk and rationality. Oxford: Oxford University Press.

  • Chang, R. (2002). The possibility of parity. Ethics, 112(4), 659–688.

  • Dubra, J., Maccheroni, F., & Ok, E. (2004). Expected utility theory without the completeness axiom. Journal of Economic Theory, 115(1), 118–133.

  • Evren, O., & Ok, E. (2011). On the multi-utility representation of preference relations. Journal of Mathematical Economics, 47, 554–563.

  • Galaabaatar, T., & Karni, E. (2013). Subjective expected utility theory with incomplete preferences. Econometrica, 81(1), 255–284.

  • Good, I. J. (1952). Rational decisions. Journal of the Royal Statistical Society, B, 14, 107–114.

  • Hare, C. (2010). Take the sugar. Analysis, 70(2), 237–247.

  • Hare, C. (2013). The limits of kindness. Oxford: Oxford University Press.

  • Joyce, J. (2010). A defense of imprecise credences in inference and decision making. Philosophical Perspectives, 24(1), 281–323.

  • Levi, I. (1985). Imprecision and indeterminacy in probability judgment. Philosophy of Science, 36, 331–340.

  • Levi, I. (1986). Hard choices. Cambridge: Cambridge University Press.

  • Levi, I. (2008). Why rational agents should not be liberal maximizers. Canadian Journal of Philosophy, 38(Supplementary Vol. 34), 1–17.

  • Nozick, R. (1969). Newcomb’s problem and two principles of choice. In N. Rescher, et al. (Eds.), Essays in honor of Carl G. Hempel. Kufstein: Reidel.

  • Ok, E., Ortoleva, P., & Riella, G. (2012). Incomplete preferences under uncertainty: Indecisiveness in beliefs versus tastes. Econometrica, 80(4), 1791–1808.

  • Rabinowicz, W. (2016). Incommensurability meets risk. Unpublished manuscript.

  • Rinard, S. (2015). A decision theory for imprecise credences. Philosopher’s Imprint, 15(7), 1–16.

  • Schoenfield, M. (2014). Decision making in the face of parity. Philosophical Perspectives, 28(1), 263–277.

  • Sen, A. (2004). Incompleteness and reasoned choice. Synthese, 140(1/2), 43–59.

  • Weatherson, B. (2008). Decision making with imprecise probabilities. Unpublished manuscript.

  • Weirich, P. (2004). Realistic decision theory: Rules for nonideal agents in nonideal circumstances. Oxford: Oxford University Press.


Acknowledgements

For extremely helpful comments and suggestions, very special thanks to Caspar Hare and Agustin Rayo. Thanks as well to two anonymous referees.

Author information

Correspondence to Ryan Doody.


Cite this article

Doody, R. Parity, prospects, and predominance. Philos Stud 176, 1077–1095 (2019). https://doi.org/10.1007/s11098-018-1048-0
