
Rational monism and rational pluralism

Published in Philosophical Studies.

Abstract

Consequentialists often assume rational monism: the thesis that options are always made rationally permissible by the maximization of the selfsame quantity. This essay argues that consequentialists should reject rational monism and instead accept rational pluralism: the thesis that, on different occasions, options are made rationally permissible by the maximization of different quantities. The essay then develops a systematic form of rational pluralism which, unlike its rivals, is capable of handling both the Newcomb problems that challenge evidential decision theory and the unstable problems that challenge causal decision theory.


Notes

  1. See e.g. Bentham (1961[1789]), Mill (1988[1861]), Moore (1903, 1912), and Ramsey (1990[1926]).

  2. The claim that there are both objective and rational permissions is not entirely uncontroversial; see e.g. Kolodny and MacFarlane (2010) and Thomson (2008).

  3. For reasons discussed in Sect. 5.3, it is the stable maximization, not the mere maximization, of a quantity that makes options rationally permissible.

  4. See e.g. Hammond (1988), Joyce (1999, 2012, 2018), Lewis (1981), von Neumann and Morgenstern (1944), Pettigrew (2015), Ramsey (1990[1926]), Savage (1954), Skyrms (1982, 1984, 1990), Sobel (1994), and Stalnaker (1981).

  5. See e.g. Ahmed (2014a), Eells (1982), and Jeffrey (1965, 1983).

  6. See e.g. Rawls (1971).

  7. See e.g. Buchak (2013).

  8. There are other rational pluralists; see e.g. Weirich (1988, 2004).

  9. For more on the dispute between instrumental and realist views of credences, see e.g. Eriksson and Hájek (2007), List and Dietrich (2016), and Pettigrew (2019).

  10. Cf. Lewis (1981) and Skyrms (1982).

  11. This characterization assumes that we have set nonideal agents aside.

  12. Other discussions of Newcomb problems and/or unstable problems include: Ahmed (2012, 2014a, b), Arntzenius (2008), Bales (2018), Bassett (2015), Briggs (2010), Eells (1982), Eells and Harper (1991), Gallow (2020), Gibbard and Harper (1978), Gustafsson (2011), Hare and Hedden (2016), Horgan (1981), Hunter and Richter (1978), Jeffrey (1983), Joyce (1999, 2012, 2018), Lewis (1981), Nozick (1969), Oddie and Menzies (1992), Rabinowicz (1988, 1989), Skyrms (1982, 1984, 1990), Spencer and Wells (2019), Stalnaker (1981), Wedgwood (2013), Weirich (1985, 1988, 2004), and Wells (2019).

  13. For a defense of two-boxing, see Spencer and Wells (2019).

  14. This example is from Spencer and Wells (2019: 34). It’s assumed that the agent is unable to randomize their choice.

  15. This example is from Spencer and Wells (2019: 35).

  16. The two relevant dependency hypotheses are \(k_W\), which says that the white box contains $100, and \(k_B\), which says that the black box contains $100, and: \(u(a_{LW}k_W) = 100 < 105 = u(a_{RW}k_W)\); \(u(a_{LW}k_B) = 0 < 5 = u(a_{RW}k_B)\); \(u(a_{LB}k_W) = 0 < 5 = u(a_{RB}k_W)\); and \(u(a_{LB}k_B) = 100 < 105 = u(a_{RB}k_B)\).
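The note's eight utilities can be tabulated and the state-wise dominance checked mechanically (a sketch; the dictionary encoding is mine, with option labels following the note's subscripts):

```python
# Payoff table for The Semi-Frustrater (note 16).
# Options: L/R = left-/right-handed pointing, W/B = white/black box.
# States: k_W (white box holds $100), k_B (black box holds $100).
# Pointing right-handedly adds a $5 bonus.
utilities = {
    ("a_LW", "k_W"): 100, ("a_RW", "k_W"): 105,
    ("a_LW", "k_B"): 0,   ("a_RW", "k_B"): 5,
    ("a_LB", "k_W"): 0,   ("a_RB", "k_W"): 5,
    ("a_LB", "k_B"): 100, ("a_RB", "k_B"): 105,
}

# In each state, a right-handed option beats its left-handed twin by
# exactly $5 -- the state-wise dominance the inequalities record.
for box in ("W", "B"):
    for k in ("k_W", "k_B"):
        assert utilities[("a_R" + box, k)] == utilities[("a_L" + box, k)] + 5
```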

  17. For an inveterate defense of V-monism, see e.g. Ahmed (2014a). For an inveterate defense of U-monism, see e.g. Harper (1986) and Joyce (2012, 2018).

  18. The proof draws on and is heavily indebted to Ahmed (2012).

  19. There is a delicate question about when an option survives a restriction. I am inclined toward what J. Dmitri Gallow calls the simple view: that option \(a_n\) survives the elimination of some options just if some post-restriction option, \(a_m\), is such that, for any \(k \in K\), \(u(a_nk) = u(a_mk)\) and \(C(k|a_n) = C(k|a_m)\). One might wonder whether there are any decision problems that witness K-Selection and also permit restriction. I am convinced that there are. One (rather complicated) example is a variation on The Semi-Frustrater.

    As before, there is a white box and a black box. One contains $100; the other contains $0. The agent gets the contents of the box they point to, plus $5 if they point right-handedly. It has not been settled whether the agent will have four options, being able to point to either box with either hand, or just two options, being forced to point either left-handedly or right-handedly to a selected box. The Semi-Frustrater has made two predictions. They predicted which box the agent would point to if choosing from four options, placing $100 in the box they predicted the agent would not point to, and they also predicted whether the agent would point left-handedly or right-handedly if choosing from two options. If they predicted that the agent would point left-handedly if choosing from two options, then they flipped a fair coin and selected the box that contains $100 if the coin landed heads. If they predicted that the agent would point right-handedly, then they selected the box that contains $0. The Semi-Frustrater’s predictions about whether an agent would point left-handedly or right-handedly to a selected box are 90%-reliable, whichever option is chosen. The agent knows all of this. Moreover, as the agent will learn if and only if they end up having two options, the white box has been selected.

    If Q-monism is true, then, relative to the four-option decision problem, \(Q(a_{LW}) > Q(a_{RW})\). Eliminating the options that correspond to the black box leaves \(u(a_{RW}k)\), \(u(a_{LW}k)\), \(C(k|a_{RW})\), and \(C(k|a_{LW})\) the same, for every \(k \in K\), so the options corresponding to the white box survive the restriction. If Q is independent, then \(Q(a_{LW}) > Q(a_{RW})\) relative to the two-option decision problem. But K-Selection entails that \(a_{RW}\) is the only rationally permissible option relative to the two-option decision problem.
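The simple view of survival mentioned above can be rendered as a small predicate (a sketch; the encoding of options as rows of utilities and conditional credences, and the illustrative 0.5/0.5 credences, are my own):

```python
# Note 19's "simple view": option a_n survives the elimination of some
# options just if some post-restriction option a_m agrees with a_n on
# u(a.k) and C(k|a.) for every dependency hypothesis k.
def survives(a_n, remaining, u, C, states):
    return any(
        all(u[a_n][k] == u[a_m][k] and C[a_n][k] == C[a_m][k] for k in states)
        for a_m in remaining
    )

# Toy illustration: eliminating the black-box options leaves the
# white-box options' utility and credence rows untouched, so the
# white-box options survive (here trivially, by agreeing with themselves).
states = ["k_W", "k_B"]
u = {"a_LW": {"k_W": 100, "k_B": 0}, "a_RW": {"k_W": 105, "k_B": 5}}
C = {"a_LW": {"k_W": 0.5, "k_B": 0.5}, "a_RW": {"k_W": 0.5, "k_B": 0.5}}
assert survives("a_LW", ["a_LW", "a_RW"], u, C, states)
```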

  20. Cf. Sen (1970).

  21. Cf. Jeffrey (1983).

  22. We can also formulate V-ratificationism as a form of rational pluralism. The monistic and pluralistic formulations of V-ratificationism do not differ extensionally, but they differ metaethically—see Sect. 4.3.

  23. If an option strictly K-dominates every other option, then it is the only ratifiable option.

  24. One dependent monism on offer is defended by Wedgwood (2013). Wedgwood defends B-monism, where \(B(a) = \sum _K ( C(k|a)(u(ak) - \frac{\sum _A u(ak)}{\#A}))\). Three Shells is a counterexample to B-monism. In Three Shells, no matter what the agent’s credences are:

    $$\begin{aligned} B(a_A)&= \sum _K \left( C(k|a_A)\left( u(a_Ak) - \frac{\sum _A u(ak)}{\#A}\right) \right) \approx (1)\left( 5 - \frac{5}{3}\right) + (0)\left( 0 - \frac{19}{3}\right) + (0)\left( 0 - \frac{19}{3}\right) = \frac{10}{3};\\ B(a_B)&= \sum _K \left( C(k|a_B)\left( u(a_Bk) - \frac{\sum _A u(ak)}{\#A}\right) \right) \approx (0)\left( 0 - \frac{5}{3}\right) + (1)\left( 9 - \frac{19}{3}\right) + (0)\left( 10 - \frac{19}{3}\right) = \frac{8}{3}; \text { and}\\ B(a_C)&= \sum _K \left( C(k|a_C)\left( u(a_Ck) - \frac{\sum _A u(ak)}{\#A}\right) \right) \approx (0)\left( 0 - \frac{5}{3}\right) + (0)\left( 10 - \frac{19}{3}\right) + (1)\left( 9 - \frac{19}{3}\right) = \frac{8}{3}. \end{aligned}$$

    Gallow (2020) defends a different form of dependent monism. I have argued elsewhere that Gallow's preferred form of dependent monism also admits of counterexamples.
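The arithmetic in these displays can be verified with exact rationals. The 3×3 payoff matrix below is my reconstruction from the displayed sums (the column averages \(\frac{5}{3}\), \(\frac{19}{3}\), \(\frac{19}{3}\)), and the near-perfect conditional credences are idealized to 0s and 1s:

```python
from fractions import Fraction as F

# Reconstructed Three Shells payoffs (rows: options a_A, a_B, a_C;
# columns: three dependency hypotheses). Inferred from the column
# averages 5/3, 19/3, 19/3 that appear in the displayed sums.
u = {
    "a_A": [F(5), F(0), F(0)],
    "a_B": [F(0), F(9), F(10)],
    "a_C": [F(0), F(10), F(9)],
}
# Idealized conditional credences: C(k_i | a_i) = 1.
cred = {"a_A": [1, 0, 0], "a_B": [0, 1, 0], "a_C": [0, 0, 1]}

def B(a):
    """Wedgwood-style value: sum_K C(k|a) * (u(a,k) - avg over A of u(.,k))."""
    n = len(u)
    return sum(
        c * (u[a][k] - sum(u[x][k] for x in u) / n)
        for k, c in enumerate(cred[a])
    )

print(B("a_A"), B("a_B"), B("a_C"))  # 10/3 8/3 8/3
```

So B ranks \(a_A\) strictly above \(a_B\) and \(a_C\), which is the verdict the counterexample turns on.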

  25. The most sustained argument against optimism is Briggs’ (2010). Briggs argues that any adequate decision theory must verify two principles—a Pareto principle and a self-sovereignty principle—and then proves that no decision theory can verify both. I think that an adequate decision theory must falsify both: the Pareto principle is refuted by The Semi-Frustrater, and the self-sovereignty principle is refuted by Three Shells.

  26. Some representative quotations:

    The fundamental source for the normative force of expected utility theory lies in what are known as representation theorems... (Bermúdez 2009: 30).

    The standard method for justifying any version of expected utility theory involves proving a representation theorem... (Joyce 1999: 4).

    For an alternative way of trying to answer the problem of consequentialist credentials, see Hammond (1988).

  27. For an orthogonal critique of using representation theorems to answer the problem of consequentialist credentials, see Meacham and Weisberg (2011).

  28. The representation theorem at best establishes a claim of coextensionality, so those with reductive ambitions must combine the representation theorem with some additional principles. Representation theorems do not play any role in my preferred way of solving the problem of consequentialist credentials, so I won’t hazard a guess at what the additional principles might be.

  29. Familiar representation theorems have been monistic; they aim to arrive at a representing quantity. But as a helpful reviewer points out, one could, in principle, attempt to prove a pluralistic representation theorem, which purports to show that rational agents act as if they maximize one quantity in one sort of circumstance and act as if they maximize another quantity in another sort of circumstance. It would be interesting to explore the prospects of pluralistic representation theorems, but I shall not do so here.

  30. The familiar representation theorems include: Bolker (1967), Buchak (2013), Joyce (1999), von Neumann and Morgenstern (1944), and Savage (1954).

  31. The idea that we should be scoring quantities (or programs) and optimizing subject to some constraint has been a mainstay of work in bounded rationality, especially in computer science; see e.g. Halpern et al. (2014), Icard (2018), and Russell and Subramanian (1995).

  32. For example, I have not yet been able to find any (remotely plausible) way of d-scoring quantities that (a) entails that V weakly D-dominates U and (b) does not entail that the d-score of V sometimes exceeds the d-score of actual value. A method of d-scoring quantities cannot be adequate unless it ensures that the d-score of actual value is never exceeded, so this amounts to an outstanding challenge to V-enthusiasts.

  33. Proof: Let \(\alpha\) be the quantity maximized by exactly the actual value maximizing options at every \(\langle w,d \rangle\). For any quantity Q and any \(\langle w,d \rangle\), \(@(\alpha ,w,d) \ge @(Q,w,d)\), since the average actual value of the options that maximize actual value at \(\langle w,d \rangle\) cannot be less than the average actual value of the options that maximize Q relative to \(\langle w,d \rangle\). Hence, for any d, \(S(\alpha ,d) \ge S(Q,d)\).

  34. If d is some particular decision problem, then \(Q_1\) is d-coincident with \(Q_2\) if, for each \(\langle w, d \rangle\), \(Max(Q_1,w,d) = Max(Q_2,w,d)\). For any quantity Q and any decision d, there are uncountably many quantities that are d-coincident with Q, and every quantity that is d-coincident with Q has the same d-score as Q does.

  35. Or anyway, much of what sets them apart is score. We may also want to impose some formal conditions, like continuity.

  36. A similar conception of guidance is defended in Spencer and Wells (2019: 38–40).

  37. Condition (2) is akin to, but not quite equivalent to, a principle that Hare (2011: 196) calls “Reasons are not Self-Undermining.”

  38. See Spencer and Wells (2019: 38–40).

  39. Note an important distinction here. What condition (2) requires is that it be true and certain relative to \(C^a\) that a maximizes the quantity relative to \(C^a\), not that it be true and certain relative to \(C^a\) that a maximizes the quantity relative to C. Thanks to Arif Ahmed for discussion here.

  40. If the Meta-Frustrater is perfect, then in both examples:

    \(V(a_{RW}) = V(a_{RB}) = (105)(0.1) + (5)(0.9) = 15;\) and

    \(V(a_{LW}) = V(a_{LB}) = (100)(0.5) + (0)(0.5) = 50.\)

    The U-values are sensitive to the agent’s credences over A. If, for example, \(C(a_{RW}) = C(a_{RB}) = C(a_{LW}) = C(a_{LB}) = 0.25\), then, in both examples:

    \(U(a_{RW}) = U(a_{RB}) = (105)(0.5) + (5)(0.5) = 55;\) and

    \(U(a_{LW}) = U(a_{LB}) = (100)(0.5) + (0)(0.5) = 50.\)
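The note's arithmetic can be replayed directly with exact rationals (a sketch; the variable names are mine):

```python
from fractions import Fraction as F

# Note 40's V- and U-values. V weights the $105/$5 outcomes by the
# conditional credences induced by the 90%-reliable minions (0.1/0.9);
# U weights them by the unconditional 0.5/0.5 state credences induced
# by the uniform credence over A.
V_right = 105 * F(1, 10) + 5 * F(9, 10)   # V(a_RW) = V(a_RB)
V_left = 100 * F(1, 2) + 0 * F(1, 2)      # V(a_LW) = V(a_LB)
U_right = 105 * F(1, 2) + 5 * F(1, 2)     # U(a_RW) = U(a_RB)
U_left = 100 * F(1, 2) + 0 * F(1, 2)      # U(a_LW) = U(a_LB)

assert (V_right, V_left) == (15, 50)   # V favors the left-handed options
assert (U_right, U_left) == (55, 50)   # U favors the right-handed options
```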

  41. Cf. Lewis (1981).

  42. See e.g. Ahmed (2014a), Joyce (1999), and Lewis (1981).

  43. An option together with the laws and past, insofar as those are beyond the agent’s control, does not always suffice to determine the utility of the world; for sometimes we must also specify parts of the present or future that are beyond the agent’s control. (For example, one can construct a variant of The Frustrater where the Frustrater’s prediction is not causally affected by the agent’s choice but occurs after the agent’s choice. The utility of a world depends on the Frustrater’s prediction, but an option together with the past and laws, insofar as those are beyond the agent’s control, might not settle what the Frustrater predicted, and thus might not settle the utility of the world.) In such cases, we should identify K, not with \(\{lh_1, lh_2, ..., lh_n \}\), but instead with \(\{lhf_1, lhf_2, ..., lhf_m \}\), where each lhf specifies the laws, past, present, and future, insofar as those are beyond the agent’s control. If we identify K with \(\{lhf_1, lhf_2, ..., lhf_m \}\), we gradually coarsen in the same way, removing time-slices by their temporal order, producing ever shorter initial segments, and then finally removing the laws themselves. Thanks to a helpful reviewer for pressing me on this point.

  44. One alternative I find attractive is purely modal. Each fact is assigned some counterfactual fixity, à la Kment (2014), and we gradually coarsen by progressively removing the facts with the least counterfactual fixity. This purely modal characterization of K is harder to work with, but probably superior.

  45. Assuming that the agent is certain that the prediction was made instantaneously j units prior to the time of decision makes the metaethical transition sudden. For every \(U^i \prec U^j\), \(U^i(a_A) + U^i(a_B) = 100\). And, for every \(U^k \succeq U^j\), \(U^k(a_A) + U^k(a_B) \approx 0\). If we drop the assumption that the agent is certain that the prediction was made j units prior to the time of decision, the metaethical transition might instead be gradual. If the decrease is gradual, then emphasizing stable maximization might be important. In a version of The Frustrater in which the agent is uncertain when the prediction was made, it may be the case that the least member of U that is stably maximized, say, \(U^j\), is maximized both by, say, \(a_A\) and \(a_E\). This sort of co-maximization would not make \(a_A\) rationally permissible, however, because \(a_A\) will not stably maximize \(U^j\). In fact, neither \(a_A\) nor \(a_B\) stably maximizes any member of U. If \(a_A\) maximizes some \(U^j\), then \(\sum C(k^j|a_A)u(a_Ak^j) < \sum C(k^j|a_A)u(a_Bk^j)\), since the agent will then regard \(a_A\) as evidence in favor of \(a_B\)-friendly \(k^j\)s. But the co-maximization would make \(a_E\) rationally permissible, since \(a_E\) stably maximizes the least member of U that is stably maximized, whatever that proves to be.

  46. There is one added complication. As Bernhard Salow pointed out to me, according to U-pluralism as formulated, it is essential that the Meta-Frustrater makes his prediction before the minions do. If the minions make their prediction first, then the options that stably maximize the least member of U that is stably maximized will be the left-handed options. I am not sure whether this prediction is wrong. (Flipping the temporal order makes my intuitions less clear.) But when I am inclined to think that flipping the temporal order makes no normative difference, I am inclined, not to abandon U-pluralism, but to adopt an alternative conception of K. See note 44.

  47. This example, from Spencer and Wells (2019: 33–34), adapts an example from Kagan (2018). For related discussion, see e.g. Feldman (2006) and Weirich (2004).

  48. See e.g. Spencer and Wells (2019) and Weirich (2004).

  49. Much work on bounded rationality is similarly animated by a constrained optimization conception of rationality. See e.g. Bossaerts and Murawski (2017), Gigerenzer (2008), Griffiths et al. (2015), Griffiths and Tenenbaum (2006), Halpern et al. (2014), Icard (2018), Lorkowski and Kreinovich (2018), Paul and Quiggin (2018), Russell and Subramanian (1995), Simon (1956, 1957, 1983), Vul et al. (2014), and Weirich (1988, 2004).

  50. This paper has been long in the making. I can’t remember all of the people who have helped it along, but they include: Arif Ahmed, Sara Aronowitz, Adam Bales, D. Black, R. A. Briggs, Tyler Brooke-Wilson, David Builes, Nilanjan Das, Kevin Dorst, Kenny Easwaran, Branden Fitelson, J. Dmitri Gallow, Ned Hall, Caspar Hare, Brian Hedden, Wes Holliday, Michele Odisseas Impagnatiello, Boris Kment, Daniel Muñoz, L. A. Paul, Agustín Rayo, Bernhard Salow, Haley Schilling, Miriam Schoenfield, Ginger Schultheis, Robert Stalnaker, P. Quinn White, members of the Rutgers Formal Epistemology and Decision Theory Reading Group (2015), attendees of the Cambridge Decision Theory Workshop (2016), members of the Northeastern Philosophy Reading Group (2017), attendees of the Ranch Workshop (2019) and especially my commentator there, Josh Dever, a helpful reviewer at another journal, and an extraordinarily helpful reviewer at this journal. A special thanks to Marshall Louis Reaves for help with the computer simulation, and an extra special thanks to Ian Wells, who was my co-equal partner as these ideas started taking shape.

References

  • Ahmed, A. (2012). Press the button. Philosophy of Science, 79, 386–95.

  • Ahmed, A. (2014a). Evidence, decision and causality. Cambridge: Cambridge University Press.

  • Ahmed, A. (2014b). Dicing with death. Analysis, 74, 587–94.

  • Arntzenius, F. (2008). No regrets, or: Edith Piaf revamps decision theory. Erkenntnis, 68, 277–97.

  • Bales, A. (2018). Decision-theoretic pluralism. Philosophical Quarterly, 68, 801–18.

  • Bassett, R. (2015). A critique of benchmark theory. Synthese, 192, 241–67.

  • Bentham, J. (1961[1789]). An introduction to the principles of morals and legislation. Garden City: Doubleday.

  • Bermúdez, J. L. (2009). Decision theory and rationality. Oxford: Oxford University Press.

  • Bolker, E. D. (1967). A simultaneous axiomatisation of utility and subjective probability. Philosophy of Science, 34, 333–40.

  • Bossaerts, P., & Murawski, C. (2017). Computational complexity and human decision-making. Trends in Cognitive Sciences, 21, 917–29.

  • Bostrom, N. (2001). The meta-Newcomb problem. Analysis, 61, 309–10.

  • Briggs, R. A. (2010). Decision-theoretic paradoxes as voting paradoxes. Philosophical Review, 119, 1–30.

  • Buchak, L. (2013). Risk and rationality. Oxford: Oxford University Press.

  • Eells, E. (1982). Rational decision and causality. Cambridge: Cambridge University Press.

  • Eells, E., & Harper, W. (1991). Ratifiability, game theory, and the principle of independence of irrelevant alternatives. Australasian Journal of Philosophy, 69, 1–19.

  • Egan, A. (2007). Some counterexamples to causal decision theory. Philosophical Review, 116, 94–114.

  • Eriksson, L., & Hájek, A. (2007). What are degrees of belief? Studia Logica, 86, 183–213.

  • Feldman, F. (2006). Actual utility, the objection from impracticality, and the move to expected utility. Philosophical Studies, 129, 49–79.

  • Gallow, J. D. (2020). The causal decision theorist’s guide to managing the news. Journal of Philosophy, 117, 117–49.

  • Gibbard, A., & Harper, W. (1978). Counterfactuals and two kinds of expected utility. In A. Hooker, J. J. Leach, & E. F. McClennen (Eds.), Foundations and applications of decision theory (pp. 125–162). Dordrecht: Reidel.

  • Gigerenzer, G. (2008). Rationality for mortals: How people cope with uncertainty. Oxford: Oxford University Press.

  • Griffiths, T. L., Lieder, F., & Goodman, N. D. (2015). Rational use of cognitive resources: Levels of analysis between the computational and algorithmic. Topics in Cognitive Science, 7, 217–29.

  • Griffiths, T. L., & Tenenbaum, J. B. (2006). Optimal predictions in everyday cognition. Psychological Science, 17, 767–73.

  • Gustafsson, J. (2011). A note in defense of ratificationism. Erkenntnis, 75, 147–50.

  • Halpern, J. Y., Pass, R., & Seeman, L. (2014). Decision theory with resource-bounded agents. Topics in Cognitive Science, 6, 245–57.

  • Hammond, P. J. (1988). Consequentialist foundations for expected utility theory. Theory and Decision, 25, 25–78.

  • Hare, C. (2011). Obligation and regret when there is no fact of the matter about what would have happened if you had not done what you did. Noûs, 45, 190–206.

  • Hare, C., & Hedden, B. (2016). Self-reinforcing and self-frustrating decisions. Noûs, 50, 604–28.

  • Harper, W. (1986). Mixed strategies and ratifiability in causal decision theory. Erkenntnis, 24, 25–36.

  • Horgan, T. (1981). Counterfactuals and Newcomb’s problem. Journal of Philosophy, 78, 331–56.

  • Hunter, D., & Richter, R. (1978). Counterfactuals and Newcomb’s paradox. Synthese, 39, 249–61.

  • Icard, T. (2018). Bayes, bounds, and rational analysis. Philosophy of Science, 85, 79–101.

  • Jeffrey, R. (1965). The logic of decision. Chicago: University of Chicago Press.

  • Jeffrey, R. (1983). The logic of decision (2nd ed.). Chicago: University of Chicago Press.

  • Joyce, J. (1999). The foundations of causal decision theory. Cambridge: Cambridge University Press.

  • Joyce, J. (2012). Regret and stability in causal decision theory. Synthese, 187, 123–45.

  • Joyce, J. (2018). Deliberation and stability in Newcomb problems and pseudo-Newcomb problems. In A. Ahmed (Ed.), Newcomb’s problem (pp. 138–59). Cambridge: Cambridge University Press.

  • Kagan, S. (2018). The paradox of methods. Politics, Philosophy, and Economics, 17, 148–68.

  • Kment, B. (2014). Modality and explanatory reasoning. Oxford: Oxford University Press.

  • Kolodny, N., & MacFarlane, J. (2010). Ifs and oughts. Journal of Philosophy, 107, 115–43.

  • Lewis, D. (1981). Causal decision theory. Australasian Journal of Philosophy, 59, 5–30.

  • List, C., & Dietrich, F. (2016). Mentalism versus behaviorism in economics: A philosophy-of-science perspective. Economics and Philosophy, 32, 249–81.

  • Lorkowski, J., & Kreinovich, V. (2018). Bounded rationality in decision making under uncertainty: Towards optimal granularity. Berlin: Springer.

  • Meacham, C. J. G., & Weisberg, J. (2011). Representation theorems and the foundations of decision theory. Australasian Journal of Philosophy, 89, 641–63.

  • Mill, J. S. (1988[1861]). Utilitarianism, R. Crisp (Ed.), Oxford: Oxford University Press.

  • Moore, G. E. (1903). Principia ethica. Cambridge: Cambridge University Press.

  • Moore, G. E. (1912). Ethics. Oxford: Oxford University Press.

  • Nozick, R. (1969). Newcomb’s problem and two principles of choice. In N. Rescher (Ed.), Essays in honor of Carl G. Hempel (pp. 114–46). Dordrecht: Reidel.

  • Oddie, G., & Menzies, P. (1992). An objectivist’s guide to subjective value. Ethics, 102, 512–33.

  • Paul, L. A., & Quiggin, J. (2018). Real world problems. Episteme, 15, 363–82.

  • Pettigrew, R. (2015). Risk, rationality, and expected utility theory. Canadian Journal of Philosophy, 47, 798–826.

  • Pettigrew, R. (2019). Choosing for changing selves. Oxford: Oxford University Press.

  • Rabinowicz, W. (1988). Ratifiability and stability. In P. Gärdenfors & N. Sahlin (Eds.), Decision, probability, and utility (pp. 406–25). Cambridge: Cambridge University Press.

  • Rabinowicz, W. (1989). Stable and retrievable options. Philosophy of Science, 56, 624–41.

  • Ramsey, F. P. (1990[1926]). Truth and probability. In D. H. Mellor (Ed.), Philosophical papers. Cambridge: Cambridge University Press.

  • Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

  • Russell, S. J., & Subramanian, D. (1995). Provably bounded-optimal agents. Journal of Artificial Intelligence Research, 2, 575–609.

  • Savage, L. J. (1954). The foundations of statistics. New York: Wiley.

  • Sen, A. (1970). Collective choice and social welfare. San Francisco: Holden-Day.

  • Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129–38.

  • Simon, H. A. (1957). Models of man. New York: Wiley.

  • Simon, H. A. (1983). Reason in human affairs. Redwood City: Stanford University Press.

  • Skyrms, B. (1982). Causal decision theory. Journal of Philosophy, 79, 695–711.

  • Skyrms, B. (1984). Pragmatics and empiricism. New Haven: Yale University Press.

  • Skyrms, B. (1990). The dynamics of rational deliberation. Cambridge, MA: Harvard University Press.

  • Sobel, J. H. (1994). Taking chances: Essays on rational choice. Cambridge: Cambridge University Press.

  • Spencer, J. (forthcoming). An argument against causal decision theory. Analysis.

  • Spencer, J., & Wells, I. (2019). Why take both boxes? Philosophy and Phenomenological Research, 99, 27–48.

  • Stalnaker, R. (1981). Letter to David Lewis of 21 May 1972. In W. Harper, R. Stalnaker, & G. Pearce (Eds.), Ifs: Conditionals, belief, decision, chance and time (pp. 151–153). Dordrecht: Reidel.

  • Thomson, J. J. (2008). Normativity. Chicago: Open Court.

  • von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton: Princeton University Press.

  • Vul, E., Goodman, N. D., Griffiths, T. L., & Tenenbaum, J. B. (2014). One and done? Optimal decisions from very few samples. Cognitive Science, 38, 599–637.

  • Wedgwood, R. (2013). Gandalf’s solution to the Newcomb problem. Synthese, 190, 2643–75.

  • Weirich, P. (1985). Decision instability. Australasian Journal of Philosophy, 63, 465–72.

  • Weirich, P. (1988). Hierarchical maximization of two kinds of expected utility. Philosophy of Science, 55, 560–82.

  • Weirich, P. (2004). Realistic decision theory: Rules for nonideal agents in nonideal circumstances. Oxford: Oxford University Press.

  • Wells, I. (2019). Equal opportunity and Newcomb’s problem. Mind, 128, 429–57.

Author information

Correspondence to Jack Spencer.

Appendix: Proof of Supervenient Optimality

The proof of Supervenient Optimality has two parts. First, we show that U weakly D-dominates any supervenient quantity that diverges from U. Then we show that any quantity that is distinct from U, but does not diverge from U, violates a plausible continuity constraint.

If \(Max(Q,w,d) \not \subseteq Max(U,w,d)\) for some point \(\langle w,d \rangle\), I will say that there is a point of divergence between Q and U. If Q is supervenient and there is a point of divergence between Q and U, then U weakly D-dominates Q. After all, suppose that \(\langle w,d \rangle\) is a point of divergence between Q and U. Since Q and U are both supervenient, the d-score of Q is the average of the U-values of the options in \(Max(Q,w,d)\), and the d-score of U is the average of the U-values of the options in \(Max(U,w,d)\). Hence, since at least one member of \(Max(Q,w,d)\) fails to maximize U relative to d, the average of the U-values of the options in \(Max(Q,w,d)\) is strictly less than the average of the U-values of the options in \(Max(U,w,d)\). Hence, \(S(Q,d) < S(U,d)\). Moreover, if Q is supervenient, then, for any d, \(S(Q,d) \le S(U,d)\). So it follows that U weakly D-dominates Q. And given my assumption that the ordinal rankings of quantities respect relations of weak D-domination, it follows that U scores higher than does Q.

The supervenient quantities that are distinct from U, but score as highly as U, are subset quantities: quantities that are always maximized by U-maximizing options, but not always maximized by every U-maximizing option. (Think, for example, about the quantity that corresponds to being the leftmost U-maximizing option.) But subset quantities violate an intuitively plausible continuity constraint. If u is a utility function and \(u(w)=x\), then let \(u^{w,\epsilon }\) and \(u^{w,-\epsilon }\) be utility functions that are exactly like u, except that \(u^{w,\epsilon }(w) = x+ \epsilon\) and \(u^{w,-\epsilon }(w) = x- \epsilon\). If \(d = \langle C,u,A,K \rangle\), then let \(d^{w,\epsilon } = \langle C,u^{w,\epsilon },A,K \rangle\) and let \(d^{w,-\epsilon } = \langle C,u^{w,-\epsilon },A,K \rangle\). The relevant continuity constraint can then be stated as follows:

Utility Continuity. If \(a \notin Max(Q,w,d)\), then, for any world \(w_i\), there is some \(\epsilon\) such that \(a \notin Max(Q,w,d^{w_i,\epsilon })\) and \(a \notin Max(Q,w,d^{w_i,-\epsilon })\).

In effect, Utility Continuity says that small changes to utilities assigned to any particular world should precipitate only small changes in the values that a quantity assigns to options.

To see that every subset quantity violates Utility Continuity, suppose that Q is a subset quantity, and suppose that a is among the options that maximize U at \(\langle w,d \rangle\), but not among the options that maximize Q at \(\langle w,d \rangle\). There will then be some a-world, \(w_i\), to which the credence function in d assigns nonzero probability, such that increasing its utility while keeping the utility of every other world the same increases the U-value of a but does not increase the U-value of any other option in A. So, for any \(\epsilon\), a uniquely maximizes U at \(\langle w,d^{w_i,\epsilon } \rangle\). Since Q is a subset quantity, a also uniquely maximizes Q at \(\langle w,d^{w_i,\epsilon } \rangle\). But that shows that Q violates Utility Continuity.

Thus, Supervenient Optimality holds: U is the highest-scoring supervenient quantity that satisfies Utility Continuity.
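The discontinuity the proof exploits can be seen in a two-option toy model (entirely my own construction): Q is the "leftmost U-maximizer" subset quantity, and a utility bump of any positive size, however small, flips the Q-verdict.

```python
# Toy illustration of the appendix's argument. Options a1 and a2 tie on
# U at the original point, so the subset quantity Q (leftmost
# U-maximizer) excludes a2. But bumping the utility of an a2-world by
# ANY epsilon > 0 makes a2 the unique U-maximizer, hence the unique
# Q-maximizer: no epsilon keeps a2 outside Max(Q), so Q violates
# Utility Continuity.
def U(option, bump=0.0):
    # Hypothetical setup: both options have expected utility 10;
    # `bump` models raising the utility of an a2-world.
    base = {"a1": 10.0, "a2": 10.0}
    return base[option] + (bump if option == "a2" else 0.0)

def max_U(bump=0.0):
    vals = {a: U(a, bump) for a in ("a1", "a2")}
    top = max(vals.values())
    return [a for a in ("a1", "a2") if vals[a] == top]

def max_Q(bump=0.0):
    return [max_U(bump)[0]]  # the leftmost U-maximizer only

assert max_Q() == ["a1"]           # at the tie, a2 is not Q-maximal...
for eps in (1.0, 1e-3, 1e-9):
    assert max_Q(eps) == ["a2"]    # ...but it is, for every positive bump
```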


Cite this article

Spencer, J. Rational monism and rational pluralism. Philos Stud 178, 1769–1800 (2021). https://doi.org/10.1007/s11098-020-01509-9
