
Expected choiceworthiness and fanaticism

Abstract

Maximize Expected Choiceworthiness (MEC) is a theory of decision-making under moral uncertainty. It says that we ought to handle moral uncertainty in the way that Expected Value Theory (EVT) handles descriptive uncertainty. MEC inherits from EVT the problem of fanaticism. Roughly, a decision theory is fanatical when it requires our decision-making to be dominated by low-probability, high-payoff options. Proponents of MEC have offered two main lines of response. The first is that MEC should simply import whatever are the best solutions to fanaticism on offer in decision theory. The second is to propose statistical normalization as a novel solution on behalf of MEC. This paper argues that the first response is open to serious doubt and that the second response fails. As a result, MEC appears significantly less plausible when compared to competing accounts of decision-making under moral uncertainty, which are not fanatical.

Notes

  1. Moral uncertainty is a proper subset of normative uncertainty. We can also be normatively uncertain about epistemic rationality, prudence, etc.

  2. See also MacAskill and Ord (2020). For other iterations of MEC, see Lockhart (2000), Ross (2006), Sepielli (2009), and Wedgwood (2013). Carr (2020) proposes a variant of MEC that does not depend on intertheoretic value comparisons, which are explained presently in the main text. Riedener (2020) argues that we should handle axiological uncertainty in the way that MEC handles deontic uncertainty.

  3. MacAskill, Bykvist, and Ord remain neutral on whether the ought of moral uncertainty is “moral (second-order), rational, virtue ethical or something else” (2020a: 30). In contrast, MacAskill and Ord say that it’s a rational ought and suggest that this is the “established view in the literature” (2020: 350, fn. 11). I follow MacAskill, Bykvist, and Ord in remaining neutral, though I flag here that the nature of the ought of moral uncertainty will become relevant in §3.

  4. MacAskill, Bykvist, and Ord remain neutral on whether the relevant probabilities are the agent’s actual credences (i.e., their subjective probabilities) or their epistemic credences (i.e., the credences that they should have, given their evidence) (2020a: 4). I follow them in remaining neutral, though I note that they later appeal to credences that one “should have,” indicating that they have epistemic credences in mind, at least at that stage in their argument (2020a: 152).

  5. MacAskill et al. (2020a: 133, 145).

  6. MacAskill et al. (2020a: 133).

  7. MacAskill et al. (2020a: 47-48).

  8. Here I follow Wilkinson (2023: 626-28) in distinguishing Expected Value Theory from orthodox EUT.

  9. Wilkinson (2023: 627-28). I have omitted a footnote from the quoted text.

  10. However, MacAskill et al. (2020a: 48, fn. 15) express their openness to decision theories that depart from risk neutrality. I therefore explore in §3 two leading theories that do so.

  11. On which see Wilkinson (2022), Beckstead and Thomas (2023), and Russell (2023).

  12. Due to Bostrom (2009).

  13. The phrase ‘infinitarian paralysis’ is due to Bostrom (2011).

  14. Cf. Hájek’s (2003) insight, in response to Pascal’s Wager, that by Pascal’s lights, every option seemingly has infinite expected utility, because every option is associated with some nonzero probability of resulting in theistic belief.

  15. For discussion and more precise formulation of approaches in this vein, see Schlesinger (1994), Nover and Hájek (2004), and Bostrom (2011: 35-36). Although this patch succeeds at blocking paralysis, it remains implausible. Imagine that you must choose exactly one of three prospects. The first and second offer some tiny probability of ∞, an even tinier probability of -∞, and a near certainty of an enormous, though finite, quantity of extreme suffering. The third is a certainty of an astronomical, though finite, quantity of the summum bonum (whatever it is). The decision rule we’re considering will require you to choose one of the first two prospects—whichever has the greater (p(∞) − p(-∞))—because it doesn’t care about finite prospects when infinite prospects are on the table. This is intuitively intolerable.
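To make the rule under discussion concrete, here is a minimal Python sketch under an illustrative formalization of my own: a prospect is a list of (probability, payoff) pairs, any prospect involving ±∞ is ranked solely by p(∞) − p(-∞), and purely finite prospects are ranked by their ordinary expectation. The specific probabilities and magnitudes below are placeholders, not figures from the text.

```python
import math

def rank_key(prospect):
    """Ranking key for the 'p(inf) - p(-inf)' patch discussed in note 15.

    A prospect is a list of (probability, payoff) pairs, where payoffs may be
    +/-inf. Any prospect with a chance of an infinite payoff is ranked only by
    p(+inf) - p(-inf); purely finite prospects are ranked by expected value.
    """
    p_pos = sum(p for p, x in prospect if x == math.inf)
    p_neg = sum(p for p, x in prospect if x == -math.inf)
    if p_pos > 0 or p_neg > 0:
        return (1, p_pos - p_neg)                 # infinite prospects always dominate
    return (0, sum(p * x for p, x in prospect))   # finite prospects: ordinary expectation

# Illustrative versions of the note's first and third prospects.
first = [(1e-9, math.inf), (1e-12, -math.inf), (1 - 1e-9 - 1e-12, -1e15)]
third = [(1.0, 1e15)]
print(max([first, third], key=rank_key) is first)  # True: the rule prefers the first prospect
```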

  16. See McGee (1999) and Russell and Isaacs (2021).

  17. By a ‘live’ moral theory I just mean one that gets included in the expected choiceworthiness calculation. I consider the proposal that we can exclude certain moral theories from the calculation on the basis of knowledge in §4.

  18. Note that a value function can be finite but unbounded.

  19. Cf. Beckstead and Thomas (2023: §3.2).

  20. See Thomas (2022) for arguments in favor of this view, called Separability, and for explanation of the close connection between Separability and total welfarist consequentialism. However, see Goodsell (2021) for an objection to a related principle (Anteriority) that draws on the St. Petersburg paradox.

  21. Cf. Beckstead and Thomas (2023: §6).

  22. Again, see McGee (1999) and Russell and Isaacs (2021) for arguments to this effect.

  23. Hájek (2012: 422-23), Buchak (2013: 73), Smith (2014: 496-97), and Cibinel (2023) each make a version of this point. See also Wilkinson (2022: 460, fn. 45) for a distinct but related objection to bounded utility functions.

  24. In this section, for simplicity, I will focus on risk aversion in the pursuit of positive choiceworthiness. However, see footnote 34 for discussion of the role of risk seekingness vis-à-vis negative choiceworthiness.

  25. See Buchak (2013: 49-50). An example Buchak uses to illustrate risk aversion is the risk function r(p) = p². Consider the gamble G in which a fair coin is tossed. If the coin lands heads, you get 10 utils; if it lands tails, you get nothing. The risk function of a risk-neutral agent is r(p) = p, so for such an agent, REU(G) = 5 utils. In contrast, REU(G) for the risk-averse agent is (0.5)²(10 utils) + 0 = 2.5 utils.
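A minimal Python sketch of the calculation in note 25, assuming Buchak’s (2013) rank-dependent formula for gambles with finitely many outcomes; the gamble and the two risk functions are those from the note, while the function and variable names are mine.

```python
def reu(outcomes, r):
    """Risk-weighted expected utility of a finite gamble (sketch of Buchak 2013).

    `outcomes` is a list of (probability, utility) pairs whose probabilities
    sum to 1, and `r` is the agent's risk function.
    """
    ordered = sorted(outcomes, key=lambda pu: pu[1])        # worst outcome first
    utils = [u for _, u in ordered]
    value = utils[0]                                        # guaranteed minimum
    for i in range(1, len(ordered)):
        p_at_least = sum(p for p, _ in ordered[i:])         # prob. of doing at least this well
        value += r(p_at_least) * (utils[i] - utils[i - 1])  # risk-weighted increment
    return value

gamble = [(0.5, 0.0), (0.5, 10.0)]         # fair coin: 10 utils on heads, 0 on tails
print(reu(gamble, lambda p: p))            # risk-neutral r(p) = p: 5.0
print(reu(gamble, lambda p: p ** 2))       # risk-averse r(p) = p^2: 2.5
```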

  26. Cf. Beckstead and Thomas (2023: §2.3) on tail discounting. To my knowledge, the objections from Isaacs (2016), Kosonen (2022), and Cibinel (2023) cited presently apply to tail discounting as well.

  27. Buchak (2013: 73-74) acknowledges this mutatis mutandis (in discussing individual utility, rather than moral choiceworthiness) in her discussion of the St. Petersburg paradox.

  28. Isaacs (2016).

  29. Kosonen (2022: chapter 4).

  30. Cibinel (2023).

  31. See Buchak (2013: chapter 1) for a critique of the way in which orthodox EUT models risk aversion. Notice, though, that although Buchak herself worries that “bounding the utility function seems ad hoc” (2013: 73), bounded utility is compatible with REU. One might therefore consider a risk-weighted iteration of Maximize Expected Utility (MEU), introduced presently in the main text; but the two objections to MEU given below will apply to any risk-weighted iteration as well.

  32. MacAskill et al. (2020a: 153, fn. 7) express skepticism about bounded value functions in general, citing problems highlighted in Beckstead and Thomas (2023)—on which more below in the main text.

  33. Beckstead and Thomas (2023: §3) make this point in the context of decision-making under descriptive uncertainty.

  34. What sort of uncertainty is in play here will depend on what sort of ought the agent takes the ought of moral uncertainty to be. If she takes it to be a second-order moral ought, then her uncertainty will be second-order moral uncertainty. If she takes it to be an ought of instrumental rationality—i.e., the sort of rationality with which decision theory is concerned—then her uncertainty will concern the rationality of risk aversion. And the very same considerations that MacAskill, Bykvist, and Ord adduce in favor of taking first-order moral uncertainty seriously are equally strong considerations in favor of taking second-order moral uncertainty and decision-theoretic uncertainty seriously; see MacAskill et al. (2020a: 11-14). Note also that whereas orthodox-EUT-style risk aversion allows us to avoid fanaticism in the pursuit of positive choiceworthiness, it’s risk seekingness that allows us to avoid the corresponding problem in the context of negative choiceworthiness. I gloss over this complication in the main text for presentational simplicity; see Beckstead and Thomas (2023: §2.2 and §3.3) for discussion.

  35. MacAskill, Bykvist, and Ord acknowledge that we can be uncertain which theory of moral uncertainty is true (2020a: 30-33). Moreover, they “do not want to deny that there might be a need for a theory that can deal with higher-order uncertainty” (2020a: 31). One might get off the boat here—I introduce a hard externalist response below.

  36. Again, what type of uncertainty this is will depend on what type of ought the agent takes the ought of moral uncertainty to be. See footnote 34 for further detail.

  37. Strictly speaking, to take this expectation, we must extend U to include ±∞ in its domain via completion. To do so, we define U(±∞) to be the limiting value of U(x) as x approaches ±∞, namely ±100.
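For illustration only, a hypothetical bounded value function with the limiting behavior described in note 37. The arctan shape is a stand-in of my own, not the U defined in the main text; the only feature it is meant to share with that U is that U(x) tends to ±100 as x tends to ±∞, so the completion U(±∞) = ±100 is well defined.

```python
import math

def bounded_value(x: float) -> float:
    """Hypothetical bounded value function with limits +/-100 (illustration only).

    An arctan-shaped stand-in showing how U can be completed by setting
    U(+/-inf) to its limiting value, here +/-100, so that expectations
    involving +/-inf remain well defined.
    """
    if math.isinf(x):
        return math.copysign(100.0, x)           # completion: U(+/-inf) = lim U(x) = +/-100
    return (200.0 / math.pi) * math.atan(x)      # strictly increasing, bounded in (-100, 100)

print(bounded_value(float("inf")))    # 100.0
print(bounded_value(1e9))             # just below 100 for large finite x
```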

  38. Cf. MacAskill et al. (2021), who argue that in certain cases where (i) causal decision theory (CDT) and evidential decision theory (EDT) issue conflicting verdicts and (ii) the stakes are intuitively much higher according to EDT than they are according to CDT, one ought to act in accordance with EDT, even if one’s credence in CDT is significantly greater. Taking a similar approach to the decision problem in Table 8 will naturally militate against choosing in accordance with MEU, modulo worries about intertheoretic comparisons between MEC and MEU.

  39. See e.g. Hawthorne and Stanley (2008), Weatherson (2012), Liu (2022), and Hong (forthcoming).

  40. MacAskill et al. (2020a: 150-151). Cf. Jackson and Smith (2006).

  41. The quotation is from Greaves (2016: 313).

  42. See Greaves and MacAskill (2021: §9).

  43. On longtermism, see Bostrom (2003), Beckstead (2013), Ord (2020), MacAskill (2022), and especially Greaves and MacAskill (2021). For skepticism that longtermism follows from (total welfarist decision-theoretic) consequentialism, see Mogensen (2021) and Thorstad (2023).

  44. This glosses over some subtleties for the sake of brevity. See MacAskill et al. (2020a: 125-31 and 147-48) for further detail on theory amplification.

  45. MacAskill et al. (2020a: 147-48).

  46. The similarity in abductive status between a moral theory and its amplifications marks an important disanalogy with outlandish descriptive hypotheses, such as the hypothesis that Pascal’s mugger is telling the truth. Very often (if not always), outlandish descriptive hypotheses fare significantly worse on abductive grounds than their run-of-the-mill competitors, such as the hypothesis that Pascal’s mugger is lying.

  47. I owe this argument to Sebastian Liu (pc.).

  48. Williamson (2000: 76). I have omitted a footnote from the quoted text.

  49. See MacAskill et al. (2020a: chapter 4 and 153-55) and MacAskill et al. (2020b). MacAskill, Bykvist, and Ord propose normalization to deal with moral theories that are interval-scale measurable but incomparable with one another. However, they later discuss normalization in the context of fanaticism (2020a: 154-55); and at any rate, the thought that we should normalize competing moral theories against each other when we’re morally uncertain is prima facie plausible and sufficiently common to warrant assessment. Here are the technical details of normalization: MacAskill et al. (2020a: 86-94) and MacAskill et al. (2020b: 74-86) defend variance voting. This normalization procedure “corresponds to linearly rescaling all of the theory’s choiceworthiness values so that their variance is equal to 1, while keeping their means unchanged. This doesn’t change the ordering of the options by that theory’s lights, it just compresses it or stretches it so that it has the same variance as the others. One can then apply MEC to these normalized choiceworthiness functions” (MacAskill et al. (2020a: 93)). Two further technical details: firstly, if a theory says that all options are equally choiceworthy, then the normalization procedure does not change its choiceworthiness function (MacAskill et al. (2020a: 87)). Secondly, MacAskill, Bykvist, and Ord note that we can employ variance voting only when there are finitely many options on the table (2020a: 94, n.20) and tentatively defend the view that the relevant options are those available to the agent in a given decision-situation (2020a: 101-05).
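A minimal Python sketch of the variance-voting procedure quoted in note 49, assuming that each theory’s choiceworthiness values over the finitely many available options are given as a list of numbers and that ‘variance’ is the ordinary (population) variance over those options; the function names and the toy numbers are mine. Each theory’s values are rescaled around their unchanged mean so that their variance becomes 1 (an indifferent theory is left as-is), and MEC is then applied to the normalized values.

```python
import statistics

def variance_normalize(values):
    """Linearly rescale a theory's choiceworthiness values so their variance
    over the available options is 1, keeping the mean unchanged. A theory
    that ranks all options equally is left unchanged."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)              # population standard deviation
    if sd == 0:
        return list(values)                     # indifferent theory: unchanged
    return [mean + (v - mean) / sd for v in values]

def mec_choice(credences, theories):
    """Index of the option maximizing expected choiceworthiness, computed
    from the variance-normalized theories."""
    normalized = [variance_normalize(t) for t in theories]
    n_options = len(theories[0])
    expected = [sum(c * t[i] for c, t in zip(credences, normalized))
                for i in range(n_options)]
    return max(range(n_options), key=lambda i: expected[i])

# Toy example: two theories over three options (numbers are illustrative).
theory_a = [0.0, 1.0, 10.0]
theory_b = [5.0, 4.0, 0.0]
print(mec_choice([0.6, 0.4], [theory_a, theory_b]))  # 2: the third option wins
```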

  50. MacAskill et al. (2020a: 90-91); MacAskill et al. (2020b: 72).

  51. Dudeism is modeled on Jeff Bridges’ character, The Dude, from the Coen brothers’ 1998 film The Big Lebowski. The interested reader is invited to read more at https://dudeism.com/whatisdudeism/.

  52. If you don’t have this intuition, abstract away from the details of the case and imagine that you have credence = 0.5 that it’s extremely important for you to Φ and credence = 0.5 that you should Ψ, but only in some very weak sense of ‘should’. Assuming that you can’t both Φ and Ψ, intuitively, you should Φ.

  53. MacAskill et al. (2020b: 73-74).

  54. Here I paraphrase Tarsney’s (2020: 1019) gloss on externalism about morality. For discussion see Russell (forthcoming) and Tarsney (forthcoming).

  55. Or, if the relevant probabilities for decision-making under moral uncertainty are simply your actual credences, the top priority will be introspection to discover your own credence distribution over various fanatical moral theories.

  56. Cf. Chen and Rubio (2020: §4.2-4.4).

  57. See Harman (2015) and Weatherson (2019).

  58. Gustafsson and Torpman (2014), though see Gustafsson (2022).

  59. Newberry and Ord (2021); cf. Greaves and Cotton-Barratt (2023).

References

  • Beckstead, N. (2013). On the Overwhelming Importance of Shaping the Far Future. Ph.D. thesis, Rutgers University. https://rucore.libraries.rutgers.edu/rutgers-lib/40469/PDF/1/play/

  • Beckstead, N., & Thomas, T. (2023). A paradox for tiny probabilities and enormous values. Noûs. https://doi.org/10.1111/nous.12462

  • Bostrom, N. (2003). Astronomical waste: The opportunity cost of delayed technological development. Utilitas, 15(3), 308–314.

  • Bostrom, N. (2009). Pascal’s Mugging. Analysis, 69(3), 443–445.

  • Bostrom, N. (2011). Infinite ethics. Analysis and Metaphysics, 10, 9–59.

  • Buchak, L. (2013). Risk and rationality. Oxford University Press.

  • Carr, J. R. (2020). Normative uncertainty without theories. Australasian Journal of Philosophy, 98(4), 747–762.

  • Chen, E. K., & Rubio, D. (2020). Surreal decisions. Philosophy and Phenomenological Research, 100(1), 54–74.

  • Cibinel, P. (2023). A dilemma for Nicolausian discounting. Analysis, 83(4), 662–672.

  • Goodsell, Z. (2021). A St Petersburg Paradox for risky welfare aggregation. Analysis, 81(3), 420–426.

  • Greaves, H. (2016). Cluelessness. Proceedings of the Aristotelian Society, 116(3), 311–339.

  • Greaves, H., & Cotton-Barratt, O. (2023). A bargaining-theoretic approach to moral uncertainty. Journal of Moral Philosophy. https://doi.org/10.1163/17455243-20233810

  • Greaves, H., & MacAskill, W. (2021). The Case for Strong Longtermism. Global Priorities Institute Working Paper No.5-2021. https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/

  • Gustafsson, J. E. (2022). Second thoughts about my favorite theory. Pacific Philosophical Quarterly, 103(3), 448–470.

  • Gustafsson, J. E., & Torpman, O. (2014). In defence of my favourite theory. Pacific Philosophical Quarterly, 95(2), 159–174.

  • Hájek, A. (2003). Waging War on Pascal’s Wager. The Philosophical Review, 112(1), 27–56.

  • Hájek, A. (2012). Is strict coherence coherent? Dialectica, 66(3), 411–424.

  • Harman, E. (2015). The irrelevance of moral uncertainty. In R. Shafer-Landau (Ed.), Oxford studies in metaethics (Vol. 10, pp. 53–79). Oxford University Press.

  • Hawthorne, J., & Stanley, J. (2008). Knowledge and action. The Journal of Philosophy, 105(10), 571–590.

  • Hong, F. (forthcoming). Know your way out of St. Petersburg: An exploration of ‘knowledge-first’ decision theory. Erkenntnis.

  • Isaacs, Y. (2016). Probabilities cannot be rationally neglected. Mind, 125(499), 759–762.

  • Jackson, F., & Smith, M. (2006). Absolutist moral theories and uncertainty. The Journal of Philosophy, 103(6), 267–283.

  • Kosonen, P. (2022). Tiny probabilities of vast value. Ph.D. thesis, University of Oxford.

  • Liu, S. (2022). Don’t bet the farm: Decision theory, inductive knowledge, and the St. Petersburg Paradox. Unpublished Manuscript.

  • Lockhart, T. (2000). Moral uncertainty and its consequences. Oxford University Press.

  • MacAskill, W. (2022). What we owe the future. Basic Books.

  • MacAskill, W., & Ord, T. (2020). Why maximize expected choiceworthiness? Noûs, 54(2), 327–353.

  • MacAskill, W., Bykvist, K., & Ord, T. (2020a). Moral uncertainty. Oxford University Press.

  • MacAskill, W., Cotton-Barratt, O., & Ord, T. (2020b). Statistical normalization methods in interpersonal and intertheoretic comparisons. The Journal of Philosophy, 117(2), 61–95.

  • MacAskill, W., Vallinder, A., Oesterheld, C., Shulman, C., & Treutlein, J. (2021). The Evidentialist’s Wager. The Journal of Philosophy, 118(6), 320–342.

  • McGee, V. (1999). An airtight Dutch book. Analysis, 59(4), 257–265.

  • Mogensen, A. L. (2021). Maximal cluelessness. The Philosophical Quarterly, 71(1), 141–162.

  • Monton, B. (2019). How to avoid maximizing expected utility. Philosophers’ Imprint, 19(18), 1–25.

  • Newberry, T., & Ord, T. (2021). The Parliamentary Approach to Moral Uncertainty. Future of Humanity Institute Technical Report 2021–2. https://www.fhi.ox.ac.uk/wp-content/uploads/2021/06/Parliamentary-Approach-to-Moral-Uncertainty.pdf

  • Nover, H., & Hájek, A. (2004). Vexing expectations. Mind, 113(450), 237–249.

  • Ord, T. (2020). The precipice: Existential risk and the future of humanity. Bloomsbury.

  • Riedener, S. (2020). An axiomatic approach to axiological uncertainty. Philosophical Studies, 177(2), 483–504.

  • Ross, J. (2006). Rejecting ethical deflationism. Ethics, 116(4), 742–768.

  • Russell, J. S. (2023). On two arguments for fanaticism. Noûs. https://doi.org/10.1111/nous.12461

  • Russell, J. S. (forthcoming). The value of normative information. Australasian Journal of Philosophy.

  • Russell, J. S., & Isaacs, Y. (2021). Infinite prospects. Philosophy and Phenomenological Research, 103(1), 178–198.

  • Sepielli, A. (2009). What to do when you don’t know what to do. In R. Shafer-Landau (Ed.), Oxford studies in metaethics (Vol. 4, pp. 5–28). Oxford University Press.

  • Singer, P. (1972). Famine, affluence, and morality. Philosophy & Public Affairs, 1(3), 229–243.

  • Schlesinger, G. (1994). A central theistic argument. In J. Jordan (Ed.), Gambling on God: Essays on Pascal’s Wager. Rowman & Littlefield.

  • Smith, N. J. J. (2014). Is evaluative compositionality a requirement of rationality? Mind, 123(490), 457–502.

  • Tarsney, C. (2020). Normative Externalism, by Brian Weatherson. Mind, 130(519), 1018–1028.

  • Tarsney, C. (forthcoming). Metanormative regress: An escape plan. Philosophical Studies.

  • Thomas, T. (2022). Separability and population ethics. In G. Arrhenius, K. Bykvist, T. Campbell, & E. Finneron-Burns (Eds.), The Oxford handbook of population ethics (pp. 271–295). Oxford University Press.

  • Thorstad, D. (2023). High risk, low reward: A challenge to the astronomical value of existential risk mitigation. Philosophy & Public Affairs, 51(4), 373–412.

  • von Neumann, J., & Morgenstern, O. (1947). Theory of games and economic behavior (2nd ed.). Princeton University Press.

  • Weatherson, B. (2012). Knowledge, bets, and interests. In J. Brown & M. Gerken (Eds.), Knowledge ascriptions (pp. 75–103). Oxford University Press.

  • Weatherson, B. (2019). Normative externalism. Oxford University Press.

  • Wedgwood, R. (2013). Akrasia and uncertainty. Organon F, 20(4), 484–506.

  • Wilkinson, H. (2022). In defense of fanaticism. Ethics, 132(2), 445–477.

  • Wilkinson, H. (2023). Can risk aversion survive the long run? The Philosophical Quarterly, 73(2), 625–647.

  • Williamson, T. (2000). Margins for error: A reply. The Philosophical Quarterly, 50(198), 76–81.

Acknowledgements

I am indebted to Lara Buchak, Krister Bykvist, Pietro Cibinel, Adam Elga, Sam Fullhart, Elizabeth Harman, Harvey Lederman, Sebastian Liu, Jake Nebel, Teruji Thomas, Henry Wilson, and several anonymous referees for valuable comments on earlier drafts of this article and to Hezekiah Grayer II for valuable discussion.

Funding

Funding was provided by Forethought Foundation (Grant No. Global Priorities Fellowship), Princeton University (Grant No. Mildred W. and Alfred T. Carton, Class of 1905 Fellowship Fund).

Cite this article

Baker, C. Expected choiceworthiness and fanaticism. Philos Stud 181, 1237–1256 (2024). https://doi.org/10.1007/s11098-024-02146-2
