
What decision theory can’t tell us about moral uncertainty

Philosophical Studies

Abstract

We’re often unsure what morality requires, but we need to act anyway. There is a growing philosophical literature on how to navigate moral uncertainty. But much of it asks how to rationally pursue the goal of acting morally, using decision-theoretic models to address that question. I argue that using these popular approaches leaves some central and pressing questions about moral uncertainty unaddressed. To help us make sense of experiences of moral uncertainty, we should shift away from focusing on what it’s rational to do when facing moral uncertainty, and instead look directly at what it’s moral to do about moral uncertainty—for example, how risk averse we morally ought to be, or which personal sacrifices we’re morally obligated to make in order to reduce our risk of moral wrongdoing. And orthodox, expectation-maximizing, decision-theoretic models aren’t well-suited to this task—in part because they presuppose the answers to some important moral questions. For example, if approaching moral uncertainty in a moral way requires us to “maximize expected moral rightness,” that’s, itself, a contentious claim about the demands of morality—one that requires significant moral argument, and that I ultimately suggest is mistaken. Of course, it’s possible to opt, instead, for a variety of alternative decision-theoretic models. But, in order to choose between proposed decision-theoretic models, and select one that is well-suited to handling these cases, we first would need to settle more foundational, moral questions—about, for example, what we should be willing to give up in order to reduce the risk that we’re acting wrongly. Decision theory may be able to formalize the conclusions of these deliberations, but it is not a substitute for them, and it won’t be able to settle the right answers in advance. For now, when we discuss moral uncertainty, we need to wade directly into moral debate, without the aid of decision theory’s formalism.


Notes

  1. See e.g., Alexander A. Guerrero, “Don’t Know, Don’t Kill: Moral Ignorance, Culpability, and Caution,” Philosophical Studies 136, no. 1 (2007): 59–97; Claire Field, “Recklessness and Uncertainty: Jackson Cases and Merely Apparent Asymmetry,” Journal of Moral Philosophy 16, no. 4 (2019): 391–413; Johan E. Gustafsson and Olle Torpman, “In Defense of My Favourite Theory,” Pacific Philosophical Quarterly 95, no. 2 (2014): 159–174; Elizabeth Harman, “The Irrelevance of Moral Uncertainty,” in Oxford Studies in Metaethics, Vol. 10, ed. Russ Shafer-Landau (New York: Oxford University Press, 2015): 53–79; Brian Hedden, “Does MITE Make Right?” in Oxford Studies in Metaethics, Vol. 11, ed. Russ Shafer-Landau (New York: Oxford University Press, 2016): 102–128; Amelia Hicks, “Moral Uncertainty and Value Comparison,” in Oxford Studies in Metaethics, Vol. 13, ed. Russ Shafer-Landau (New York: Oxford University Press, 2018): 161–183; Ted Lockhart, Moral Uncertainty and Its Consequences (New York: Oxford University Press, 2000); William MacAskill, “The Infectiousness of Nihilism,” Ethics 123, no. 3 (2013): 508–520; William MacAskill and Toby Ord, “Why Maximize Expected Choice-Worthiness?” Nous 54, no. 2 (2020); Dan Moller, “Abortion and Moral Risk,” Philosophy 86, no. 3 (2011): 425–443; Jacob Ross, “Rejecting Ethical Deflationism,” Ethics 116, no. 4 (2006): 742–768; Andrew Sepielli, “What to Do When You Don’t Know What to Do,” in Oxford Studies in Metaethics, Vol. 4, ed. Russ Shafer-Landau (New York: Oxford University Press, 2009): 5–28; Holly Smith, “The Subjective Moral Duty to Inform Oneself Before Acting,” Ethics 125, no. 1 (2014): 11–38; Christian Tarsney, “Intertheoretic Value Comparison: A Modest Proposal,” Journal of Moral Philosophy 15, no. 3 (2018): 324–344; Brian Weatherson, Normative Externalism (New York: Oxford University Press, 2019); Brian Weatherson, “Running Risks Morally,” Philosophical Studies 167, no. 1 (2014): 141–163; Michael J. Zimmerman, Ignorance and Moral Obligation (New York: Oxford University Press, 2014).

  2. Andrew Sepielli, “What to Do When You Don’t Know What to Do,” 9.

  3. Sepielli, 11.

  4. Sepielli, 11.

  5. Sepielli, 7.

  6. Sepielli, 7.

  7. Lockhart, Moral Uncertainty and Its Consequences.

  8. Jacob Ross, “Rejecting Ethical Deflationism.” See also Jacob Ross, “Acceptance and Practical Reason,” (PhD diss., Rutgers University, 2006).

  9. William MacAskill, “Normative Uncertainty” (PhD diss., Oxford University, 2014), 16, 16n9, 20. In later work with Toby Ord, MacAskill offers a different definition of appropriateness: “an appropriate action is what would be selected by a rational and morally conscientious agent who had the same set of options and beliefs” (MacAskill and Ord, “Why Maximize Expected Choice-Worthiness?” 329). But this formulation, too, implicitly assumes that higher-order rational norms and higher-order moral norms will have the same content. It’s worth noting that, while MacAskill endorses an expectation-maximizing approach where one is possible, he offers an alternative for when this isn’t an option (e.g., due to difficulties making value comparisons across different proposed moral theories). For these cases, he proposes that we treat normative uncertainty as a voting problem. See William MacAskill, “Normative Uncertainty as a Voting Problem,” Mind 125, no. 500 (2016): 967–1004.

  10. The worries I raise are by no means the only challenges facing expected value maximization accounts of moral uncertainty. There is an extensive literature on the question of whether it’s possible to make the type of intertheoretic value comparisons that such accounts require. (See e.g., Gustafsson and Torpman, “In Defense of My Favourite Theory”; Hedden, “Does MITE Make Right?”; Hicks, “Moral Uncertainty and Value Comparison”; Lockhart, Moral Uncertainty and Its Consequences; MacAskill, “Normative Uncertainty as a Voting Problem”; Sepielli, “What to Do When You Don’t Know What to Do”; Tarsney, “Intertheoretic Value Comparison: A Modest Proposal.”) And Brian Hedden has argued that some moral theories, including those involving supererogation, cannot be adequately incorporated into the framework used by expected value maximization accounts (Hedden, “Does MITE Make Right?”). But even if these obstacles can be overcome, there are more general problems, as I suggest here.

  11. Chelsea Rosenthal, “Trying to Be Moral, Morally,” in “Ethics for Fallible People” (PhD diss., New York University, 2019). See also Chelsea Rosenthal, “Ethics for Fallible People,” [article manuscript].

  12. For ease of discussion, I’ll be using the terms “oughts” and “norms” interchangeably.

  13. Gideon Rosen has discussed the related phenomenon of what he terms “procedural epistemic obligations”: “as you move through the world you are required to take certain steps to inform yourself about matters that might bear on the permissibility of your conduct,” and these steps are your procedural epistemic obligations. Rosen’s focus is different from mine—he is specifically looking at the way that taking, or failing to take, appropriate steps in advance of acting can impact our blameworthiness. But his “procedural epistemic obligations” might be seen as a subset of the “procedural oughts” I discuss here. Gideon Rosen, “Skepticism about Moral Responsibility,” Philosophical Perspectives 18, no. 1 (2004): 301. See also Elizabeth Harman’s discussion of procedural moral obligations in “Ethics is Hard! What Follows?” [manuscript].

  14. Discussion of this point below draws heavily on Rosenthal, “Trying to Be Moral, Morally,” in “Ethics for Fallible People” (PhD diss.); see also Rosenthal, “Ethics for Fallible People,” [article manuscript].

  15. Andrew Sepielli, “What to Do When You Don’t Know What to Do When You Don’t Know What to Do …” Nous 48, no. 3 (2014): 536.

  16. Chelsea Rosenthal, “Why Desperate Times (But Only Desperate Times) Call for Consequentialism,” in Oxford Studies in Normative Ethics, Vol. 8, ed. Mark Timmons (New York: Oxford University Press, 2018); Chelsea Rosenthal, “Tolerating Each Other,” in “Ethics for Fallible People” (PhD diss., New York University, 2019).

  17. See Lockhart, Moral Uncertainty and Its Consequences.

  18. I’m grateful to Daniel Wodak for suggesting that I incorporate an example involving reparations.

  19. I’m grateful to Jake Nebel for raising this point.

  20. Lockhart, Moral Uncertainty and Its Consequences; Sepielli, “What to Do When You Don’t Know What to Do.”

  21. To some extent, MacAskill and Ord leave open the possibility of using a risk-averse account instead of this type of risk-neutral approach. Whether this is called for, according to them, will depend upon whether risk aversion is also rational in the case of empirical uncertainty, as Lara Buchak has argued it is (Risk and Rationality (New York: Oxford University Press, 2013)). Their primary commitment is to treating normative uncertainty as we treat empirical uncertainty. (See MacAskill and Ord, “Why Maximize Expected Choice-Worthiness?” 338 and MacAskill, “Normative Uncertainty,” 34.) This gives them a potential avenue for avoiding the “stakes” worry, by rejecting risk neutrality. But doing so would require accepting much more general claims about decision-making under empirical uncertainty—like Buchak’s—that are quite controversial (although I’m sympathetic to Buchak’s account). And MacAskill and Ord don’t seem inclined to take this approach—although they don’t weigh in on the issue, they treat risk-neutral, expected value maximization as the default position and adopt it for purposes of discussion (MacAskill and Ord, 338; MacAskill, “Normative Uncertainty,” 51).

  22. For arguments in favor of alternative decision-theoretic analyses that can treat risk aversion as rational, see Buchak, Risk and Rationality.

  23. For a similar point, see Moller, “Abortion and Moral Risk.” Moller suggests that the costs to the agent will have to be among the factors relevant to determining how we morally ought to handle moral uncertainty (440). Moller, however, is not mainly focused on “working out the details of a complete theory of moral risk,” instead focusing primarily on showing that considerations of moral risk give us a significant reason not to get abortions (whether or not that reason is ultimately outweighed). Worries about demandingness are also implicit in Brian Weatherson’s review of Lockhart’s Moral Uncertainty and Its Consequences (Mind 111, no. 443 (2002): 693–696).

  24. Lockhart, Moral Uncertainty and Its Consequences, 50–73; William MacAskill, “Moral Recklessness and Moral Caution” (manuscript); William MacAskill, “Practical Ethics Given Moral Uncertainty,” 80,000 Hours Blog, January 31, 2012, https://80000h.org/2012/01/practical-ethics-given-moral-uncertainty/. For a claim along these lines that is developed outside of a decision-theoretic framework, see Dan Moller’s argument that considerations of moral uncertainty give us reasons not to have abortions (though those reasons may be outweighed) (Moller, “Abortion and Moral Risk”).

  25. I’m grateful to Jake Nebel for discussion of this point. See also MacAskill, “Normative Uncertainty,” 40–41 and MacAskill and Ord, “Why Maximize Expected Choice-Worthiness?” 342–343.

  26. Peter Singer, “Famine, Affluence, and Morality,” Philosophy and Public Affairs 1, no. 3 (1972): 229–243.

  27. For additional discussion of demandingness, moral uncertainty, and Peter Singer, see MacAskill, “Normative Uncertainty,” 39–42 and MacAskill and Ord, “Why Maximize Expected Choice-Worthiness?” 342–343.

  28. I’m grateful to an anonymous referee for making this point.

  29. See MacAskill and Ord, “Why Maximize Expected Choice-Worthiness?” 342–343 for some discussion of this idea.

  30. See, for example, the contrast between Samuel Scheffler’s discussion of personal prerogatives within morality in The Rejection of Consequentialism (New York: Oxford University Press, 1994), and Susan Wolf’s suggestion that we shouldn’t aim to be perfectly morally good in “Moral Saints,” Journal of Philosophy 79, no. 8 (1982).

  31. Lockhart, Moral Uncertainty and Its Consequences, 98–110.

  32. Andrew Sepielli, “‘Along an Imperfectly-Lighted Path’: Practical Rationality and Normative Uncertainty” (PhD diss., Rutgers University, 2010), 104.

  33. I’m grateful to Rob Hopkins for a question about this point.

  34. MacAskill and Ord, “Why Maximize Expected Choice-Worthiness?” 338–340. See also MacAskill, “Normative Uncertainty,” 34–36.

  35. Of course, this leaves open the possibility that expected value maximization is the wrong approach to all types of choice under uncertainty, but defending this would require adopting a much larger body of contentious claims than I address here.

Acknowledgements

For helpful feedback on the ideas presented in this paper (in some cases in earlier forms), I am grateful to James Fritz, B.R. George, Alex Guerrero, Elizabeth Harman, Chris Howard, Zoë Johnson-King, Colin David Jones, Matthew Liao, Alex London, Robert Long, Will MacAskill, Jordan MacKenzie, Ishani Maitra, Liam Murphy, Jake Nebel, Japa Pallikkathayil, Samuel Scheffler, Andrew Sepielli, David Storrs-Fox, Sharon Street, Christian Tarsney, Brian Weatherson, Alex Worsnip, Jake Zuehl, and an anonymous referee, as well as the NYU Graduate Student Extreme Value Theory Group, the Philosophy Thesis Prep Seminar at NYU, the 2018 Foundations of Normativity Workshop at the University of Edinburgh, the 2019 Chapel Hill Normativity Workshop, and the audience at my dissertation defense. During work on this paper, I have been fortunate to receive support from an Andrew W. Mellon Dissertation Fellowship and a Henry M. MacCracken Fellowship.

Author information

Corresponding author

Correspondence to Chelsea Rosenthal.



Cite this article

Rosenthal, C. What decision theory can’t tell us about moral uncertainty. Philos Stud 178, 3085–3105 (2021). https://doi.org/10.1007/s11098-020-01571-3
