Options and the subjective ought

Notes

  1. The sense of ought in which you ought not give your friend the pills is often called the objective ought. In the case of prudential rationality, what you objectively ought to do is whatever would in fact maximize your utility, while what you subjectively ought to do is bring about whichever proposition has highest expected utility. In ethics, consequentialists will likely say that what you objectively ought to do is whatever would maximize moral value (total world happiness, say), while what you subjectively ought to do is bring about whichever proposition has highest expected moral value. The objective/subjective distinction can also be drawn in non-consequentialist moral theories, although there is less consensus on how exactly to do so.

  2. To emphasize—I am understanding the requirement that the subjective ought be ‘action-guiding’ as the requirement that you be in a position to know what you ought to do. Thus, for the subjective ought to be action-guiding, it is not required that you always in fact know what you ought to do (for you might make a mistake or fail to consider the question), nor is it required that you consciously employ the theory of the subjective ought in coming to a conclusion about what you ought to do. All that is required for the subjective ought to be action-guiding, in my sense, is for facts about what you ought to do to be in principle accessible to you.

  3. Importantly, we only predict that you will do what you subjectively ought to do when we hold onto the background assumption that you are rational. But often, we have evidence that you fall short of ideal rationality in various respects, and in these cases we will not want to predict that you will do what you subjectively ought to do. For instance, we may have evidence from behavioral economics that you employ certain biases and heuristics that lead you to be irrational in certain systematic ways, and if such biases and heuristics are relevant in the case at hand, we will not want to predict that you will in fact do what you subjectively ought to do.

  4. This is just to express sympathy with internalism about practical and epistemic rationality. Externalists will predictably be unsympathetic, but in some sense this paper can be seen as exploring the viability of internalism about practical rationality and determining how internalists should conceive of a decision-maker’s options.

  5. Thanks to Matthew Noah Smith for raising this worry.

  6. This is the definition of expected utility employed in Evidential Decision Theory and is sometimes called evidential expected utility. The definition of expected utility employed in Causal Decision Theory is slightly more complex, but the distinction between evidentialist and causalist views of expected utility will not matter in what follows.
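
    For illustration, the evidential expected utility of a proposition A can be written as follows. This is a standard formulation in my own notation, with P the agent's credence function, U her utility function, and {S_i} a partition of states; none of these symbols is taken from the text above:

        EU(A) = \sum_i P(S_i \mid A)\, U(A \wedge S_i)

    That is, the expected utility of A is a weighted average of the utilities of the ways A could turn out true, with weights given by the agent's credences in the states conditional on A.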

  7. Of course, there would certainly be other propositions with higher expected utility, such as the proposition that someone discovered a cure for cancer and deposited $10,000 in my bank account. In fact, it may be that there is no proposition with highest expected utility.

  8. The set must be maximal in the sense that there is no other proposition incompatible with the members of that set which is also such that the agent has the ability to bring it about. Note that this proposal allows for the possibility of multiple sets of options for an agent, since we can cut up the things that she is able to bring about in more or less fine-grained ways and still have a maximal set of mutually exclusive propositions, each of which she is able to bring about.
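
    In more formal terms (a sketch using my own shorthand, with Can(S, q) for 'S is able to bring about q'): a set O of mutually exclusive propositions, each of which S is able to bring about, is maximal just in case

        \neg \exists q\, [\, \mathrm{Can}(S, q) \wedge q \notin O \wedge \forall p \in O\; \neg \Diamond (p \wedge q) \,]

    i.e. there is no further proposition that S is able to bring about and that is incompatible with each member of O; any such proposition could have been added to the set while preserving mutual exclusivity.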

  9. Jeffrey (1965, p. 84) regards options as acts, where ‘An act is then a proposition which is within the agent’s power to make true if he pleases.’ And in ‘Preference among Preferences,’ he writes that ‘To a first approximation, an option is a sentence which the agent can be sure is true, if he wishes it to be true’ (Jeffrey 1992, p. 164). In ‘Causal Decision Theory,’ Lewis writes, ‘Suppose we have a partition of propositions that distinguish worlds where the agent acts differently...Further, he can act at will so as to make any one of these propositions hold; but he cannot act at will to make any proposition hold that implies but is not implied by (is properly included in) a proposition in the partition. The partition gives the most detailed specifications of his present action over which he has control. Then this is a partition of the agents’ alternative options’ (Lewis 1981, p. 7).

  10. One might prefer, here and elsewhere, to replace talk of what the agent actually believes and desires with talk of what the agent ought to believe and desire. In this way, what an agent subjectively ought to do would depend not on what she believes and desires, but on what she ought to believe and desire. Importantly, adopting this view will not affect the arguments in this paper; one will still be pushed to adopt my favored theory of options. I will continue to put things in terms of the agent’s actual beliefs and desires for the sake of consistency, and also because I favor keeping epistemic and practical rationality distinct. In cases where an agent has beliefs (or desires) that she ought not have, but acts in a way that makes sense, given those misguided beliefs (or desires), we should criticize her for being epistemically irrational without also accusing her of practical irrationality.

  11. Heather Logue has pointed out to me that Desideratum 2 may not actually be necessary to motivate my favored theory of options, since my theory of options may also be the only one which can satisfy both Desideratum 1 and Desideratum 3 (below). Still, I include Desideratum 2 since I think it is a genuine desideratum, even if it is not needed to motivate my view.

  12. Of course, if we think of oughts as attaching to act tokens, rather than act types, there is a harmless sense in which what an agent ought to do will not supervene on that agent’s mental states. Perhaps my physically identical doppelgänger and I are in exactly the same mental states, but while I ought to bring it about that I donate money to charity, my doppelgänger ought to bring it about that he donate money to charity. There is nothing disconcerting about this. It would be more problematic if what an agent ought to do, put in terms of act types like donating to charity (perhaps modelled using sets of centered worlds instead of sets of worlds), failed to supervene on beliefs and desires. This more worrying type of failure of supervenience is entailed by Proposal 1.

  13. Of course, supervenience of what an agent ought to do on her beliefs and desires is not by itself sufficient for her to be in a position to know what she ought to do. She must also know her beliefs and desires. The important point is that self-knowledge and supervenience of oughts on beliefs and desires are individually necessary and jointly sufficient for the agent to be in a position to know what she ought to do. Therefore, if supervenience fails, then even a self-knowing agent would not be in a position to know what she ought to do. Importantly, knowledge of one’s beliefs and desires is already required for one to be in a position to know what one ought to do, since even knowing what one’s options are, one needs to know what one believes and desires in order to know how to rank those options. Provided that an agent’s options supervene on her beliefs and desires, there are no obstacles to her being in a position to know what her options are that are not already obstacles to her being in a position to know how those options are to be ranked.

  14. Again, the set must be ‘maximal’ in the sense that there is no other proposition incompatible with the members of that set which is also such that the agent has the ability to bring it about.

  15. A close cousin of Proposal 2 would characterize an agent’s options in normative terms, so that an agent’s options consist not of the things which she actually believes she can do, but rather of the things which she ought to believe she can do. This proposal, however, will likewise violate Desideratum 1 and is unacceptable on this account.

  16. Once again, the set must be ‘maximal’ in the sense that there is no other proposition incompatible with the members of that set which is also such that the agent knows that she is able to bring it about.

  17. Actually, this assumes a perhaps controversial application of the KK principle, which states that when an agent knows that P, she is in a position to know that she knows P. This is because on Proposal 3, knowing that something is an option for you requires knowing that you know you are able to bring it about. If KK fails, then so much the worse for Proposal 3.

  18. Once again, ‘maximal’ means that there is no proposition of the form S decides at t to ϕ which is not a member of the set but which is incompatible with each member of the set. Note that maximality and mutual exclusivity apply not to the contents of decisions, but to propositions about which decision was made. Hence the set {S decides at t to ϕ, S decides at t not to ϕ} will not count as a set of options, since it does not include propositions about other decisions that S might have made (e.g. the proposition that S decides at t to ψ).

  19. Actually, Bratman is discussing intentions, but I think that the relevant considerations apply equally to decisions, insofar as there is any difference between decisions and intentions. This theory of abilities to make decisions gains support from Kavka’s (1983) Toxin Puzzle. Suppose that in one hour, you will be offered a drink containing a toxin which will make you temporarily ill. Now, you are offered a large sum of money if you make the decision to drink the beverage. You will receive the money even if you do not then go ahead and drink the beverage; the payment depends only on your now making the decision to drink it. It seems that you cannot win the money in this case; you cannot decide to drink the beverage. Why not? Because you believe that, if you were to make the decision to drink the beverage, you would later reconsider and refuse to drink it. You cannot make a decision if you believe you will not carry it out. Supposing that this is the only restriction on agents’ abilities to make decisions, we get Bratman’s theory of abilities to make decisions.

  20. Anscombe (1957) famously holds that in order to be able to decide to ϕ, you do not even need to lack the belief that your decision to ϕ would be ineffective. You can make decisions that you believe you will not carry out. For instance, as you are being led to the interrogation room, you can decide not to give up your comrades, even though you know you will crack under the torture. Some have interpreted Anscombe as holding that there are no restrictions on which decisions you are able to make. If this (admittedly somewhat implausible) view is true, Options-as-Decisions will still satisfy Desiderata 1–3, for trivially an agent will always be able to know which decisions she can make, and which decisions she can make will supervene on her beliefs and desires.

  21. This example is a slightly modified version of a case presented in Jackson and Pargetter (1986).

  22. Chisholm puts his case in terms of conditionals, but I prefer, for the sake of simplicity, to express it using conjunctions. See footnote 24, below, for the version of the paradox based on conditionals, along with the dissolution of that version of the paradox based on Options-as-Decisions.

  23. Jackson and Pargetter (1986) argue that you can in fact do all that you ought to do in this case. But they are considering what you objectively ought to do. In considering what you objectively ought to do, whether you ought to accept the invitation depends not on what you believe you will later do, but on what you will in fact later do. Then, in a case where if you accept the invitation, you in fact won’t write the review, it appears both that you ought to accept and write, and that you ought to decline. But in this case, it is still possible for you to fulfill all of your requirements, since if you were to accept and write, it would no longer be the case that you ought to have declined. It is only given the truth of the conditional if you accept, then you won’t write that you ought to decline. But you are able to affect the truth value of that conditional. But when we are considering the subjective ought, things are different. Whether you ought to decline depends not on the actual truth value of the conditional if you accept, then you won’t write, but on whether you believe that that conditional is true. And while your actions can affect the truth of that conditional, they cannot affect whether you presently believe that conditional to be true.

  24. Chisholm originally presented the paradox using conditionals. The following statements are supposed to all be true descriptions of the case, but they are jointly incompatible with standard deontic logic: (i) You ought to write the paper; (ii) It ought to be that if you write the paper, you accept the invitation; (iii) If you believe you won’t write the paper, you ought not accept the invitation; and (iv) You believe you won’t write the paper. From (i) and (ii) it follows, by standard deontic logic, that you ought to accept the invitation, while from (iii) and (iv) it follows, by modus ponens, that you ought not accept the invitation. But on standard deontic logic, it cannot be the case both that you ought to accept and that you ought not accept.
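
    Schematically, writing O for the deontic ‘ought’ operator and abbreviating ‘you write the paper’ as w, ‘you accept the invitation’ as a, and ‘you believe you won’t write the paper’ as b (abbreviations mine), the two derivations run:

        O(w),\ O(w \to a) \;\vdash\; O(a) \quad \text{[by the deontic K-schema, } O(w \to a) \to (O(w) \to O(a))\text{]}
        b \to O(\neg a),\ b \;\vdash\; O(\neg a) \quad \text{[by modus ponens]}

    In standard deontic logic, O(a) and O(\neg a) together yield O(a \wedge \neg a) by aggregation, which the D-axiom, O(p) \to \neg O(\neg p), rules out; hence the inconsistency.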

    But while these statements all sound compelling, Options-as-Decisions entails that (i) and (ii) are simply false. (i) is false because writing the paper is not an option for you, and (ii) is false because making this conditional true is not an option for you. Chisholm’s Paradox shows that an intuitive description of this case (expressed in the statements (i)-(iv)), including a description of your obligations therein, is incompatible with standard deontic logic, given a standard interpretation of the conditionals in (ii) and (iii). One response is to modify standard deontic logic. Another response is to try to reinterpret the conditionals in (ii) and (iii) to avoid incompatibility with standard deontic logic. A third response, which falls out of Options-as-Decisions, is to simply deny the description of your obligations in this case. If Chisholm’s description of your obligations is incorrect, then the paradox dissolves even without any modification of standard deontic logic or non-standard interpretation of the conditionals.

  25. See especially Levi (1974), Joyce (2005), White (2009), and Elga (2010) for discussion.

  26. See Hare (2010) for compelling discussion of this issue.

  27. Aside from the aforementioned brief discussions in Jeffrey (1965) and Lewis (1981), this issue is discussed in Jackson and Pargetter (1986), Joyce (1999), Pollock (2002), and Smith (2010).

References

  • Anscombe, G. E. M. (1957). Intention. Oxford: Oxford University Press.

  • Bratman, M. (1987). Intentions, plans, and practical reason. Stanford, CA: CSLI.

  • Chisholm, R. (1963). Contrary-to-duty imperatives and deontic logic. Analysis, 24, 33–36.

  • Elga, A. (2010). Subjective probabilities should be sharp. Philosophers’ Imprint, 10.

  • Frankfurt, H. (1969). Alternate possibilities and moral responsibility. Journal of Philosophy, 66, 829–839.

  • Hare, C. (2010). Take the sugar. Analysis, 70, 237–247.

  • Jackson, F., & Pargetter, R. (1986). Oughts, options, and actualism. Philosophical Review, 95, 233–255.

  • Jeffrey, R. (1965). The logic of decision. Chicago: University of Chicago Press.

  • Jeffrey, R. (1992). Preference among preferences. In Probability and the art of judgment. Cambridge: Cambridge University Press.

  • Joyce, J. (1999). The foundations of causal decision theory. Cambridge: Cambridge University Press.

  • Joyce, J. (2005). How probabilities reflect evidence. Philosophical Perspectives, 19, 153–178.

  • Kavka, G. (1983). The toxin puzzle. Analysis, 43, 33–36.

  • Levi, I. (1974). On indeterminate probabilities. Journal of Philosophy, 71, 391–418.

  • Lewis, D. (1981). Causal decision theory. Australasian Journal of Philosophy, 59, 5–30.

  • Pollock, J. (2002). Rational choice and action omnipotence. Philosophical Review, 111, 1–23.

  • Smith, M. N. (2010). Practical imagination and its limits. Philosophers’ Imprint, 10.

  • von Wright, G. H. (1951). Deontic logic. Mind, 60, 1–15.

  • White, R. (2009). Evidential symmetry and mushy credence. In Oxford studies in epistemology (Vol. 3). Oxford: Oxford University Press.

Acknowledgments

I would like to thank Dan Greco, Caspar Hare, Richard Holton, Heather Logue, Tyler Paytas, Agustín Rayo, Miriam Schoenfield, Paulina Sliwa, Matthew Noah Smith, Robert Stalnaker, Roger White, and Steve Yablo, as well as audiences at the 2011 MITing of the Minds Conference, the 2011 Bellingham Summer Philosophy Conference, and the 2011 Rocky Mountain Ethics Congress, for very helpful comments.

Author information

Correspondence to Brian Hedden.

Cite this article

Hedden, B. Options and the subjective ought. Philos Stud 158, 343–360 (2012). https://doi.org/10.1007/s11098-012-9880-0
