
Evidence and rationalization

Abstract

Suppose that you have to take a test tomorrow but you do not want to study. Unfortunately you should study, since you care about passing and you expect to pass only if you study. Is there anything you can do to make it the case that you should not study? Is there any way for you to ‘rationalize’ slacking off? I suggest that such rationalization is impossible. Then I show that if evidential decision theory is true, rationalization is not only possible but sometimes advisable.

Fig. 1


Notes

  1. Thanks to Bernhard Salow for helpful discussion here.

  2. Thanks to Caspar Hare for helpful discussion here.

  3. The theory originated with Jeffrey (1965) and has been most recently and extensively defended by Ahmed (2014).

  4. To forestall potential confusion about the structure of my argument, let me clarify some phrases used in this paragraph. By “rationalization is impossible,” I mean that it is never rational to manipulate the demands of rationality in the way elucidated in Sect. 1. By “evidential decision theory permits rationalization,” I mean that it is sometimes rational, according to evidential decision theory, to manipulate the demands of rationality in such a way.

  5. This theory is sometimes associated with Savage (1954). However, Savage intended his states to form a privileged partition—essentially a partition of dependency hypotheses, à la Lewis (1981). So Savage’s theory is best understood as an early version of causal decision theory, for which the smoking problem does not arise. Thanks to Bob Stalnaker for clarification on this point.

  6. The problem is attributed to physicist William Newcomb and was popularized by Nozick (1969).

  7. See Spencer and Wells (2017) for a more detailed defense of two-boxing.

  8. In order to account for decision problems in which the objective chance of an action yielding a particular consequence is neither 0 nor 1, we would need to alter the framework slightly, removing the stipulation that each act entails a unique consequence and requiring that the states specify objective conditional chances of consequences on actions. However, the decision problems discussed in this paper require no such alteration, so we will work with the simpler albeit less general framework sketched above.
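The generalized framework this note gestures at can be sketched as follows. This is my own illustrative formalization, not the paper’s: the function name, dictionary layout, and toy numbers are all invented for the example. Each state assigns objective conditional chances of consequences given actions, and expected utility averages over both states and consequences.

```python
def expected_utility(p_state, chance, utility, action):
    """EU(a) = sum over states s and consequences c of
       p(s) * ch_s(c | a) * u(c)."""
    return sum(
        p_s * chance[(state, action)][c] * utility[c]
        for state, p_s in p_state.items()
        for c in chance[(state, action)]
    )

# Toy two-state example (illustrative numbers only).
p_state = {"S1": 0.5, "S2": 0.5}
chance = {
    ("S1", "bet"): {"win": 0.9, "lose": 0.1},
    ("S2", "bet"): {"win": 0.2, "lose": 0.8},
}
utility = {"win": 100, "lose": 0}

eu = expected_utility(p_state, chance, utility, "bet")  # approximately 55.0
```

When each act entails a unique consequence in each state, the inner chances are all 0 or 1 and this reduces to the simpler framework of the main text.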

  9. See, for example, Gibbard and Harper (1978), Adams and Rosenkrantz (1980), Skyrms (1990), Arntzenius (2008), Meacham (2010), Ahmed (2014), Hedden (2015) and Wells (2017).

  10. The Switch Problem is a modification of a problem called Newcomb Coin Toss, presented recently in Wells (2017). In both problems, the probabilistic relations are such that if the agent gathers a certain piece of evidence then, no matter what she learns, evidential decision theory will require her to make a decision that it does not antecedently require her to make. Moreover, in both problems, the agent is in a position to know this in advance of gathering the evidence. Now, there is a minor difference between The Switch Problem and Newcomb Coin Toss: in The Switch Problem, EDT permits the gathering of evidence, whereas in Newcomb Coin Toss, it does not. For this reason, Newcomb Coin Toss showcases EDT’s violation of Good’s Theorem (see note 13), while The Switch Problem showcases no such violation. However, this difference can be easily erased by increasing the cost of not observing the light in Newcomb Coin Toss. Thanks to an anonymous referee at this journal for clarification on this point. Nevertheless, there is a major difference between the two cases, and it can be stated rather precisely. Let us say that a probability function P instantiates Simpson’s paradox just if there are propositions X, Y and Z such that:

    $$\begin{aligned} P(X \mid YZ)&> P(X \mid \lnot YZ),\\ P(X \mid Y\lnot Z)&> P(X \mid \lnot Y\lnot Z),\hbox { yet} \\ P(X \mid Y)&\le P(X \mid \lnot Y). \end{aligned}$$

    In The Switch Problem, for fixed X and Y, there is a Z satisfying the above inequalities, and also a \(Z^{\prime }\) satisfying their reversal. The Switch Problem thus contains two instances of Simpson’s paradox. Newcomb Coin Toss, like the original Newcomb problem, contains only one. This difference is significant. Whereas an agent facing Newcomb Coin Toss can gather evidence so as to ensure that EDT will give one particular piece of advice (i.e. she can look at the light so as to ensure that EDT will advise buying the box), an agent facing The Switch Problem can gather evidence so as to ensure that EDT will give either of two contradictory pieces of advice. This seems to me to aggravate the case against EDT considerably, as discussed in Sect. 5.
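The definition of Simpson’s paradox above can be checked concretely. A minimal sketch, using the classic kidney-stone numbers as the joint distribution; these figures are an illustrative assumption of mine, not probabilities drawn from The Switch Problem or Newcomb Coin Toss:

```python
# Counts for each outcome (X, Y, Z) out of 700, arranged so that both
# subgroup inequalities hold while the aggregate inequality reverses.
counts = {
    (True,  True,  True):   81, (False, True,  True):    6,
    (True,  False, True):  234, (False, False, True):   36,
    (True,  True,  False): 192, (False, True,  False):  71,
    (True,  False, False):  55, (False, False, False):  25,
}
total = sum(counts.values())

def p(event):
    """Probability of the set of worlds where `event` holds."""
    return sum(n for world, n in counts.items() if event(*world)) / total

def cond(a, b):
    """Conditional probability P(a | b)."""
    return p(lambda x, y, z: a(x, y, z) and b(x, y, z)) / p(b)

X = lambda x, y, z: x

# P(X | YZ) > P(X | not-Y Z) and P(X | Y not-Z) > P(X | not-Y not-Z)...
assert cond(X, lambda x, y, z: y and z) > cond(X, lambda x, y, z: not y and z)
assert cond(X, lambda x, y, z: y and not z) > cond(X, lambda x, y, z: not y and not z)
# ...yet P(X | Y) <= P(X | not-Y).
assert cond(X, lambda x, y, z: y) <= cond(X, lambda x, y, z: not y)
```

A second instance with the inequalities reversed, as in The Switch Problem, would require a further proposition Z′ partitioning the worlds differently; the check itself is the same.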

  11. This assumption may lead to Sue assigning zero probability to some of her available actions, in which case the evidential expected utilities of those actions would be undefined. To avoid this, we may assume instead that the probability that the predictor is mistaken is non-zero but negligible. Thanks to Bob Stalnaker for clarification on this point.

  12. This assumption may also lead to Sue assigning zero probability to some of her available actions. As before, we can avoid this by assuming instead that irrational actions get negligible positive probability.

  13. Proof Suppose that Sue learns R. By hypothesis, \(p(TakeA) = 1\). Hence, in this case, the causal expected utility of taking box A equals its evidential expected utility, which, we have seen, is 100. The causal expected utility of taking box B is \(p_{R}(InB)(100)\), or equivalently \(p_{R}(InB|TakeA)(100)\). We have seen that \(p_{R}(InA|TakeA) = 1\). Hence, \(p_{R}(InB|TakeA) = 0\). Hence, in this case, after learning R, Sue’s causal expected utility of taking box A exceeds that of taking box B by 100.

  14. Proof Suppose that Sue learns \(\lnot R\). By hypothesis, \(p(TakeA) = 1\). Hence, in this case, the causal expected utility of taking box A equals its evidential expected utility, which, we have seen, is 100/3. The causal expected utility of taking box B is \(p_{\lnot R}(InB)(100)\), or equivalently \(p_{\lnot R}(InB|TakeA)(100)\). We have seen that \(p_{\lnot R}(InA|TakeA)=1/3\). Hence, \(p_{\lnot R}(InB|TakeA)=2/3\). Hence, in this case, after learning \(\lnot R\), Sue’s causal expected utility of taking box B exceeds that of taking box A, 200/3 to 100/3.
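The arithmetic in notes 13 and 14 can be verified mechanically. A minimal sketch of my own, assuming only the probabilities stated in the notes and that the boxes’ contents are exclusive (InB holds iff InA does not), so that with \(p(TakeA) = 1\) the causal expected utilities reduce to \(p(InA|TakeA)(100)\) and \(p(InB|TakeA)(100)\):

```python
from fractions import Fraction  # exact rationals, so 100/3 stays 100/3

PRIZE = 100

def causal_eus(p_inA_given_takeA):
    """Return (CEU of taking box A, CEU of taking box B),
    given p(InA | TakeA) and exclusive box contents."""
    eu_A = p_inA_given_takeA * PRIZE
    eu_B = (1 - p_inA_given_takeA) * PRIZE
    return eu_A, eu_B

# Note 13: after learning R, p_R(InA | TakeA) = 1.
assert causal_eus(Fraction(1)) == (100, 0)  # A beats B by 100
# Note 14: after learning not-R, p(InA | TakeA) = 1/3.
assert causal_eus(Fraction(1, 3)) == (Fraction(100, 3), Fraction(200, 3))
```

Using `Fraction` rather than floats keeps the 100/3 and 200/3 of note 14 exact, matching the figures in the text.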

  15. There is a famous theorem due to I. J. Good (1967) according to which it is always rational to gather more evidence before making a decision, provided that the cost of so doing is negligible. Viewcomb is a counterexample to the version of Good’s theorem wherein ‘rational’ is given an evidential interpretation. EDT’s violation of Good’s theorem is often used as an argument against the theory, but Maher (1990) has shown that CDT also violates Good’s theorem on occasion. Hence, violations of Good’s theorem do not, on their own, cut any ice in the debate between EDT and CDT. Nevertheless, this paper suggests that there is a problem for EDT surrounding its treatment of evidence gathering that arises not when the theory prohibits the collection of cost-free evidence, but rather when it permits such collection.

  16. Thanks to Caspar Hare for helpful discussion here.

  17. Thanks to an anonymous referee at this journal for bringing these worries to my attention.

References

  • Adams, E., & Rosenkrantz, R. (1980). Applying the Jeffrey decision model to rational betting and information acquisition. Theory and Decision, 12, 1–20.


  • Ahmed, A. (2014). Evidence, decision and causality. Cambridge: Cambridge University Press.


  • Arntzenius, F. (2008). No regrets, or: Edith Piaf revamps decision theory. Erkenntnis, 68, 277–297.


  • Gibbard, A., & Harper, W. (1978). Counterfactuals and two kinds of expected utility. In J. Leach, A. Hooker, & E. McClennen (Eds.), Foundations and applications of decision theory (pp. 125–162). Dordrecht: Reidel.


  • Good, I. J. (1967). On the principle of total evidence. British Journal for the Philosophy of Science, 17, 319–321.


  • Hedden, B. (2015). Reasons without persons. Cambridge: Cambridge University Press.


  • Jeffrey, R. (1965). The logic of decision. Chicago: University of Chicago Press.


  • Lewis, D. (1981). Causal decision theory. Australasian Journal of Philosophy, 59, 5–30.


  • Maher, P. (1990). Symptomatic acts and the value of evidence in causal decision theory. Philosophy of Science, 57, 479–98.


  • Meacham, C. (2010). Binding and its consequences. Philosophical Studies, 149, 49–71.


  • Nozick, R. (1969). Newcomb’s problem and two principles of choice. In N. Rescher (Ed.), Essays in honor of Carl G. Hempel (pp. 114–146). Dordrecht: Reidel.


  • Savage, L. (1954). The foundations of statistics. New York: Wiley Publications in Statistics.


  • Skyrms, B. (1990). The value of knowledge. Minnesota Studies in the Philosophy of Science, 14, 245–266.


  • Spencer, J., & Wells, I. (2017). Why take both boxes? Philosophy and Phenomenological Research. https://doi.org/10.1111/phpr.12466.


  • Wells, I. (2017). Equal opportunity and Newcomb’s problem. Mind. https://doi.org/10.1093/mind/fzx018.



Acknowledgements

For valuable feedback on the ideas in this paper, I thank Caspar Hare, Bernhard Salow, Jack Spencer, Bob Stalnaker and Roger White.

Correspondence to Ian Wells.

Cite this article

Wells, I. Evidence and rationalization. Philos Stud 177, 845–864 (2020). https://doi.org/10.1007/s11098-018-1209-1

