
Infallibility in the Newcomb Problem


Abstract

It is intuitively attractive to think that it makes a difference in Newcomb’s problem whether or not the predictor is infallible, in the sense of being certainly actually correct. This paper argues that that view (A) is irrational and (B) manifests a well-documented cognitive illusion.


Notes

  1. Nozick (1969, 207–208).

  2. Lewis (1981) and Joyce (1999) are standard presentations.

  3. Jeffrey (1965) is a standard presentation. (The second edition of this book, Jeffrey 1983, amended Evidential Decision Theory precisely in order to accommodate two-boxing in the Newcomb Problem).

  4. For a view of this kind, see Meek and Glymour (1994).

  5. Proponents of this argument, and of the Causal Decision Theory underlying it, include Nozick (1969, 219–26), Gibbard and Harper (1978, 361), Skyrms (1980, 128–30), Lewis (1981, 309), Joyce (1999, 150–151), Weirich (1998, 116–117; 2001, 126) and Sloman (2005, 90). Egan (2007, 94–96) and Wedgwood (2013, 2647) both endorse O2 in Newcomb’s problem but reject Causal Decision Theory for other reasons.

  6. For the sake of clarity: when I say that the predictor is infallible, or that he is definitely right, or that you are entirely certain that he is right, I mean that you have $\mathrm{Cr}(S_1 \leftrightarrow O_1) = 1$, where $\mathrm{Cr}$ is your subjective credence function (i.e. it is a probability function taking maximal value here). Similarly, when I discuss the ‘Certainty Effect’ in s. 4, I intend it to apply to cases in which we are explicitly told that the predictor is infallible, certainly correct, etc., in the sense just outlined.

  7. Be careful to distinguish what I am here calling infallibility (that the predictor is certainly in fact correct) from necessary infallibility (that the predictor is necessarily correct). We are given in the story that the prediction was made yesterday, and the point of this is supposed to be that the prediction was causally independent of your present choice. (At least this is so if we put aside Price’s worry (2012, 510ff.) that this makes the story inconsistent.) It follows that the predictor is at best contingently correct; for if he is in fact correct, then had you chosen otherwise he would have been incorrect. (At least this is so on some non-‘backtracking’ interpretation of that counterfactual: see Lewis 1979, 33–35; cf. Horgan 1981, 162–165.) Hence he might easily have been wrong even if he is certainly in fact correct. One unfortunate complication of the literature is that although my usage is prevalent, some writers (e.g. Fischer 1994) use ‘infallible’ to mean necessary correctness.

    Two further points on ‘infallible’: first, the distinction between necessary correctness and actual correctness is quite different from the distinction between the normative and the descriptive in decision theory. Newcomb’s problem is a normative problem; but to say that the predictor is ‘actually correct’ is not to treat it as a descriptive one. The correctness of the predictor consists solely in the truth of his prediction and can be treated as exogenous in this problem. Newcomb’s problem is a problem of rational choice only for the person facing the boxes, not for the predictor. To describe the predictor as certainly in fact correct is not to misapply descriptive decision theory but only to specify further what this normative problem involves.

    Second, one might question whether an infallible predictor is even possible. For instance, Ledwig and Spohn have both argued that for any predictor, one can always imagine a possible world in which that predictor predicts incorrectly (Ledwig 2000, 172). At least in the modal system S5 it follows, and I agree, that a necessarily correct predictor is impossible. But this is consistent with the predictor’s being infallible in the more restricted sense intended here, namely: that the agent has credence 1 that the predictor is actually, if contingently, correct on this occasion.

  8. The point of ‘in its neighbourhood’ is to focus attention away from a discontinuity to which Evidential Decision Theory is independently committed. EDT endorses two-boxing if your confidence that the predictor is correct (i.e. your $\mathrm{Cr}(S_1 \mid O_1) = \mathrm{Cr}(S_2 \mid O_2)$) is less than $n_0 =_{\mathrm{def}} (M+K)/2M$. But as soon as your confidence rises beyond that level, which is approximately 0.5 when $K$ is small relative to $M$, it abruptly switches to one-boxing. This discontinuity is both defensible and irrelevant.
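
    A minimal numerical sketch of this discontinuity (my illustration, not the paper’s): the note leaves $M$ and $K$ abstract, so the usual Newcomb dollar amounts are assumed below, along with the risk-neutral valuation $V(\$X) = X$ of n. 15; $n$ is the agent’s confidence that the predictor is correct.

    ```python
    # Sketch only: M, K and risk-neutrality are illustrative assumptions.
    M, K = 1_000_000, 1_000   # assumed opaque-box and transparent-box amounts

    def edt_one_box(n):
        # One-boxing: $M if the predictor foresaw it (prob. n), else $0.
        return n * M

    def edt_two_box(n):
        # Two-boxing: $K if foreseen (prob. n), else $(M + K) (prob. 1 - n).
        return n * K + (1 - n) * (M + K)

    n0 = (M + K) / (2 * M)    # the threshold from n. 8; here 0.5005
    for n in (n0 - 1e-4, n0 + 1e-4):
        choice = "one-box" if edt_one_box(n) > edt_two_box(n) else "two-box"
        print(f"n = {n:.4f}: EDT recommends {choice}")
    ```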

  9. E.g. Leeds (1984, 106), Clark (2007, 143–144), Hubin and Ross (1985, 439). Bach (1987, 416) claims that most proponents of two-boxing against fallible predictors back off from this policy against infallible predictors. Leslie (1991, 73–74) says something similar, although he himself is a thoroughgoing one-boxer. More forthrightly, Levi (1975) writes that in Newcomb’s problem as (under-)described above, you typically do not know what probabilities to apply, and so should follow a strategy like ‘maximin’ (which recommends two-boxing). But if you know that the predictor is in fact perfectly accurate, then you should one-box.

  10. E.g. Nozick (1969, 232; but see immediately below in the main text), Gibbard and Harper (1978, 370) and Seidenfeld (1984, 203). Horgan grounds the present application of the CP in premises about what the agent has or has not the power to do (1985, 230–231). Sobel (1988, 109–111) criticizes this for misusing the word ‘power’. But one might take the CP to support the DS quite independently of that semantic question.

  11. Nozick (1969, 232).

  12. Two possible explanations of this have been suggested to me: (1) that people treat a probability of 1 as a qualitative difference whereas in fact it is only a quantitative one; (2) that people think the Sorites paradox gets it wrong when it concludes that, since one grain of sand shouldn’t make a difference, one still has a heap after removing grains one at a time from a heap of 10,000,000 grains. I think that (1) is not very far from my own story in s. 4: what I am adding is a more detailed account of how the difference ‘between one in n and none in n’ is ‘qualitative’, i.e. that it induces a discontinuity. As for (2): perhaps it is true that the best answer to the Sorites paradox is that one grain can make all the difference; but that leaves open something that the present treatment tries to answer, namely why in the present problem it should always be this grain of sand, i.e. the very last one, that seems decisive. (Thanks to a referee.)

  13. Obviously this is not so much an argument as an appeal to the reader’s intuition. But it is possible to argue that for any risk-averse subject who falls short of being infinitely risk averse, it is possible to choose an $n < 1$ such that the subject prefers (i) to (ii). For instance, suppose the subject has a standard utility function for dollars $U(z) = -e^{-Rz}$, $R > 0$ being the Arrow–Pratt coefficient of risk aversion. Then she prefers (i) to (ii) just in case $n(1 - e^{-RM})(1 + e^{-RK}) > 1 - e^{-R(M+K)}$. This inequality always has a solution $n < 1$ for $R < \infty$.
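
    To check that claim concretely (a sketch; the sample values of $R$, $M$ and $K$ below are my assumptions, which the note does not fix): solving the displayed inequality for $n$ gives the threshold $n^* = \bigl(1 - e^{-R(M+K)}\bigr)/\bigl((1 - e^{-RM})(1 + e^{-RK})\bigr)$, and since $e^{-RK} > e^{-RM}$ whenever $K < M$, the denominator exceeds the numerator, so $n^* < 1$ for every finite $R > 0$.

    ```python
    # Numerical check of note 13 (illustrative values of R, M, K assumed).
    import math

    M, K = 1_000_000, 1_000   # assumed dollar amounts

    def n_star(R):
        # Threshold above which the CARA agent prefers (i) to (ii).
        num = 1 - math.exp(-R * (M + K))
        den = (1 - math.exp(-R * M)) * (1 + math.exp(-R * K))
        return num / den

    for R in (1e-7, 1e-6, 1e-5, 1e-3):
        print(f"R = {R:g}: n* = {n_star(R):.6f}")   # always strictly below 1
    ```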

  14. It is true that in the situation that I described, Chas and Dave are probably not distinct persons, although they may be, if (1) the Chas-stages are psychologically continuous with one another; and (2) so are the Dave-stages; but (3) the Chas-stages are not psychologically continuous with the Dave-stages. Certainly they are not temporally continuous persons. But nothing in Newcomb’s problem, or in anyone’s intuitions regarding it, or in any significant arguments concerning it, depends on the predictor’s counting even as a person, let alone a temporally continuous one. If it could be the market (Broome 1989), an alien being (Nozick 1969, 207) or even God (Craig 1987; Resnik 1987, 111), then why couldn’t it be a temporally scattered sub-personal aggregate of person-stages?

  15. This is Evidential Decision Theory or EDT, as defended in its pure form in Jeffrey (1965). The ‘natural assumption’ is that your value for a lottery is its expected value. In that case and in Jeffrey’s terminology, the CP demands one-boxing just in case $nV(M) > (1-n)V(M+K) + nV(K)$, i.e. iff $\mathrm{Cr}(S_1 \mid O_1)\,V(M) > \mathrm{Cr}(S_1 \mid O_2)\,V(M+K) + \mathrm{Cr}(S_2 \mid O_2)\,V(K)$, i.e. iff EDT demands it. (Here I am writing $V(X)$ for your news value for the proposition that you get $X, and setting $V(0) = 0$.) In particular, if we assume that $V(\text{You get \$}X) = X$, then the CP demands one-boxing iff $nM > (1-n)(M+K) + nK$, i.e. iff $n > (M+K)/2M = n_0$, as defined at n. 8.
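
    For the reader’s convenience, the algebra behind the last equivalence is elementary (it uses nothing beyond the note’s own assumptions):

    ```latex
    \begin{align*}
    nM > (1-n)(M+K) + nK
      &\iff n(M-K) > (1-n)(M+K) \\
      &\iff n(M-K) + n(M+K) > M+K \\
      &\iff 2nM > M+K \\
      &\iff n > \frac{M+K}{2M} = n_0 .
    \end{align*}
    ```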

  16. Kahneman and Tversky (1981, 30) with trivial alterations.

  17. E.g. Shafir et al. (2008).

  18. An obvious problem with this strategy is that it seems illegitimately to confuse descriptive with normative approaches to decision theory. Certainty Effect is a matter of descriptive psychology (this is something that people actually do) whereas Newcomb’s Problem is a problem in the normative theory: should you one-box or two-box? It is true that this paper uses both normative and descriptive arguments, but it segregates their applications. The purely normative argument of s. 3 is directed at question (A) and aims to establish the normative conclusion that you should not find the DS attractive. The descriptive argument of s. 4 is directed at question (B) and aims to explain why people do in fact (and contrary to this advice) find it attractive.

  19. Whilst I am here following the standard presentation of Newcomb’s problem (as in, e.g., Joyce 1999, 146–154), some philosophers have gone deeper, claiming that the alleged opposition between causal and evidential approaches can be overcome given either (i) a clearer sense of the internal evidence available to the deliberator (Eells 1982, ch. 7–8) or (ii) a proper understanding of causation and its role in the problem (for different elaborations of which see Price 2012 s. 8 and Spohn 2012). I cannot enter into the matter here, but very briefly: Ad (i): there are in my view (fallibilistic) versions of the Newcomb paradox to which Eells’s ‘tickle’-style defence does not apply (see my forthcoming, s. 4.6). And ad (ii): I am generally pessimistic about the prospects for an irenic resolution of the dispute along Pricean lines, for reasons given in my forthcoming, ch. 8. But I should add that Spohn’s recent work probably represents the most serious challenge to the standard picture assumed here. I hope to address this in future work.

  20. So I accept the Certainty Principle and endorse one-boxing in the Newcomb Problem exactly where Evidential Decision Theory does. I cannot argue for this position here (but see my forthcoming, ss. 7.3–7.4). On the other hand, you needn’t agree with me on this point to accept my main argument, which does not rely on the truth of the Certainty Principle but only on its inconsistency with the DS. The position of this paper is that since the DS is inconsistent with its own sole motivation, namely the CP, it cannot be rational to accept the DS, whatever you think about the CP. (Thanks to a referee.)

  21. So I am here supposing that most of us do in fact respond to a probabilistically weighted combination of decision-theoretic motivations in something like the manner recommended in Nozick (1993), Part II. It is a nice question whether people should rationally have such a mixed response to these mixed motives. Fortunately, I can set that question aside for the purely descriptive purposes of this section. (For more on the division of descriptive/normative labour in this paper, see n. 18 above).

  22. It might concern you that I am not comparing like with like: in the move from option 1A to option 2A the associated chance fell by the same multiplicative factor as in the move from option 1B to option 2B. But in the move from case (a) to case (b) the associated probability fell by the same additive increment as in the move from (b) to (c). But if $N$ is large then this difference is insignificant, since in that case $\left(\frac{N-1}{N}\right)^2 \approx \frac{N-2}{N}$, and so the factor by which the predictor’s strike rate in (b) exceeds that in (c) is roughly equal to the factor by which the strike rate in (a) ($=1$) exceeds that in (b).
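
    (The approximation invoked in the last sentence is just a first-order expansion in $1/N$; this elaboration of the note’s claim is mine:

    ```latex
    \[
    \left(\frac{N-1}{N}\right)^{2}
      = 1 - \frac{2}{N} + \frac{1}{N^{2}}
      \;\approx\; 1 - \frac{2}{N}
      = \frac{N-2}{N}
      \qquad (\text{large } N).
    \]
    ```

    Dropping the $1/N^{2}$ term is what makes the multiplicative and additive comparisons coincide.)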

  23. I should here explicitly acknowledge what will anyway be obvious, that the psychological source of this intuitive illusion is really an empirical matter that could only be decisively settled by experiment. But this section was written entirely from an armchair. Let me give two excuses for that. First: to my knowledge nobody has ever posited, let alone tested, any explanation of why the DS is intuitively plausible. So even from the armchair it is possible to advance matters by positing one. Second: there is in the nature of infallible Newcomb cases some difficulty in testing the relative strength of E-motives and C-motives. It may be hard to get anyone to believe that somebody else is a very accurate predictor of her choice. How could you ever convince someone that the predictor is infallible? Perhaps this explains the dearth of even aspirationally rigorous studies of intuition in this special case (Anand 1990 being the only one known to me). In any case it makes it plausible that the present enquiry is best pursued, because only pursuable, from the armchair.

  24. Of course, the paradoxical nature of the Newcomb Problem is visible even in the ‘imperfect’ versions. But the pull of evidentialist arguments is much clearer in the ‘perfect’ cases. Notwithstanding my own sympathies for evidentialism, I must therefore eschew the use of the perfect cases to make people feel this pull, since on the view defended here what underlies it in these cases is a form of irrational overweighting. The present paper also rules out arguing for one-boxing in the imperfect case on the grounds that these are not relevantly different from the perfect case, where one-boxing is highly intuitive (Seidenfeld 1984, 203–204 appears to suggest this strategy).

References

  • Ahmed, A. (2014). Evidence, decision and causality. Cambridge: CUP (forthcoming).

  • Allais, M. (1953). Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’école américaine. Econometrica, 21, 503–546.

  • Anand, P. (1990). Two types of utility. Greek Economic Review, 12, 58–74.

  • Bach, K. (1987). Newcomb’s problem: The $1,000,000 solution. Canadian Journal of Philosophy, 17, 409–425.

  • Broome, J. (1989). An economic Newcomb problem. Analysis, 49, 220–222.

  • Clark, M. (2007). Paradoxes from A to Z (2nd ed.). London: Routledge.

  • Craig, W. L. (1987). Divine foreknowledge and Newcomb’s paradox. Philosophia, 17, 331–350.

  • Eells, E. (1982). Rational decision and causality. Cambridge: CUP.

  • Egan, A. (2007). Some counterexamples to causal decision theory. The Philosophical Review, 116, 93–114.

  • Fischer, J. M. (1994). The metaphysics of free will: An essay on control. Oxford: OUP.

  • Gibbard, A., & Harper, W. (1978). Counterfactuals and two kinds of expected utility. In C. Hooker, J. Leach, & E. McClennen (Eds.), Foundations and applications of decision theory (pp. 125–162). Dordrecht: Reidel. Reprinted in P. Gärdenfors & N.-E. Sahlin (Eds.), Decision, probability and utility (1988). Cambridge: CUP.

  • Horgan, T. (1981). Counterfactuals and Newcomb’s problem. The Journal of Philosophy, 78, 331–356.

  • Horgan, T. (1985). Newcomb’s problem: A stalemate. In R. Campbell & L. Sowden (Eds.), Paradoxes of rationality and cooperation: Prisoner’s dilemma and Newcomb’s problem (pp. 223–234). Vancouver: UBC Press.

  • Hubin, D., & Ross, G. (1985). Newcomb’s perfect predictor. Noûs, 19, 439–446.

  • Jeffrey, R. (1965). The logic of decision. Chicago: Chicago UP.

  • Jeffrey, R. (1983). The logic of decision (2nd ed.). Chicago: Chicago UP.

  • Joyce, J. (1999). Foundations of causal decision theory. Cambridge: CUP.

  • Kahneman, D., & Tversky, A. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458.

  • Ledwig, M. (2000). Newcomb’s problem. Dissertation submitted to the University of Constance. http://kops.ub.uni-konstanz.de/bitstream/handle/urn:nbn:de:bsz:352-opus-5241/ledwig.pdf?sequence=1. Accessed 2 Feb 2014.

  • Leeds, S. (1984). Eells and Jeffrey on Newcomb’s problem. Philosophical Studies, 46, 97–107.

  • Leslie, J. (1991). Ensuring two bird deaths with one throw. Mind, 100, 73–86.

  • Levi, I. (1975). Newcomb’s many problems. Theory and Decision, 6, 161–175.

  • Lewis, D. (1979). Counterfactual dependence and time’s arrow. Noûs, 13, 455–476. Reprinted in his Philosophical papers (Vol. 2, 1986). Oxford: OUP.

  • Lewis, D. (1981). Causal decision theory. Australasian Journal of Philosophy, 59, 5–30.

  • Meek, C., & Glymour, C. (1994). Conditioning and intervening. British Journal for the Philosophy of Science, 45, 1001–1021.

  • Nozick, R. (1969). Newcomb’s problem and two principles of choice. In N. Rescher (Ed.), Essays in honor of Carl G. Hempel (pp. 114–146). Dordrecht: D. Reidel. Reprinted in P. Moser (Ed.), Rationality in action: Contemporary approaches (1990). Cambridge: CUP.

  • Nozick, R. (1993). The nature of rationality. Princeton: Princeton UP.

  • Price, H. (2012). Causation, chance and the rational significance of supernatural evidence. The Philosophical Review, 121, 483–538.

  • Resnik, M. (1987). Choices: An introduction to decision theory. Minneapolis: University of Minnesota Press.

  • Seidenfeld, T. (1984). Comments on causal decision theory. PSA, 1984, 201–212.

  • Shafir, S., Reich, T., Tsur, E., Erev, I., & Lotem, A. (2008). Perceptual accuracy and conflicting effects of certainty on risk-taking behaviour. Nature, 453, 917–920.

  • Skyrms, B. (1980). Causal necessity. New Haven: Yale UP.

  • Sloman, S. (2005). Causal models. Oxford: OUP.

  • Sobel, J. H. (1988). Infallible predictors. Philosophical Review, 97, 3–24. Reprinted in his Taking chances (1994). Cambridge: CUP.

  • Spohn, W. (2012). Reversing 30 years of discussion: Why causal decision theorists should one-box. Synthese, 187, 95–122.

  • Wedgwood, R. (2013). Gandalf’s solution to the Newcomb problem. Synthese, 190, 2643–2675.

  • Weirich, P. (1998). Equilibrium and rationality: Game theory revised by decision rules. Cambridge: CUP.

  • Weirich, P. (2001). Decision space: Multidimensional utility analysis. Cambridge: CUP.


Author information

Correspondence to Arif Ahmed.

About this article

Ahmed, A. Infallibility in the Newcomb Problem. Erkenntnis 80, 261–273 (2015). https://doi.org/10.1007/s10670-014-9625-x

