## Abstract

Suppose a being in whose power to predict your choices you have enormous confidence. (One might tell a science-fiction story about a being from another planet, with an advanced technology and science, who you know to be friendly, etc.) You know that this being has often correctly predicted your choices in the past (and has never, so far as you know, made an incorrect prediction about your choices), and furthermore you know that this being has often correctly predicted the choices of other people, many of whom are similar to you, in the particular situation to be described below. One might tell a longer story, but all this leads you to believe that almost certainly this being’s prediction about your choice in the situation to be discussed will be correct.

Both it and its opposite must involve no mere artificial illusion such as at once vanishes upon detection, but a natural and unavoidable illusion, which even after it has ceased to beguile still continues to delude though not to deceive us, and which though thus capable of being rendered harmless can never be eradicated.

Immanuel Kant,

Critique of Pure Reason, A422, B450

It is not clear that I am entitled to present this paper. For the problem of choice which concerns me was constructed by someone else, and I am not satisfied with my attempts to work through the problem. But since I believe that the problem will interest and intrigue Peter Hempel and his many friends, and since its publication may call forth a solution which will enable me to stop returning, periodically, to it, here it is. It was constructed by a physicist, Dr. William Newcomb, of the Livermore Radiation Laboratories in California. I first heard the problem, in 1963, from his friend Professor Martin David Kruskal of the Princeton University Department of Astrophysical Sciences. I have benefitted from discussions, in 1963, with William Newcomb, Martin David Kruskal, and Paul Benacerraf. Since then, on and off, I have discussed the problem with many other friends whose attempts to grapple with it have encouraged me to publish my own. It is a beautiful problem. I wish it were mine.


## References

If the being predicts that you will consciously randomize your choice, e.g., flip a coin, or decide to do one of the actions if the next object you happen to see is blue, and otherwise do the other action, then he does not put the $*M* in the second box.

Try it on your friends or students and see for yourself. Perhaps some psychologists will investigate whether responses to the problem are correlated with some other interesting psychological variable that they know of.
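For concreteness, the expected-utility side of the reasoning can be computed directly. The payoffs below follow the standard presentation of the problem (a visible $1,000, and $*M* = $1,000,000 in the second box just in case taking only it is predicted); the predictor's accuracy p = 0.99 is my own illustrative figure, a sketch rather than anything in the text:

```python
def expected_utilities(p=0.99, m=1_000_000, k=1_000):
    """Expected dollar payoffs of the two actions, given a predictor
    whose forecast of either action is correct with probability p."""
    # A1: take only the second box; it holds $M iff A1 was predicted.
    eu_one_box = p * m
    # A2: take both boxes; the second holds $M only if A1 was
    # (incorrectly) predicted, which has probability 1 - p.
    eu_two_box = p * k + (1 - p) * (k + m)
    return eu_one_box, eu_two_box

one, two = expected_utilities()
print(one, two)  # one-boxing wins in expectation whenever p > 1/2 + k/(2m)
```

The dominance argument pulls the other way — whatever the second box holds, taking both boxes yields $1,000 more — and this divergence between the two principles is just what the paper examines.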

If the questions and problems are handled as I believe they should be, then some of the ensuing discussion would have to be formulated differently. But there is no point to introducing detail extraneous to the central problem of this paper here.

This divergence between the dominance principle and the expected utility principle is pointed out in Robert Nozick, *The Normative Theory of Individual Choice*, unpublished doctoral dissertation, Princeton University, Princeton, 1963, and in Richard Jeffrey, *The Logic of Decision*, McGraw-Hill, New York, 1965.

This is shorthand for: action *A* is done and state *S*_{2} obtains, or action *B* is done and state *S*_{1} obtains. The 'or' is the exclusive or.

Note that

*S*_{1} = *A*_{1} & *S*_{3} or *A*_{2} & *S*_{4}
*S*_{2} = *A*_{1} & *S*_{4} or *A*_{2} & *S*_{3}
*S*_{3} = *A*_{1} & *S*_{1} or *A*_{2} & *S*_{2}
*S*_{4} = *A*_{1} & *S*_{2} or *A*_{2} & *S*_{1}

Similarly, the above identities hold for Newcomb's example, with which I began, if one lets

*S*_{1} = The money is in the second box.
*S*_{2} = The money is not in the second box.
*S*_{3} = The being predicts your choice correctly.
*S*_{4} = The being incorrectly predicts your choice.
*A*_{1} = You take only what is in the second box.
*A*_{2} = You take what is in both boxes.

State

*S* is not probabilistically independent of actions *A* and *B* if prob (*S* obtains/*A* is done) ≠ prob (*S* obtains/*B* is done). In Newcomb's predictor example, assuming that 'He predicts correctly' and 'He predicts incorrectly' are each probabilistically independent of my actions, then it is not the case that 'He puts the money in' and 'He does not put the money in' are each probabilistically independent of my actions. Usually it will be the case that if the members of the set of exhaustive and exclusive states are each probabilistically independent of the actions

*A*_{1} and *A*_{2}, then it will not be the case that the states equivalent to our contrived states are each probabilistically independent of both *A*_{1} and *A*_{2}. For example, suppose prob (*S*_{1}/*A*_{1}) = prob (*S*_{1}/*A*_{2}) = prob (*S*_{1}); prob (*S*_{2}/*A*_{2}) = prob (*S*_{2}/*A*_{1}) = prob (*S*_{2}). Let:

*S*_{3} = *A*_{1} & *S*_{1} or *A*_{2} & *S*_{2}
*S*_{4} = *A*_{1} & *S*_{2} or *A*_{2} & *S*_{1}

If prob (*S*_{1}) ≠ prob (*S*_{2}), then *S*_{3} and *S*_{4} are not probabilistically independent of *A*_{1} and *A*_{2}. For prob (*S*_{3}/*A*_{1}) = prob (*S*_{1}/*A*_{1}) = prob (*S*_{1}), and prob (*S*_{3}/*A*_{2}) = prob (*S*_{2}/*A*_{2}) = prob (*S*_{2}). Therefore if prob (*S*_{1}) ≠ prob (*S*_{2}), then prob (*S*_{3}/*A*_{1}) ≠ prob (*S*_{3}/*A*_{2}). If prob (*S*_{1}) = prob (*S*_{2}) = 1/2, then the possibility of describing the states as we have will not matter. For if, for example, *A*_{1} can be shifted around so as to dominate *A*_{2}, then before the shifting it will have a higher expected utility than *A*_{2}. Generally, if the members of the set of exclusive and exhaustive states are probabilistically independent of both *A*_{1} and *A*_{2}, then the members of the contrived set of states will be probabilistically independent of both *A*_{1} and *A*_{2} only if the probabilities of the original states which are components of the contrived states are identical. And in this case it will not matter which way one sets up the situation.

Note that this procedure seems to work quite well for situations in which the states are not only not probabilistically independent of the actions, but are not logically independent either. Suppose that a person is asked whether he prefers doing

*A* to doing *B*, where the outcome of *A* is [*p* if *S*_{1} and *r* if *S*_{2}] and the outcome of *B* is [*q* if *S*_{2} and *r* if *S*_{1}]. And suppose that he prefers *p* to *q* to *r*, and that *S*_{1} = I do *B*, and *S*_{2} = I do *A*. The person realizes that if he does *A*, *S*_{2} will be the case and the outcome will be *r*, and he realizes that if he does *B*, *S*_{1} will be the case and the outcome will be *r*. Since the outcome will be *r* in any case, he is indifferent between doing *A* and doing *B*. So let us suppose he flips a coin in order to decide which to do. But given that the coin is fair, it is now the case that the probability of *S*_{1} = 1/2 and the probability of *S*_{2} = 1/2. If we mechanically started to compute the expected utility of *A*, and of *B*, we would find that *A* has a higher expected utility than does *B*. For mechanically computing the expected utilities, it would turn out that the expected utility of *A* = 1/2 × *u*(*p*) + 1/2 × *u*(*r*), and the expected utility of *B* = 1/2 × *u*(*q*) + 1/2 × *u*(*r*). If, however, we use the conditional probabilities, then the expected utility of *A* = prob (*S*_{1}/*A*) × *u*(*p*) + prob (*S*_{2}/*A*) × *u*(*r*) = 0 × *u*(*p*) + 1 × *u*(*r*) = *u*(*r*). And the expected utility of *B* = prob (*S*_{2}/*B*) × *u*(*q*) + prob (*S*_{1}/*B*) × *u*(*r*) = 0 × *u*(*q*) + 1 × *u*(*r*) = *u*(*r*). Thus the expected utilities of *A* and *B* are equal, as one would wish.

This position was suggested, with some reservations due to Newcomb's example, in Robert Nozick,

*The Normative Theory of Individual Choice*, *op. cit.* It was also suggested in Richard Jeffrey, *The Logic of Decision*, *op. cit.*

I should mention, what the reader has no doubt noticed, that the previous

*example* is not fully satisfactory. For it seems that preferring the academic life to the athlete's life should be as strong evidence for the tendency as is choosing the academic life. And hence *P*'s choosing the athlete's life, though he prefers the academic life, on expected utility grounds does not seem to make it likely that he does not have the tendency. What the example seems to require is an inherited tendency to decide to do *A* which is such that (1) The probability of its presence cannot be estimated on the basis of the person's preferences, but only on the basis of knowing the genetic make-up of his parents, or knowing his actual decisions; and (2) The theory about how the tendency operates yields the result that it is unlikely that it is present if the person decides not to do *A* in the example-situation, even though he makes this decision on the basis of the stated expected utility grounds. It is not clear how, for this example, the details are to be coherently worked out.

That is, the Dominance Principle is legitimately applicable to situations in which ~ (∃

*S*) (∃*A*) (∃*B*) [prob (*S* obtains/*A* is done) ≠ prob (*S* obtains/*B* is done)].

The other eleven possibilities about the states are:

Unless it is possible that there be causality or influence backwards in time. I shall not here consider this possibility, though it may be that only on its basis can one defend, for some choice situations, the refusal to use the dominance principle. I try to explain later why, for some situations, even if one grants that there is no influence back in time, one may not escape the feeling that, somehow, there is.

Cf. R. Duncan Luce and Howard Raiffa, *Games and Decisions*, John Wiley & Sons, New York, 1957, pp. 94–102.

Almost certainty_{1} > almost certainty_{2}, since almost certainty_{2} is some function of the probability that brother I has the dominant action gene given that he performs the dominant action (= almost certainty_{1}), and of the probability that brother II does the dominant action given that he has the dominant action gene.

In choosing the headings for the rows, I have ignored more complicated possibilities, which must be investigated for a fuller theory, e.g., some actions influence which state obtains and others do not.
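The inequality almost certainty_{1} > almost certainty_{2} above can be illustrated arithmetically. The multiplicative form below is my own gloss — the footnote says only that almost certainty_{2} is *some* function of the two probabilities — and the numbers are hypothetical:

```python
# Hypothetical gloss on the almost-certainty comparison: brother II's
# case chains two uncertain links, so the combined estimate is weaker.
ac1 = 0.95     # prob(brother I has the gene, given he performs the action)
p_act = 0.90   # prob(brother II performs the action, given he has the gene)

ac2 = ac1 * p_act   # one simple "function" of the two probabilities
print(ac1, ac2)     # ac2 < ac1 whenever p_act < 1
```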

I here consider only the case of two actions. Obvious and messy problems for the kind of policy about to be proposed are raised by the situation in which more than two actions are available (e.g., under what conditions do pairwise comparisons lead to a linear order), whose consideration is best postponed for another occasion.
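The worry about more than two actions can be made concrete: pairwise verdicts yield a linear order only if the revealed relation is total and transitive, and nothing guarantees this in advance. A small checker (entirely my own illustration, not anything proposed in the text) that tests a set of pairwise verdicts for these properties:

```python
from itertools import permutations

def yields_linear_order(actions, prefers):
    """True iff the pairwise relation prefers(x, y) linearly orders
    actions: for distinct x, y exactly one of prefers(x, y) and
    prefers(y, x) holds, and the relation is transitive."""
    for x, y in permutations(actions, 2):
        if prefers(x, y) == prefers(y, x):   # a tie or a double win: not total
            return False
    for x, y, z in permutations(actions, 3):
        if prefers(x, y) and prefers(y, z) and not prefers(x, z):
            return False                     # an intransitive triple
    return True

# A cyclic set of verdicts (a over b, b over c, c over a) fails:
cycle = {("a", "b"), ("b", "c"), ("c", "a")}
print(yields_linear_order("abc", lambda x, y: (x, y) in cycle))   # False

# Verdicts induced by a single utility scale always pass:
u = {"a": 3, "b": 2, "c": 1}
print(yields_linear_order("abc", lambda x, y: u[x] > u[y]))       # True
```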

See R. Duncan Luce and Howard Raiffa, *op. cit.*, pp. 275–298 and the references therein; Daniel Ellsberg, 'Risk, Ambiguity, and the Savage Axioms', *Quarterly Journal of Economics* **75** (1961), 643–669, and the articles by his fellow symposiasts Howard Raiffa and William Fellner.

If the distinctions I have drawn are correct, then some of the existing literature is in need of revision. Many of the writers might be willing to just draw the distinctions we have adumbrated. But for the specific theories offered by some personal probability theorists, it is not clear how this is to be done. For example, L. J. Savage in *The Foundations of Statistics*, John Wiley & Sons, New York, 1954, recommends unrestricted use of dominance principles (his postulate *P*2), which would not do in case (I). And Savage seems explicitly to wish to deny himself the means of distinguishing case (I) from the others. (For further discussion, some of which must be revised in the light of this paper, of Savage's important and ingenious work, see Robert Nozick, *op. cit.*, Chapter V.) And Richard Jeffrey, *The Logic of Decision*, *op. cit.*, recommends universal use of maximizing expected utility relative to the conditional probabilities of the states given the actions (see footnote 10 above). This will not do, I have argued, in cases (III) and (IV). But Jeffrey also sees it as a special virtue of this theory that it does not utilize certain notions, and these notions look like they might well be required to draw the distinctions between the different kinds of cases. While on the subject of how to distinguish the cases, let me (be the first to) say that I have used without explanation, and in this paper often interchangeably, the notions of influencing, affecting, etc. I have felt free to use them without paying them much attention because even such unreflective use serves to open a whole area of concern. A detailed consideration of the different possible cases with many actions, some influencing, and in different degrees, some not influencing, combined with an attempt to state detailed principles using precise 'influence' notions undoubtedly would bring forth many intricate and difficult problems. These would show, I think, that my quick general statements about influence and what distinguishes the cases, are not, strictly speaking, correct. But going into these details would necessitate going into these details. So I will not.

Though perhaps it explains why I

*momentarily* felt I had succeeded too well in constructing the vaccine case, and that perhaps one *should* perform the non-dominant action there.

But it also seems relevant that in Newcomb's example not only is the action referred to in the explanation of which state obtains (though in a nonextensional belief context), but also there is another explanatory tie between the action and the state; namely, that the state's obtaining and your actually performing the action are both partly explained in terms of some third thing (your being in a certain initial state earlier). A fuller investigation would have to pursue yet more complicated examples which incorporated this.


## Copyright information

© 1969 Springer Science+Business Media Dordrecht

## About this chapter

### Cite this chapter

Nozick, R. (1969). Newcomb’s Problem and Two Principles of Choice. In: Rescher, N. (eds) Essays in Honor of Carl G. Hempel. Synthese Library, vol 24. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-1466-2_7


Print ISBN: 978-90-481-8332-6

Online ISBN: 978-94-017-1466-2
