The Doomsday Argument and the Simulation Argument

Abstract

The Doomsday Argument and the Simulation Argument share certain structural features, and hence are often discussed together (Bostrom 2003; Aranyosi 2004; Richmond 2008; Bostrom and Kulczycki 2011). Both are cases where reflecting on one’s location among a set of possibilities yields a counter-intuitive conclusion: in the first case that the end of humankind is closer than you initially thought, and in the second case that it is more likely than you initially thought that you are living in a computer simulation. Indeed, the two arguments do have some structural similarities. But there are also significant disanalogies between the two arguments, and I argue that these disanalogies mean that the Simulation Argument succeeds and the Doomsday Argument fails.

Figs. 1–4 (figures not reproduced here)

Notes

  1.

    The appeal to indifference in this paragraph and the next is for simplicity only, and plays no role in the argument. I indicate how the argument generalizes to non-uniform priors below.

  2.

    Under the LU distribution, your initial credences in \(H_{2}\) and \(H_{3}\) are 1/3 and 1/2 respectively, and your final credences are 1/2 each. Under the HU distribution, your initial credences in \(H_{2}\) and \(H_{3}\) are 1/3 each, and your final credences are 3/5 and 2/5 respectively. In each case \(H_{2}\) is confirmed.

  3.

    Suppose your credences in \(H_{1}\), \(H_{2}\) and \(H_{3}\) are \(p_{1}\), \(p_{2}\) and \(p_{3}\), where \(p_{1}+p_{2}+p_{3}=1\). Under HU, your initial credence in \(H_{1}\) is \(p_{1}\) and your final credence is \(p_{1}/q\), where \(q=p_{1}+p_{2}/2+p_{3}/3<1\), so \(H_{1}\) is confirmed. Under LU, your initial credence in \(H_{1}\) is \(p_{1}/q\) and your final credence is \(p_{1}\), where \(q=p_{1}+2p_{2}+3p_{3}>1\), so again \(H_{1}\) is confirmed.
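
    The arithmetic in this note can be checked mechanically. The following sketch is the editor's, not the paper's (the function name and sample priors are illustrative); it verifies that \(H_{1}\) is confirmed under both distributions for a non-uniform choice of priors:

```python
from fractions import Fraction

def h1_credences(p1, p2, p3):
    """Return (initial, final) credences in H1 under HU and under LU,
    given priors p1, p2, p3 summing to 1."""
    q_hu = p1 + p2 / 2 + p3 / 3   # HU normalizer; < 1 whenever p2 or p3 > 0
    q_lu = p1 + 2 * p2 + 3 * p3   # LU normalizer; > 1 whenever p2 or p3 > 0
    return (p1, p1 / q_hu), (p1 / q_lu, p1)

# Illustrative non-uniform priors.
hu, lu = h1_credences(Fraction(1, 2), Fraction(1, 3), Fraction(1, 6))
assert hu[1] > hu[0]   # H1 confirmed under HU
assert lu[1] > lu[0]   # H1 confirmed under LU
```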

  4.

    This is easy to see qualitatively for uniform priors. Under LU, your initial credences in the \(H_{i}\) increase as \(i\) gets larger, but your final credences in the \(H_{i}\) are uniform over \(i \ge r\). So your credence is redistributed to small-\(i\) hypotheses, and \(H_{r}\) is confirmed. Under HU, your initial credences in the \(H_{i}\) are uniform, but your final credences for \(i \ge r\) decrease as \(i\) gets larger. So again your credence is redistributed to small-\(i\) hypotheses, and \(H_{r}\) is confirmed.

  5.

    Under the LU distribution, each location initially has a credence of \(1/(1+2+\cdots +n)=2/(n(n+1))\). Hence the diagonal hypothesis \(D\) initially has a credence of \(2/(n+1)\). If you learn that your birth rank is 1, \(D\) has a final credence of \(1/n\), which is less than its initial credence provided \(n>1\). Hence \(D\) is disconfirmed. If you learn that your birth rank is \(r\), \(D\) has a final credence of \(1/(n-r+1)\), which is less than its initial credence provided \(n>2r-1\). Hence \(D\) is disconfirmed for any birth rank less than \((n+1)/2\).
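
    The condition \(n > 2r-1\) can be verified exhaustively for small arrays; this is the editor's check, not part of the paper, and the function name is illustrative:

```python
from fractions import Fraction

def diagonal_credences(n, r):
    """Under LU over an n-row array (row i has i locations), return the
    initial and final credences in the diagonal hypothesis D after
    learning birth rank r."""
    initial = Fraction(2, n + 1)      # n diagonal cells, each worth 2/(n(n+1))
    final = Fraction(1, n - r + 1)    # one diagonal cell among rows r..n
    return initial, final

# D is disconfirmed exactly when n > 2r - 1, i.e. r < (n+1)/2.
for n in range(2, 50):
    for r in range(1, n + 1):
        initial, final = diagonal_credences(n, r)
        assert (final < initial) == (n > 2 * r - 1)
```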

  6.

    Under the HU distribution, the locations in row \(i\) initially have credences of \(1/(in)\). Hence the diagonal hypothesis \(D\) has an initial credence of \((1+1/2+\cdots +1/n)/n\), and if you learn that your birth rank is 1, \(D\) has a final credence of \(1/(1+1/2+\cdots +1/n)\). Numerical computation shows that \(D\) is disconfirmed for \(1<n\le 6\), and confirmed for \(n\ge 7\).
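
    The numerical claim reduces to asking when \(1/H_n > H_n/n\), i.e. when \(H_n^2 < n\) for the harmonic number \(H_n\); a few lines of code (the editor's sketch, not the paper's) reproduce the threshold:

```python
from fractions import Fraction

def harmonic(n):
    """Exact harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

def d_confirmed(n):
    """Under HU, the initial credence in D is H_n/n and the final credence
    (after learning birth rank 1) is 1/H_n, so D is confirmed iff
    1/H_n > H_n/n, i.e. iff H_n**2 < n."""
    return harmonic(n) ** 2 < n

# Disconfirmed for 1 < n <= 6, confirmed for n >= 7, as stated in the note.
assert all(not d_confirmed(n) for n in range(2, 7))
assert all(d_confirmed(n) for n in range(7, 50))
```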

  7.

    Admittedly, even though the thirder solution to the Sleeping Beauty puzzle is widely held, it can be challenged. So a more circumspect conclusion so far would be that if the thirder solution is right, then the Doomsday Argument fails (Dieks 1992). But this is still interesting; compare footnote 11 below.

  8.

    Bostrom’s actual target is the Self-Indication Assumption, which says “Given the fact that you exist, you should (other things being equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist” (Bostrom 2002, p. 66). This assumption makes no mention of self-location uncertainty, and the presumptuous philosopher argument may well be telling against it. But this just shows that the SIA is far too general; taken as an objection to LU, the presumptuous philosopher argument is ineffective.

  9.

    See the papers in Sects. 3 and 4 of Saunders et al. (2010) for arguments on both sides of this issue.

  10.

    Bostrom (2003) in fact argues for a disjunctive thesis: either the human species is very likely to go extinct before developing the required technology, or any civilization with such technology is extremely unlikely to run a significant number of simulations, or we are almost certainly living in a simulation. What Bostrom calls “the core of the Simulation Argument” is the argument that if the first two disjuncts are false, then you should revise your credence that you are living in a simulation upwards to almost 1. It is this core argument that I address here.

  11.

    A more circumspect conclusion is that if the thirder position is correct, then the Simulation Argument succeeds. But recall that the equivalent conclusion for the Doomsday Argument is that it fails if the thirder position is correct. Even in this conditional form, the conclusion is interesting: the two arguments should not be taken as simply two instances of the same form of reasoning.

  12.

    Since the prior probabilities are not uniform, we need to use a generalized LU distribution. That is, if your prior probabilities in the hypotheses \(H_{i}\) are \(p_{i}\), your credence in each possible self-location along the \(H_{i}\) row is \(ap_{i}\), where \(a\) is a constant given by \(\sum _{i} iap_{i}=1\). In this case, \(p_{1}\) is 0.99, \(p_{2}\) through \(p_{n}\) are \(10^{-8}\), and \(n\) is \(10^{6}\), resulting in a value for \(a\) of 1/5001. Hence your credence in \(H_{1}\) becomes \(ap_{1}=0.02\,\%\).
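
    The value of \(a\) can be checked directly; this is a quick script by the editor (variable names are illustrative), not part of the paper:

```python
n = 10**6
p1 = 0.99
p_rest = 1e-8                             # p_2 through p_n

# Normalization: sum_i i * a * p_i = 1, i.e. a = 1 / sum_i i * p_i,
# where sum_i i * p_i = 1 * p1 + (2 + 3 + ... + n) * p_rest.
weight = p1 + p_rest * (n * (n + 1) // 2 - 1)
a = 1 / weight

# weight is approximately 5001, so a ~ 1/5001 and a * p1 ~ 0.0002, i.e. 0.02%.
```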

  13.

    The locations eliminated by this evidence (the left-hand column below the top row) have a total credence of \(a(p_{2}+p_{3} + {\ldots } + p_{n})\), which is of the order of \(10^{-6}\). Hence a negligible proportion of your total credence is redistributed by this evidence.

References

  1. Aranyosi, I. A. (2004). The Doomsday Simulation Argument. Or why isn’t the end nigh, and you’re not living in a simulation. http://philsci-archive.pitt.edu/1590/.

  2. Bostrom, N. (1999). The doomsday argument is alive and kicking. Mind, 108, 539–551.

  3. Bostrom, N. (2002). Anthropic bias: Observer selection effects in science and philosophy. New York: Routledge.

  4. Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly, 53, 243–255.

  5. Bostrom, N., & Cirković, M. M. (2003). The Doomsday Argument and the self-indication assumption: Reply to Olum. Philosophical Quarterly, 53, 83–91.

  6. Bostrom, N., & Kulczycki, M. (2011). A patch for the Simulation Argument. Analysis, 71, 54–61.

  7. Dieks, D. (1992). Doomsday–or: The dangers of statistics. Philosophical Quarterly, 42, 78–84.

  8. Elga, A. (2000). Self-locating belief and the Sleeping Beauty problem. Analysis, 60, 143–147.

  9. Korb, K. B., & Oliver, J. J. (1998). A refutation of the doomsday argument. Mind, 107, 403–410.

  10. Leslie, J. (1990). Is the end of the world nigh? Philosophical Quarterly, 40, 65–72.

  11. Lewis, P. J. (2010). A note on the Doomsday Argument. Analysis, 70, 27–30.

  12. Olum, K. D. (2002). The Doomsday Argument and the number of possible observers. Philosophical Quarterly, 52, 164–184.

  13. Pisaturo, R. (2009). Past longevity as evidence for the future. Philosophy of Science, 76, 73–100.

  14. Price, H. (2008). Probability in the Everett world: Comments on Wallace and Greaves. http://philsci-archive.pitt.edu/2719/.

  15. Richmond, A. M. (2008). Doomsday, Bishop Ussher and simulated worlds. Ratio, 21, 201–217.

  16. Saunders, S., Barrett, J., Kent, A., & Wallace, D. (Eds.). (2010). Many worlds? Everett, quantum theory and reality. Oxford: Oxford University Press.

Corresponding author

Correspondence to Peter J. Lewis.

Cite this article

Lewis, P.J. The Doomsday Argument and the Simulation Argument. Synthese 190, 4009–4022 (2013). https://doi.org/10.1007/s11229-013-0245-9

Keywords

  • Doomsday argument
  • Simulation argument
  • Self-locating belief