Abstract
The Doomsday Argument and the Simulation Argument share certain structural features, and hence are often discussed together (Bostrom 2003; Aranyosi 2004; Richmond 2008; Bostrom and Kulczycki 2011). Both are cases in which reflecting on one's location among a set of possibilities yields a counter-intuitive conclusion: in the first case, that the end of humankind is closer than you initially thought; in the second, that it is more likely than you initially thought that you are living in a computer simulation. The two arguments do have some structural similarities. But there are also significant disanalogies between them, and I argue that these disanalogies mean that the Simulation Argument succeeds while the Doomsday Argument fails.
Notes
The appeal to indifference in this paragraph and the next is for simplicity only, and plays no role in the argument. I indicate how the argument generalizes to non-uniform priors below.
Under the LU distribution, your initial credences in \(H_{2}\) and \(H_{3}\) are 1/3 and 1/2 respectively, and your final credences are 1/2 each. Under the HU distribution, your initial credences in \(H_{2}\) and \(H_{3}\) are 1/3 each, and your final credences are 3/5 and 2/5 respectively. In each case \(H_{2}\) is confirmed.
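These figures can be reproduced with a short calculation. The sketch below is illustrative only: it assumes three hypotheses \(H_i\), where \(H_i\) comprises \(i\) possible birth ranks, and takes the evidence to be that your birth rank is 2, which recovers the credences quoted above.

```python
from fractions import Fraction as F

n = 3                                   # hypotheses H_1..H_3; row i has i birth ranks
ranks = {i: list(range(1, i + 1)) for i in range(1, n + 1)}

# LU: credence spread uniformly over all 1+2+3 = 6 locations.
total = sum(len(r) for r in ranks.values())
lu_init = {i: F(len(ranks[i]), total) for i in ranks}      # H_2: 1/3, H_3: 1/2

# Evidence (birth rank 2) eliminates every location except rank 2 in H_2 and H_3.
survivors = {i: F(1, total) for i in ranks if 2 in ranks[i]}
norm = sum(survivors.values())
lu_final = {i: c / norm for i, c in survivors.items()}     # 1/2 each

# HU: 1/3 per hypothesis, split uniformly over the locations within each row.
survivors = {i: F(1, n) / len(ranks[i]) for i in ranks if 2 in ranks[i]}
norm = sum(survivors.values())
hu_final = {i: c / norm for i, c in survivors.items()}     # 3/5 and 2/5
```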
Suppose your credences in \(H_1\), \(H_2\) and \(H_3\) are \(p_1\), \(p_2\) and \(p_3\), where \(p_1+p_2+p_3=1\). Under HU, your initial credence in \(H_1\) is \(p_1\) and your final credence is \(p_1/q\), where \(q=p_1+p_2/2+p_3/3<1\), so \(H_1\) is confirmed. Under LU, your initial credence in \(H_1\) is \(p_1/q\) and your final credence is \(p_1\), where \(q=p_1+2p_2+3p_3>1\), so again \(H_1\) is confirmed.
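As a numerical sanity check, both directions of this calculation can be run mechanically. The sketch below uses one arbitrary choice of non-uniform priors; any positive priors summing to 1 give the same qualitative result.

```python
from fractions import Fraction as F

# Arbitrary non-uniform priors over H_1, H_2, H_3 (chosen for illustration).
p = {1: F(1, 2), 2: F(3, 10), 3: F(1, 5)}

# HU: learning birth rank 1 keeps the rank-1 location of row i, credence p_i / i.
q_hu = sum(p[i] / i for i in p)        # q = p_1 + p_2/2 + p_3/3 < 1
hu_final = p[1] / q_hu                 # final credence in H_1 exceeds p_1

# LU: each location in row i has credence a*p_i, a = 1/sum_i(i*p_i), so the
# initial credence in H_1 is p_1/q with q = p_1 + 2*p_2 + 3*p_3 > 1; learning
# birth rank 1 restores credence p_1.
q_lu = sum(i * p[i] for i in p)
lu_init = p[1] / q_lu                  # initial credence in H_1 is below p_1

assert q_hu < 1 and hu_final > p[1]    # H_1 confirmed under HU
assert q_lu > 1 and p[1] > lu_init     # H_1 confirmed under LU
```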
This is easy to see qualitatively for uniform priors. Under LU, your initial credences in the \(H_{i}\) increase as \(i\) gets larger, but your final credences in the \(H_{i}\) are uniform over \(i \ge r\). So your credence is redistributed to small-\(i\) hypotheses, and \(H_{r}\) is confirmed. Under HU, your initial credences in the \(H_{i}\) are uniform, but your final credences for \(i \ge r\) decrease as \(i\) gets larger. So again your credence is redistributed to small-\(i\) hypotheses, and \(H_{r}\) is confirmed.
Under the LU distribution, each location initially has a credence of \(1/(1+2+\cdots+n)=2/(n(n+1))\). Hence the diagonal hypothesis \(D\) initially has a credence of \(2/(n+1)\). If you learn that your birth rank is 1, \(D\) has a final credence of \(1/n\), which is less than its initial credence provided \(n>1\). Hence \(D\) is disconfirmed. If you learn that your birth rank is \(r\), \(D\) has a final credence of \(1/(n-r+1)\), which is less than its initial credence provided \(n>2r-1\). Hence \(D\) is disconfirmed for any birth rank less than \((n+1)/2\).
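These formulas are easy to check mechanically. The following sketch confirms, over a range of values of \(n\), that \(D\) is disconfirmed exactly when the observed birth rank is less than \((n+1)/2\).

```python
from fractions import Fraction as F

def lu_diagonal(n, r):
    """Initial and final credence of the diagonal hypothesis D under LU,
    given n hypotheses (row i has i locations) and observed birth rank r."""
    init = F(2, n + 1)                 # n diagonal locations, each 2/(n(n+1))
    final = F(1, n - r + 1)            # rank-r locations occur in rows r..n
    return init, final

for n in range(2, 50):
    for r in range(1, n + 1):
        init, final = lu_diagonal(n, r)
        # D is disconfirmed precisely when r < (n+1)/2.
        assert (final < init) == (2 * r < n + 1)
```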
Under the HU distribution, the locations in row \(i\) initially have credences of \(1/(in)\). Hence the diagonal hypothesis \(D\) has an initial credence of \((1+1/2+\cdots+1/n)/n\), and if you learn that your birth rank is 1, \(D\) has a final credence of \(1/(1+1/2+\cdots+1/n)\). Numerical solution shows that \(D\) is disconfirmed for \(1<n\le 6\), and confirmed for \(n\ge 7\).
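The numerical claim can be reproduced directly: writing \(H_n\) for the \(n\)th harmonic number, the comparison is between the initial credence \(H_n/n\) and the final credence \(1/H_n\).

```python
from fractions import Fraction as F

def hu_diagonal(n):
    """Initial and final credence of the diagonal hypothesis D under HU,
    given n hypotheses and the evidence that your birth rank is 1."""
    H = sum(F(1, i) for i in range(1, n + 1))   # nth harmonic number, exactly
    return H / n, 1 / H                          # (initial, final)

# D is disconfirmed (final < initial) for 1 < n <= 6, confirmed for n >= 7.
for n in range(2, 100):
    init, final = hu_diagonal(n)
    assert (final > init) == (n >= 7)
```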
Admittedly, the thirder solution to the Sleeping Beauty puzzle, though widely held, can be challenged. So a more circumspect conclusion so far would be that if the thirder solution is right, then the Doomsday Argument fails (Dieks 1992). But this is still interesting; compare footnote 11 below.
Bostrom’s actual target is the Self-Indication Assumption, which says “Given the fact that you exist, you should (other things being equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist” (Bostrom 2002, p. 66). This assumption makes no mention of self-location uncertainty, and the presumptuous philosopher argument may well be telling against it. But this just shows that the SIA is far too general; taken as an objection to LU, the presumptuous philosopher argument is ineffective.
Bostrom (2003) in fact argues for a disjunctive thesis: either the human species is very likely to go extinct before developing the required technology, or any civilization with such technology is extremely unlikely to run a significant number of simulations, or we are almost certainly living in a simulation. What Bostrom calls “the core of the Simulation Argument” is the argument that if the first two disjuncts are false, then you should revise your credence that you are living in a simulation upwards to almost 1. It is this core argument that I address here.
A more circumspect conclusion is that if the thirder position is correct, then the Simulation Argument succeeds. But recall that the equivalent conclusion for the Doomsday Argument is that it fails if the thirder position is correct. Even in this conditional form, the conclusion is interesting: the two arguments should not be taken as simply two instances of the same form of reasoning.
Since the prior probabilities are not uniform, we need to use a generalized LU distribution. That is, if your prior probabilities in the hypotheses \(H_i\) are \(p_i\), your credence in each possible self-location along the \(H_i\) row is \(ap_i\), where \(a\) is a constant given by \(\sum_i iap_i=1\). In this case, \(p_1\) is 0.99, \(p_2\) through \(p_n\) are \(10^{-8}\), and \(n\) is \(10^6\), resulting in a value for \(a\) of 1/5001. Hence your credence in \(H_1\) becomes \(ap_1=0.02\,\%\).
The locations eliminated by this evidence (the left-hand column below the top row) have a total credence of \(a(p_{2}+p_{3} + {\ldots } + p_{n})\), which is of the order of \(10^{-6}\). Hence a negligible proportion of your total credence is redistributed by this evidence.
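Both figures, the value of \(a\) and the order of magnitude of the eliminated credence, can be checked directly from the values just given (a sketch only):

```python
n = 10**6
p1 = 0.99
p_rest = 1e-8                         # p_2 through p_n

# a = 1 / sum_i(i * p_i), using sum_{i=2}^{n} i = n(n+1)/2 - 1.
a = 1 / (p1 + p_rest * (n * (n + 1) // 2 - 1))     # approximately 1/5001

credence_H1 = a * p1                  # approximately 2e-4, i.e. 0.02 %
eliminated = a * p_rest * (n - 1)     # left-hand column below the top row, ~2e-6
```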
References
Aranyosi, I.A. (2004) The Doomsday Simulation Argument. Or why isn’t the end nigh, and you’re not living in a simulation. http://philsci-archive.pitt.edu/1590/.
Bostrom, N. (1999). The doomsday argument is alive and kicking. Mind, 108, 539–551.
Bostrom, N. (2002). Anthropic bias: observer selection effects in science and philosophy. New York: Routledge.
Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly, 53, 243–255.
Bostrom, N., & Cirković, M. M. (2003). The Doomsday Argument and the self-indication assumption: reply to Olum. Philosophical Quarterly, 53, 83–91.
Bostrom, N., & Kulczycki, M. (2011). A patch for the Simulation Argument. Analysis, 71, 54–61.
Dieks, D. (1992). Doomsday–or: The dangers of statistics. Philosophical Quarterly, 42, 78–84.
Elga, A. (2000). Self-locating belief and the Sleeping Beauty problem. Analysis, 60, 143–147.
Korb, K. B., & Oliver, J. J. (1998). A refutation of the doomsday argument. Mind, 107, 403–410.
Leslie, J. (1990). Is the end of the world nigh? Philosophical Quarterly, 40, 65–72.
Lewis, P. J. (2010). A note on the Doomsday Argument. Analysis, 70, 27–30.
Olum, K. D. (2002). The Doomsday Argument and the number of possible observers. Philosophical Quarterly, 52, 164–184.
Pisaturo, R. (2009). Past longevity as evidence for the future. Philosophy of Science, 76, 73–100.
Price, H. (2008). Probability in the Everett world: Comments on Wallace and Greaves. http://philsci-archive.pitt.edu/2719/.
Richmond, A. M. (2008). Doomsday, Bishop Ussher and simulated worlds. Ratio, 21, 201–217.
Saunders, S., Barrett, J., Kent, A., & Wallace, D. (Eds.). (2010). Many worlds? Everett, quantum theory and reality. Oxford: Oxford University Press.
Cite this article
Lewis, P.J. The Doomsday Argument and the Simulation Argument. Synthese 190, 4009–4022 (2013). https://doi.org/10.1007/s11229-013-0245-9