The epistemic challenge to longtermism

Abstract

Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism. To that end, I develop two simple models for comparing ‘longtermist’ and ‘neartermist’ interventions, incorporating the idea that it is harder to make a predictable difference to the further future. These models yield mixed conclusions: if we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these ‘Pascalian’ probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.

Notes

  1. Advocates of longtermism include Beckstead (2013, 2019), Greaves and MacAskill (2021), and MacAskill (2022) (who defend the view generally and explore its practical implications); Bostrom (2003, 2013) and Ord (2020) (who focus on the long-term value of reducing existential risks to human civilization); and Cowen (2018) (who focuses on the long-term value of economic growth).

  2. This is conservative as an answer to the question, ‘How long is it possible for human-originating civilization to survive?’ It could of course be very optimistic as an answer to the question, ‘How long will human-originating civilization survive?’

  3. Versions of this epistemic challenge have been noted in academic discussions of longtermism (e.g. by Greaves and MacAskill (2021)), and are frequently raised in conversation, but have not yet been extensively explored. For expressions of epistemically-motivated skepticism toward longtermism in non-academic venues, see for instance Matthews (2015), Johnson (2019), and Schwitzgebel (2022).

    Closely related concerns about the predictability of long-run effects are frequently raised in discussions of consequentialist ethics—see for instance the recent literature on ‘cluelessness’ (e.g. Lenman (2000), Burch-Brown (2014), Greaves (2016)). Going back further, there is this passage from Moore’s Principia: ‘[I]t is quite certain that our causal knowledge is utterly insufficient to tell us what different effects will probably result from two different actions, except within a comparatively short space of time; we can certainly only pretend to calculate the effects of actions within what may be called an ‘immediate’ future. No one, when he proceeds upon what he considers a rational consideration of effects, would guide his choice by any forecast that went beyond a few centuries at most; and, in general, we consider that we have acted rationally, if we think we have secured a balance of good within a few years or months or days’ (Moore 1903, Sect. 93). This amounts to a concise statement of the epistemic challenge to longtermism, though of course that was not Moore’s purpose.

  4. See for instance Makridakis and Hibon (1979) (in particular Table 10 and discussion on p. 115), Fye et al. (2013) (who even conclude that ‘there is statistical evidence that long-term forecasts have a worse success rate than a random guess’ (p. 1227)), and Muehlhauser (2019) (in particular fn. 17, which reports unpublished data from Tetlock’s Good Judgment Project).

    Muehlhauser gives a useful survey of the extant empirical literature on ‘long-term’ forecasting (drawing heavily on research by Mullins (2018)). For our purposes, though, the forecasts covered by this survey are better described as ‘medium-term’—the criterion of inclusion is a time horizon \(\ge 10\) years. To my knowledge, there is nothing like a data set of truly long-term forecasts (e.g., with time horizons greater than a century) from which we could presently draw conclusions about forecasting accuracy on these timescales. And as Muehlhauser persuasively argues, the conclusions we can draw from the current literature even about medium-term forecasting accuracy are quite limited for various reasons—e.g., the forecasts are often imprecise, non-probabilistic, and hard to assess for difficulty.

  5. For discussions of extreme sensitivity to initial conditions in social systems, see for instance Pierson (2000) and Martin et al. (2016). Tetlock also attributes the challenges of long-term forecasting to chaotic behavior in social systems, when he writes: ‘[T]here is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious—‘there will be conflicts’—and the odd lucky hits that are inevitable whenever lots of forecasters make lots of forecasts. These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my [Expert Political Judgment] research, the accuracy of expert predictions declined toward chance five years out’ (Tetlock & Gardner, 2015). But Tetlock may be drawing too pessimistic a conclusion from his own data, which show that the accuracy of expert predictions declines toward chance, while remaining significantly above chance—for discussion, see Sect. 1.7 of Muehlhauser (2019).

  6. There are some arguable counterexamples to this claim—e.g., the founders of family fortunes or dynasties who may predict with better-than-chance accuracy the effects of their present actions on their distant heirs. (Thanks to Philip Trammell for this point.) But on the whole, the history of thinking about the distant future seems more notable for its failures than for its successes.

  7. For a version of the epistemic challenge that arises in an imprecise probabilist framework, see Mogensen (2021). For discussion of axiological and ethical challenges to longtermism, see Beckstead (2013) and Greaves and MacAskill (2021). And for discussion of the decision-theoretic worry that the case for longtermism depends on ‘fanatical’ application of expected value reasoning, see Tarsney (2020), Greaves and MacAskill (2021, sect. 8), Balfour (2021), Temkin (2022, Appendix A), and Kosonen (2023).

  8. This closely resembles the initial, informal statement of ‘Deontic Strong Longtermism’ in Greaves and MacAskill (2021, p. 3).

  9. The exception is ‘far future’, which I will later precisify as ‘more than 1000 years from the present’. This gives a rough sense of what longtermists mean by ‘long-term’ or ‘far future’, though some longtermists think that what we ought to do is mainly determined by considerations much more than 1000 years in the future.

    For one attempt to precisify ‘mainly determined by’, see Greaves and MacAskill’s final statement of Deontic Strong Longtermism (Greaves & MacAskill, 2021, p. 26).

  10. Not all longtermist strategies have this form—or at least, not all are most naturally described as having this form. For instance, ‘speed-up’ strategies aim to bring about either a permanent acceleration or a one-time forward shift in some positive trend (e.g., economic growth (Cowen, 2018) or space settlement (Bostrom, 2003)). If such a strategy succeeds, then at each future time, we will be more advanced/better off than we otherwise would have been. But we will not (in any intuitive sense) be put in some persistent state that we would otherwise not have been in. Or, more abstractly, one might imagine that on some important dimension (say, the quality of our moral values), human societies follow an unbiased random walk, changing for better or worse with equal probability in each time period. An intervention that moved a society one step in the positive direction on this dimension (say, to \(+3\) instead of \(+2\)) would improve our expected position at each future time by one step, without making any persistent difference (e.g., not keeping us persistently at \(+3\) or persistently above \(+2\)).

    Nevertheless, I will limit my focus to persistent-difference strategies, which seem to capture most (though not all) plausible strategies for improving the far future. Insofar as this one type of longtermist strategy can survive the epistemic challenge (which will be my provisional conclusion), longtermism itself can survive the epistemic challenge. But it would, of course, be interesting to investigate epistemic worries about non-persistence-based strategies for improving the far future as well.

  11. The persistence skeptic need not claim that it is impossible to make any highly persistent difference to the world. They might, for instance, concede that some trivial differences can be extremely persistent—e.g., if I bury a corrosion-resistant object deep underground, or launch Russell’s teapot into a stable orbit around the Sun, I can be reasonably confident that even a million years from now, my object (i) will be where I put it and (ii) would not have been there, if not for my action. The objection to these actions as strategies for improving the far future is, of course, that the difference they make is unimportant.

  12. Though the latter question is not entirely idle: it tells us something about the expected value of improving our epistemic position, bringing our epistemic probabilities into closer alignment with the objective chances.

  13. I choose existential risk mitigation as the working example of a persistent-difference strategy mainly because it’s especially easy to quantify—that is, it’s easier to make empirically motivated estimates of the various model parameters for this application than for others. But the model is meant to describe persistent-difference strategies in general, so it could also be applied, for instance, to efforts to persistently improve political institutions or moral values.

  14. For interventions whose primary benefit is saving lives, GiveWell estimates an average cost per life saved. In its most recent cost-effectiveness estimates (GiveWell, 2021), vitamin A supplementation (as implemented by Helen Keller International) had the lowest estimated cost per life saved, at approximately $3000. (For more details, see GiveWell’s cost-effectiveness models at https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/cost-effectiveness-models.) Assuming constant returns (in line with our practice of making empirical assumptions unfavorable to longtermism), this implies that $1 million in funding for vitamin A supplements will save \(333 \frac{1}{3}\) lives in expectation. To allow comparison with our longtermist intervention \(L\), it is useful to convert this to QALYs. I will therefore assume that the expected value of saving a life is 30 QALYs, meaning that $1 million spent on vitamin A supplements has an expected value of 10,000 QALYs.

    It is not obvious, of course, that public health interventions aimed at saving lives in the developing world are the most cost-effective neartermist intervention. Some interventions to benefit poor people in the developing world have other primary benefits (e.g., direct cash transfers and deworming treatments), but are arguably competitive with life-saving interventions in terms of value per dollar spent. And interventions focused on the welfare of non-human animals (e.g., to promote veganism or improve conditions for farmed animals) are arguably more cost-effective than any neartermist interventions primarily benefiting humans. I focus on life-saving public health interventions to avoid tendentious and highly uncertain value comparisons between very different altruistic payoffs. But also, as we will see, the main qualitative conclusions we reach below would not be changed very much by an adjustment of one or two orders of magnitude in the expected value of the benchmark neartermist intervention, so long as the estimate we’re using is not vastly too low, it should be adequate for our purposes.
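
    For concreteness, the arithmetic behind the 10,000-QALY benchmark in the first paragraph of this note is simply:

    $$\begin{aligned} \frac{\$1{,}000{,}000}{\$3000 \text{ per life}} = 333\tfrac{1}{3} \text{ lives}, \qquad 333\tfrac{1}{3} \text{ lives} \times 30 \text{ QALYs per life} = 10{,}000 \text{ QALYs}. \end{aligned}$$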

  15. ‘The accessible region of the universe’ or ‘accessible universe’ refers to our future light cone, that is, the region of spacetime that it is possible to reach from Earth today travelling at or below the speed of light.

    For the sake of conservatism, I will assume throughout the paper that we are in fact limited by the speed of light, and cannot reach or exploit the resources of regions outside our future light cone. Likewise, I set aside various other physical and technological possibilities that might greatly expand the reach or increase the capacities of future civilization: e.g., that we live in a Gödel spacetime containing closed timelike curves, or can construct computers capable of computational supertasks in finite time, or can persist as a civilization for infinite time (as in some cyclic cosmological models). In general, accounting for such possibilities is only likely to strengthen our qualitative conclusions, by increasing the potential scale of the far future and thereby making the expectational case for longtermism even more robust under uncertainty, but also exacerbating worries about Pascalian fanaticism (assuming we assign these scenarios low probability).

  16. By assuming a constant speed of space settlement, the cubic growth model neglects two effects that are important over very long timescales: first, the assumption of a constant speed of space settlement in comoving coordinates (implicit in taking spatial volume as a proxy for resources) ignores cosmic expansion, which becomes significant when we consider timescales on the order of billions of years or longer (Armstrong & Sandberg, 2013, pp. 8–9). Second, it ignores the declining density (even in comoving coordinates) of resources like usable mass and negentropy predicted by thermodynamics, which becomes significant on even longer timescales. If we were using the model to make comparisons between longtermist interventions, these considerations would be significant and would have to be accounted for. But for our purpose of comparing a longtermist with a neartermist intervention, these effects can be safely ignored: as we will see, if events a billion years or more in the future make any non-trivial difference to \(\textrm{EV}(L)\), then \(L\) has already handily defeated \(N\) on the basis of nearer-term considerations.

  17. Our conversion of resources into value has clearly become more efficient over time, at least in certain respects. (For instance, as a referee pointed out, a contemporary physics textbook requires no more resources to produce than its ancient equivalent, but contains far more useful information.) And it’s reasonable to speculate that a general process of this kind will continue indefinitely. But it’s also possible that these efficiency gains will reach or approach an upper bound at some point in the coming centuries. I make the latter assumption in line with the policy of making empirical assumptions unfavorable to longtermism.

  18. Of course, there may be no such time. (For instance, in a cyclic cosmology like the Steinhardt-Turok model, a civilization might be able to persist indefinitely if it can transmit information and therefore perpetuate itself from one cycle to the next.) But I assume for the sake of conservatism that there is such a bound.

  19. Note that \(p\) is not a probability but a difference of probabilities, and can therefore be negative. But of course an agent will only entertain \(L\) as a strategy for ensuring that the world is in state \(S\) at \(t = 0\) if she judges that \(p = Pr(S_0 | L) - Pr(S_0 | N) > 0\).

  20. Here I follow López-Corredoira et al. (2018), who give evidence that the galactic disk of the Milky Way extends to a radius of at least 31.5 kiloparsecs (\(\approx \) 103,000 light years) from the galactic center. Since our solar system is about 27,000 light years from galactic center, this means that the furthest edge of the galactic disk is at least 130,000 light years away. While the galactic disk might extend still further, the vast majority of stars in the Milky Way are certainly within this radius.
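
    Spelling out the distances (1 kiloparsec \(\approx \) 3262 light years):

    $$\begin{aligned} 31.5 \text{ kpc} \times 3262 \text{ ly/kpc} \approx 103{,}000 \text{ ly}, \qquad 103{,}000 \text{ ly} + 27{,}000 \text{ ly} = 130{,}000 \text{ ly}. \end{aligned}$$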

  21. I use the star density of the Virgo Supercluster rather than the accessible universe as a whole because whether \(L\) or \(N\) has greater expected value in the model is almost entirely determined by the ‘early’ period of space settlement—on the order of tens to hundreds of millions of years—during which we remain confined to the supercluster.

  22. The assumption of merely-polynomial growth may seem revisionary relative to the exponential growth assumed in standard economic models. But the latter is growth in consumption, whereas we are concerned with growth in total welfare, which is not standardly assumed to be exponential (since individual welfare or utility is treated as a concave function of consumption, and is often even assumed to be bounded above).

  23. For a case against the hypothesis, see Thorstad (2022).

  24. Assume a working population of 5 billion, working 40 hours a week, 50 weeks a year. This yields a total of \(40 \times 50 \times 5 \times 10^9 = 10^{13}\) work hours per year, or \(10^{16}\) work hours over the next 1000 years. Assume that $1 million is enough to hire ten people for a year (or two people for five years, etc.), for a total of 20,000 work hours. This amounts to \(2 \times 10^{-12}\) (two trillionths) of humanity’s total labor supply over the next thousand years, and yields \(p = 2 \times 10^{-14}\).
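
    Making the work-hours arithmetic explicit:

    $$\begin{aligned} 40 \times 50 \times (5 \times 10^9) = 10^{13} \text{ hours/yr}, \quad 10^{13} \times 10^3 = 10^{16} \text{ hours}, \quad \frac{10 \times 2000}{10^{16}} = 2 \times 10^{-12}. \end{aligned}$$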

  25. For comparison, Millett and Snyder-Beattie (2017) estimate that the risk of human extinction in the next century from accidental or intentional misuse of biotechnology is between \(1.6 \times 10^{-6}\) and \(2 \times 10^{-2}\), and that $250 billion in biosecurity spending could reduce this risk by at least 1%. Again assuming that spending on existential risk mitigation has either constant or diminishing marginal returns, and ignoring the difference between the 100 and 1000 year timeframes (which means ignoring both the potential benefits of risk reduction in the next century for risk in later centuries and the possibility that, despite averting an existential catastrophe in the next 100 years, we fail to survive the next 1000 years), this implies \(p\ge 6.4 \times 10^{-14}\) (using the lowest estimate of extinction risk from biotechnology), though this could increase to as much as \(p\ge 8 \times 10^{-10}\) if we took a higher estimate of status quo risk levels. (Note two points: first, if the risk of extinction from biotechnology is much below 1% in the next century, then there are probably other, more pressing existential risks on which our notional philanthropist could more impactfully spend her $1 million. Second, the numbers from Millett and Snyder-Beattie are model-based estimates of objective risk, whereas \(p\) is meant to capture a change in the epistemic probability of extinction. Given our uncertainties, the epistemic probability of extinction from biotechnology is likely to be orders of magnitude greater than our lower-bound estimate of the objective risk.)

    As another point of comparison, Todd (2017) estimates that $100 billion spent on reducing extinction risk could achieve an absolute risk reduction of 1% (e.g., reducing total risk from 4% to 3%). Again assuming constant or diminishing marginal returns and ignoring the difference in timeframes, this implies \(p\ge 10^{-7}\). None of these numbers should be taken too seriously, but they indicate the wide range of plausible values for \(p\).
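
    A minimal sketch (my own illustration, using only the figures quoted in this note) of how these values of \(p\) are obtained, assuming the $1 million buys a proportional, constant-returns share of each program’s estimated risk reduction:

```python
# Back-of-the-envelope values of p from footnote 25, assuming the $1M donation
# buys a proportional (constant-returns) share of each program's risk reduction.

budget = 1e6

# Millett & Snyder-Beattie (2017): $250B achieves a 1% *relative* reduction
# in a baseline extinction risk from biotechnology.
program_cost = 250e9
relative_reduction = 0.01
for baseline_risk in (1.6e-6, 2e-2):        # low and high estimates of the risk
    p = (budget / program_cost) * relative_reduction * baseline_risk
    print(f"baseline {baseline_risk:g}: p >= {p:.1e}")   # 6.4e-14 and 8.0e-10

# Todd (2017): $100B achieves a 1 percentage point *absolute* risk reduction.
p_todd = (budget / 100e9) * 0.01
print(f"Todd estimate: p >= {p_todd:.0e}")               # 1e-07
```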

  26. Bostrom’s estimate is conservative in a number of ways, relative to the assumptions of the Dyson Sphere scenario. It assumes that we would need to simulate all the computations performed by a human brain (as opposed to, say, just simulating the cerebral cortex, while simulating the rest of the brain and the external environment in a much more coarse-grained way, or simulating minds with a fundamentally different architecture than our own) and that the minds we simulate would have only the same welfare as the average present-day healthy human being. There may also be other ways of converting mass and energy into computation that are orders of magnitude more efficient than Matrioshka brains (Sandberg et al., 2016). But the conservative estimate is enough to illustrate the point.

  27. This estimate sets aside the welfare of non-human animals on Earth, or rather, implicitly assumes that in the far future, the total welfare of non-human animals on Earth will be roughly the same whether or not an intelligent civilization exists on Earth. One could argue for either a net positive or net negative effect of far future human civilization on non-human animal welfare on Earth. (And, particularly conditional on a ‘space opera’ scenario for space settlement, one could argue for positive or negative adjustments to \(v_s\) to account for non-human welfare.) But I set these considerations aside for simplicity.

  28. The main constraint on \(s\) appears to be the density of the interstellar medium and the consequent risk of high-energy collisions. In terms of the mass requirements of a probe capable of settling new star systems and the energy needed to accelerate/decelerate that probe, Armstrong and Sandberg (2013) argue convincingly that speeds well above 0.9c are achievable. On an intergalactic scale, such speeds may be feasible tout court (Armstrong & Sandberg, 2013, p. 9). But there may be a lower speed limit on intragalactic settlement, given the greater density of gas and dust particles. The Breakthrough Starshot initiative aims to launch very small probes toward nearby star systems at \(\sim 0.2c\), which appears to be feasible given modest levels of shielding (Hoang et al., 2017). Though larger probes will incur greater risk of collisions, this probably will not greatly reduce achievable velocities, since probes can be designed to minimize cross-sectional area, so that collision risk increases only modestly as a function of mass.

    Admittedly, \(s= 0.1c\) still seems to be less conservative than the other parameter values I have chosen. It is hard to identify a most-conservative-within-reason value for \(s\), but we could for instance take the speed of Voyager 1, currently leaving the Solar System at \(\sim 0.000057c\). But using such a small value for \(s\) would make the cubic growth model essentially identical to the steady state model (in which interstellar settlement simply never happens; see Sect. 5), except for very small values of \(r\). So a less-than-maximally-conservative value of \(s\) is in line with the less-than-maximally-conservative assumption of the cubic growth model itself that interstellar settlement will eventually be feasible.

  29. For instance, if we instead used \(t_l= 500\) years, the crucial value of \(r\) below which \(L\) overtakes \(N\) in expected value would only decrease from \(\sim 0.000135\) to \(\sim 0.000133\).

  30. To my knowledge, the most pessimistic estimate of near-term existential risk in the academic literature belongs to Rees (2003), who gives a 0.5 probability that humanity will not survive the next century. Assuming a constant hazard rate, this implies an annual risk of roughly \(6.9 \times 10^{-3}\). Sandberg and Bostrom (2008) report an informal survey of 19 participants at a workshop on catastrophic risks in which the highest estimate for the probability of human extinction by the year 2100 was also 0.5 (as compared to a median estimate of 0.19). Other estimates, though more optimistic, generally imply an annual risk of at least \(10^{-4}\). For a collection of such estimates, see Tonn and Stiefel (2014, pp. 134–135).
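
    For the constant-hazard calculation: if the annual risk is \(r\) and the probability of surviving the century is 0.5, then

    $$\begin{aligned} (1 - r)^{100} = 0.5 \quad \Longrightarrow \quad r = 1 - 0.5^{1/100} \approx 6.9 \times 10^{-3}. \end{aligned}$$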

  31. This particular number should not be taken seriously, since when \(r = 0\), some of the simplifications in the model become extremely significant—in particular, ignoring cosmic expansion and overestimating star density outside the Virgo Supercluster. The point is simply that even small values of \(r\) do a lot to limit \(\textrm{EV}(L)\).

  32. More precisely, there are different ‘regimes’ in the model corresponding to different intervals in the value of \(r\). When \(r\) is large, \(\textrm{EV}(L)\) is driven primarily by the stream of value on Earth, and so \(\textrm{EV}(L)\) grows inversely to \(r\) (with an order-of-magnitude decrease in \(r\) generating an order-of-magnitude increase in \(\textrm{EV}(L)\)). Once \(r\) is small enough for the polynomially-increasing value of interstellar settlement to become significant, the relationship becomes inverse quartic. This relationship is interrupted by the transition from the resource-rich Milky Way to the sparse environment of the wider Virgo Supercluster, but resumes once \(r\) is small enough that extra-galactic settlement becomes the dominant contributor to \(\textrm{EV}(L)\). Finally, for still smaller values of \(r\), the eschatological bound \(t_f\) begins to impinge on \(\textrm{EV}(L)\), and its growth rate in \(r\) slows again (asymptotically to zero, as \(r\) goes to zero).

  33. With respect to capability, see for instance Armstrong and Sandberg (2013). With respect to motivation, see for instance Bostrom (2012) on resource acquisition as a convergent instrumental goal of intelligent agents.

  34. Adopting the larger figure would have almost no effect on the values of \(\textrm{EV}(L)\) reported in Table 2 except for the smallest values of \(r\) (below \(10^{-8}\)), where it would increase \(\textrm{EV}(L)\) by up to one order of magnitude.

  35. In the case of \(v_s\), which can take negative values, we must also assume that its expected value conditional on its being less than the ‘Dyson sphere’ value of \(10^{20}\) (V/yr)/star is non-negative.

  36. See Armstrong and Sandberg (2013) for arguments for the feasibility of interstellar travel at speeds greater than 0.8c and of Dyson swarms (vast collections of satellites orbiting a star that capture most or all of its energy output while avoiding the principal engineering challenges of the classic Dyson sphere). Again, if it is technologically feasible for our future civilization to settle the universe at high speed and harness the full energy resources of stars, it seems plausible (though far from certain) that we will choose to do so, since resource acquisition is a ‘convergent instrumental goal’ for intelligent agents that can serve a vast array of final goals (Bostrom, 2012).

    Notably, one anonymous reviewer commented that they might assign much smaller probabilities to the ‘Dyson spheres’ scenario and \(s\ge 0.8c\)—perhaps as small as \(10^{-1000}\). This strikes me as overconfident, for reasons given above. But I can hardly claim that this judgment follows self-evidently or indisputably from the available evidence—as I say, the exercise we’re engaged in requires some judgment calls that are inevitably partly subjective. I encourage readers to consider what confidence bounds they consider appropriately conservative, and try computing minimum values for EV(L) on that basis.

  37. These calculations assume that \(r\), \(s\) and \(v_s\) are either independent conditional on the cubic growth model, or correlated in such a way that values of one parameter more favorable to longtermism (smaller values of \(r\), larger values of \(s\) and \(v_s\)) predict more favorable values for the other parameters. It seems natural that there should be at least some of this correlation between ‘optimistic’ parameter values, which would further increase the expected value of \(L\).

  38. It is worth noting that uncertainty about \(r\) makes \(r\) effectively time-dependent in the cubic growth and steady state models. What matters in these models is when the first ENE occurs, after which the state of the world no longer depends on its state at \(t = 0\). This means we are interested, not in the unconditional probability of an ENE occurring at time t, but in the probability that an ENE occurs at t conditional on no ENE having occurred sooner. If we know that ENEs come along at a fixed rate, but don’t know what that rate is, then this conditional probability decreases with time: conditioning on no ENE having occurred before time t favors hypotheses on which the rate of ENEs is low, more strongly for larger values of t. This is just another way of understanding the fact that, when we are unsure what discount rate to apply to a stream of value, the discount factor at later times will converge with that implied by the lowest possible discount rate.
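
    Here is a minimal numerical sketch (my own illustration, with purely hypothetical rates and credences) of the effect described in this note: conditioning on survival shifts credence toward low-rate hypotheses, so the conditional probability of an ENE declines with time, approaching the lowest rate under consideration.

```python
# Hypothetical illustration of footnote 38: an agent is unsure of the annual
# rate of ENEs, splitting credence 50/50 between a high rate (1e-2) and a low
# rate (1e-4). The hazard conditional on no ENE so far declines over time
# toward the lower rate.

credence = {1e-2: 0.5, 1e-4: 0.5}   # hypothetical annual ENE rates and weights

def conditional_hazard(t, credence):
    """P(ENE in year t+1 | no ENE in years 1..t), mixing over rate hypotheses."""
    posterior = {r: w * (1 - r) ** t for r, w in credence.items()}  # survival update
    total = sum(posterior.values())
    return sum((w / total) * r for r, w in posterior.items())

for t in (0, 100, 1000, 10_000):
    print(f"t = {t:>6}: conditional hazard = {conditional_hazard(t, credence):.2e}")
```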

  39. It is controversial, however, whether we should reason expectationally in response to normative uncertainty, even given that this is the right response to empirical uncertainty. For defense of broadly expectational approaches to normative uncertainty, see Lockhart (2000), Sepielli (2009), and MacAskill and Ord (2020), among others. For rival views, see Nissan-Rozen (2012), Gustafsson and Torpman (2014), Weatherson (2014), and Harman (2015), among others.

    This debate may also be relevant in deciding how to weigh outré possibilities like the Dyson spheres scenario that involve large numbers of non-human-like minds. (Thanks to Hilary Greaves for this point.) If we are uncertain whether or to what degree the ‘artificial’ or ‘simulated’ minds that might exist in a Matrioshka brain have moral status, should we simply discount their putative interests by the probability that those interests carry moral weight? Arguably, our uncertainty here is a kind of ‘quasi-empirical’ uncertainty: we simply don’t know whether these minds would have the sort of subjective experiences we care about. But it may also seem more akin to moral uncertainty, and we may therefore feel reluctant to simply go by expected value.

  40. See for instance Bostrom (2009), Monton (2019), and Temkin (2022, Appendix A). For a reply to these worries, see Wilkinson (2022). For illuminating discussion of arguments on both sides, see Beckstead and Thomas (2020) and Russell (2021).

  41. We can make this precise in the framework of risk-weighted expected utility theory (Quiggin, 1982; Buchak, 2013), with a risk function of the form:

    $$\begin{aligned} r(x) = \begin{cases} 0 & 0 \le x \le \mu \\ \frac{x - 0.5}{1 - 2\mu} + 0.5 & \mu \le x \le 1 - \mu \\ 1 & 1 - \mu \le x \le 1 \end{cases} \end{aligned}$$

    We then choose the option that maximizes \(u_1 + \sum _{i = 1}^{n - 1} r(Pr(u \ge u_{i + 1}))(u_{i + 1} - u_i)\), where the possible payoffs \(u_1, \ldots , u_n\) are ordered from worst to best. A similar sort of truncation is suggested by Buchak as a response to the St. Petersburg game (Buchak, 2013, pp. 73–74). This truncation method is one way to precisify the more general idea of ignoring very small probabilities, which has a long history (see Monton (2019) for a useful survey).
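
    To make the proposal concrete, here is a minimal sketch (my own code, not the paper’s) of risk-weighted expected utility with this truncated risk function; with \(\mu = 10^{-6}\), a one-in-a-billion chance of an astronomical payoff contributes nothing:

```python
# Risk-weighted expected utility (Quiggin 1982; Buchak 2013) with the truncated
# risk function of footnote 41, which ignores the top and bottom mu of probability.

def risk_fn(x, mu):
    """Truncated risk function r(x)."""
    if x <= mu:
        return 0.0
    if x >= 1 - mu:
        return 1.0
    return (x - 0.5) / (1 - 2 * mu) + 0.5

def reu(lottery, mu):
    """lottery: list of (probability, utility) pairs; probabilities sum to 1."""
    outcomes = sorted(lottery, key=lambda pu: pu[1])       # order worst to best
    utils = [u for _, u in outcomes]
    value = utils[0]
    for i in range(1, len(outcomes)):
        tail = sum(p for p, _ in outcomes[i:])             # Pr(u >= utils[i])
        value += risk_fn(tail, mu) * (utils[i] - utils[i - 1])
    return value

# Hypothetical comparison: a 'Pascalian' gamble vs. a modest sure thing.
gamble = [(1 - 1e-9, 0.0), (1e-9, 1e15)]   # 1-in-a-billion shot at a huge payoff
sure_thing = [(1.0, 1e4)]
print(reu(gamble, mu=1e-6), reu(sure_thing, mu=1e-6))      # 0.0 vs. 10000.0
```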

  42. This measure is imperfect in that it will classify as highly Pascalian some choice situations that are not intuitively Pascalian, but where two or more options are just very nearly tied for best. But the measure is only intended as a rough heuristic, not as something that should play any role in our normative decision theory.

  43. In Tarsney (2020), I try to develop a principled anti-fanatical view, based on stochastic dominance, that tells us when it’s permissible to deviate from expected value maximization and ignore small probabilities. The thresholds for ‘small’ probability that this view generates depend on various features of the choice situation, including in particular the agent’s degree of ‘background uncertainty’ about sources of value in the world unaffected by her choices. (Greater background uncertainty generates stronger stochastic dominance constraints that narrow the class of Pascalian choices in which deviations from expected value maximization are permitted.) §5.4 of the paper considers the implications for choices like our working example, between existential risk mitigation and interventions with more certain, near-term payoffs. My own conclusion is that our background uncertainty is probably great enough that even the anti-fanatic is required to prioritize existential risk mitigation when it maximizes expected value. But this conclusion is not particularly robust—other reasonable estimates of the relevant epistemic probabilities might lead to the opposite conclusion. So by the lights of my own preferred anti-fanatical view, cases like our working example are borderline; deciding whether expected value maximization is mandatory or optional in such cases will require more precise estimates of the relevant probabilities and stakes.

References

  • Adams, F. C., & Laughlin, G. (1997). A dying universe: The long-term fate and evolution of astrophysical objects. Reviews of Modern Physics, 69(2), 337.

  • Armstrong, S., & Sandberg, A. (2013). Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox. Acta Astronautica, 89, 1–13.

  • Balfour, D. (2021). Pascal’s mugger strikes again. Utilitas, 33(1), 118–124.

  • Beckstead, N. (2013). On the overwhelming importance of shaping the far future. Ph. D. thesis, Rutgers University Graduate School.

  • Beckstead, N. (2019). A brief argument for the overwhelming importance of shaping the far future. In H. Greaves & T. Pummer (Eds.), Effective altruism: Philosophical issues (pp. 80–98). Oxford University Press.

  • Beckstead, N., & Thomas, T. (2020). A paradox for tiny probabilities and enormous values. Global Priorities Institute Working Paper Series. GPI Working Paper No. 10-2020.

  • Bostrom, N. (2003). Astronomical waste: The opportunity cost of delayed technological development. Utilitas, 15(3), 308–314.

  • Bostrom, N. (2009). Pascal’s mugging. Analysis, 69(3), 443–445.

  • Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2), 71–85.

  • Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15–31.

  • Buchak, L. (2013). Risk and rationality. Oxford University Press.

  • Burch-Brown, J. M. (2014). Clues for consequentialists. Utilitas, 26(1), 105–119.

  • Cowen, T. (2018). Stubborn attachments: A vision for a society of free, prosperous, and responsible individuals. Stripe Press.

  • Fye, S. R., Charbonneau, S. M., Hay, J. W., & Mullins, C. A. (2013). An examination of factors affecting accuracy in technology forecasts. Technological Forecasting and Social Change, 80(6), 1222–1231.

  • GiveWell. (2021). Our top charities. Retrieved January 31, 2022, from https://www.givewell.org/charities/top-charities.

  • Greaves, H. (2016). Cluelessness. Proceedings of the Aristotelian Society, 116(3), 311–339.

  • Greaves, H., & MacAskill, W. (2021). The case for strong longtermism. Global Priorities Institute Working Paper Series. GPI Working Paper No. 5-2021.

  • Greaves, H., & Ord, T. (2017). Moral uncertainty about population axiology. Journal of Ethics and Social Philosophy, 12(2), 135–167.

  • Gustafsson, J. E., & Torpman, O. (2014). In defence of My Favourite Theory. Pacific Philosophical Quarterly, 95(2), 159–174.

  • Hare, C. (2011). Obligation and regret when there is no fact of the matter about what would have happened if you had not done what you did. Noûs, 45(1), 190–206.

  • Harman, E. (2015). The irrelevance of moral uncertainty. In R. Shafer-Landau (Ed.), Oxford studies in metaethics. (Vol. 10). Oxford University Press.

  • Hoang, T., Lazarian, A., Burkhart, B., & Loeb, A. (2017). The interaction of relativistic spacecrafts with the interstellar medium. The Astrophysical Journal, 837(5), 1–16.

  • Johnson, J. (2019). Good at doing good: Effective altruism ft. The Neoliberal Podcast.

  • Kosonen, P. (2023). Tiny probabilities and the value of the far future. Global Priorities Institute Working Paper Series. GPI Working Paper No. 1-2023.

  • Lenman, J. (2000). Consequentialism and cluelessness. Philosophy and Public Affairs, 29(4), 342–370.

  • Lockhart, T. (2000). Moral uncertainty and its consequences. Oxford University Press.

  • López-Corredoira, M., Prieto, C. A., Garzón, F., Wang, H., Liu, C., & Deng, L. (2018). Disk stars in the Milky Way detected beyond 25 kpc from its center. Astronomy & Astrophysics, 612, L8.

  • Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2), 130–141.

  • MacAskill, W. (2022). What we owe the future. Basic Books.

  • MacAskill, W., & Ord, T. (2020). Why maximize expected choice-worthiness? Noûs, 54(2), 327–353.

  • Makridakis, S., & Hibon, M. (1979). Accuracy of forecasting: An empirical investigation. Journal of the Royal Statistical Society: Series A (General), 142(2), 97–145.

  • Martin, T., Hofman, J. M., Sharma, A., Anderson, A., & Watts, D. J. (2016). Exploring limits to prediction in complex social systems. arXiv e-prints, arXiv:1602.01013.

  • Matthews, D. (2015). I spent a weekend at Google talking with nerds about charity. I came away ... worried. Vox. https://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai. Published 10 August 2015. Retrieved March 24, 2023.

  • Millett, P., & Snyder-Beattie, A. (2017). Existential risk and cost-effective biosecurity. Health Security, 15(4), 373–383.

  • Mogensen, A. L. (2021). Maximal cluelessness. Philosophical Quarterly, 71(1), 141–162.

  • Monton, B. (2019). How to avoid maximizing expected utility. Philosophers’ Imprint, 19(18), 1–25.

  • Moore, G. E. (1903). Principia ethica. Cambridge University Press.

  • Muehlhauser, L. (2019). How feasible is long-range forecasting? The Open Philanthropy Blog. https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting. Published 10 October 2019. Retrieved March 24, 2023.

  • Mullins, C. A. (2018). Retrospective analysis of long-term forecasts. Bryce Space and Technology.

  • Ng, Y.-K. (2016). The importance of global extinction in climate change policy. Global Policy, 7(3), 315–322.

  • Nissan-Rozen, I. (2012). Doing the best one can: A new justification for the use of lotteries. Erasmus Journal for Philosophy and Economics, 5(1), 45–72.

  • Ord, T. (2020). The precipice: Existential risk and the future of humanity. Bloomsbury Publishing.

  • Pierson, P. (2000). Increasing returns, path dependence, and the study of politics. American Political Science Review, 94(2), 251–267.

  • Portmore, D. (2016). Uncertainty, indeterminacy, and agent-centered constraints. Australasian Journal of Philosophy, 1–15.

  • Quiggin, J. (1982). A theory of anticipated utility. Journal of Economic Behavior & Organization, 3(4), 323–343.

  • Rees, M. (2003). Our final century: Will the human race survive the twenty-first century? William Heinemann Ltd.

  • Russell, J. S. (2021). On two arguments for fanaticism. Global Priorities Institute Working Paper Series. GPI Working Paper No. 17-2021.

  • Sagan, C. (1994). Pale Blue Dot: A vision of the human future in space (1st ed.). Random House.

  • Sandberg, A., Armstrong, S., & Ćirković, M. (2016). That is not dead which eternal lie: The aestivation hypothesis for resolving Fermi’s paradox. Journal of the British Interplanetary Society, 69, 405–415.

  • Sandberg, A., & Bostrom, N. (2008). Global catastrophic risks survey. Technical Report 2008-1, Future of Humanity Institute, Oxford University.

  • Sandberg, A., Drexler, E., & Ord, T. (2018). Dissolving the Fermi Paradox. arXiv e-prints, arXiv:1806.02404.

  • Schuster, H. G., & Just, W. (2006). Deterministic chaos: An introduction (4th ed.). Wiley-VCH.

  • Schwitzgebel, E. (2022). Against longtermism. The Splintered Mind. https://schwitzsplinters.blogspot.com/2022/01/against-longtermism.html. Published 5 January 2022. Retrieved January 10, 2022.

  • Sepielli, A. (2009). What to do when you don’t know what to do. In R. Shafer-Landau (Ed.), Oxford studies in metaethics (Vol. 4, pp. 5–28). Oxford University Press.

  • Sittler, T. M. (2018). The expected value of the long-term future. Unpublished manuscript.

  • Tarsney, C. (2020). Exceeding expectations: Stochastic dominance as a general decision theory. Global Priorities Institute Working Paper Series. GPI Working Paper No. 3-2020.

  • Temkin, L. S. (2022). Being good in a world of need. Oxford University Press.

  • Tetlock, P., & Gardner, D. (2015). Superforecasting: The art and science of prediction. Crown Publishers.

  • Tetlock, P. E. (2005). Expert political judgment: How good is it? How can we know? Princeton University Press.

  • Thorstad, D. (2022). Existential risk pessimism and the time of perils. Global Priorities Institute Working Paper Series. GPI Working Paper No. 1-2022.

  • Todd, B. (2017). The case for reducing existential risks. 80,000 Hours. https://80000hours.org/articles/extinction-risk/. Retrieved March 24, 2023.

  • Tonn, B., & Stiefel, D. (2014). Human extinction risk and uncertainty: Assessing conditions for action. Futures, 63, 134–144.

  • Weatherson, B. (2014). Running risks morally. Philosophical Studies, 167(1), 141–163.

  • Wilkinson, H. (2022). In defense of fanaticism. Ethics, 132(2), 445–477.

Acknowledgements

For comments and/or discussion that improved this paper, I am grateful to Michael Aird, Alex Barry, Matthias Endres, Ozzie Gooen, Hilary Greaves, Andreas Mogensen, Toby Ord, Michael Plant, Muireall Prase, Carl Shulman, Dean Spears, Benedict Snodin, Catherine Tarsney, Teru Thomas, Philip Trammell, and participants at a Global Priorities Institute workshop on longtermism. For research assistance, I am grateful to Elliott Thornley and Caleb Parikh.

Author information

Correspondence to Christian Tarsney.

Ethics declarations

Conflict of interest

The author has no relevant financial or non-financial interests to disclose.

About this article

Cite this article

Tarsney, C. The epistemic challenge to longtermism. Synthese 201, 195 (2023). https://doi.org/10.1007/s11229-023-04153-y
