
Why AI Doomsayers are Like Sceptical Theists and Why it Matters


Abstract

An advanced artificial intelligence (a “superintelligence”) could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. While this analogy is interesting in its own right, its potential implications are more interesting still. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount either to a reductio of the doomsayers’ position or to an important additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.


Notes

  1. Here I appeal to two theses defended by Bostrom in his recent book Superintelligence (Bostrom 2014): the strategic advantage thesis and the orthogonality thesis. The latter thesis is particularly important for the doomsday scenario discussed in the text. It maintains that pretty much any level of intelligence is compatible with pretty much any final goal. The thesis has been defended elsewhere as well (Bostrom 2012; Armstrong 2013).

  2. The three leading examples are the Future of Humanity Institute, based at Oxford University and headed by Nick Bostrom (see http://www.fhi.ox.ac.uk); the Centre for the Study of Existential Risk or CSER, based at Cambridge University (see http://cser.org); and the Machine Intelligence Research Institute or MIRI, which is not affiliated with any university but is based in Berkeley, CA (see http://intelligence.org). Only MIRI dedicates itself entirely to the topic of AI risk; the other institutes address other potential risks as well.

  3. In addition to Bostrom’s work, which is discussed at length below, there have been Eden et al. (2012), Blackford and Broderick (2014), and Chalmers (2010), the last of which led to a subsequent symposium double issue of the same journal; see Journal of Consciousness Studies, Volume 19, Issues 1 & 2.

  4. The standard presentation is that of Rowe (1979); for a more detailed overview, see Trakakis (2007).

  5. I defend this argument from recent attacks on the “logical necessity” condition in Danaher (2014).

  6. The idea was introduced originally by Wykstra (1996). For more up-to-date overviews, see McBrayer (2010), Dougherty (2012), Dougherty and McBrayer (2014).

  7. The summary is based on the discussion of sceptical theism in Bergmann (2001, 2009).

  8. See fn 3 above for sources.

  9. The argument for this is found in chapter 5 of Bostrom’s book.

  10. This is the orthogonality thesis as defended in Bostrom (2012) and Armstrong (2013).

  11. This orthogonality thesis could be criticised. Some would argue that intelligence and benevolence go hand in hand, i.e. the more intelligent someone is, the more likely they are to behave in a morally appropriate manner. I have some sympathy for this view. I believe that if there are genuine, objectively verifiable moral truths, then the more intelligent an agent is, the more likely it is to discover and act upon those truths. Indeed, this view is popular among some theists. For instance, Richard Swinburne has argued that omniscience may imply omnibenevolence. I am indebted to an anonymous reviewer for urging me to clarify this point.

  12. This is the instrumental convergence thesis. See Bostrom (2012, 2014), pp. 109–114.

  13. They may not be if the designers themselves have malevolent goals, but that is a distinct issue, having to do with our understanding of human agency, not superintelligent machine agency.

  14. The leading critics in the academic literature are probably Ben Goertzel and Richard Loosemore; online, Alexander Kruel maintains a regularly updated blog critiquing the doomsday scenario. See http://www.kruel.co.

  15. This is how Bostrom (2014) describes it at pp. 129–131; it is also referred to as ‘Leakproofing’ by Yampolskiy (2012).

  16. Bostrom (2014), p. 117 “One might think that the reasoning described above is so obvious that no credible project to develop artificial general intelligence could possibly overlook it. But one should not be too confident that this is so.” He then proceeds to give an example which suggests we may be overconfident in our inferences from past experiences.

  17. Bostrom (2014), p. 117 “an unfriendly AI may become smart enough to realize that it is better off concealing some of its capability gains.” This could even involve adjusting its source code to deceive the testers.

  18. Bostrom (2014), p. 119 “For example, an AI might not play nice in order that it be allowed to survive and prosper. Instead, the AI might calculate that if it is terminated, the programmers who built it will develop a new and somewhat different AI architecture, but one that will be given a similar utility function.”

  19. Bostrom (2014), p. 113 and later at chapter 12 and the discussion of the value-loading problem.

  20. To be clear, this does not mean that even an infinitesimal probability of an existential risk should be taken seriously. But a risk of, say, 0.05 or 0.1 may be sufficient to warrant action, given what is at stake.
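The reasoning implicit in this note is a standard expected-value comparison. A minimal sketch, using illustrative symbols that do not appear in the paper: let $p$ be the probability of the doomsday scenario, $V$ the value lost in an existential catastrophe, and $c$ the cost of precautionary measures. Precaution is warranted whenever the expected loss exceeds that cost:

\[
  p \cdot V > c .
\]

Because $V$ is astronomically large on the sort of accounting Bostrom (2013) gives for existential catastrophe, even $p = 0.05$ or $p = 0.1$ makes the expected loss exceed any realistic $c$, whereas a merely infinitesimal $p$ need not.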

  21. I refer to this as the “consequential critique” of sceptical theism in [reference omitted].

  22. Schellenberg (2007) refers to beliefs of this sort as being forms of “ultimism”.

  23. The one exception here might be beliefs about logical or mathematical truths, though there are theists who claim that those truths are dependent on God as well.

  24. Note that the focus here is limited to the inductive inferences we make about artificial intelligences: the treacherous turn does not affect all inductive inferences. This is unlike the situation with respect to sceptical theism.

  25. Richard Loosemore has made these complaints. I’m not aware of any academic publications in which he has made them, but he has done so in two online articles (Loosemore 2012, 2014).

  26. I am indebted to an anonymous reviewer for encouraging me to make this point.

  27. This is the view of the Machine Intelligence Research Institute and some of its affiliated scholars, e.g. see Muehlhauser and Salamon (2012).

References

  • Almeida, M., & Oppy, G. (2003). Sceptical theism and evidential arguments from evil. Australasian Journal of Philosophy, 81, 496–516.

  • Anderson, D. (2012). Skeptical theism and value judgments. International Journal for Philosophy of Religion, 72, 27–39.

  • Armstrong, S. (2013). General purpose intelligence: Arguing the orthogonality thesis. Analysis and Metaphysics, 12, 68–84.

  • Barrat, J. (2013). Our final invention: Artificial intelligence and the end of the human era. New York: St. Martin’s Press.

  • Bergmann, M. (2001). Skeptical theism and Rowe’s new evidential argument from evil. Noûs, 35, 228.

  • Bergmann, M. (2009). Skeptical theism and the problem of evil. In T. P. Flint & M. Rea (Eds.), The Oxford handbook of philosophical theology. Oxford: OUP.

  • Bergmann, M., & Rea, M. (2005). In defence of skeptical theism: A reply to Almeida and Oppy. Australasian Journal of Philosophy, 83, 241–251.

  • Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2), 71–85.

  • Bostrom, N. (2013). Existential risk prevention as a global priority. Global Policy, 4, 15–31.

  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: OUP.

  • Bringsjord, S., Bringsjord, A., & Bello, A. (2012). Belief in the singularity is fideistic. In A. Eden, J. Moor, J. Soraker, & E. Steinhart (Eds.), Singularity hypotheses: A scientific and philosophical assessment. Dordrecht: Springer.

  • Danaher, J. (2014). Skeptical theism and divine permission: A reply to Anderson. International Journal for Philosophy of Religion, 75(2), 101–118.

  • Doctorow, C., & Stross, C. (2012). The rapture of the nerds. New York: Tor Books.

  • Dougherty, T. (2012). Recent work on the problem of evil. Analysis, 71, 560–573.

  • Dougherty, T., & McBrayer, J. P. (Eds.). (2014). Skeptical theism: New essays. Oxford: OUP.

  • Eden, A., Moor, J., Soraker, J., & Steinhart, E. (Eds.). (2012). Singularity hypotheses: A scientific and philosophical assessment. Dordrecht: Springer.

  • Hasker, W. (2010). All too skeptical theism. International Journal for Philosophy of Religion, 68, 15–29.

  • Loosemore, R. (2012). The fallacy of dumb superintelligence. IEET. Retrieved October 31, 2014, from http://ieet.org/index.php/IEET/more/loosemore20121128.

  • Loosemore, R. (2014). The maverick nanny with a dopamine drip: Debunking fallacies in the theory of AI motivation. IEET. Retrieved October 31, 2014, from http://ieet.org/index.php/IEET/more/loosemore20140724.

  • Lovering, R. (2009). On what God would do. International Journal for Philosophy of Religion, 66(2), 87–104.

  • Maitzen, S. (2013). The moral skepticism objection to skeptical theism. In J. McBrayer & D. Howard-Snyder (Eds.), A companion to the problem of evil. Oxford: Wiley.

  • McBrayer, J. (2010). Skeptical theism. Philosophy Compass, 5, 611–623.

  • Muehlhauser, L., & Salamon, A. (2012). Intelligence explosion: Evidence and import. In A. Eden, J. Moor, J. Soraker, & E. Steinhart (Eds.), Singularity hypotheses: A scientific and philosophical assessment. Dordrecht: Springer.

  • Piper, M. (2008). Why theists cannot accept skeptical theism. Sophia, 47(2), 129–148.

  • Rowe, W. (1979). The problem of evil and some varieties of atheism. American Philosophical Quarterly, 16(4), 335–341.

  • Schellenberg, J. L. (2007). The wisdom to doubt. Ithaca, NY: Cornell University Press.

  • Sehon, S. (2010). The problem of evil: Skeptical theism leads to moral paralysis. International Journal for Philosophy of Religion, 67, 67–80.

  • Street, S. (forthcoming). If there’s a reason for everything then we don’t know what reasons are: Why the price of theism is normative skepticism. In M. Bergmann & P. Kain (Eds.), Challenges to religious and moral belief: Disagreement and evolution. Oxford: OUP.

  • Trakakis, N. (2007). The God beyond belief: In defence of William Rowe’s argument from evil. Dordrecht: Springer.

  • Wielenberg, E. (2010). Sceptical theism and divine lies. Religious Studies, 46, 509–523.

  • Wielenberg, E. (2014). Divine deception. In T. Dougherty & J. P. McBrayer (Eds.), Skeptical theism: New essays. Oxford: OUP.

  • Wykstra, S. (1996). Rowe’s noseeum arguments from evil. In D. Howard-Snyder (Ed.), The evidential argument from evil. Bloomington, IN: Indiana University Press.

  • Yampolskiy, R. (2012). Leakproofing the singularity. Journal of Consciousness Studies, 19, 194–214.

  • Yudkowsky, E. (2008). Artificial intelligence as a positive and negative factor in global risk. In N. Bostrom & M. Cirkovic (Eds.), Global catastrophic risks. Oxford: OUP.


Acknowledgments

I would like to thank Stephen Maitzen, Felipe Leon and Alexander Kruel for conversations and feedback on some of the ideas in this paper. I would also like to thank an anonymous reviewer for helpful criticism on a previous draft.

Author information

Correspondence to John Danaher.

About this article


Cite this article

Danaher, J. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. Minds & Machines 25, 231–246 (2015). https://doi.org/10.1007/s11023-015-9365-y

