
AI Risk Skepticism

  • Conference paper
  • In: Philosophy and Theory of Artificial Intelligence 2021 (PTAI 2021)

Part of the book series: Studies in Applied Philosophy, Epistemology and Rational Ethics (SAPERE, volume 63)

Abstract

In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI risk skepticism and analyzing their root causes. We conclude by suggesting intervention approaches that may be successful in reducing AI risk skepticism, at least among artificial intelligence researchers.



Acknowledgements

The author is grateful to Seth Baum for sharing a great deal of relevant literature and for providing feedback on an early draft of this paper. In addition, the author would like to acknowledge his own bias: as an AI safety researcher, I would benefit from the flourishing of the field of AI safety. I also have a conflict of interest: as a human being with a survival instinct, I would benefit from not being exterminated by uncontrolled AI.

Author information

Corresponding author: Roman V. Yampolskiy.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yampolskiy, R.V. (2022). AI Risk Skepticism. In: Müller, V.C. (eds) Philosophy and Theory of Artificial Intelligence 2021. PTAI 2021. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 63. Springer, Cham. https://doi.org/10.1007/978-3-031-09153-7_18
