Why an Intelligence Explosion is Probable

  • Richard Loosemore
  • Ben Goertzel
Part of The Frontiers Collection book series (FRONTCOLL)


Abstract

This chapter considers the hypothesis that once an AI system with roughly human-level general intelligence is created, an “intelligence explosion” involving the relatively rapid creation of increasingly intelligent AI systems will very likely ensue, resulting in the rapid emergence of dramatically superhuman intelligences. Various arguments against this hypothesis are examined and found wanting.


Keywords

General intelligence · Software complexity · Economic growth rate · Clock speed · Hardware requirement

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Baum, S. D., Goertzel, B., & Goertzel, T. G. (2011). How long until human-level AI? Results from an expert assessment. Technological Forecasting and Social Change, 78(1), 185–195.
  2. Chalmers, D. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies, 17, 7–65.
  3. Goertzel, B. (2010). Toward a formal characterization of real-world general intelligence. In Proceedings of AGI-10, Lugano.
  4. Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. Advances in Computers, 6, 99.
  5. Hutter, M. (2005). Universal artificial intelligence. Berlin: Springer.
  6. Kurzweil, R. (2001). The law of accelerating returns.
  7. Sandberg, A. (2011, January 19). Limiting factors of intelligence explosion speeds. Extropy email discussion list.

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. Department of Mathematical and Physical Sciences, Wells College, Aurora, USA
  2. Novamente LLC, Rockville, USA
