Minds and Machines, Volume 17, Issue 3, pp 249–259

Self-improving AI: an Analysis


Hall, J.S. Self-improving AI: an Analysis. Minds & Machines (2007) 17: 249–259. DOI: 10.1007/s11023-007-9065-3


Self-improvement was one of the aspects of AI proposed for study at the 1956 Dartmouth conference. Turing proposed a “child machine” which could be taught in the human manner to attain adult human-level intelligence. More recently, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible, producing, if implemented, a feedback loop that would lead to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.


Keywords: Artificial intelligence · Learning · Self-improving · Autogeny · Complexity barrier · Bootstrap fallacy

Copyright information

© Springer Science+Business Media B.V. 2007

Authors and Affiliations

  1. Storrmont, Laporte, USA