
Intelligence Explosion: Evidence and Import

Singularity Hypotheses

Part of the book series: The Frontiers Collection (FRONTCOLL)

Abstract

In this chapter we review the evidence for and against three claims: that (1) there is a substantial chance we will create human-level AI before 2100, that (2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an “intelligence explosion,” and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it. We conclude with recommendations for increasing the odds of a controlled intelligence explosion relative to an uncontrolled intelligence explosion.

The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly”.

Vernor Vinge


Notes

  1.

    We will define “human-level AI” more precisely later in the chapter.

  2.

    Chalmers (2010) suggested that AI will lead to intelligence explosion if an AI is produced by an “extendible method,” where an extendible method is “a method that can easily be improved, yielding more intelligent systems.” McDermott (2012a, b) replies that if P≠NP (see Goldreich 2010 for an explanation) then there is no extendible method. But McDermott’s notion of an extendible method is not the one essential to the possibility of intelligence explosion. McDermott’s formalization of an “extendible method” requires that the program generated by each step of improvement under the method be able to solve in polynomial time all problems in a particular class—the class of solvable problems of a given (polynomially step-dependent) size in an NP-complete class of problems. But this is not required for an intelligence explosion in Chalmers’ sense (and in our sense). What intelligence explosion (in our sense) would require is merely that a program self-improve to vastly outperform humans, and we argue for the plausibility of this in section From AI to Machine Superintelligence of our chapter. Thus while we agree with McDermott that it is probably true that P≠NP, we do not agree that this weighs against the plausibility of intelligence explosion. (Note that due to a miscommunication between McDermott and the editors, a faulty draft of McDermott (McDermott 2012a) was published in Journal of Consciousness Studies. We recommend reading the corrected version at http://cs-www.cs.yale.edu/homes/dvm/papers/chalmers-singularity-response.pdf.).

  3.

    This definition is a useful starting point, but it could be improved. Future work could produce a definition of intelligence as optimization power over a canonical distribution of environments, with a penalty for resource use—e.g. the “speed prior” described by Schmidhuber (2002). Also see Goertzel (2006, p. 48, 2010), Hibbard (2011).
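    For reference, the formal starting point behind this kind of definition is Legg and Hutter's (2007) universal intelligence measure, sketched below. The second display is only our own illustration of the sort of speed-prior-style resource penalty the note describes, not a formula from the cited papers.

```latex
% Legg-Hutter (2007) universal intelligence: expected cumulative reward V of a
% policy \pi across all computable environments \mu in the class E, weighted by
% simplicity (K(\mu) = Kolmogorov complexity of \mu).
\[
  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]
% Illustrative resource-penalized variant (our sketch only, in the spirit of
% Schmidhuber's speed prior): additionally discount environments whose
% simulation requires computation time t(\mu), e.g. via a Levin-style weight.
\[
  \Upsilon_{S}(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu) - \log t(\mu)} \, V^{\pi}_{\mu}
\]
```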

  4.

    To take one of many examples, Simon (1965, p. 96) predicted that “machines will be capable, within twenty years, of doing any work a man can do.” Also see Crevier (1993).

  5.

    Armstrong (1985), Woudenberg (1991), Rowe and Wright (2001). But, see Parente and Anderson-Parente (2011).

  6.

    Bostrom (2003), Bainbridge (2006), Legg (2008), Baum et al. (2011), Sandberg and Bostrom (2011), Nielsen (2011).

  7.

    A software bottleneck may delay AI but create greater risk. If there is a software bottleneck on AI, then when AI is created there may be a “computing overhang”: large amounts of inexpensive computing power which could be used to run thousands of AIs or give a few AIs vast computational resources. This may not be the case if early AIs require quantum computing hardware, which is less likely to be plentiful and inexpensive than classical computing hardware at any given time.

  8.

    We can make a simple formal model of this evidence by assuming (with much simplification) that every year a coin is tossed to determine whether we will get AI that year, and that we are initially unsure of the weighting on that coin. We have observed more than 50 years of “no AI” since the first time serious scientists believed AI might be around the corner. This “56 years of no AI” observation would be highly unlikely under models where the coin comes up “AI” on 90 % of years (the probability of our observations would be 10^-56), or even models where it comes up “AI” in 10 % of all years (probability 0.3 %), whereas it’s the expected case if the coin comes up “AI” in, say, 1 % of all years, or for that matter in 0.0001 % of all years. Thus, in this toy model, our “no AI for 56 years” observation should update us strongly against coin weightings in which AI would be likely in the next minute, or even year, while leaving the relative probabilities of “AI expected in 200 years” and “AI expected in 2 million years” more or less untouched. (These updated probabilities are robust to choice of the time interval between coin flips; it matters little whether the coin is tossed once per decade, or once per millisecond, or whether one takes a limit as the time interval goes to zero). Of course, one gets a different result if a different “starting point” is chosen, e.g. Alan Turing’s seminal paper on machine intelligence (Turing 1950) or the inaugural conference on artificial general intelligence (Wang et al. 2008). For more on this approach and Laplace’s rule of succession, see Jaynes (2003), Chap. 18. We suggest this approach only as a way of generating a prior probability distribution over AI timelines, from which one can then update upon encountering additional evidence.
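    A minimal Python sketch of this toy model (the 56-year window and the candidate coin weightings are taken from this note; the uniform prior over a fine grid of weightings is our own assumption):

```python
# Toy model from this note: each year an "AI coin" with unknown weight p comes up
# "AI" with probability p; we have observed 56 consecutive years of "no AI".
import numpy as np

years_without_ai = 56
candidate_p = [0.9, 0.1, 0.01, 0.000001]  # candidate yearly AI probabilities

# Likelihood of 56 straight "no AI" years under each weighting
# (reproduces the 10^-56 and 0.3 % figures in the note).
for p in candidate_p:
    likelihood = (1 - p) ** years_without_ai
    print(f"p(AI per year) = {p}: P(no AI for 56 years) = {likelihood:.3g}")

# Laplace-style posterior over a fine grid of weightings, uniform prior.
grid = np.linspace(1e-6, 1.0, 100_000)
posterior = (1.0 - grid) ** years_without_ai
posterior /= posterior.sum()
print("posterior mass on p > 0.1:", posterior[grid > 0.1].sum())
```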

  9.

    Relatedly, Good (1970) tried to predict the first creation of AI by surveying past conceptual breakthroughs in AI and extrapolating into the future.

  10.

    The technical measure predicted by Moore’s law is the density of components on an integrated circuit, but this is closely tied to the price-performance of computing power.

  11.

    For important qualifications, see Nagy et al. (2010), Mack (2011).

  12.

    Quantum computing may also emerge during this period. Early worries that quantum computing may not be feasible have been overcome, but it is hard to predict whether quantum computing will contribute significantly to the development of machine intelligence because progress in quantum computing depends heavily on relatively unpredictable insights in quantum algorithms and hardware (Rieffel and Polak 2011).

  13.

    On the other hand, some worry (Pan et al. 2005) that the rates of scientific fraud and publication bias may currently be higher in China and India than in the developed world.

  14.

    Also, a process called "iterated embryo selection" (Uncertain Future 2012) could be used to produce an entire generation of scientists with the cognitive capabilities of Albert Einstein or John von Neumann, thus accelerating scientific progress and giving a competitive advantage to nations which choose to make use of this possibility.

  15.

    In our two quotes from Hutter (2012b) we have replaced Hutter’s AMS-style citations with Chicago-style citations.

  16.

    The creation of AI probably is not, however, merely a matter of finding computationally tractable AIXI approximations that can solve increasingly complicated problems in increasingly complicated environments. There remain many open problems in the theory of universal artificial intelligence (Hutter 2009). For problems related to allowing some AIXI-like models to self-modify, see Orseau and Ring (2011), Ring and Orseau (2011), Orseau (2011); Hibbard (Forthcoming). Dewey (2011) explains why reinforcement learning agents like AIXI may pose a threat to humanity.

  17.

    Note that given the definition of intelligence we are using, greater computational resources would not give a machine more “intelligence” but instead more “optimization power”.

  18.

    For example see Omohundro (1987).

  19.

    If the first self-improving AIs at least partially require quantum computing, the system states of these AIs might not be directly copyable due to the no-cloning theorem (Wootters and Zurek 1982).
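    The obstruction can be stated compactly; this is the standard textbook form of the theorem, not anything specific to this chapter:

```latex
% No-cloning theorem (Wootters and Zurek 1982): no single unitary U satisfies
% U(|psi> (x) |0>) = |psi> (x) |psi> for every state |psi>.
% Proof sketch: if U cloned two non-orthogonal states |psi> and |phi>, taking the
% inner product of the two cloning equations gives <psi|phi> = <psi|phi>^2, which
% forces <psi|phi> to be 0 or 1, i.e. the states are orthogonal or identical.
\[
  U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
    \;=\; \lvert\psi\rangle \otimes \lvert\psi\rangle
  \quad \text{for all } \lvert\psi\rangle
  \qquad \text{(impossible for any fixed unitary } U\text{)}
\]
```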

  20.

    Something similar is already done with technology-enabled business processes. When the pharmacy chain CVS improves its prescription-ordering system, it can copy these improvements to more than 4,000 of its stores, for immediate productivity gains (McAfee and Brynjolfsson 2008).

  21.

    Many suspect that the slowness of cross-brain connections has been a major factor limiting the usefulness of large brains (Fox 2011).

  22.

    Bostrom (2012) lists a few special cases in which an AI may wish to modify the content of its final goals.

  23.

    When the AI can perform 10 % of the AI design tasks and do them at superhuman speed, the remaining 90 % of AI design tasks act as bottlenecks. However, if improvements allow the AI to perform 99 % of AI design tasks rather than 98 %, this change produces a much larger impact than when improvements allowed the AI to perform 51 % of AI design tasks rather than 50 % (Hanson, forthcoming). And when the AI can perform 100 % of AI design tasks rather than 99 % of them, this removes altogether the bottleneck of tasks done at slow human speeds.
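    An Amdahl's-law-style toy calculation makes this arithmetic concrete (our own sketch, not a model from the chapter; the 1000x superhuman speed is an arbitrary illustrative value):

```python
# Toy illustration of the bottleneck argument: a fraction f of AI design work is
# done at superhuman speed s, the remaining (1 - f) at human speed 1.
def overall_speedup(f: float, s: float = 1000.0) -> float:
    """Total speedup of the design process when a fraction f runs s times faster."""
    human_part = 1.0 - f
    if human_part == 0.0:
        return s  # no slow-human bottleneck remains at all
    return 1.0 / (human_part + f / s)

# 0.51 vs 0.50 barely matters; 0.99 vs 0.98 roughly doubles the speedup;
# 1.00 removes the human bottleneck entirely.
for f in (0.10, 0.50, 0.51, 0.98, 0.99, 1.00):
    print(f"f = {f:.2f}: overall speedup ~ {overall_speedup(f):,.1f}x")
```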

  24.

    This may be less true for early-generation WBEs, but Omohundro (2008) argues that AIs will converge upon being optimizing agents, which exhibit a strict division between goals and cognitive ability.

  25.

    Hanson (2012) reframes the problem, saying that “we should expect that a simple continuation of historical trends will eventually end up [producing] an ‘intelligence explosion’ scenario. So there is little need to consider [Chalmers’] more specific arguments for such a scenario. And the inter-generational conflicts that concern Chalmers in this scenario are generic conflicts that arise in a wide range of past, present, and future scenarios. Yes, these are conflicts worth pondering, but Chalmers offers no reasons why they are interestingly different in a ‘singularity’ context.” We briefly offer just one reason why the “inter-generational conflicts” arising from a transition of power from humans to superintelligent machines are interestingly different from previous inter-generational conflicts: as Bostrom (2002) notes, the singularity may cause the extinction not just of people groups but of the entire human species. For a further reply to Hanson, see Chalmers (Forthcoming).

  26.

    A utility function assigns numerical utilities to outcomes such that outcomes with higher utilities are always preferred to outcomes with lower utilities (Mehta 1998).
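    In symbols (a standard formulation along the lines of Mehta 1998, added here for reference):

```latex
% A utility function u represents the agent's preference ordering over outcomes:
% o_1 is preferred to o_2 exactly when it receives the higher utility.
\[
  o_1 \succ o_2 \;\iff\; u(o_1) > u(o_2)
\]
```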

  27.

    It may also be an option to constrain the first self-improving AIs just long enough to develop a Friendly AI before they cause much damage.

  28.

    Our thanks to Nick Bostrom, Steve Rayhawk, David Chalmers, Steve Omohundro, Marcus Hutter, Brian Rabkin, William Naaktgeboren, Michael Anissimov, Carl Shulman, Eliezer Yudkowsky, Louie Helm, Jesse Liptrap, Nisan Stiennon, Will Newsome, Kaj Sotala, Julia Galef, and anonymous reviewers for their helpful comments.

References

  • Anderson, B. (1993). Evidence from the rat for a general factor that underlies cognitive performance and that relates to brain size: intelligence? Neuroscience Letters, 153(1), 98–102. doi:10.1016/0304-3940(93)90086-Z.


  • Arbesman, S. (2011). Quantifying the ease of scientific discovery. Scientometrics, 86(2), 245–250. doi:10.1007/s11192-010-0232-6.


  • Armstrong, J. S. (1985). Long-range forecasting: from crystal ball to computer (2nd ed.). New York: Wiley.


  • Armstrong, S., Sandberg, A., & Bostrom N. Forthcoming. Thinking inside the box: using and controlling an Oracle AI. Minds and Machines.


  • Ashby, F. G., & Helie S. (2011). A tutorial on computational cognitive neuroscience: modeling the neurodynamics of cognition. Journal of Mathematical Psychology, 55(4), 273–289. doi:10.1016/j.jmp.2011.04.003.


  • Bainbridge, W. S., & Roco, M. C. (Eds.). (2006). Managing nano-bio-info-cogno innovations: converging technologies in society. Dordrecht: Springer.


  • Baum, S. D., Goertzel, B., & Goertzel, T. G. (2011). How long until human-level AI? Results from an expert assessment. Technological Forecasting and Social Change, 78(1), 185–195. doi:10.1016/j.techfore.2010.09.006.


  • Bellman, R. E. (1957). Dynamic programming. Princeton: Princeton University Press.


  • Berger, J. O. (1993). Statistical decision theory and bayesian analysis (2nd edn). Springer Series in Statistics. New York: Springer.


  • Bertsekas, D. P. (2007). Dynamic programming and optimal control (Vol. 2). Nashua: Athena Scientific.


  • Block, N. (1981). Psychologism and behaviorism. Philosophical Review, 90(1), 5–43. doi:10.2307/2184371.


  • Bostrom, N. (2002). Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9. http://www.jetpress.org/volume9/risks.html.

  • Bostrom, N. (2003). Ethical issues in advanced artificial intelligence. In I. Smit & G. E. Lasker (Eds.), Cognitive, emotive and ethical aspects of decision making in humans and in artificial intelligence. Windsor: International Institute of Advanced Studies in Systems Research/Cybernetics. Vol. 2.


  • Bostrom, N. (2006). What is a singleton? Linguistic and Philosophical Investigations, 5(2), 48–54.


  • Bostrom, N. (2007). Technological revolutions: Ethics and policy in the dark. In M. Nigel, S. de Cameron, & M. E. Mitchell (Eds.), Nanoscale: Issues and perspectives for the nano century (pp. 129–152). Hoboken: Wiley. doi:10.1002/9780470165874.ch10.


  • Bostrom, N. Forthcoming(a). Superintelligence: A strategic analysis of the coming machine intelligence revolution. Manuscript, in preparation.


  • Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines. Preprint at http://www.nickbostrom.com/superintelligentwill.pdf.

  • Bostrom, N., & Ćirković, M. M. (Eds.). (2008). Global catastrophic risks. New York: Oxford University Press.


  • Bostrom, N., & Sandberg, A. (2009). Cognitive enhancement: Methods, ethics, regulatory challenges. Science and Engineering Ethics, 15(3), 311–341. doi:10.1007/s11948-009-9142-5.


  • Brynjolfsson, E., & McAfee, A. (2011). Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Lexington: Digital Frontier Press. Kindle edition.


  • Caplan, B. (2008). The totalitarian threat. In Bostrom and Ćirković 2008, 504–519.


  • Cartwright, E. (2011). Behavioral economics. New York: Routledge Advanced Texts in Economics and Finance.


  • Cattell, R., & Parker, A. (2012). Challenges for brain emulation: why is building a brain so difficult? Synaptic Link, Feb. 5. http://synapticlink.org/Brain%20Emulation%20Challenges.pdf.

  • Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. New York: Oxford University Press. (Philosophy of Mind Series).


  • Chalmers, D. J. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies 17(9–10), 7–65. http://www.ingentaconnect.com/content/imp/jcs/2010/00000017/f0020009/art00001.


  • Chalmers, D. J. Forthcoming. The singularity: A reply. Journal of Consciousness Studies 19.


  • Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. New York: Basic Books.


  • de Blanc, P. (2011). Ontological crises in artificial agents’ value systems. San Francisco: Singularity Institute for Artificial Intelligence, May 19. http://arxiv.org/abs/1105.3821.

  • de Garis, H., Shuo, C., Goertzel, B., & Ruiting, L. (2010). A world survey of artificial brain projects, part I: Large-scale brain simulations. Neurocomputing, 74(1–3), 3–29. doi:10.1016/j.neucom.2010.08.004.


  • Dennett, D. C. (1996). Kinds of minds: Toward an understanding of consciousness. Science Masters. New York: Basic Books.


  • Dewey, D. (2011). Learning what to value. In Schmidhuber, J., Thórisson, KR., & Looks, M. 2011, 309–314.


  • Dreyfus, H. L. (1972). What computers can’t do: A critique of artificial reason. New York: Harper & Row.


  • Eden, A., Søraker, J., Moor, J. H., & Steinhart, E. (Eds.). (2012). The singularity hypothesis: A scientific and philosophical assessment. Berlin: Springer.


  • Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6(3), 205–254. doi:10.1207/s15516709cog0603_1.


  • Floreano, D., & Mattiussi, C. (2008). Bio-inspired artificial intelligence: Theories, methods, and technologies. Intelligent Robotics and Autonomous Agents. MIT Press: Cambridge.


  • Fox, D. (2011). The limits of intelligence. Scientific American, July, 36–43.


  • Fregni, F., Boggio, P. S., Nitsche, M., Bermpohl, F., Antal, A., Feredoes, E., et al. (2005). Anodal transcranial direct current stimulation of prefrontal cortex enhances working memory. Experimental Brain Research, 166(1), 23–30. doi:10.1007/s00221-005-2334-6.


  • Friedman, M. (1953). The methodology of positive economics. In Essays in positive economics (pp. 3–43). Chicago: Chicago University Press.


  • Friedman, James W., (Ed.) (1994). Problems of coordination in economic activity (Vol. 35). Recent Economic Thought. Boston: Kluwer Academic Publishers.


  • Gödel, K. (1931). Über formal unentscheidbare sätze der Principia Mathematica und verwandter systeme I. Monatshefte für Mathematik, 38(1), 173–198. doi:10.1007/BF01700692.


  • Goertzel, B. (2006). The hidden pattern: A patternist philosophy of mind. Boca Raton: BrownWalker Press.


  • Goertzel, B. (2010). Toward a formal characterization of real-world general intelligence. In E. Baum, M. Hutter, & E. Kitzelmann (Eds.) Artificial general intelligence: Proceedings of the third conference on artificial general intelligence, AGI 2010, Lugano, Switzerland, March 5–8, 2010, 19–24. Vol. 10. Advances in Intelligent Systems Research. Amsterdam: Atlantis Press. doi:10.2991/agi.2010.17.

  • Goertzel, B. (2012). Should humanity build a global AI nanny to delay the singularity until it’s better understood? Journal of Consciousness Studies 19(1–2), 96–111. http://ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00006.


  • Goertzel, B., & Pennachin, C. (Eds.) (2007). Artificial general intelligence. Cognitive Technologies. Berlin: Springer. doi:10.1007/978-3-540-68677-4.

  • Goldreich, O. (2010). P, NP, and NP-Completeness: The basics of computational complexity. New York: Cambridge University Press.


  • Good, I. J. (1959). Speculations on perceptrons and other automata. Research Lecture, RC-115. IBM, Yorktown Heights, New York, June 2. http://domino.research.ibm.com/library/cyberdig.nsf/papers/58DC4EA36A143C218525785E00502E30/$File/rc115.pdf.

  • Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. In F. L. Alt & M. Rubinoff (Eds.) Advances in computers (Vol. 6, pp. 31–88). New York: Academic Press. doi:10.1016/S0065-2458(08)60418-0.

  • Good, I. J. (1970). Some future social repercussions of computers. International Journal of Environmental Studies, 1(1–4), 67–79. doi:10.1080/00207237008709398.


  • Good, I. J. (1982). Ethical machines. In J. E. Hayes, D. Michie, & Y.-H. Pao (Eds.) Machine intelligence (pp. 555–560, Vol. 10). Intelligent Systems: Practice and Perspective. Chichester: Ellis Horwood.


  • Greenfield, S. (2012). The singularity: Commentary on David Chalmers. Journal of Consciousness Studies 19(1–2), 112–118. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00007.

  • Griffin, D., & Tversky, A. (1992). The weighing of evidence and the determinants of confidence. Cognitive Psychology, 24(3), 411–435. doi:10.1016/0010-0285(92)90013-R.


  • Groß, D. (2009). Blessing or curse? Neurocognitive enhancement by “brain engineering”. Medicine Studies, 1(4), 379–391. doi:10.1007/s12376-009-0032-6.


  • Gubrud, M. A. (1997). Nanotechnology and international security. Paper presented at the Fifth Foresight Conference on Molecular Nanotechnology, Palo Alto, CA, Nov. 5–8. http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/.

  • Halevy, A., Norvig, P., & Pereira, F. (2009). The unreasonable effectiveness of data. IEEE Intelligent Systems, 24(2), 8–12. doi:10.1109/MIS.2009.36.


  • Hanson, R. (2008). Economics of the singularity. IEEE Spectrum, 45(6), 45–50. doi:10.1109/MSPEC.2008.4531461.


  • Hanson, R. (2012). Meet the new conflict, same as the old conflict. Journal of Consciousness Studies 19(1–2), 119–125. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00008.


  • Hanson, R. Forthcoming. Economic growth given machine intelligence. Journal of Artificial Intelligence Research.


  • Hanson, R., & Yudkowsky, E. (2008). The Hanson-Yudkowsky AI-foom debate. LessWrong Wiki. http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate (accessed Mar. 13, 2012).

  • Hibbard, B. (2011). Measuring agent intelligence via hierarchies of environments. In Schmidhuber, J., Thórisson, KR., & Looks, M. 2011, 303–308.


  • Hibbard, B. Forthcoming. Model-based utility functions. Journal of Artificial General Intelligence.


  • Hutter, M. (2005). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Texts in Theoretical Computer Science. Berlin: Springer. doi:10.1007/b138233.

  • Hutter, M. (2009). Open problems in universal induction & intelligence. Algorithms, 2(3), 879–906. doi:10.3390/a2030879.


  • Hutter, M. (2012a). Can intelligence explode? Journal of Consciousness Studies 19(1–2), 143–166. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00010.

  • Hutter, M. (2012b). One decade of universal artificial intelligence. In P. Wang & B. Goertzel (eds.) Theoretical foundations of artificial general intelligence (Vol. 4). Atlantis Thinking Machines. Paris: Atlantis Press.


  • Jaynes, E. T., & Bretthorst, G. L. (Eds.) (2003). Probability theory: The logic of science. New York: Cambridge University Press. doi:10.2277/0521592712.

  • Jones, B. F. (2009). The burden of knowledge and the “Death of the Renaissance Man”: Is innovation getting harder? Review of Economic Studies, 76(1), 283–317. doi:10.1111/j.1467-937X.2008.00531.x.


  • Kaas, S., Rayhawk, S., Salamon, A., & Salamon, P. (2010). Economic implications of software minds. San Francisco: Singularity Institute for Artificial Intelligence, Aug. 10. http://www.singinst.co/upload/economic-implications.pdf.

  • Kandel, E. R., Schwartz, J. H., & Jessell, T. M. (Eds.). (2000). Principles of neural science. New York: McGraw-Hill.


  • Kolmogorov, A. N. (1968). Three approaches to the quantitative definition of information. International Journal of Computer Mathematics, 2(1–4), 157–168. doi:10.1080/00207166808803030.


  • Koza, J. R. (2010). Human-competitive results produced by genetic programming. Genetic Programming and Evolvable Machines, 11(3–4), 251–284. doi:10.1007/s10710-010-9112-3.


  • Krichmar, J. L., & Wagatsuma, H. (Eds.). (2011). Neuromorphic and brain-based robots. New York: Cambridge University Press.


  • Kryder, M. H., & Kim, C. S. (2009). After hard drives—what comes next? IEEE Transactions on Magnetics, 2009(10), 3406–3413. doi:10.1109/TMAG.2009.2024163.


  • Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Viking.


  • Lampson, B. W. (1973). A note on the confinement problem. Communications of the ACM, 16(10), 613–615. doi:10.1145/362375.362389.


  • Legg, S. (2008). Machine super intelligence. PhD diss., University of Lugano. http://www.vetta.org/documents/Machine_Super_Intelligence.pdf.

  • Legg, S., & Hutter, M. (2007). A collection of definitions of intelligence. In B. Goertzel & P. Wang (Eds.) Advances in artificial general intelligence: Concepts, architectures and algorithms – proceedings of the AGI workshop 2006 (Vol. 157). Frontiers in Artificial Intelligence and Applications. Amsterdam: IOS Press.


  • Li, M., & Vitányi, P. M. B. (2008). An introduction to Kolmogorov complexity and its applications. Texts in Computer Science. New York: Springer. doi:10.1007/978-0-387-49820-1.

  • Lichtenstein, S., Fischoff, B., & Phillips, L. D. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgement under uncertainty: Heuristics and biases (pp. 306–334). New York: Cambridge University Press.


  • Loosmore, R., & Goertzel, B. (2011). Why an intelligence explosion is probable. H+ Magazine, Mar. 7. http://hplusmagazine.com/2011/03/07/why-an-intelligence-explosion-is-probable/.

  • Lucas, J. R. (1961). Minds, machines and Gödel. Philosophy, 36(137), 112–127. doi:10.1017/S0031819100057983.


  • Lundstrom, M. (2003). Moore’s law forever? Science, 299(5604), 210–211. doi:10.1126/science.1079567.


  • Mack, C. A. (2011). Fifty years of Moore’s law. IEEE Transactions on Semiconductor Manufacturing, 24(2), 202–207. doi:10.1109/TSM.2010.2096437.


  • Marcus, G. (2008). Kluge: The haphazard evolution of the human mind. Boston: Houghton Mifflin.


  • McAfee, A., & Brynjolfsson, E. (2008). Investing in the IT that makes a competitive difference. Harvard Business Review, July. http://hbr.org/2008/07/investing-in-the-it-that-makes-a-competitive-difference.

  • McCorduck, P. (2004). Machines who think: A personal inquiry into the history and prospects of artificial intelligence (2nd ed.). Natick: A. K. Peters.


  • McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33(4), 337–346. doi:10.1016/j.intell.2004.11.005.


  • McDermott, D. (2012a). Response to “The Singularity” by David Chalmers. Journal of Consciousness Studies 19(1–2): 167–172. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00011.


  • McDermott, D. (2012b). There are no “Extendible Methods” in David Chalmers’s sense unless P=NP. Unpublished manuscript. http://cs-www.cs.yale.edu/homes/dvm/papers/no-extendible-methods.pdf (accessed Mar. 19, 2012).

  • Mehta, G. B. (1998). Preference and utility. In S. Barbera, P. J. Hammond, & C. Seidl (Eds.), Handbook of utility theory (Vol. I, pp. 1–47). Boston: Kluwer Academic Publishers.


  • Minsky, M. (1984). Afterword to Vernor Vinge’s novel, “True Names.” Unpublished manuscript, Oct. 1. http://web.media.mit.edu/~minsky/papers/TrueNames.Afterword.html (accessed Mar. 26, 2012).

  • Modha, D. S., Ananthanarayanan, R., Esser, S. K., Ndirango, A., Sherbondy, A. J., & Singh, R. (2011). Cognitive computing. Communications of the ACM, 54(8), 62–71. doi:10.1145/1978542.1978559.


  • Modis, T. (2012). There will be no singularity. In Eden, Søraker, Moor, & Steinhart 2012.


  • Moravec, H. P. (1976). The role of raw power in intelligence. May 12. http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html (accessed Mar. 13, 2012).

  • Moravec, H. (1998). When will computer hardware match the human brain? Journal of Evolution and Technology 1. http://www.transhumanist.com/volume1/moravec.htm.

  • Moravec, H. (1999). Rise of the robots. Scientific American, Dec., 124–135.


  • Muehlhauser, L. (2011). So you want to save the world. Last modified Mar. 2, 2012. http://lukeprog.com/SaveTheWorld.html.

  • Muehlhauser, L., & Helm, L. (2012). The singularity and machine ethics. In Eden, Søraker, Moor, & Steinhart 2012.


  • Murphy, A. H., & Winkler, R. L. (1984). Probability forecasting in meteorology. Journal of the American Statistical Association, 79(387), 489–500.


  • Nagy, B., Farmer, J. D., Trancik, J. E., & Bui, QM. (2010). Testing laws of technological progress. Santa Fe Institute, NM, Sept. 2. http://tuvalu.santafe.edu/~bn/workingpapers/NagyFarmerTrancikBui.pdf.

  • Nagy, B., Farmer, J. D., Trancik, J. E., & Gonzales, J. P. (2011). Superexponential long-term trends in information technology. Technological Forecasting and Social Change, 78(8), 1356–1364. doi:10.1016/j.techfore.2011.07.006.


  • Nielsen, M. (2011). What should a reasonable person believe about the singularity? Michael Nielsen (blog). Jan. 12. http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/ (accessed Mar. 13, 2012).

  • Nilsson, N. J. (2009). The quest for artificial intelligence: A history of ideas and achievements. New York: Cambridge University Press.


  • Nordmann, A. (2007). If and then: A critique of speculative nanoethics. NanoEthics, 1(1), 31–46. doi:10.1007/s11569-007-0007-6.


  • Omohundro, S. M. (1987). Efficient algorithms with neural network behavior. Complex Systems 1(2), 273–347. http://www.complex-systems.com/abstracts/v01_i02_a04.html.

  • Omohundro, S. M. (2007). The nature of self-improving artificial intelligence. Paper presented at the Singularity Summit 2007, San Francisco, CA, Sept. 8–9. http://singinst.org/summit2007/overview/abstracts/#omohundro.

  • Omohundro, S. M. (2008). The basic AI drives. In Wang, Goertzel, & Franklin 2008, 483–492.


  • Omohundro, S. M. 2012. Rational artificial intelligence for the greater good. In Eden, Søraker, Moor, & Steinhart 2012.


  • Orseau, L. (2011). Universal knowledge-seeking agents. In Algorithmic learning theory: 22nd international conference, ALT 2011, Espoo, Finland, October 5–7, 2011. Proceedings, ed. Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann. Vol. 6925. Lecture Notes in Computer Science. Berlin: Springer. doi:10.1007/978-3-642-24412-4_28.


  • Orseau, L., & Ring, M. (2011). Self-modification and mortality in artificial agents. In Schmidhuber, Thórisson, and Looks 2011, 1–10.


  • Pan, Z., Trikalinos, T. A., Kavvoura, F. K., Lau, J., & Ioannidis, J. P. A. (2005). Local literature bias in genetic epidemiology: An empirical evaluation of the Chinese literature. PLoS Medicine, 2(12), e334. doi:10.1371/journal.pmed.0020334.


  • Parente, R., & Anderson-Parente, J. (2011). A case study of long-term Delphi accuracy. Technological Forecasting and Social Change, 78(9), 1705–1711. doi:10.1016/j.techfore.2011.07.005.


  • Pennachin, C, & Goertzel, B. (2007). Contemporary approaches to artificial general intelligence. In Goertzel & Pennachin 2007, 1–30.


  • Penrose, R. (1994). Shadows of the mind: A search for the missing science of consciousness. New York: Oxford University Press.


  • Plebe, A., & Perconti, P. (2012). The slowdown hypothesis. In Eden, Søraker, Moor, & Steinhart 2012.


  • Posner, R. A. (2004). Catastrophe: Risk and response. New York: Oxford University Press.


  • Proudfoot, D., & Jack Copeland, B. (2012). Artificial intelligence. In E. Margolis, R. Samuels, & S. P. Stich (Eds.), The Oxford handbook of philosophy of cognitive science. New York: Oxford University Press.


  • Rathmanner, S., & Hutter, M. (2011). A philosophical treatise of universal induction. Entropy, 13(6), 1076–1136. doi:10.3390/e13061076.


  • Richards, M. A., & Shaw, G. A. (2004). Chips, architectures and algorithms: Reflections on the exponential growth of digital signal processing capability. Unpublished manuscript, Jan. 28. http://users.ece.gatech.edu/~mrichard/Richards&Shaw_Algorithms01204.pdf (accessed Mar. 20, 2012).

  • Rieffel, E., & Polak, W. (2011). Quantum computing: A gentle introduction. Scientific and Engineering Computation. Cambridge: MIT Press.


  • Ring, M., & Orseau, L. (2011). Delusion, survival, and intelligent agents. In Schmidhuber, Thórisson, & Looks 2011, 11–20.


  • Rowe, G., & Wright, G. (2001). Expert opinions in forecasting: The role of the Delphi technique. In J. S. Armstrong (Ed.), Principles of forecasting: A handbook for researchers and practitioners, (Vol. 30). International Series in Operations Research & Management Science. Boston: Kluwer Academic Publishers.


  • Russell, S. J., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd ed.). Upper Saddle River: Prentice-Hall.


  • Sandberg, A. (2010). An overview of models of technological singularity. Paper presented at the Roadmaps to AGI and the future of AGI workshop, Lugano, Switzerland, Mar. 8th. http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf.

  • Sandberg, A. (2011). Cognition enhancement: Upgrading the brain. In J. Savulescu, R. ter Meulen, & G. Kahane (Eds.), Enhancing human capacities (pp. 71–91). Malden: Wiley-Blackwell.


  • Sandberg, A., & Bostrom, N. (2008). Whole brain emulation: A roadmap. Technical Report, 2008-3. Future of Humanity Institute, University of Oxford. www.fhi.ox.ac.uk/reports/2008-3.pdf.

  • Sandberg, A., & Bostrom, N. (2011). Machine intelligence survey. Technical Report, 2011-1. Future of Humanity Institute, University of Oxford. www.fhi.ox.ac.uk/reports/2011-1.pdf.

  • Schaul, T., & Schmidhuber, J. (2010). Metalearning. Scholarpedia, 5(6), 4650. doi:10.4249/scholarpedia.4650.


  • Schierwagen, A. (2011). Reverse engineering for biologically inspired cognitive architectures: A critical analysis. In C. Hernández, R. Sanz, J. Gómez-Ramirez, L. S. Smith, A. Hussain, A. Chella, & I. Aleksander (Eds.), From brains to systems: Brain-inspired cognitive systems 2010, (pp. 111–121, Vol. 718). Advances in Experimental Medicine and Biology. New York: Springer. doi:10.1007/978-1-4614-0164-3_10.


  • Schmidhuber, J. (2002). The speed prior: A new simplicity measure yielding near-optimal computable predictions. In J. Kivinen & R. H. Sloan, Computational learning theory: 5th annual conference on computational learning theory, COLT 2002, Sydney, Australia, July 8–10, 2002, proceedings, (pp. 123–127, Vol. 2375). Lecture Notes in Computer Science. Berlin: Springer. doi:10.1007/3-540-45435-7_15.


  • Schmidhuber, J. (2007). Gödel machines: Fully self-referential optimal universal self-improvers. In Goertzel & Pennachin 2007, 199–226.


  • Schmidhuber, J. (2009). Ultimate cognition à la Gödel. Cognitive Computation, 1(2), 177–193. doi:10.1007/s12559-009-9014-y.


  • Schmidhuber, J. (2012). Philosophers & futurists, catch up! Response to The Singularity. Journal of Consciousness Studies 19(1–2), 173–182. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00012.

  • Schmidhuber, J., Thórisson, K. R., & Looks, M. (Eds.) (2011). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings (Vol. 6830). Lecture Notes in Computer Science. Berlin: Springer. doi:10.1007/978-3-642-22887-2.

  • Schneider, S. (2010). Homo economicus – or more like Homer Simpson? Current Issues. Deutsche Bank Research, Frankfurt, June 29. http://www.dbresearch.com/PROD/DBR_INTERNET_EN-PROD/PROD0000000000259291.PDF.

  • Schoenemann, P. T. (1997). An MRI study of the relationship between human neuroanatomy and behavioral ability. PhD diss., University of California, Berkeley. http://mypage.iu.edu/~toms/papers/dissertation/Dissertation_title.htm.

  • Schwartz, J. T. (1987). Limits of artificial intelligence. In S. C. Shapiro & D. Eckroth (Eds.), Encyclopedia of artificial intelligence (pp. 488–503, Vol. 1). New York: Wiley.


  • Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(03), 417–424. doi:10.1017/S0140525X00005756.


  • Shulman, C., & Bostrom, N. (2012). How hard is artificial intelligence? Evolutionary arguments and selection effects. Journal of Consciousness Studies 19.


  • Shulman, C., & Sandberg, A. (2010). Implications of a software-limited singularity. Paper presented at the 8th European Conference on Computing and Philosophy (ECAP), Munich, Germany, Oct. 4–6.


  • Simon, H. A. (1965). The shape of automation for men and management. New York: Harper & Row.


  • Solomonoff, R. J. (1964a). A formal theory of inductive inference. Part I. Information and Control, 7(1), 1–22. doi:10.1016/S0019-9958(64)90223-2.


  • Solomonoff, R. J. (1964b). A formal theory of inductive inference. Part II. Information and Control, 7(2), 224–254. doi:10.1016/S0019-9958(64)90131-7.


  • Solomonoff, R. J. (1985). The time scale of artificial intelligence: Reflections on social effects. Human Systems Management, 5, 149–153.


  • Sotala, K. (2012). Advantages of artificial intelligences, uploads, and digital minds. International Journal of Machine Consciousness 4.


  • Stanovich, K. E. (2010). Rationality and the reflective mind. New York: Oxford University Press.


  • Tetlock, P. E. (2005). Expert political judgment: How good is it? How can we know?. Princeton: Princeton University Press.


  • The Royal Society. (2011). Knowledge, networks and nations: Global scientific collaboration in the 21st century. RS Policy document, 03/11. The Royal Society, London. http://royalsociety.org/uploadedFiles/Royal_Society_Content/policy/publications/2011/4294976134.pdf.

  • Trappenberg, T. P. (2009). Fundamentals of computational neuroscience (2nd ed.). New York: Oxford University Press.


  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. doi:10.1093/mind/LIX.236.433.


  • Turing, A. M. (1951). Intelligent machinery, a heretical theory. A lecture given to ‘51 Society’ at Manchester.


  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. doi:10.1126/science.185.4157.1124.


  • Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293–315. doi:10.1037/0033-295X.90.4.293.


  • The Uncertain Future. (2012). What is multi-generational in vitro embryo selection? The Uncertain Future. http://www.theuncertainfuture.com/faq.html#7 (accessed Mar. 25, 2012).

  • Van der Velde, F. (2010). Where artificial intelligence and neuroscience meet: The search for grounded architectures of cognition. Advances in Artificial Intelligence, no. 5. doi:10.1155/2010/918062.

  • Van Gelder, T., & Port, R. F. (1995). It’s about time: An overview of the dynamical approach to cognition. In R. F. Port & T. van Gelder. Mind as motion: Explorations in the dynamics of cognition, Bradford Books. Cambridge: MIT Press.


  • Veness, J., Ng, K. S., Hutter, M., Uther, W., & Silver, D. (2011). A Monte-Carlo AIXI approximation. Journal of Artificial Intelligence Research, 40, 95–142. doi:10.1613/jair.3125.


  • Vinge, V. (1993). The coming technological singularity: How to survive in the post-human era. In Vision-21: Interdisciplinary science and engineering in the era of cyberspace, 11–22. NASA Conference Publication 10129. NASA Lewis Research Center. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022855_1994022855.pdf.

  • Von Neumann, J., & Burks, A. W. (Eds.) (1966). Theory of self-replicating automata. Urbana: University of Illinois Press.


  • Walter, C. (2005). Kryder’s law. Scientific American, July 25. http://www.scientificamerican.com/article.cfm?id=kryders-law.

  • Wang, P., Goertzel, B., & Franklin, S. (Eds.). (2008). Artificial General Intelligence 2008: Proceedings of the First AGI Conference (Vol. 171). Frontiers in Artificial Intelligence and Applications. Amsterdam: IOS Press.


  • Williams, L. V. (Ed.). (2011). Prediction markets: Theory and applications (Vol. 66). Routledge International Studies in Money and Banking. New York: Routledge.


  • Wootters, W. K., & Zurek, W. H. (1982). A single quantum cannot be cloned. Nature, 299(5886), 802–803. doi:10.1038/299802a0.


  • Woudenberg, F. (1991). An evaluation of Delphi. Technological Forecasting and Social Change, 40(2), 131–150. doi:10.1016/0040-1625(91)90002-W.


  • Yampolskiy, R. V. (2012). Leakproofing the singularity: Artificial intelligence confinement problem. Journal of Consciousness Studies 19(1–2), 194–214. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00014.


  • Yates, J. F., Lee, J.-W., Sieck, W. R., Choi, I., & Price, P. C. (2002). Probability judgment across cultures. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 271–291). New York: Cambridge University Press.


  • Yudkowsky, E. (2001). Creating Friendly AI 1.0: The analysis and design of benevolent goal architectures. The Singularity Institute, San Francisco, CA, June 15. http://singinst.org/upload/CFAI.html.

  • Yudkowsky, E. (2008a). Artificial intelligence as a positive and negative factor in global risk. In Bostrom & Ćirković 2008, 308–345.


  • Yudkowsky, E. (2008b). Efficient cross-domain optimization. LessWrong. Oct. 28. http://lesswrong.com/lw/vb/efficient_crossdomain_optimization/ (accessed Mar. 19, 2012).

  • Yudkowsky, E. (2011). Complex value systems in friendly AI. In Schmidhuber, Thórisson, & Looks 2011, 388–393.



Robin Hanson on Muehlhauser and Salamon’s “Intelligence Explosion: Evidence and Import”

Muehlhauser and Salamon [M&S] talk as if their concerns are particular to an unprecedented new situation: the imminent prospect of “artificial intelligence” (AI). But in fact their concerns depend little on how artificial our descendants will be, nor on how intelligent they will be. Rather, Muehlhauser and Salamon’s concerns follow from the general fact that accelerating rates of change increase intergenerational conflicts. Let me explain.

Here are three very long term historical trends:

  1.

    Our total power and capacity has consistently increased. Long ago this enabled increasing population, and lately it also enables increasing individual income.

  2.

    The rate of change in this capacity increase has also increased. This acceleration has been lumpy, concentrated in big transitions: from primates to humans to farmers to industry.

  3.

    Our values, as expressed in words and deeds, have changed, and changed faster when capacity changed faster. Genes embodied many earlier changes, while culture embodies most today.

Increasing rates of change, together with constant or increasing lifespans, generically imply that individual lifetimes now see more change in capacity and in values. This creates more scope for conflict, wherein older generations dislike the values of younger more-powerful generations with whom their lives overlap.

As rates of change increase, these differences in capacity and values between overlapping generations increase. For example, Muehlhauser and Salamon fear that their lives might overlap with

[descendants] superior to us in manufacturing, harvesting resources, scientific discovery, social charisma, and strategic action, among other capacities. We would not be in a position to negotiate with them, for [we] could not offer anything of value [they] could not produce more effectively themselves. … This brings us to the central feature of [descendant] risk: Unless a [descendant] is specifically programmed to preserve what [we] value, it may destroy those valued structures (including [us]) incidentally.

The quote actually used the words “humans”, “machines” and “AI”, and Muehlhauser and Salamon spend much of their chapter discussing the timing and likelihood of future AI. But those details are mostly irrelevant to the concerns expressed above. It doesn’t matter much if our descendants are machines or biological meat, or if their increased capacities come from intelligence or raw physical power. What matters is that descendants could have more capacity and differing values.

Such intergenerational concerns are ancient, and in response parents have long sought to imprint their values onto their children, with modest success.

Muehlhauser and Salamon find this approach completely unsatisfactory. They even seem wary of descendants who are cell-by-cell emulations of prior human brains, “brain-inspired AIs running on human-derived ‘spaghetti code’”, or “‘opaque’ AI designs … produced by evolutionary algorithms.” Why? Because such descendants “may not have a clear ‘slot’ in which to specify desirable goals.”

Instead Muehlhauser and Salamon prefer descendants that have “a transparent design with a clearly definable utility function,” and they want the world to slow down its progress in making more capable descendants, so that they can first “solve the problem of how to build [descendants] with a stable, desirable utility function.”

If “political totalitarians” are central powers trying to prevent unwanted political change using thorough and detailed control of social institutions, then “value totalitarians” are central powers trying to prevent unwanted value change using thorough and detailed control of everything value-related. And like political totalitarians willing to sacrifice economic growth to maintain political control, value totalitarians want us to sacrifice capacity growth until they can be assured of total value control.

While the basic problem of faster change increasing intergenerational conflict depends little on change being caused by AI, the feasibility of this value totalitarian solution does seem to require AI. In addition, it requires transparent-design AI to be an early and efficient form of AI. Furthermore, either all the teams designing AIs must agree to use good values, or the first successful team must use good values and then stop the progress of all other teams.

Personally, I’m skeptical that this approach is even feasible, and if feasible, I’m wary of the concentration of power required to even attempt it. Yes we teach values to kids, but we are also often revolted by extreme brainwashing scenarios, of kids so committed to certain teachings that they can no longer question them. And we are rightly wary of the global control required to prevent any team from creating descendants who lack officially approved values.

Even so, I must admit that value totalitarianism deserves to be among the range of responses considered to future intergenerational conflicts.


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Muehlhauser, L., Salamon, A. (2012). Intelligence Explosion: Evidence and Import. In: Eden, A., Moor, J., Søraker, J., Steinhart, E. (eds) Singularity Hypotheses. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32560-1_2


  • DOI: https://doi.org/10.1007/978-3-642-32560-1_2


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32559-5

  • Online ISBN: 978-3-642-32560-1
