Why an Intelligence Explosion is Probable

  • Chapter
Singularity Hypotheses

Part of the book series: The Frontiers Collection (FRONTCOLL)

Abstract

This chapter considers the hypothesis that, once an AI system with roughly human-level general intelligence is created, an “intelligence explosion” involving the relatively rapid creation of progressively more generally intelligent AI systems will very likely ensue, resulting in the rapid emergence of dramatically superhuman intelligences. Various arguments against this hypothesis are considered and found wanting.


Notes

  1. http://www.fhi.ox.ac.uk/archived_events/winter_conference

References


Author information

Corresponding author

Correspondence to Richard Loosemore.


Peter Bishop on Loosemore and Goertzel’s “Why an Intelligence Explosion is Probable”

The authors were kind enough to put the main idea in the title and very early in the piece: “…as Good predicted (1965), [given the appearance of machine intelligence], an intelligence explosion is indeed a very likely outcome” (Page 2). After a few preliminary remarks, the authors structure the article around a series of potential limitations to the intelligence explosion suggested by Sandberg in a post to the Extropy blog.

I am a professional futurist, not a computer scientist, much less an expert in artificial intelligence, so I will leave the technical details of the argument to others. The futurist’s approach to such an argument is to examine the assumptions required for the proposed future to come about and to assess whether alternatives to those assumptions are plausible. Each plausible alternative assumption supports an alternative future (a scenario). Fortunately, it is not necessary to decide whether the original assumption or its alternative is “correct.” Rather, all plausible alternative futures together constitute the range of scenarios that describe the future.

The authors state their first assumption right away: “…there is one absolute prerequisite for an explosion to occur: an artificial general intelligence (AGI) must become smart enough to understand its own design.” I am afraid that here the authors get into trouble. The premise of the article is that humans have created the AGI. Yet humans do not understand their own design today, and they may not understand it even after creating an AGI. The authors seem to assume that the AGI is intelligent in the same way that humans are, since we would first have to understand our own design before building it into the AGI. But it is conceivable that AGI intelligence uses a different design. Thus the AGI does not have to understand itself any more than humans have to understand themselves in order to create the AGI in the first place.

The alternative assumption is supported later, when the authors consider that vastly faster clock speeds might lead to an intelligent design (Page 9). In that case, even humans might not understand how the AGI is intelligent, much less would the AGI understand that itself. One such design might be a massively parallel neural network. Neural networks are powerful learning machines, yet they do not have programs the way algorithmic computers do. Therefore, it is effectively impossible to understand why they make the judgments that they do. As a result, we humans may never fully understand the basis of our own intelligence, since it is not algorithmic, nor would we understand the basis of an AGI’s intelligence if it were a neural network. The premise of this future, then, is that humans are smart enough to create an AGI, but it is only an assumption that the AGI would understand the basis of its own intelligence.

A second assumption, particularly in the first part of the article, is that the “seed AGI,” the “AGI with the potential to launch an intelligence explosion,” is a single machine, perhaps a massively complex supercomputer. But having all that intelligence reside in one machine is not necessary. Just as it is highly unlikely that a single human would create the AGI, so a single AGI might not produce the explosion that the authors foresee. It is more likely that teams of humans would create the AGI, with members of the team contributing their individual expertise in hardware, software, and so on. Similarly, the AGIs would work in teams. The authors do address that assumption later, when they discuss whether the bandwidth among machines would limit the rate of development. So whether the AGI that touches off the explosion is a single machine or a set of communicating machines is another important assumption.

These assumptions aside, the question in this article is the rate at which machine intelligence will develop once one or more AGIs are created. The first assumption is that machine intelligence will develop at all after that event, but it is hard to support the alternative, namely that the AGI is the last intelligent device ever invented. The premise is that one intelligent species (human) has already created another intelligent device (AGI), so it is highly likely that further intelligent species or devices will emerge. The issue is how fast that will occur. Will it be an explosion, as the authors claim, or a rather slow evolutionary development?

First of all, the authors are reluctant to define exactly what an explosive rate would be. Even if the explosion were to occur in mid-century, as many suggest, or even in the next few millennia, we have only one (presumed) case of one intelligent entity creating another, and that took some 50,000 years. That is not particularly explosive. Kurzweil (2001) also predicts an explosive rate, because intelligent machines will not carry the burden of biologically evolved intelligence, including emotions, culture and tradition. Still, to go from 50,000 years to an explosion resulting in 100H (100 times human intelligence) in a short time (whose length is itself undefined) seems quite a stretch. Development? Probably. Explosive development? Who knows? In the end, an argument about that rate might even be futile.

It is rather like asking, “Given the existence of angels, how many can stand on the head of a pin?” Nevertheless, it is a great exercise in intellectual calisthenics, because it forces us to discover just how many assumptions we make about the future.
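To make the dispute over rates concrete, here is a purely illustrative sketch. The per-cycle improvement factors below are assumptions chosen for illustration, not figures taken from Loosemore and Goertzel’s chapter; the point is only that “explosive” versus “slow” turns on two unknowns: how much each generation of AGI improves on its predecessor, and how long each generation takes.

    # Purely illustrative: the improvement factors below are assumptions,
    # not figures from Loosemore and Goertzel's chapter.
    import math

    TARGET = 100.0  # "100H": one hundred times human-level intelligence

    for factor in (1.1, 1.5, 2.0):  # assumed gain per self-improvement cycle
        cycles = math.log(TARGET) / math.log(factor)
        print(f"gain x{factor} per cycle: about {cycles:.1f} cycles to reach 100H")

Under these assumptions the number of cycles is modest (roughly 48, 11, and 7, respectively), so the disagreement really comes down to how long a single cycle would take, the very quantity that, as noted above, remains undefined.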


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Loosemore, R., Goertzel, B. (2012). Why an Intelligence Explosion is Probable. In: Eden, A., Moor, J., Søraker, J., Steinhart, E. (eds) Singularity Hypotheses. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32560-1_5


  • DOI: https://doi.org/10.1007/978-3-642-32560-1_5


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32559-5

  • Online ISBN: 978-3-642-32560-1

  • eBook Packages: Engineering, Engineering (R0)
