Philosophy and Theory of Artificial Intelligence
Volume 5 of the series Studies in Applied Philosophy, Epistemology and Rational Ethics pp 335-347
Risks and Mitigation Strategies for Oracle AI
- Stuart Armstrong, affiliated with the Future of Humanity Institute, University of Oxford
Abstract
There is no strong reason to believe that human-level intelligence represents an upper limit on the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed goals or motivation systems. Oracle AIs (OAIs), confined AIs that can only answer questions, are one particular approach to this problem. However, even Oracles are not particularly safe: humans are still vulnerable to traps, social engineering, or simply becoming dependent on the OAI. But OAIs are still strictly safer than general AIs, and there are many extra layers of precaution we can add on top of them. This paper looks at some of these and analyses their strengths and weaknesses.
Keywords
Artificial Intelligence · Superintelligence · Security · Risks · Motivational control · Capability control
- Title
- Risks and Mitigation Strategies for Oracle AI
- Book Title
- Philosophy and Theory of Artificial Intelligence
- Pages
- pp 335-347
- Copyright
- 2013
- DOI
- 10.1007/978-3-642-31674-6_25
- Print ISBN
- 978-3-642-31673-9
- Online ISBN
- 978-3-642-31674-6
- Series Title
- Studies in Applied Philosophy, Epistemology and Rational Ethics
- Series Volume
- 5
- Series ISSN
- 2192-6255
- Publisher
- Springer Berlin Heidelberg
- Copyright Holder
- Springer-Verlag GmbH Berlin Heidelberg
- Editors
- Vincent C. Müller (ID1)
- Editor Affiliations
- ID1. Anatolia College/ACT and University of Oxford
- Authors
- Stuart Armstrong (1)
- Author Affiliations
- 1. Future of Humanity Institute, University of Oxford, Oxford, UK