Summary
Artificial superintelligence (ASI) is increasingly recognized as a significant future risk. This chapter surveys established risk analysis and risk management methodologies as they can be applied to ASI risk. For ASI risk analysis, an important technique is to model the sequences of steps that could result in ASI catastrophe, using fault trees or event trees; each step can then be studied to build an overall understanding of the total risk. Constructing these models often requires expert judgment on particular parts of the model, and because expert judgments are prone to error and bias, they should be elicited carefully, using established procedures from risk analysis. For ASI risk management, there are two broad approaches: making ASI technology itself safer, and managing the human process of ASI research and development so as to steer it toward safer ASI and away from dangerous ASI. Risk analysis and the related field of decision analysis can help people make better ASI risk management decisions. In particular, the analysis can help identify which options would be the most cost-effective, meaning that they would achieve the largest reduction in ASI risk per amount of money spent.
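To make the summary's two core ideas concrete, the following is a minimal Python sketch of (1) a fault-tree-style model in which an ASI catastrophe requires a sequence of steps whose conditional probabilities multiply into a total risk estimate, and (2) a cost-effectiveness comparison of risk management options by risk reduction per dollar. All step names, probabilities, intervention effects, and costs below are hypothetical placeholders for illustration, not figures from the chapter.

import math

# Fault-tree-style pathway to ASI catastrophe: each entry is a hypothetical
# probability of that step, conditional on all earlier steps occurring.
BASELINE = {
    "ASI is developed": 0.5,
    "ASI gains decisive capability": 0.4,
    "ASI goals are unsafe": 0.3,
    "safety measures fail": 0.8,
}

def pathway_risk(steps):
    """Total pathway probability: the product of the conditional step probabilities."""
    return math.prod(steps.values())

def with_change(steps, step, new_p):
    """Return a copy of the pathway with one step probability changed."""
    updated = dict(steps)
    updated[step] = new_p
    return updated

baseline_risk = pathway_risk(BASELINE)

# Two hypothetical risk management options, mirroring the chapter's two
# approaches: (cost in $M, modified pathway).
OPTIONS = {
    "safer ASI design research": (10.0, with_change(BASELINE, "ASI goals are unsafe", 0.15)),
    "R&D process governance": (5.0, with_change(BASELINE, "safety measures fail", 0.5)),
}

# Cost-effectiveness: absolute risk reduction per million dollars spent.
for name, (cost, steps) in OPTIONS.items():
    reduction = baseline_risk - pathway_risk(steps)
    print(f"{name}: risk reduction {reduction:.3f} ({reduction / cost:.4f} per $M)")

With these placeholder numbers, the governance option reduces less total risk than the design option (0.018 versus 0.024) but achieves more risk reduction per dollar, which is the kind of comparison the chapter's decision analysis is meant to support.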
Acknowledgements
Thanks to the editors for comments on the chapter manuscript, and to Stuart Armstrong, Luke Muehlhauser, Miles Brundage, Roman Yampolskiy, and others for comments on related research. Any opinions, findings, conclusions, or recommendations in this document are those of the authors and do not necessarily reflect the views of the Global Catastrophic Risk Institute or of others.
© 2017 Springer-Verlag GmbH Germany
Cite this chapter
Barrett, A.M., Baum, S.D. (2017). Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process. In: Callaghan, V., Miller, J., Yampolskiy, R., Armstrong, S. (eds) The Technological Singularity. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-54033-6_6
Print ISBN: 978-3-662-54031-2
Online ISBN: 978-3-662-54033-6