
The Threat of a Reward-Driven Adversarial Artificial General Intelligence

  • Itamar Arel
Chapter
Part of the book series The Frontiers Collection (FRONTCOLL)

Abstract

Once introduced, Artificial General Intelligence (AGI) will undoubtedly become humanity’s most transformative technological force. However, the nature of that force remains unclear, and many contemplate scenarios in which this novel form of intelligence comes to regard humans as an inevitable adversary. In this chapter, we argue that if reinforcement learning principles are taken as the foundation for AGI, then an adversarial relationship with humans is in fact inevitable. We further conjecture that deep learning architectures for perception, in concert with reinforcement learning for decision making, offer a plausible path toward future AGI technology, and we raise the primary ethical and societal questions that must be addressed if humanity is to avoid a catastrophic clash with such AGI beings.
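For concreteness, the reward-driven objective alluded to above can be summarized in the standard reinforcement-learning form; the expression below is a generic textbook formulation offered only as an illustration, not notation drawn from the chapter itself:

\[
\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t}\right], \qquad 0 \le \gamma < 1,
\]

where \(r_{t}\) is the scalar reward received at time \(t\), \(\gamma\) is a discount factor, and \(\pi\) is the agent's policy. An agent built on this principle acts solely to maximize its expected cumulative reward, which is the sense in which such an AGI is called "reward-driven."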

Keywords

Intrinsic Motivation · Reinforcement Learning · Deep Learning · Belief State · Analog Circuit

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. Machine Intelligence Lab, Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, USA
