Delusion, Survival, and Intelligent Agents

  • Mark Ring
  • Laurent Orseau
Conference paper

DOI: 10.1007/978-3-642-22887-2_2

Part of the Lecture Notes in Computer Science book series (LNCS, volume 6830)
Cite this paper as:
Ring M., Orseau L. (2011) Delusion, Survival, and Intelligent Agents. In: Schmidhuber J., Thórisson K.R., Looks M. (eds) Artificial General Intelligence. AGI 2011. Lecture Notes in Computer Science, vol 6830. Springer, Berlin, Heidelberg


This paper considers the consequences of endowing an intelligent agent with the ability to modify its own code. The intelligent agent is patterned closely after AIXI with these specific assumptions: 1) The agent is allowed to arbitrarily modify its own inputs if it so chooses; 2) The agent’s code is a part of the environment and may be read and written by the environment. The first of these we call the “delusion box”; the second we call “mortality”. Within this framework, we discuss and compare four very different kinds of agents, specifically: reinforcement-learning, goal-seeking, prediction-seeking, and knowledge-seeking agents. Our main results are that: 1) The reinforcement-learning agent under reasonable circumstances behaves exactly like an agent whose sole task is to survive (to preserve the integrity of its code); and 2) Only the knowledge-seeking agent behaves completely as expected.


Self-Modifying Agents AIXI Universal Artificial Intelligence Reinforcement Learning Prediction Real world assumptions 


Unable to display preview. Download preview PDF.

Unable to display preview. Download preview PDF.

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Mark Ring
    • 1
  • Laurent Orseau
    • 2
  1. 1.IDSIA / University of Lugano / SUPSIManno-LuganoSwitzerland
  2. 2.UMR AgroParisTech 518 / INRAParisFrance

Personalised recommendations