Space-Time Embedded Intelligence

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7716)

Abstract

This paper presents the first formal measure of intelligence for agents fully embedded within their environment. Whereas previous measures such as Legg’s universal intelligence measure and Russell’s bounded optimality provide theoretical insights into agents that interact with an external world, ours describes an intelligence that is computed by, can be modified by, and is subject to the time and space constraints of the environment with which it interacts. Our measure merges and goes beyond Legg’s and Russell’s, leading to a new, more realistic definition of artificial intelligence that we call Space-Time Embedded Intelligence.


Keywords: Intelligence measure, AIXI, bounded optimality, real-world assumptions


References


  1. Conway, J.: The game of life. Scientific American 303(6), 43–44 (1970)
  2. Goertzel, B.: Toward a Formal Characterization of Real-World General Intelligence. In: Proceedings of the 3rd Conference on Artificial General Intelligence, AGI 2010, pp. 19–24. Atlantis Press (2010)
  3. Hutter, M.: Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Berlin (2005)
  4. Legg, S.: Machine Super Intelligence. Department of Informatics, University of Lugano (2008)
  5. Orseau, L., Ring, M.: Self-Modification and Mortality in Artificial Agents. In: Schmidhuber, J., Thórisson, K.R., Looks, M. (eds.) AGI 2011. LNCS (LNAI), vol. 6830, pp. 1–10. Springer, Heidelberg (2011)
  6. Orseau, L., Ring, M.: Memory Issues of Intelligent Agents. In: Bach, J., Goertzel, B., Iklé, M. (eds.) AGI 2012. LNCS (LNAI), vol. 7716, pp. 219–231. Springer, Heidelberg (2012)
  7. Ortega, D.A., Braun, P.A.: Information, Utility and Bounded Rationality. In: Schmidhuber, J., Thórisson, K.R., Looks, M. (eds.) AGI 2011. LNCS (LNAI), vol. 6830, pp. 269–274. Springer, Heidelberg (2011)
  8. Ring, M., Orseau, L.: Delusion, Survival, and Intelligent Agents. In: Schmidhuber, J., Thórisson, K.R., Looks, M. (eds.) AGI 2011. LNCS (LNAI), vol. 6830, pp. 11–20. Springer, Heidelberg (2011)
  9. Robinson, H.: Dualism. In: The Stanford Encyclopedia of Philosophy, Winter 2011 edn. (2011)
  10. Russell, S.J., Subramanian, D.: Provably Bounded-Optimal Agents. Journal of Artificial Intelligence Research 2, 575–609 (1995)
  11. Schmidhuber, J.: Ultimate Cognition à la Gödel. Cognitive Computation 1(2), 177–193 (2009)
  12. Solomonoff, R.J.: A Formal Theory of Inductive Inference. Part I. Information and Control 7(1), 1–22 (1964)
  13. Stoljar, D.: Physicalism. In: The Stanford Encyclopedia of Philosophy, Fall 2009 edn. (2009)
  14. Sutton, R., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
  15. Zvonkin, A.K., Levin, L.A.: The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms. Russian Mathematical Surveys 25(6), 83–124 (1970)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. AgroParisTech UMR 518 / INRA, Paris, France
  2. IDSIA / University of Lugano / SUPSI, Manno-Lugano, Switzerland
