Problems of Self-reference in Self-improving Space-Time Embedded Intelligence

  • Benja Fallenstein
  • Nate Soares
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8598)

Abstract

By considering agents to be a part of their environment, Orseau and Ring’s space-time embedded intelligence [10] is a better fit to the real world than the traditional agent framework. However, a self-modifying AGI that sees future versions of itself as an ordinary part of the environment may run into problems of self-reference. We show that in one particular model based on formal logic, naive approaches either lead to incorrect reasoning that allows an agent to put off an important task forever (the procrastination paradox), or fail to allow the agent to justify even obviously safe rewrites (the Löbian obstacle). We argue that these problems have relevance beyond our particular formalism, and discuss partial solutions.
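For context (this note is added here and is not part of the authors' abstract), the two obstacles named above can be stated briefly. The Löbian obstacle takes its name from Löb's theorem [7], which says that for any sufficiently strong, recursively axiomatized theory $T$ and any sentence $A$:

    If $T \vdash \mathrm{Prov}_T(\ulcorner A \urcorner) \rightarrow A$, then $T \vdash A$.

Hence an agent reasoning in $T$ cannot adopt the blanket principle "whatever my successor proves in $T$ is true" for all $A$, since that schema would let $T$ prove every sentence; this is, roughly, why a naive formalization cannot justify even obviously safe rewrites. Conversely, the procrastination paradox [13] shows that granting the agent too much self-trust lets it conclude at every step that an important task can safely be deferred to an equally trusted successor, so the task is never performed.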

Keywords

Partial solution · Future version · Naive approach · Safe action · Incompleteness theorem


References

  1. Christiano, P., Yudkowsky, E., Herreshoff, M., Barasz, M.: Definability of truth in probabilistic logic (2013), http://intelligence.org/files/DefinabilityOfTruthInProbabilisticLogic-EarlyDraft.pdf
  2. Fallenstein, B.: An infinitely descending sequence of sound theories each proving the next consistent (2013), https://intelligence.org/files/ConsistencyWaterfall.pdf
  3. Fallenstein, B.: Procrastination in probabilistic logic (2014), https://intelligence.org/files/ProbabilisticLogicProcrastinates.pdf
  4. Goertzel, B.: GOLEM: Toward an AGI meta-architecture enabling both goal preservation and radical self-improvement (2010), http://goertzel.org/GOLEM.pdf
  5. Hutter, M.: Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer, Berlin (2005)
  6. Legg, S., Hutter, M.: A formal measure of machine intelligence. In: Proc. 15th Annual Machine Learning Conference of Belgium and the Netherlands (Benelearn 2006), Ghent, Belgium, pp. 73–80 (2006)
  7. Löb, M.H.: Solution of a problem of Leon Henkin. J. Symb. Log. 20(2), 115–118 (1955)
  8. Muehlhauser, L., Orseau, L.: Laurent Orseau on Artificial General Intelligence (interview) (2013), http://intelligence.org/2013/09/06/laurent-orseau-on-agi/
  9. von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behavior. Princeton University Press, Princeton (1947)
  10. Orseau, L., Ring, M.: Space-time embedded intelligence. In: Bach, J., Goertzel, B., Iklé, M. (eds.) AGI 2012. LNCS, vol. 7716, pp. 209–218. Springer, Heidelberg (2012)
  11. Robinson, H.: Dualism. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy, Winter 2012 edition (2012)
  12. Schmidhuber, J.: Ultimate cognition à la Gödel. Cognitive Computation 1(2), 177–193 (2009)
  13. Yudkowsky, E.: The procrastination paradox (2013), https://intelligence.org/files/ProcrastinationParadox.pdf
  14. Yudkowsky, E., Herreshoff, M.: Tiling agents for self-modifying AI, and the Löbian obstacle (2013)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Benja Fallenstein (1)
  • Nate Soares (1)

  1. Machine Intelligence Research Institute, Berkeley, USA