Gödel Machines: Fully Self-referential Optimal Universal Self-improvers

  • Jürgen Schmidhuber
Part of the Cognitive Technologies book series (COGTECH)

Summary

We present the first class of mathematically rigorous, general, fully self-referential, self-improving, optimally efficient problem solvers. Inspired by Kurt Gödel’s celebrated self-referential formulas (1931), such a problem solver rewrites any part of its own code as soon as it has found a proof that the rewrite is useful. The problem-dependent utility function, the hardware, and the entire initial code are described by axioms encoded in an initial proof searcher, which is itself part of the initial code. The searcher systematically and efficiently tests computable proof techniques (programs whose outputs are proofs) until it finds a provably useful, computable self-rewrite. We show that such a self-rewrite is globally optimal—no local maxima!—since the code first had to prove that it is not useful to continue the proof search for alternative self-rewrites. Unlike previous non-self-referential methods based on hardwired proof searchers, ours not only boasts an optimal order of complexity but can optimally reduce any slowdowns hidden by the O()-notation, provided the utility of such speed-ups is provable at all.
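The self-rewrite loop described above can be caricatured in a few lines of code. This is only an illustrative sketch, not the actual Gödel machine: a real Gödel machine searches over formal proofs in an axiomatic system describing its own hardware and utility function, whereas here the "proof" of usefulness is replaced by a direct utility comparison, and the function and argument names are all hypothetical.

```python
# Hypothetical toy sketch of the Gödel machine's main loop.
# A real Gödel machine enumerates proof techniques and commits to a
# self-rewrite only once a formal proof of its usefulness is found;
# here that proof search is crudely approximated by a utility check.

def goedel_machine_sketch(initial_code, utility, candidate_rewrites, budget):
    """Try candidate self-rewrites in order, committing to a rewrite
    only when it is verified (stand-in for 'proven') to raise utility."""
    code = initial_code
    for _, rewrite in zip(range(budget), candidate_rewrites):
        new_code = rewrite(code)
        # Stand-in for a proof that the rewrite increases expected utility:
        if utility(new_code) > utility(code):
            code = new_code  # committed, globally binding self-rewrite
    return code

# Example: rewrites act on the solver's "code" (here just a string),
# and utility rewards occurrences of "fast".
rewrites = [lambda c: c + "!", lambda c: c.replace("slow", "fast")]
result = goedel_machine_sketch("slow solver",
                               lambda c: c.count("fast"),
                               rewrites, budget=10)
print(result)  # the second rewrite is the first provably useful one
```

The key property the sketch tries to convey is that a rewrite is adopted only after its usefulness is established; everything else (enumeration order, the budget, the utility stand-in) is a simplification of the formal machinery in the chapter.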


References

  1. Banzhaf W, Nordin P, Keller RE, Francone FD (1998) Genetic Programming — An Introduction. Morgan Kaufmann Publishers, San Francisco, CA.
  2. Bellman R (1961) Adaptive Control Processes. Princeton University Press, Princeton, NJ.
  3. Blum M (1967) A machine-independent theory of the complexity of recursive functions. Journal of the ACM, 14(2):322–336.
  4. Blum M (1971) On effective procedures for speeding up algorithms. Journal of the ACM, 18(2):290–305.
  5. Cantor G (1874) Über eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen. Crelle’s Journal für Mathematik, 77:258–263.
  6. Chaitin GJ (1975) A theory of program size formally identical to information theory. Journal of the ACM, 22:329–340.
  7. Clocksin WF, Mellish CS (1987) Programming in Prolog. Springer, Berlin, 3rd edition.
  8. Cramer NL (1985) A representation for the adaptive generation of simple sequential programs. In Grefenstette JJ (ed) Proceedings of an International Conference on Genetic Algorithms and Their Applications, Carnegie-Mellon University, July 24–26, 1985, Lawrence Erlbaum, Hillsdale, NJ.
  9. Crick F, Koch C (1998) Consciousness and neuroscience. Cerebral Cortex, 8:97–107.
  10. Fitting MC (1996) First-Order Logic and Automated Theorem Proving. Graduate Texts in Computer Science. Springer, Berlin, 2nd edition.
  11. Gödel K (1931) Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38:173–198.
  12. Heisenberg W (1925) Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift für Physik, 33:879–893.
  13. Hochreiter S, Younger AS, Conwell PR (2001) Learning to learn using gradient descent. In Proc. Intl. Conf. on Artificial Neural Networks (ICANN-2001), volume 2130 of LNCS, Springer, Berlin, Heidelberg.
  14. Hofstadter D (1979) Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books, New York.
  15. Holland JH (1985) Properties of the bucket brigade. In Proceedings of an International Conference on Genetic Algorithms. Lawrence Erlbaum, Hillsdale, NJ.
  16. Hutter M (2001) Towards a universal theory of artificial intelligence based on algorithmic probability and sequential decisions. In Proceedings of the 12th European Conference on Machine Learning (ECML-2001).
  17. Hutter M (2002) The fastest and shortest algorithm for all well-defined problems. International Journal of Foundations of Computer Science, 13(3):431–443.
  18. Hutter M (2002) Self-optimizing and Pareto-optimal policies in general environments based on Bayes-mixtures. In Proc. 15th Annual Conf. on Computational Learning Theory (COLT 2002), volume 2375 of LNAI, Springer, Berlin.
  19. Kaelbling LP, Littman ML, Moore AW (1996) Reinforcement learning: a survey. Journal of AI Research, 4:237–285.
  20. Kolmogorov AN (1933) Grundbegriffe der Wahrscheinlichkeitsrechnung. Springer, Berlin.
  21. Kolmogorov AN (1965) Three approaches to the quantitative definition of information. Problems of Information Transmission, 1:1–11.
  22. Lenat D (1983) Theory formation by heuristic search. Machine Learning, 21.
  23. Levin LA (1973) Universal sequential search problems. Problems of Information Transmission, 9(3):265–266.
  24. Levin LA (1974) Laws of information (nongrowth) and aspects of the foundation of probability theory. Problems of Information Transmission, 10(3):206–210.
  25. Levin LA (1984) Randomness conservation inequalities: Information and independence in mathematical theories. Information and Control, 61:15–37.
  26. Li M, Vitányi PMB (1997) An Introduction to Kolmogorov Complexity and its Applications. Springer, Berlin, 2nd edition.
  27. Löwenheim L (1915) Über Möglichkeiten im Relativkalkül. Mathematische Annalen, 76:447–470.
  28. Moore CH, Leach GC (1970) FORTH: a language for interactive computing. http://www.ultratechnology.com.
  29. Penrose R (1994) Shadows of the Mind. Oxford University Press, Oxford.
  30. Popper KR (1999) All Life Is Problem Solving. Routledge, London.
  31. Samuel AL (1959) Some studies in machine learning using the game of checkers. IBM Journal on Research and Development, 3:210–229.
  32. Schmidhuber J (1987) Evolutionary principles in self-referential learning. Diploma thesis, Institut für Informatik, Technische Universität München.
  33. Schmidhuber J (1991) Reinforcement learning in Markovian and non-Markovian environments. In Lippman DS, Moody JE, Touretzky DS (eds) Advances in Neural Information Processing Systems 3, Morgan Kaufmann, Los Altos, CA.
  34. Schmidhuber J (1993) A self-referential weight matrix. In Proceedings of the International Conference on Artificial Neural Networks, Amsterdam, Springer, Berlin.
  35. Schmidhuber J (1994) On learning how to learn learning strategies. Technical Report FKI-198-94, Fakultät für Informatik, Technische Universität München. See [50, 48].
  36. Schmidhuber J (1995) Discovering solutions with low Kolmogorov complexity and high generalization capability. In Prieditis A, Russell S (eds) Machine Learning: Proceedings of the Twelfth International Conference, Morgan Kaufmann, San Francisco, CA.
  37. Schmidhuber J (1997) A computer scientist’s view of life, the universe, and everything. In Freksa C, Jantzen M, Valk R (eds) Foundations of Computer Science: Potential-Theory-Cognition, volume 1337 of LNCS, Springer, Berlin.
  38. Schmidhuber J (1997) Discovering neural nets with low Kolmogorov complexity and high generalization capability. Neural Networks, 10(5):857–873.
  39. Schmidhuber J (2000) Algorithmic theories of everything. Technical Report IDSIA-20-00, quant-ph/0011122, IDSIA. Sections 1–5: see [40]; Section 6: see [41].
  40. Schmidhuber J (2002) Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science, 13(4):587–612.
  41. Schmidhuber J (2002) The Speed Prior: a new simplicity measure yielding near-optimal computable predictions. In Kivinen J, Sloan RH (eds) Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), Lecture Notes in Artificial Intelligence, Springer, Berlin.
  42. Schmidhuber J (2003) Bias-optimal incremental problem solving. In Becker S, Thrun S, Obermayer K (eds) Advances in Neural Information Processing Systems 15, MIT Press, Cambridge, MA.
  43. Schmidhuber J (2003) Gödel machines: self-referential universal problem solvers making provably optimal self-improvements. Technical Report IDSIA-19-03, arXiv:cs.LO/0309048 v2, IDSIA.
  44. Schmidhuber J. The new AI: General & sound & relevant for physics. In this volume.
  45. Schmidhuber J (2004) Optimal ordered problem solver. Machine Learning, 54:211–254.
  46. Schmidhuber J (2005) Gödel machines: towards a technical justification of consciousness. In Kudenko D, Kazakov D, Alonso E (eds) Adaptive Agents and Multi-Agent Systems III, LNCS 3394, Springer, Berlin.
  47. Schmidhuber J (2005) Completely self-referential optimal reinforcement learners. In Duch W et al (eds) Proc. Intl. Conf. on Artificial Neural Networks ICANN’05, LNCS 3697, Springer, Berlin, Heidelberg.
  48. Schmidhuber J, Zhao J, Schraudolph N (1997) Reinforcement learning with self-modifying policies. In Thrun S, Pratt L (eds) Learning to Learn, Kluwer, Norwell, MA.
  49. Schmidhuber J, Zhao J, Wiering M (1996) Simple principles of metalearning. Technical Report IDSIA-69-96, IDSIA. See [50, 48].
  50. Schmidhuber J, Zhao J, Wiering M (1997) Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28:105–130.
  51. Skolem T (1919) Logisch-kombinatorische Untersuchungen über Erfüllbarkeit oder Beweisbarkeit mathematischer Sätze nebst einem Theorem über dichte Mengen. Skrifter utgit av Videnskapsselskapet in Kristiania, I, Mat.-Nat. Kl., N4:1–36.
  52. Solomonoff R (1964) A formal theory of inductive inference. Part I. Information and Control, 7:1–22.
  53. Solomonoff R (1978) Complexity-based induction systems. IEEE Transactions on Information Theory, IT-24(5):422–432.
  54. Solomonoff R (2003) Progress in incremental machine learning — Preliminary Report for NIPS 2002 Workshop on Universal Learners and Optimal Search; revised Sept 2003. Technical Report IDSIA-16-03, IDSIA.
  55. Sutton R, Barto A (1998) Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA.
  56. Turing A (1936) On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 41:230–267.
  57. Wolpert DH, Macready WG (1997) No free lunch theorems for search. IEEE Transactions on Evolutionary Computation, 1.
  58. Zuse K (1969) Rechnender Raum. Friedrich Vieweg & Sohn, Braunschweig. English translation: Calculating Space, MIT Technical Translation AZT-70-164-GEMIT, MIT (Proj. MAC), Cambridge, MA.

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Jürgen Schmidhuber
  1. IDSIA, Manno (Lugano), Switzerland
  2. TU Munich, Garching (München), Germany
