The New AI: General & Sound & Relevant for Physics

  • Jürgen Schmidhuber
Part of the Cognitive Technologies book series (COGTECH)

Summary

Most traditional artificial intelligence (AI) systems of the past 50 years are either very limited, or based on heuristics, or both. The new millennium, however, has brought substantial progress in the field of theoretically optimal and practically feasible algorithms for prediction, search, inductive inference based on Occam’s razor, problem solving, decision making, and reinforcement learning in environments of a very general type. Since inductive inference is at the heart of all inductive sciences, some of the results are relevant not only for AI and computer science but also for physics, provoking nontraditional predictions based on Zuse’s thesis of the computer-generated universe.
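As a toy illustration of the Occam's-razor principle behind these results (this sketch is not from the chapter itself): algorithmic probability weights each explanation by 2^(-description length), so among all hypotheses consistent with the data, the shortest dominates the prediction. The hypothesis class and its bit-length encoding below are invented for illustration; the real theory sums over all programs of a universal Turing machine.

```python
# Toy Occam's-razor-weighted sequence prediction (hypothetical hypothesis
# class; real algorithmic probability ranges over all programs).

# Each hypothesis: (name, description length in bits, index -> bit).
HYPOTHESES = [
    ("all zeros",    2, lambda i: 0),
    ("all ones",     2, lambda i: 1),
    ("alternating",  3, lambda i: i % 2),
    ("period three", 5, lambda i: 1 if i % 3 == 2 else 0),
]

def predict_next(bits):
    """Weight every hypothesis consistent with the observed prefix by
    2**(-description length) and return the weighted-majority next bit,
    or None if no hypothesis explains the data."""
    weights = {0: 0.0, 1: 0.0}
    for _name, length, f in HYPOTHESES:
        if all(f(i) == b for i, b in enumerate(bits)):
            weights[f(len(bits))] += 2.0 ** (-length)
    if weights[0] == weights[1] == 0.0:
        return None
    return 0 if weights[0] >= weights[1] else 1

print(predict_next([0, 1, 0, 1]))  # only "alternating" fits -> 0
print(predict_next([0, 0, 0]))     # shorter "all zeros" wins -> 0
```

The key design point, mirroring the theory: when several hypotheses fit the observations, the 2^(-length) prior makes the simplest one carry exponentially more weight.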

Keywords

Reinforcement Learning, Turing Machine, Inductive Inference, Optimal Search, Kolmogorov Complexity


References

  1. Beeson M (1985) Foundations of Constructive Mathematics. Springer-Verlag, Berlin, New York, Heidelberg.
  2. Bell JS (1966) On the problem of hidden variables in quantum mechanics. Rev. Mod. Phys., 38:447–452.
  3. Bennett CH, DiVincenzo DP (2000) Quantum information and computation. Nature, 404(6775):256–259.
  4. Bishop CM (1995) Neural Networks for Pattern Recognition. Oxford University Press.
  5. Brouwer LEJ (1907) Over de Grondslagen der Wiskunde. Doctoral thesis, University of Amsterdam.
  6. Cajori F (1919) History of Mathematics. Macmillan, New York, 2nd edition.
  7. Cantor G (1874) Über eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen. Crelle’s Journal für Mathematik, 77:258–263.
  8. Chaitin GJ (1975) A theory of program size formally identical to information theory. Journal of the ACM, 22:329–340.
  9. Chaitin GJ (1987) Algorithmic Information Theory. Cambridge University Press, Cambridge, UK.
  10. Deutsch D (1997) The Fabric of Reality. Allen Lane, New York, NY.
  11. Erber T, Putterman S (1985) Randomness in quantum mechanics — nature’s ultimate cryptogram? Nature, 318(7):41–43.
  12. Everett III H (1957) ‘Relative State’ formulation of quantum mechanics. Reviews of Modern Physics, 29:454–462.
  13. Fredkin EF, Toffoli T (1982) Conservative logic. International Journal of Theoretical Physics, 21(3/4):219–253.
  14. Freyvald RV (1977) Functions and functionals computable in the limit. Transactions of Latvijas Vlasts Univ. Zinatn. Raksti, 210:6–19.
  15. Gács P (1983) On the relation between descriptional complexity and algorithmic probability. Theoretical Computer Science, 22:71–93.
  16. Gödel K (1931) Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38:173–198.
  17. Gold EM (1965) Limiting recursion. Journal of Symbolic Logic, 30(1):28–46.
  18. Green MB, Schwarz JH, Witten E (1987) Superstring Theory. Cambridge University Press, Cambridge, UK.
  19. Hochreiter S, Younger AS, Conwell PR (2001) Learning to learn using gradient descent. In Proc. Intl. Conf. on Artificial Neural Networks (ICANN-2001), Lecture Notes in Computer Science 2130, Springer, Berlin, Heidelberg.
  20. Hutter M (2001) Convergence and error bounds of universal prediction for general alphabet. In Proceedings of the 12th European Conference on Machine Learning (ECML-2001). Technical Report IDSIA-07-01, cs.AI/0103015.
  21. Hutter M (2001) General loss bounds for universal sequence prediction. In Brodley CE, Danyluk AP (eds) Proceedings of the 18th International Conference on Machine Learning (ICML-2001).
  22. Hutter M (2001) Towards a universal theory of artificial intelligence based on algorithmic probability and sequential decisions. In Proceedings of the 12th European Conference on Machine Learning (ECML-2001).
  23. Hutter M (2002) The fastest and shortest algorithm for all well-defined problems. International Journal of Foundations of Computer Science, 13(3):431–443.
  24. Hutter M (2002) Self-optimizing and Pareto-optimal policies in general environments based on Bayes-mixtures. In Proc. 15th Annual Conf. on Computational Learning Theory (COLT 2002), volume 2375 of LNAI, Springer, Berlin.
  25. Hutter M (2005) A gentle introduction to the universal algorithmic agent AIXI. In this volume.
  26. Jordan MI, Rumelhart DE (1990) Supervised learning with a distal teacher. Technical Report Occasional Paper #40, Center for Cognitive Science, MIT.
  27. Kaelbling LP, Littman ML, Moore AW (1996) Reinforcement learning: a survey. Journal of AI Research, 4:237–285.
  28. Kolmogorov AN (1965) Three approaches to the quantitative definition of information. Problems of Information Transmission, 1:1–11.
  29. Levin LA (1973) Universal sequential search problems. Problems of Information Transmission, 9(3):265–266.
  30. Levin LA (1974) Laws of information (nongrowth) and aspects of the foundation of probability theory. Problems of Information Transmission, 10(3):206–210.
  31. Li M, Vitányi PMB (1997) An Introduction to Kolmogorov Complexity and its Applications. Springer, Berlin, 2nd edition.
  32. Löwenheim L (1915) Über Möglichkeiten im Relativkalkül. Mathematische Annalen, 76:447–470.
  33. Merhav N, Feder M (1998) Universal prediction. IEEE Transactions on Information Theory, 44(6):2124–2147.
  34. Mitchell T (1997) Machine Learning. McGraw Hill.
  35. Moore CH, Leach GC (1970) FORTH: a language for interactive computing. http://www.ultratechnology.com.
  36. Newell A, Simon H (1963) GPS, a program that simulates human thought. In Feigenbaum E, Feldman J (eds) Computers and Thought, MIT Press, Cambridge, MA.
  37. Nguyen D, Widrow B (1989) The truck backer-upper: an example of self-learning in neural networks. In Proceedings of the International Joint Conference on Neural Networks.
  38. Penrose R (1989) The Emperor’s New Mind. Oxford University Press, Oxford.
  39. Popper KR (1934) The Logic of Scientific Discovery. Hutchinson, London.
  40. Putnam H (1965) Trial and error predicates and the solution to a problem of Mostowski. Journal of Symbolic Logic, 30(1):49–57.
  41. Rissanen J (1986) Stochastic complexity and modeling. The Annals of Statistics, 14(3):1080–1100.
  42. Rogers H Jr (1967) Theory of Recursive Functions and Effective Computability. McGraw-Hill, New York.
  43. Rosenbloom PS, Laird JE, Newell A (1993) The SOAR Papers. MIT Press.
  44. Rumelhart DE, Hinton GE, Williams RJ (1986) Learning internal representations by error propagation. In Rumelhart DE, McClelland JL (eds) Parallel Distributed Processing, volume 1, MIT Press.
  45. Schmidhuber C (2000) Strings from logic. Technical Report CERN-TH/2000-316, CERN, Theory Division. http://xxx.lanl.gov/abs/hep-th/0011065.
  46. Schmidhuber J (1991) Reinforcement learning in Markovian and non-Markovian environments. In Lippman DS, Moody JE, Touretzky DS (eds) Advances in Neural Information Processing Systems 3, Morgan Kaufmann, Los Altos, CA.
  47. Schmidhuber J (1995) Discovering solutions with low Kolmogorov complexity and high generalization capability. In Prieditis A, Russell S (eds) Machine Learning: Proceedings of the Twelfth International Conference, Morgan Kaufmann, San Francisco, CA.
  48. Schmidhuber J (1997) A computer scientist’s view of life, the universe, and everything. In Freksa C, Jantzen M, Valk R (eds) Foundations of Computer Science: Potential — Theory — Cognition, volume 1337 of LNCS, Springer, Berlin.
  49. Schmidhuber J (1997) Discovering neural nets with low Kolmogorov complexity and high generalization capability. Neural Networks, 10(5):857–873.
  50. Schmidhuber J (2000) Algorithmic theories of everything. Technical Report IDSIA-20-00, quant-ph/0011122, IDSIA. Sections 1–5: see [52]; Section 6: see [54].
  51. Schmidhuber J (2001) Sequential decision making based on direct search. In Sun R, Giles CL (eds) Sequence Learning: Paradigms, Algorithms, and Applications, volume 1828 of LNAI, Springer, Berlin.
  52. Schmidhuber J (2002) Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science, 13(4):587–612.
  53. Schmidhuber J (2004) Optimal ordered problem solver. Machine Learning, 54(3):211–254.
  54. Schmidhuber J (2002) The Speed Prior: a new simplicity measure yielding near-optimal computable predictions. In Kivinen J, Sloan RH (eds) Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), Lecture Notes in Artificial Intelligence, Springer, Berlin.
  55. Schmidhuber J (2003) Bias-optimal incremental problem solving. In Becker S, Thrun S, Obermayer K (eds) Advances in Neural Information Processing Systems 15, MIT Press, Cambridge, MA.
  56. Schmidhuber J (2003) Gödel machines: self-referential universal problem solvers making provably optimal self-improvements. Technical Report IDSIA-19-03, arXiv:cs.LO/0309048 v2, IDSIA.
  57. Schmidhuber J (2003) The new AI: general & sound & relevant for physics. Technical Report IDSIA-04-03, Version 1.0, cs.AI/0302012 v1, IDSIA.
  58. Schmidhuber J (2003) Towards solving the grand problem of AI. In Quaresma P, Dourado A, Costa E, Costa JF (eds) Soft Computing and Complex Systems, Centro Internacional de Matemática, Coimbra, Portugal. Based on [57].
  59. Schmidhuber J, Hutter M (2002) NIPS 2002 Workshop on Universal Learning Algorithms and Optimal Search. Additional speakers: R. Solomonoff, P. M. B. Vitányi, N. Cesa-Bianchi, I. Nemenmann. Whistler, BC, Canada.
  60. Schmidhuber J, Zhao J, Wiering M (1997) Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28:105–130.
  61. Skolem T (1919) Logisch-kombinatorische Untersuchungen über Erfüllbarkeit oder Beweisbarkeit mathematischer Sätze nebst einem Theorem über dichte Mengen. Skrifter utgit av Videnskapsselskapet in Kristiania, I, Mat.-Nat. Kl., N4:1–36.
  62. Solomonoff R (1964) A formal theory of inductive inference. Part I. Information and Control, 7:1–22.
  63. Solomonoff R (1978) Complexity-based induction systems. IEEE Transactions on Information Theory, IT-24(5):422–432.
  64. Solomonoff R (1986) An application of algorithmic probability to problems in artificial intelligence. In Kanal L, Lemmer J (eds) Uncertainty in Artificial Intelligence, Elsevier Science Publishers/North Holland, Amsterdam.
  65. Solomonoff R (1989) A system for incremental learning based on algorithmic probability. In Proceedings of the Sixth Israeli Conference on Artificial Intelligence, Computer Vision and Pattern Recognition.
  66. ’t Hooft G (1999) Quantum gravity as a dissipative deterministic system. Classical and Quantum Gravity, 16:3263–3279.
  67. Turing A (1936) On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 41:230–267.
  68. Ulam S (1950) Random processes and transformations. In Proceedings of the International Congress on Mathematics, volume 2, pages 264–275.
  69. Vapnik V (1995) The Nature of Statistical Learning Theory. Springer, New York.
  70. von Neumann J (1966) Theory of Self-Reproducing Automata. University of Illinois Press, Champaign, IL.
  71. Wallace CS, Boulton DM (1968) An information theoretic measure for classification. Computer Journal, 11(2):185–194.
  72. Werbos PJ (1974) Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD thesis, Harvard University.
  73. Werbos PJ (1987) Learning how the world works: specifications for predictive networks in robots and brains. In Proceedings of IEEE International Conference on Systems, Man and Cybernetics, New York.
  74. Wiering M, Schmidhuber J (1996) Solving POMDPs with Levin search and EIRA. In Saitta L (ed) Machine Learning: Proceedings of the Thirteenth International Conference, Morgan Kaufmann, San Francisco, CA.
  75. Zuse K (1967) Rechnender Raum. Elektronische Datenverarbeitung, 8:336–344.
  76. Zuse K (1969) Rechnender Raum. Friedrich Vieweg & Sohn, Braunschweig. English translation: Calculating Space, MIT Technical Translation AZT-70-164-GEMIT, MIT (Proj. MAC), Cambridge, MA.
  77. Zvonkin AK, Levin LA (1970) The complexity of finite objects and the algorithmic concepts of information and randomness. Russian Math. Surveys, 25(6):83–124.

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Jürgen Schmidhuber
  1. IDSIA, Manno (Lugano), Switzerland
  2. TU Munich, Garching, München, Germany