A computer scientist's view of life, the universe, and everything

  • Jürgen Schmidhuber
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1337)

Abstract

Is the universe computable? If so, it may be much cheaper in terms of information requirements to compute all computable universes instead of just ours. I apply basic concepts of Kolmogorov complexity theory to the set of possible universes, and chat about perceived and true randomness, life, generalization, and learning in a given universe.
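The "cheaper to compute all universes" claim rests on dovetailing: a single short scheduler can interleave the execution of every candidate program, so the information needed to run them all is roughly the length of the scheduler, not the sum of the programs' lengths. Below is a minimal sketch of such a dovetailer, assuming toy stand-in programs (simple state generators) rather than a real universal Turing machine; the names `dovetail` and `make_program` are illustrative, not from the chapter.

```python
def dovetail(make_program, phases):
    """Dovetailing scheduler: in phase i, start program i, then
    advance every program started so far by one more step.
    make_program(i) must return a generator yielding successive
    states of the i-th candidate "universe"."""
    running = []    # generators started so far
    histories = []  # partial state history of each program
    for phase in range(phases):
        running.append(make_program(phase))
        histories.append([])
        for i, prog in enumerate(running):
            histories[i].append(next(prog))
    return histories

# Toy stand-in: "universe" i repeatedly applies x -> (x + i) mod 7.
def make_program(i):
    def gen():
        x = 0
        while True:
            x = (x + i) % 7
            yield x
    return gen()

hist = dovetail(make_program, 4)
# After 4 phases, program 0 has run 4 steps, program 3 only 1.
```

Note that the scheduler itself stays the same size no matter how many programs it multiplexes, which is the sense in which computing all computable universes can be cheaper, in program length, than singling out one.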

Keywords

Short Program · Input Tape · Universal Turing Machine · Great Programmer · Code Theorem
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. G.J. Chaitin. Algorithmic Information Theory. Cambridge University Press, Cambridge, 1987.
  2. A.N. Kolmogorov. Three approaches to the quantitative definition of information. Problems of Information Transmission, 1:1–11, 1965.
  3. L.A. Levin. Laws of information (nongrowth) and aspects of the foundation of probability theory. Problems of Information Transmission, 10(3):206–210, 1974.
  4. L.A. Levin. Randomness conservation inequalities: Information and independence in mathematical theories. Information and Control, 61:15–37, 1984.
  5. J. Schmidhuber. Discovering neural nets with low Kolmogorov complexity and high generalization capability. Neural Networks, 1997. In press.
  6. J. Schmidhuber, J. Zhao, and N. Schraudolph. Reinforcement learning with self-modifying policies. In S. Thrun and L. Pratt, editors, Learning to Learn. Kluwer, 1997. To appear.
  7. J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 26, 1997. In press.
  8. C.E. Shannon. A mathematical theory of communication (parts I and II). Bell System Technical Journal, XXVII:379–423, 1948.
  9. R.J. Solomonoff. A formal theory of inductive inference. Part I. Information and Control, 7:1–22, 1964.
  10. D.H. Wolpert. The lack of a priori distinctions between learning algorithms. Neural Computation, 8(7):1341–1390, 1996.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Jürgen Schmidhuber — IDSIA, Lugano, Switzerland