
Learning Agents with Evolving Hypothesis Classes

  • Peter Sunehag
  • Marcus Hutter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7999)

Abstract

It has recently been shown that a Bayesian agent with a universal hypothesis class resolves most induction problems discussed in the philosophy of science. These ideal agents are, however, neither practical nor a good model of how real science works. Here we introduce a framework for learning based on implicit beliefs over all possible hypotheses together with a limited set of explicit theories sampled from that implicit distribution, which is represented only by the process that generates new hypotheses. We address both how to act based on a limited set of theories and what an ideal sampling process should look like. Finally, we discuss topics in the philosophy of science and cognitive science from the perspective of this framework.
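To make the setup concrete, the following minimal Python sketch shows one way such an agent could be organised: explicit hypotheses are drawn from a generator that stands in for the implicit distribution, and actions are chosen using only the current finite explicit class (an optimistic decision rule is used here purely for illustration). The names propose_hypothesis, value, and act are hypothetical placeholders and are not taken from the paper.

# Illustrative sketch only, not the authors' algorithm: a limited explicit
# hypothesis class is sampled from an implicit distribution that is
# represented solely by a process generating new hypotheses.
import random

def propose_hypothesis(rng):
    # Stand-in for the implicit distribution: on demand, produce a new
    # explicit hypothesis (here, a coin bias in [0, 1]).
    return rng.random()

def value(hypothesis, action):
    # Value of an action if the hypothesis were true: expected reward of
    # betting on heads vs. tails for a coin with the given bias.
    return hypothesis if action == "heads" else 1.0 - hypothesis

def act(explicit_class, actions):
    # Decide using only the limited explicit class, here with an optimistic
    # rule: choose the action with the largest best-case value.
    return max(actions, key=lambda a: max(value(h, a) for h in explicit_class))

rng = random.Random(0)
explicit_class = [propose_hypothesis(rng) for _ in range(5)]  # limited set of sampled theories
print(act(explicit_class, ["heads", "tails"]))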

Keywords

Feature Vector, Bayesian Inference, Reinforcement Learning, Inductive Logic Programming, Kolmogorov Complexity

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Peter Sunehag (1)
  • Marcus Hutter (1)

  1. Research School of Computer Science, Australian National University, Canberra, Australia
