Asymptotic Learnability of Reinforcement Problems with Arbitrary Dependence

  • Daniil Ryabko
  • Marcus Hutter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4264)


We address the problem of reinforcement learning in which observations may exhibit an arbitrary form of stochastic dependence on past observations and actions, i.e. environments more general than (PO)MDPs. The task of an agent is to attain the best possible asymptotic reward when the true generating environment is unknown but belongs to a known countable family of environments. We identify sufficient conditions on the class of environments under which there exists an agent that attains the best asymptotic reward for any environment in the class. We analyze how tight these conditions are and how they relate to various probabilistic assumptions known in reinforcement learning and related fields, such as Markov Decision Processes and mixing conditions.
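The abstract's setting — an unknown environment drawn from a known countable class — can be illustrated with a deliberately simple sketch. The following toy is not the paper's construction: it uses a hypothetical class of deterministic "one good arm" environments, where hypothesis i asserts that only action i yields reward. An agent that acts optimally for the first hypothesis not yet contradicted by observation, eliminating refuted hypotheses as it goes, attains the best asymptotic average reward in this toy class.

```python
# Toy sketch (assumed setup, not the paper's algorithm): a countable class
# of deterministic environments, each claiming a different "good arm".

def true_env(action, good_arm=3):
    """Hypothetical true environment: reward 1 only for the good arm."""
    return 1.0 if action == good_arm else 0.0

def run(steps=100, n_hypotheses=10, good_arm=3):
    """Follow the first surviving hypothesis; drop it when contradicted."""
    candidates = list(range(n_hypotheses))  # hypothesis i: "arm i is good"
    total = 0.0
    for _ in range(steps):
        action = candidates[0]      # act optimally for the first survivor
        reward = true_env(action, good_arm)
        total += reward
        if reward != 1.0:           # observation contradicts the hypothesis
            candidates.pop(0)       # eliminate it and move on
    return total / steps            # average reward over the run

print(run(100))   # → 0.97: three exploratory failures, then reward 1 forever
```

Only finitely many hypotheses precede the true one, so only finitely many steps are lost to elimination and the average reward tends to the optimum. The paper's general stochastic setting is much harder, since a single observation cannot refute a stochastic hypothesis outright.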


Keywords: Optimal Policy · Reinforcement Learning · Markov Decision Process · Countable Family · Average Reward





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Daniil Ryabko¹
  • Marcus Hutter¹
  1. IDSIA, Manno-Lugano, Switzerland
