Asymptotic Learnability of Reinforcement Problems with Arbitrary Dependence

  • Daniil Ryabko
  • Marcus Hutter
Conference paper

DOI: 10.1007/11894841_27

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4264)
Cite this paper as:
Ryabko D., Hutter M. (2006) Asymptotic Learnability of Reinforcement Problems with Arbitrary Dependence. In: Balcázar J.L., Long P.M., Stephan F. (eds) Algorithmic Learning Theory. ALT 2006. Lecture Notes in Computer Science, vol 4264. Springer, Berlin, Heidelberg

Abstract

We address the problem of reinforcement learning in which observations may exhibit an arbitrary form of stochastic dependence on past observations and actions, i.e. environments more general than (PO)MDPs. The task for an agent is to attain the best possible asymptotic reward when the true generating environment is unknown but belongs to a known countable family of environments. We find sufficient conditions on the class of environments under which there exists an agent that attains the best asymptotic reward for any environment in the class. We analyze how tight these conditions are and how they relate to different probabilistic assumptions known in reinforcement learning and related fields, such as Markov Decision Processes and mixing conditions.
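The setting described above, acting in an unknown environment that belongs to a known countable class, can be illustrated with a minimal toy sketch (this is not the paper's construction, and the environment class, agent, and parameters below are invented for illustration): the agent keeps a Bayesian posterior over a small class of two-armed bandit environments and acts greedily with respect to the posterior mixture of expected rewards.

```python
import random

# Hypothetical countable (here: finite) environment class.
# Each environment maps action -> P(reward = 1).
ENV_CLASS = [
    {0: 0.9, 1: 0.1},
    {0: 0.1, 1: 0.9},
    {0: 0.5, 1: 0.5},
]

def mixture_agent(true_env, steps=2000, seed=0):
    """Act greedily w.r.t. a Bayesian mixture over ENV_CLASS.

    Returns the average reward over `steps` interactions with
    the (unknown to the agent) environment ENV_CLASS[true_env].
    """
    rng = random.Random(seed)
    weights = [1.0 / len(ENV_CLASS)] * len(ENV_CLASS)  # uniform prior
    total = 0.0
    for _ in range(steps):
        # Expected reward of each action under the current posterior mixture.
        def mix_value(a):
            return sum(w * env[a] for w, env in zip(weights, ENV_CLASS))
        action = max((0, 1), key=mix_value)
        reward = 1.0 if rng.random() < ENV_CLASS[true_env][action] else 0.0
        total += reward
        # Bayesian update: likelihood of the observed reward in each environment.
        likes = [env[action] if reward else 1.0 - env[action] for env in ENV_CLASS]
        norm = sum(w * l for w, l in zip(weights, likes))
        weights = [w * l / norm for w, l in zip(weights, likes)]
    return total / steps
```

In this toy class every action is informative, so greedy play already identifies the true environment and the average reward approaches the optimum (0.9); in general, achieving the best asymptotic reward requires conditions on the class of the kind the paper studies.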


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Daniil Ryabko (1)
  • Marcus Hutter (1)
  1. IDSIA, Manno-Lugano, Switzerland
