PAC Bounds for Discounted MDPs

  • Tor Lattimore
  • Marcus Hutter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7568)

Abstract

We study upper and lower bounds on the sample-complexity of learning near-optimal behaviour in finite-state discounted Markov Decision Processes (MDPs). We prove a new bound for a modified version of Upper Confidence Reinforcement Learning (UCRL) with only cubic dependence on the horizon. The bound is unimprovable in all parameters except the size of the state/action space, where it depends linearly on the number of non-zero transition probabilities. The lower bound strengthens previous work by being both more general (it applies to all policies) and tighter. The upper and lower bounds match up to logarithmic factors provided the transition matrix is not too dense.
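Read literally, the abstract suggests a sample-complexity bound of roughly the following shape, sketched here in LaTeX. The cubic dependence on the effective horizon 1/(1-γ) and the linear dependence on the number T of non-zero transition probabilities come from the abstract; the ε and δ dependence is assumed from standard PAC-MDP conventions and is not quoted from the paper's theorem.

% Hedged sketch, not the paper's exact statement:
% T = number of non-zero transition probabilities,
% 1/(1-\gamma) = effective horizon,
% \varepsilon, \delta = PAC accuracy and confidence parameters (assumed).
\[
  N \;=\; \tilde{O}\!\left(
      \frac{T}{\varepsilon^{2}\,(1-\gamma)^{3}} \,\log\frac{1}{\delta}
  \right),
\]
% where N bounds the number of time steps at which the learned policy is
% more than \varepsilon sub-optimal, with probability at least 1 - \delta.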

Keywords

Reinforcement learning · Sample-complexity · Exploration/exploitation · PAC-MDP · Markov decision processes


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Tor Lattimore, Australian National University, Australia
  • Marcus Hutter, Australian National University, Australia
