Online Regret Bounds for Markov Decision Processes with Deterministic Transitions

  • Ronald Ortner
Conference paper

DOI: 10.1007/978-3-540-87987-9_14

Part of the Lecture Notes in Computer Science book series (LNCS, volume 5254)
Cite this paper as:
Ortner R. (2008) Online Regret Bounds for Markov Decision Processes with Deterministic Transitions. In: Freund Y., Györfi L., Turán G., Zeugmann T. (eds) Algorithmic Learning Theory. ALT 2008. Lecture Notes in Computer Science, vol 5254. Springer, Berlin, Heidelberg

Abstract

We consider an upper confidence bound algorithm for Markov decision processes (MDPs) with deterministic transitions. For this algorithm we derive upper bounds on the online regret (with respect to an (ε-)optimal policy) that are logarithmic in the number of steps taken. These bounds also match known asymptotic bounds for the general MDP setting. We also present corresponding lower bounds. As an application, we consider multi-armed bandits with switching cost.
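
The abstract does not spell out the algorithm, so the following is only a minimal sketch of a UCB-style optimistic learner under assumptions consistent with the setting: rewards lie in [0, 1], transitions are deterministic and known, and the optimistically optimal stationary policy (the one whose induced cycle has the largest mean optimistic reward) is found by brute force over a toy state space. All identifiers (`optimistic_reward`, `best_optimistic_policy`, the toy MDP) are illustrative and not taken from the paper.

```python
"""Illustrative sketch (not the paper's algorithm): an optimistic learner
for an MDP with known deterministic transitions and stochastic rewards."""
import itertools
import math
import random

def optimistic_reward(r_sum, visits, t):
    """Empirical mean reward plus a Hoeffding-style confidence bonus."""
    if visits == 0:
        return 1.0                      # unvisited pairs are maximally optimistic
    bonus = math.sqrt(2.0 * math.log(max(t, 2)) / visits)
    return min(1.0, r_sum / visits + bonus)

def cycle_mean(policy, nxt, rew, start):
    """Mean (optimistic) reward on the cycle the policy enters from `start`."""
    seen, path, s = {}, [], start
    while s not in seen:
        seen[s] = len(path)
        path.append(s)
        s = nxt[(s, policy[s])]
    cycle = path[seen[s]:]              # the repeating part of the trajectory
    return sum(rew[(q, policy[q])] for q in cycle) / len(cycle)

def best_optimistic_policy(states, actions, nxt, rew, start):
    """Brute-force search over deterministic stationary policies (tiny MDPs only)."""
    best, best_val = None, -1.0
    for choice in itertools.product(actions, repeat=len(states)):
        policy = dict(zip(states, choice))
        val = cycle_mean(policy, nxt, rew, start)
        if val > best_val:
            best, best_val = policy, val
    return best

# Toy example: 2 states, 2 actions, deterministic transitions, Bernoulli rewards.
states, actions = [0, 1], ['a', 'b']
nxt = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
true_mean = {(0, 'a'): 0.3, (0, 'b'): 0.5, (1, 'a'): 0.4, (1, 'b'): 0.8}

r_sum = {sa: 0.0 for sa in nxt}
visits = {sa: 0 for sa in nxt}
s, T = 0, 2000
for t in range(1, T + 1):
    rew = {sa: optimistic_reward(r_sum[sa], visits[sa], t) for sa in nxt}
    policy = best_optimistic_policy(states, actions, nxt, rew, s)
    a = policy[s]
    r = 1.0 if random.random() < true_mean[(s, a)] else 0.0
    r_sum[(s, a)] += r
    visits[(s, a)] += 1
    s = nxt[(s, a)]
print("visit counts:", visits)          # most visits should concentrate on (1, 'b')
```

An algorithm in the spirit of the paper would likely recompute its policy only occasionally (in episodes) and replace the brute-force search by an efficient maximum-mean-cycle computation; the sketch recomputes the policy at every step purely for brevity.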

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Ronald Ortner
  1. University of Leoben, Leoben, Austria
