Online Regret Bounds for Markov Decision Processes with Deterministic Transitions
Cite this paper as: Ortner R. (2008) Online Regret Bounds for Markov Decision Processes with Deterministic Transitions. In: Freund Y., Györfi L., Turán G., Zeugmann T. (eds) Algorithmic Learning Theory. ALT 2008. Lecture Notes in Computer Science, vol 5254. Springer, Berlin, Heidelberg.
We consider an upper confidence bound algorithm for Markov decision processes (MDPs) with deterministic transitions. For this algorithm we derive upper bounds on the online regret, with respect to an (ε-)optimal policy, that are logarithmic in the number of steps taken; these bounds match known asymptotic bounds for the general MDP setting. We also present corresponding lower bounds. As an application, we consider multi-armed bandits with switching cost.
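The paper's algorithm for deterministic MDPs is not given in this abstract; as background for the upper-confidence-bound idea it builds on, here is a minimal sketch of the classical UCB1 index policy for the stochastic multi-armed bandit setting mentioned above. All names and parameters are illustrative, not taken from the paper.

```python
import math
import random

def ucb1(arm_means, n_steps, seed=0):
    """Run the UCB1 index policy on a Bernoulli bandit (illustrative sketch).

    arm_means: true success probabilities, unknown to the learner.
    Returns per-arm pull counts and total collected reward.
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k   # number of times each arm was pulled
    sums = [0.0] * k   # cumulative reward observed per arm
    total = 0.0
    for t in range(1, n_steps + 1):
        if t <= k:
            arm = t - 1  # pull each arm once to initialise its estimate
        else:
            # UCB1 index: empirical mean plus an exploration bonus that
            # shrinks as an arm is sampled more often
            arm = max(range(k),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return counts, total
```

Index policies of this kind pull suboptimal arms only O(log T) times, which is the bandit analogue of the logarithmic regret bounds stated in the abstract; the switching-cost application additionally penalises changes of the chosen arm.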