Bounded Parameter Markov Decision Processes with Average Reward Criterion
Bounded parameter Markov Decision Processes (BMDPs) address uncertainty in the parameters of a Markov Decision Process (MDP) by specifying intervals, rather than exact values, for the transition probabilities and rewards. Unlike in an MDP, the notion of an optimal policy for a BMDP is not entirely straightforward. We consider two notions of optimality, based on optimistic and pessimistic criteria. These have been analyzed for discounted BMDPs; here we provide results for average reward BMDPs.
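For concreteness, the two criteria can be sketched as follows (standard notation from the discounted BMDP literature, not necessarily the paper's own symbols): writing $\mathcal{M}$ for the set of MDPs whose parameters lie within the given intervals and $V_M^\pi$ for the value of policy $\pi$ in MDP $M$,

$$
\hat V(s) = \sup_{\pi}\,\sup_{M \in \mathcal{M}} V_M^\pi(s) \quad \text{(optimistic)},
\qquad
\check V(s) = \sup_{\pi}\,\inf_{M \in \mathcal{M}} V_M^\pi(s) \quad \text{(pessimistic)}.
$$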
We establish a fundamental relationship between the discounted and the average reward problems, prove the existence of Blackwell optimal policies and, for both notions of optimality, derive algorithms that converge to the optimal value function.
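The paper's average reward algorithms are not reproduced here, but the flavor of interval value iteration can be illustrated on the discounted pessimistic criterion, for which such algorithms are well established. Below is a minimal NumPy sketch (function names and array layouts are our own assumptions, not the paper's): the inner minimization over the transition-probability intervals is solved greedily by pushing the spare probability mass onto the lowest-valued successor states.

```python
import numpy as np

def worst_case_distribution(p_lo, p_hi, values):
    """Pick a transition distribution inside the interval bounds
    [p_lo, p_hi] that minimizes the expected value of `values`.
    Greedy construction: start every successor at its lower bound,
    then assign the remaining probability mass to the lowest-valued
    successors first (assumes sum(p_lo) <= 1 <= sum(p_hi))."""
    p = p_lo.copy()
    slack = 1.0 - p.sum()          # probability mass still to place
    for s in np.argsort(values):   # lowest-valued states first
        add = min(p_hi[s] - p[s], slack)
        p[s] += add
        slack -= add
        if slack <= 1e-12:
            break
    return p

def pessimistic_value_iteration(r_lo, p_lo, p_hi, gamma=0.95,
                                tol=1e-8, max_iter=10_000):
    """Interval value iteration for the pessimistic (robust) discounted
    criterion.  r_lo[s, a] is the reward lower bound; p_lo/p_hi[s, a, t]
    bound the transition probabilities.  Returns an approximate fixed
    point of the pessimistic Bellman operator."""
    n_states, n_actions = r_lo.shape
    V = np.zeros(n_states)
    for _ in range(max_iter):
        Q = np.empty((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                # Nature picks the worst model; the agent then acts.
                p = worst_case_distribution(p_lo[s, a], p_hi[s, a], V)
                Q[s, a] = r_lo[s, a] + gamma * (p @ V)
        V_new = Q.max(axis=1)      # agent still maximizes over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V
```

Replacing `worst_case_distribution` with the symmetric construction (spare mass onto the highest-valued successors, rewards at their upper bounds) yields the optimistic variant.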