Bounded Parameter Markov Decision Processes with Average Reward Criterion

  • Ambuj Tewari
  • Peter L. Bartlett
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4539)


Bounded parameter Markov Decision Processes (BMDPs) address the problem of uncertainty in the parameters of a Markov Decision Process (MDP). Unlike in the case of an MDP, the notion of an optimal policy for a BMDP is not entirely straightforward. We consider two notions of optimality based on optimistic and pessimistic criteria. These have been analyzed for discounted BMDPs. Here we provide results for average reward BMDPs.
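Concretely (using notation not in the original, following the standard BMDP formulation of Givan, Leach, and Dean), write $\mathcal{M}$ for the set of MDPs whose transition probabilities lie within the given interval bounds. The optimistic and pessimistic value functions can then be sketched as

$$
\hat{V}(s) = \max_{\pi}\,\max_{M \in \mathcal{M}} V^{\pi}_{M}(s),
\qquad
\check{V}(s) = \max_{\pi}\,\min_{M \in \mathcal{M}} V^{\pi}_{M}(s),
$$

so the optimistic criterion evaluates each policy against the most favorable MDP consistent with the bounds, while the pessimistic criterion evaluates it against the least favorable one.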

We establish a fundamental relationship between the discounted and the average reward problems, prove the existence of Blackwell optimal policies and, for both notions of optimality, derive algorithms that converge to the optimal value function.
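The algorithms themselves are not shown on this page. As a rough illustration of the optimistic and pessimistic criteria, here is a minimal sketch of interval value iteration for a *discounted* BMDP, following the standard order-based backup from the BMDP literature; the function names, array layout, and toy problem below are my own assumptions, not the paper's. (In the average reward setting studied here, discounted values are related to average reward via the vanishing-discount limit, which is the kind of relationship the abstract alludes to.)

```python
import numpy as np

def extreme_distribution(lo, hi, values, optimistic=True):
    """Pick a transition distribution within the [lo, hi] intervals that
    maximizes (optimistic) or minimizes (pessimistic) the expected value.
    Greedy construction: start every probability at its lower bound, then
    hand the leftover mass to the best-valued (or worst-valued) successor
    states first, capped by the upper bounds."""
    p = lo.copy()
    slack = 1.0 - p.sum()          # mass still to be distributed
    order = np.argsort(values)     # worst-to-best successor states
    if optimistic:
        order = order[::-1]        # best-to-worst instead
    for s in order:
        add = min(hi[s] - p[s], slack)
        p[s] += add
        slack -= add
        if slack <= 1e-12:
            break
    return p

def interval_value_iteration(lo, hi, r, gamma=0.9, optimistic=True, tol=1e-8):
    """Value iteration for a discounted BMDP (a sketch, not the paper's
    average-reward algorithm).
    lo, hi: (n_states, n_actions, n_states) interval bounds on transitions.
    r:      (n_states, n_actions) rewards."""
    n, m, _ = lo.shape
    V = np.zeros(n)
    while True:
        Q = np.empty((n, m))
        for s in range(n):
            for a in range(m):
                p = extreme_distribution(lo[s, a], hi[s, a], V, optimistic)
                Q[s, a] = r[s, a] + gamma * (p @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

On any fixed BMDP the optimistic value dominates the pessimistic one pointwise, since the former resolves the interval uncertainty in the agent's favor at every backup.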





Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  1. Ambuj Tewari, University of California, Berkeley, Division of Computer Science, 544 Soda Hall # 1776, Berkeley, CA 94720-1776, USA
  2. Peter L. Bartlett, University of California, Berkeley, Division of Computer Science and Department of Statistics, 387 Soda Hall # 1776, Berkeley, CA 94720-1776, USA
