Illustrated review of convergence conditions of the value iteration algorithm and the rolling horizon procedure for average-cost MDPs
This paper studies the links between the Value Iteration algorithm and the Rolling Horizon procedure for solving stochastic optimal control problems under the long-run average criterion, in Markov Decision Processes with finite state and action spaces. We review conditions from the literature that imply geometric convergence of Value Iteration to the optimal value; aperiodicity is an essential prerequisite for convergence. We prove that the convergence of Value Iteration generally implies that of Rolling Horizon. We also present a modified Rolling Horizon procedure that can be applied to models without analyzing their periodicity, and we discuss the impact of this transformation on convergence. We illustrate the different results with numerous examples. Finally, we discuss rules for stopping Value Iteration or for choosing the length of a Rolling Horizon. We provide an example that demonstrates the difficulty of the question, in particular disproving a conjectured rule proposed by Puterman.
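For concreteness, the Value Iteration scheme discussed above can be sketched in its relative (normalized) form for a small average-cost MDP. This is a minimal illustration, not the paper's own code: the toy transition matrix, cost array, and function name are assumptions. Under aperiodicity, the span of successive value differences contracts geometrically, and its midpoint brackets the optimal gain.

```python
# Hypothetical sketch: relative value iteration for an average-cost MDP
# with finite state and action spaces. The toy model below is an
# assumption chosen so that every state has a self-loop (aperiodicity).
import numpy as np

def value_iteration_average_cost(P, c, n_iter=500, tol=1e-10):
    """Iterate v_{n+1}(s) = min_a [ c(s,a) + sum_j P(s,a,j) v_n(j) ].

    P: array (S, A, S) of transition probabilities.
    c: array (S, A) of one-step costs.
    Returns (gain estimate, iterations used). The standard bounds
    min(Tv - v) <= g* <= max(Tv - v) hold at every step, and the span
    of Tv - v shrinks geometrically under aperiodicity conditions.
    """
    S = c.shape[0]
    v = np.zeros(S)
    for n in range(1, n_iter + 1):
        w = (c + P @ v).min(axis=1)   # Bellman update; P @ v has shape (S, A)
        diff = w - v                  # invariant to the normalization below
        span = diff.max() - diff.min()
        v = w - w.min()               # relative VI: keep values bounded
        if span < tol:
            break
    gain = (diff.max() + diff.min()) / 2
    return gain, n

# Toy 2-state, 2-action model (assumed for illustration only).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
c = np.array([[1.0, 3.0],
              [0.0, 2.0]])
gain, iters = value_iteration_average_cost(P, c)
print(f"estimated optimal gain {gain:.6f} after {iters} iterations")
```

For this chain, the optimal stationary policy can be checked by hand (it plays action 0 in both states), and the gain estimate converges to its average cost of 5/6.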
Keywords: Markov decision problems · Value iteration · Heuristic methods · Rolling horizon