# Illustrated review of convergence conditions of the value iteration algorithm and the rolling horizon procedure for average-cost MDPs


## Abstract

This paper is concerned with the links between the Value Iteration algorithm and the Rolling Horizon procedure for solving problems of stochastic optimal control under the long-run average criterion, in Markov Decision Processes with finite state and action spaces. We review conditions from the literature that imply the geometric convergence of Value Iteration to the optimal value. Aperiodicity is an essential prerequisite for convergence. We prove that the convergence of Value Iteration generally implies that of Rolling Horizon. We also present a modified Rolling Horizon procedure that can be applied to models without analyzing periodicity, and discuss the impact of this transformation on convergence. We illustrate the different results with numerous examples. Finally, we discuss rules for stopping Value Iteration or choosing the length of a Rolling Horizon. We provide an example which demonstrates the difficulty of the question, disproving in particular a conjectured rule proposed by Puterman.
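The setting above can be sketched concretely. Below is a minimal, illustrative implementation (not the paper's own code) of undiscounted value iteration for a finite average-reward MDP, including the standard aperiodicity transformation \(P_\tau = (1-\tau)I + \tau P\) that underlies such modified procedures, and a span-seminorm stopping rule. The function name `value_iteration_avg` and the parameter `tau` are choices made here for illustration.

```python
import numpy as np

def value_iteration_avg(P, r, tau=0.5, tol=1e-9, max_iter=100_000):
    """Illustrative average-reward value iteration.

    P: (A, S, S) transition matrices per action, r: (A, S) rewards,
    tau in (0, 1]: aperiodicity-transformation parameter.
    """
    A, S, _ = P.shape
    # Aperiodicity transformation: P_tau = (1 - tau) I + tau P.
    # It preserves stationary distributions (hence the gain) and the
    # optimal policies, but makes every chain aperiodic, which is the
    # prerequisite for convergence of value iteration.
    Pt = (1.0 - tau) * np.eye(S) + tau * P
    v = np.zeros(S)
    for _ in range(max_iter):
        Q = r + Pt @ v                 # Q[a, s] = r(s, a) + E[v(next state)]
        v_new = Q.max(axis=0)
        d = v_new - v
        if d.max() - d.min() < tol:    # span-seminorm stopping rule
            break
        v = v_new
    gain = 0.5 * (d.max() + d.min())   # estimate of the optimal average reward
    return gain, Q.argmax(axis=0), v_new

# A deliberately periodic two-state example: both actions flip the state
# deterministically, so untransformed value iteration would oscillate;
# the transformation removes the periodicity.
P = np.array([[[0., 1.], [1., 0.]],
              [[0., 1.], [1., 0.]]])
r = np.array([[1., 1.],
              [2., 0.]])
gain, policy, _ = value_iteration_avg(P, r)
# gain -> 1.5, policy -> [1, 0]
```

Since every policy visits both states equally often here, the optimal gain is the average of the best per-state rewards, (2 + 1) / 2 = 1.5, which the sketch recovers despite the periodic dynamics.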

## Keywords

Markov decision problems · Value iteration · Heuristic methods · Rolling horizon

## References

- Alden, J. M., & Smith, R. L. (1992). Rolling horizon procedures in nonhomogeneous Markov decision processes. *Operations Research*, *40*(2), S183–S194.
- Bertsekas, D. P. (1987). *Dynamic programming: deterministic and stochastic models*. Englewood Cliffs: Prentice Hall.
- Çinlar, E. (1975). *Introduction to stochastic processes*. New York: Prentice Hall.
- Derman, C. (1970). *Finite state Markovian decision processes*. New York: Academic Press.
- Federgruen, A., Schweitzer, P., & Tijms, H. C. (1978). Contraction mappings underlying undiscounted Markov decision problems. *Journal of Mathematical Analysis and Applications*, *65*, 711–730.
- Guo, X., & Shi, P. (2001). Limiting average criteria for nonstationary Markov decision processes. *SIAM Journal on Optimization*, *11*(4), 1037–1053.
- Hernández-Lerma, O., & Lasserre, J. B. (1990). Error bounds for rolling horizon policies in discrete-time Markov control processes. *IEEE Transactions on Automatic Control*, *35*(10), 1118–1124.
- Kallenberg, L. (2002). Finite state and action MDPs. In E. Feinberg & A. Shwartz (Eds.), *Handbook of Markov decision processes: methods and applications*. Kluwer's International Series.
- Kallenberg, L. (2009). *Markov decision processes*. Lecture notes, University of Leiden. www.math.leidenuniv.nl/~kallenberg/Lecture-notes-MDP.pdf.
- Lanery, E. (1967). Étude asymptotique des systèmes Markoviens à commande. *Revue Française d'Informatique et de Recherche Opérationnelle*, *1*, 3–56.
- Meyn, S. P., & Tweedie, R. L. (2009). *Markov chains and stochastic stability* (2nd ed.). Cambridge: Cambridge University Press.
- Puterman, M. L. (1994). *Markov decision processes*. New York: Wiley.
- Ross, S. M. (1970). *Applied probability models with optimization applications*. Oakland: Holden-Day.
- Schweitzer, P. J. (1971). Iterative solution of the functional equation of undiscounted Markov renewal programming. *Journal of Mathematical Analysis and Applications*, *34*, 495–501.
- Schweitzer, P. J., & Federgruen, A. (1977). The asymptotic behavior of undiscounted value iteration in Markov decision problems. *Mathematics of Operations Research*, *2*(4), 360–381.
- Schweitzer, P. J., & Federgruen, A. (1979). Geometric convergence of the value iteration in multichain Markov decision problems. *Advances in Applied Probability*, *11*, 188–217.
- Tijms, H. C. (1986). *Stochastic modelling and analysis: a computational approach*. New York: Wiley.
- White, D. J. (1993). *Markov decision processes*. New York: Wiley.