
Annals of Operations Research, Volume 199, Issue 1, pp 193–214

Illustrated review of convergence conditions of the value iteration algorithm and the rolling horizon procedure for average-cost MDPs

  • Eugenio Della Vecchia
  • Silvia Di Marco
  • Alain Jean-Marie
Article

Abstract

This paper is concerned with the links between the Value Iteration algorithm and the Rolling Horizon procedure for solving problems of stochastic optimal control under the long-run average criterion, in Markov Decision Processes with finite state and action spaces. We review conditions from the literature which imply the geometric convergence of Value Iteration to the optimal value. Aperiodicity is an essential prerequisite for convergence. We prove that the convergence of Value Iteration generally implies that of Rolling Horizon. We also present a modified Rolling Horizon procedure that can be applied to models without analyzing periodicity, and discuss the impact of this transformation on convergence. We illustrate the different results with numerous examples. Finally, we discuss rules for stopping Value Iteration or choosing the length of a Rolling Horizon. We provide an example which demonstrates the difficulty of the question, disproving in particular a conjectured rule proposed by Puterman.
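For readers unfamiliar with the algorithm under review, the following is a minimal NumPy sketch of undiscounted Value Iteration for an average-cost (here, average-reward) MDP with finite state and action spaces, using the span-seminorm stopping rule that is standard in this setting (see, e.g., Puterman 1994, Ch. 8). The interface (transition tensor `P`, reward matrix `r`) and all names are illustrative assumptions, not the paper's notation; convergence of this scheme requires the aperiodicity conditions the paper reviews.

```python
import numpy as np

def span(x):
    """Span seminorm sp(x) = max(x) - min(x)."""
    return x.max() - x.min()

def value_iteration_avg(P, r, eps=1e-8, max_iter=10_000):
    """Undiscounted value iteration for an average-reward MDP (sketch).

    P : array of shape (A, S, S), P[a, s, t] = transition probability.
    r : array of shape (S, A), r[s, a]      = one-step reward.

    Iterates v_{n+1}(s) = max_a [ r(s, a) + sum_t P(t | s, a) v_n(t) ]
    and stops when sp(v_{n+1} - v_n) < eps; under aperiodicity this
    brackets the optimal gain within eps.
    """
    A, S, _ = P.shape
    v = np.zeros(S)
    for _ in range(max_iter):
        Q = r + np.einsum('ast,t->sa', P, v)   # Q[s, a]
        v_new = Q.max(axis=1)
        d = v_new - v
        if span(d) < eps:
            gain = 0.5 * (d.max() + d.min())   # midpoint of the gain bounds
            return gain, Q.argmax(axis=1)      # estimate and greedy policy
        v = v_new
    raise RuntimeError("no convergence (the model may be periodic)")
```

On a toy two-state model with self-loops (hence aperiodic), the iteration stabilizes in a few steps; on a periodic model, the differences `v_{n+1} - v_n` oscillate and the span never shrinks, which is exactly the phenomenon the paper's modified Rolling Horizon procedure sidesteps.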

Keywords

Markov decision problems · Value iteration · Heuristic methods · Rolling horizon

References

  1. Alden, J. M., & Smith, R. L. (1992). Rolling horizon procedures in nonhomogeneous Markov decision processes. Operations Research, 40(2), S183–S194.
  2. Bertsekas, D. P. (1987). Dynamic programming: deterministic and stochastic models. Englewood Cliffs: Prentice Hall.
  3. Derman, C. (1970). Finite state Markovian decision processes. New York: Academic Press.
  4. Çinlar, E. (1975). Introduction to stochastic processes. New York: Prentice Hall.
  5. Federgruen, A., Schweitzer, P., & Tijms, H. C. (1978). Contraction mappings underlying undiscounted Markov decision problems. Journal of Mathematical Analysis and Applications, 65, 711–730.
  6. Guo, X., & Shi, P. (2001). Limiting average criteria for nonstationary Markov decision processes. SIAM Journal on Optimization, 11(4), 1037–1053.
  7. Hernández-Lerma, O., & Lasserre, J. B. (1990). Error bounds for rolling horizon policies in discrete-time Markov control processes. IEEE Transactions on Automatic Control, 35(10), 1118–1124.
  8. Kallenberg, L. (2002). Finite state and action MDPs. In E. Feinberg & A. Shwartz (Eds.), Handbook of Markov decision processes: methods and applications. Kluwer's international series.
  9. Kallenberg, L. (2009). Markov decision processes. Lecture notes, University of Leiden. www.math.leidenuniv.nl/~kallenberg/Lecture-notes-MDP.pdf.
  10. Lanery, E. (1967). Étude asymptotique des systèmes Markoviens à commande. Revue Française d'Informatique et de Recherche Opérationnelle, 1, 3–56.
  11. Meyn, S. P., & Tweedie, R. L. (2009). Markov chains and stochastic stability (2nd ed.). Cambridge: Cambridge University Press.
  12. Puterman, M. L. (1994). Markov decision processes. New York: Wiley.
  13. Ross, S. M. (1970). Applied probability models with optimization applications. Oakland: Holden-Day.
  14. Schweitzer, P. J. (1971). Iterative solution of the functional equation of undiscounted Markov renewal programming. Journal of Mathematical Analysis and Applications, 34, 495–501.
  15. Schweitzer, P. J., & Federgruen, A. (1977). The asymptotic behavior of undiscounted value iteration in Markov decision problems. Mathematics of Operations Research, 2(4), 360–381.
  16. Schweitzer, P. J., & Federgruen, A. (1979). Geometric convergence of the value iteration in multichain Markov decision problems. Advances in Applied Probability, 11, 188–217.
  17. Tijms, H. C. (1986). Stochastic modelling and analysis: a computational approach. New York: Wiley.
  18. White, D. J. (1993). Markov decision processes. New York: Wiley.

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Eugenio Della Vecchia (1)
  • Silvia Di Marco (1)
  • Alain Jean-Marie (2)
  1. CONICET-UNR, Rosario, Argentina
  2. INRIA-LIRMM, Montpellier, France
