Encyclopedia of Systems and Control

Living Edition
Editors: John Baillieul, Tariq Samad

Stochastic Dynamic Programming

Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_230-1


Abstract

This article is concerned with one of the traditional approaches to stochastic control problems: stochastic dynamic programming. Brief descriptions of stochastic dynamic programming methods and related terminology are provided. Two asset-selling examples are presented to illustrate the basic ideas. A list of topics and references is also provided for further reading.
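The asset-selling problem mentioned above is a classic illustration of the Bellman recursion: at each period the seller observes an i.i.d. offer and either accepts it (stopping) or waits, with rejected offers lost and an unsold asset worth nothing at the horizon. A minimal backward-induction sketch is given below; the discrete offer distribution and horizon are illustrative assumptions, not values from the article.

```python
def asset_selling(offers, horizon):
    """Backward induction for the finite-horizon asset-selling problem.

    Offers are i.i.d. and equally likely over the list `offers`.
    Returns (thresholds, value): at period k the seller accepts offer w
    iff w >= thresholds[k]; value is E[V_0(W)], the expected revenue
    under the optimal policy.
    """
    alpha = 0.0                      # continuation value after the horizon
    thresholds = [0.0] * horizon
    for k in reversed(range(horizon)):
        thresholds[k] = alpha        # accept at period k iff w >= alpha_{k+1}
        # Bellman step: alpha_k = E[V_k(W)] = E[max(W, alpha_{k+1})]
        alpha = sum(max(w, alpha) for w in offers) / len(offers)
    return thresholds, alpha

# Example with assumed offers uniform on {0, 25, 50, 75, 100} and 3 periods.
thresholds, value = asset_selling([0, 25, 50, 75, 100], 3)
print(thresholds)  # acceptance thresholds fall as the deadline nears
print(value)
```

Note that the acceptance threshold decreases toward zero as the horizon approaches, capturing the intuition that a seller with fewer chances left must settle for less.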


Keywords: Optimality principle · Bellman equation · Hamilton-Jacobi-Bellman equation · Markov decision problem · Stochastic control · Viscosity solution · Asset-selling rule


References

  1. Bellman RE (1957) Dynamic programming. Princeton University Press, Princeton
  2. Bertsekas DP (1987) Dynamic programming. Prentice Hall, Englewood Cliffs
  3. Davis MHA (1993) Markov models and optimization. Chapman & Hall, London
  4. Elliott RJ, Aggoun L, Moore JB (1995) Hidden Markov models: estimation and control. Springer, New York
  5. Fleming WH, Rishel RW (1975) Deterministic and stochastic optimal control. Springer, New York
  6. Fleming WH, Soner HM (2006) Controlled Markov processes and viscosity solutions, 2nd edn. Springer, New York
  7. Hernandez-Lerma O, Lasserre JB (1996) Discrete-time Markov control processes: basic optimality criteria. Springer, New York
  8. Kushner HJ, Dupuis PG (1992) Numerical methods for stochastic control problems in continuous time. Springer, New York
  9. Kushner HJ, Yin G (1997) Stochastic approximation algorithms and applications. Springer, New York
  10. Øksendal B (2007) Stochastic differential equations, 6th edn. Springer, New York
  11. Pham H (2009) Continuous-time stochastic control and optimization with financial applications. Springer, New York
  12. Sethi SP, Thompson GL (2000) Optimal control theory: applications to management science and economics, 2nd edn. Kluwer, Boston
  13. Sethi SP, Zhang Q (1994) Hierarchical decision making in stochastic manufacturing systems. Birkhäuser, Boston
  14. Yin G, Zhang Q (2005) Discrete-time Markov chains: two-time-scale methods and applications. Springer, New York
  15. Yin G, Zhang Q (2013) Continuous-time Markov chains and applications: a two-time-scale approach, 2nd edn. Springer, New York
  16. Yin G, Zhu C (2010) Hybrid switching diffusions: properties and applications. Springer, New York
  17. Yong J, Zhou XY (1999) Stochastic control: Hamiltonian systems and HJB equations. Springer, New York

Copyright information

© Springer-Verlag London 2014

Authors and Affiliations

  1. Department of Mathematics, The University of Georgia, Athens, USA