
Optimal Control and the Dynamic Programming Principle

Reference work entry in the Encyclopedia of Systems and Control

Abstract

This entry illustrates the application of Bellman’s dynamic programming principle to optimal control problems for continuous-time dynamical systems. The approach characterizes the optimal value of the cost functional, taken over all trajectories consistent with the given initial condition, as the solution of a partial differential equation called the Hamilton–Jacobi–Bellman equation. Importantly, this characterization can then be used to synthesize the corresponding optimal control input as a state-feedback law.
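For concreteness, the following is a minimal sketch of one standard formulation (an infinite-horizon discounted problem; the symbols f, \ell, \lambda, and U are generic placeholders and are not taken from the entry's full text). The value function is

\[
v(x) \;=\; \inf_{u(\cdot)} \int_0^{\infty} e^{-\lambda t}\,\ell\bigl(x(t),u(t)\bigr)\,dt,
\qquad
\dot{x}(t) = f\bigl(x(t),u(t)\bigr),\quad x(0)=x,\quad u(t)\in U.
\]

The dynamic programming principle states that, for every \(\tau > 0\),

\[
v(x) \;=\; \inf_{u(\cdot)} \Bigl\{ \int_0^{\tau} e^{-\lambda t}\,\ell\bigl(x(t),u(t)\bigr)\,dt
\;+\; e^{-\lambda \tau}\, v\bigl(x(\tau)\bigr) \Bigr\},
\]

and, formally dividing by \(\tau\) and letting \(\tau \to 0\), one obtains the Hamilton–Jacobi–Bellman equation

\[
\lambda\, v(x) \;+\; \sup_{u \in U}\bigl\{ -f(x,u)\cdot \nabla v(x) \;-\; \ell(x,u) \bigr\} \;=\; 0.
\]

When \(v\) is smooth enough and the infimum is attained, an optimal state-feedback law can be read off as

\[
u^{*}(x) \;\in\; \arg\min_{u \in U}\,\bigl\{ f(x,u)\cdot \nabla v(x) \;+\; \ell(x,u) \bigr\}.
\]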




Copyright information

© 2015 Springer-Verlag London

About this entry

Cite this entry

Falcone, M. (2015). Optimal Control and the Dynamic Programming Principle. In: Baillieul, J., Samad, T. (eds) Encyclopedia of Systems and Control. Springer, London. https://doi.org/10.1007/978-1-4471-5058-9_209

