Abstract
In this chapter we define many of the standard control problems whose numerical solutions will concern us in the subsequent chapters. Other, less familiar control problems will be discussed separately in later chapters. We first define the cost functionals for uncontrolled processes and formally discuss the partial differential equations which they satisfy. We then state the cost functionals for the controlled problems and formally derive the partial differential equations satisfied by the optimal cost. These partial differential equations are generally known as Bellman equations or dynamic programming equations. The main tool in the derivations is Itô’s formula.
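To fix ideas, the following is a schematic sketch (not taken from the chapter itself; the symbols and boundary setup are illustrative) of how Itô’s formula leads from a cost functional to a dynamic programming equation for a controlled diffusion on a domain $G$ with exit time $\tau$:

```latex
% Controlled diffusion and cost functional (illustrative notation)
dx(t) = b(x(t), u(t))\,dt + \sigma(x(t))\,dw(t),
\qquad
W(x, u) = E_x^u\!\left[\int_0^\tau k(x(s), u(s))\,ds + g(x(\tau))\right].

% Differential operator associated with the process, for a control value \alpha:
\mathcal{L}^\alpha f(x) = b(x,\alpha)\cdot f_x(x)
  + \tfrac{1}{2}\,\mathrm{tr}\!\left[\sigma(x)\sigma'(x) f_{xx}(x)\right].

% Applying Itô's formula to the optimal cost V(x) = \inf_u W(x,u) and
% optimizing over the control value yields, formally, the Bellman equation
\inf_{\alpha}\left[\mathcal{L}^\alpha V(x) + k(x,\alpha)\right] = 0, \quad x \in G,
\qquad
V(x) = g(x), \quad x \in \partial G.
```

In the uncontrolled case the infimum disappears and the equation reduces to the linear PDE $\mathcal{L}W(x) + k(x) = 0$ in $G$ with $W = g$ on $\partial G$.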
Copyright information
© 2001 Springer Science+Business Media New York
Cite this chapter
Kushner, H.J., Dupuis, P. (2001). Dynamic Programming Equations. In: Numerical Methods for Stochastic Control Problems in Continuous Time. Stochastic Modelling and Applied Probability, vol 24. Springer, New York, NY. https://doi.org/10.1007/978-1-4613-0007-6_4
Print ISBN: 978-1-4612-6531-3
Online ISBN: 978-1-4613-0007-6