
Part of the book series: Stochastic Modelling and Applied Probability ((SMAP,volume 61))

Abstract

In this chapter, we use the dynamic programming method for solving stochastic control problems. We consider in Section 3.2 the framework of controlled diffusions, with the problem formulated on a finite or infinite horizon. The basic idea of the approach is to consider a family of control problems obtained by varying the initial state values, and to derive relations between the associated value functions. This principle, called the dynamic programming principle and initiated in the 1950s by Bellman, is stated precisely in Section 3.3. This approach yields a second-order nonlinear partial differential equation (PDE), called the Hamilton-Jacobi-Bellman (HJB) equation, which is formally derived in Section 3.4. When this PDE admits a smooth solution, obtained either explicitly or by theoretical arguments, the verification theorem proved in Section 3.5 validates the optimality of this candidate solution to the HJB equation. This classical approach to dynamic programming is called the verification step. We illustrate the method in Section 3.6 by solving three examples in finance. The main drawback of this approach is that it assumes the existence of a regular solution to the HJB equation. This is not the case in general, and in Section 3.7 we give a simple example, inspired by finance, pointing out this feature.
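As a reminder of the typical form of these objects, in generic notation that may differ from the chapter's (diffusion coefficients b and σ, running gain f, terminal gain g, control set A), the finite-horizon value function and the associated HJB equation read

\[
v(t,x) \;=\; \sup_{\alpha}\, \mathbb{E}\Big[ \int_t^T f(s, X_s, \alpha_s)\, ds + g(X_T) \,\Big|\, X_t = x \Big],
\]
\[
\partial_t v(t,x) + \sup_{a \in A} \Big[ b(x,a) \cdot D_x v(t,x) + \tfrac{1}{2}\, \mathrm{tr}\big( \sigma\sigma^{\top}(x,a)\, D_x^2 v(t,x) \big) + f(t,x,a) \Big] = 0 \quad \text{on } [0,T) \times \mathbb{R}^n,
\]

with terminal condition v(T, x) = g(x). This is only an indicative sketch; the verification theorem of Section 3.5 then checks, under precise assumptions stated in the chapter, that a smooth solution of this equation coincides with the value function.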


Author information


Correspondence to Huyên Pham.


Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Pham, H. (2009). The classical PDE approach to dynamic programming. In: Continuous-time Stochastic Control and Optimization with Financial Applications. Stochastic Modelling and Applied Probability, vol 61. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-89500-8_3
