
Discussion of dynamic programming and linear programming approaches to stochastic control and optimal stopping in continuous time


Abstract

This paper highlights two approaches to the solution of stochastic control and optimal stopping problems in continuous time; each transforms the stochastic problem into a deterministic one. Dynamic programming, a well-established technique, leads to a partial or ordinary differential equation, a variational inequality, or a quasi-variational inequality, depending on the type of problem; the solution provides the value of the problem as a function of the initial position (the value function). The other method recasts the problem as a linear program over a space of feasible measures. Both approaches use Dynkin’s formula in essential but different ways. The aim of the paper is to present the main ideas underlying these approaches, with only passing attention paid to the important and necessary technical details.
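As an illustrative sketch (not taken from the paper), both routes can be compared on a toy optimal stopping problem: a standard Brownian motion on a finite grid, with an assumed discount rate, a put-style reward, and an arbitrarily chosen initial state. The dynamic programming route solves the discretized variational inequality min(V − g, (r − L)V) = 0 by value iteration; the linear programming route maximizes the reward collected by the stopping measure, with the occupation and stopping measures tied together exactly as in Dynkin's formula. All grid sizes, parameters, and variable names below are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem (assumed data): stop a standard Brownian motion on [0, 2],
# discount rate r, reward g(x) = max(1 - x, 0), initial state x0 = 1.2.
n, r, sigma = 41, 0.1, 1.0
x = np.linspace(0.0, 2.0, n)
h = x[1] - x[0]
g = np.maximum(1.0 - x, 0.0)
i0 = 24                                  # grid index of x0 = 1.2

# Generator matrix Q of the grid chain approximating (sigma^2/2) d^2/dx^2,
# with absorbing endpoints (rows of zeros).
Q = np.zeros((n, n))
for i in range(1, n - 1):
    Q[i, i - 1] = Q[i, i + 1] = 0.5 * sigma**2 / h**2
    Q[i, i] = -sigma**2 / h**2

# --- Route 1: dynamic programming via value iteration ----------------
# With dt = h^2/sigma^2, P = I + dt*Q is a stochastic matrix, and the
# Bellman recursion V = max(g, e^{-r dt} P V) converges to the value
# function of the discretized stopping problem.
dt = h**2 / sigma**2
P = np.eye(n) + dt * Q
beta = np.exp(-r * dt)
V = g.copy()
for _ in range(200_000):
    V_new = np.maximum(g, beta * (P @ V))
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# --- Route 2: linear program over measures ---------------------------
# Variables z = (mu_occ, mu_stop), both nonnegative. Dynkin's formula
# for the discounted stopped chain gives, for every test vector f,
#   mu_stop . f = f(x0) + mu_occ . (Q - rI) f,
# i.e. the equality constraint  mu_stop - (Q - rI)^T mu_occ = e_{x0}.
# The LP maximizes g . mu_stop (linprog minimizes, hence the minus sign).
A_eq = np.hstack([-(Q - r * np.eye(n)).T, np.eye(n)])
b_eq = np.zeros(n)
b_eq[i0] = 1.0
c = np.concatenate([np.zeros(n), -g])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
lp_value = -res.fun

print(f"DP value at x0: {V[i0]:.4f}   LP value at x0: {lp_value:.4f}")
```

The two numbers agree up to discretization error, illustrating the paper's point that both deterministic reformulations recover the same value; the DP route also returns the whole value function V, while the LP route returns optimal occupation and stopping measures from the single fixed initial position.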




Corresponding author

Correspondence to R. H. Stockbridge.


Cite this article

Stockbridge, R.H. Discussion of dynamic programming and linear programming approaches to stochastic control and optimal stopping in continuous time. Metrika 77, 137–162 (2014). https://doi.org/10.1007/s00184-013-0476-2
