Abstract
This paper highlights two approaches to solving stochastic control and optimal stopping problems in continuous time, each of which transforms the stochastic problem into a deterministic one. Dynamic programming is a well-established technique that derives a partial or ordinary differential equation, or a variational or quasi-variational inequality, depending on the type of problem; its solution gives the value of the problem as a function of the initial position (the value function). The other method recasts the problem as a linear program over a space of feasible measures. Both approaches use Dynkin's formula in essential but different ways. The aim of this paper is to present the main ideas underlying these approaches, with only passing attention paid to the important and necessary technical details.
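As a concrete illustration of the dynamic-programming route for an optimal stopping problem (a sketch, not part of the paper itself), consider the perpetual American put under geometric Brownian motion, whose value function is known in closed form. The variational inequality min(rV − LV, V − g) = 0, with generator Lf = rxf′ + ½σ²x²f″ and payoff g(x) = (K − x)⁺, can be checked numerically; the parameters below are illustrative assumptions.

```python
import math

# Illustrative parameters: risk-free rate r, volatility sigma, strike K,
# for a perpetual American put under dX = r X dt + sigma X dW.
r, sigma, K = 0.05, 0.3, 1.0
gamma = 2 * r / sigma**2          # exponent of the power-law solution
b = gamma * K / (gamma + 1)       # optimal stopping boundary

def value(x):
    """Value function: equals the payoff below b, smooth-fit power law above."""
    return K - x if x <= b else (K - b) * (x / b) ** (-gamma)

def generator(f, x, h=1e-4):
    """Apply L f = r x f' + 0.5 sigma^2 x^2 f'' via central finite differences."""
    f1 = (f(x + h) - f(x - h)) / (2 * h)
    f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return r * x * f1 + 0.5 * sigma**2 * x**2 * f2

# Continuation region (x > b): rV - LV = 0 holds with equality,
# and V strictly exceeds the payoff.
x = 0.8
assert abs(r * value(x) - generator(value, x)) < 1e-4
assert value(x) > max(K - x, 0.0)

# Stopping region (x < b): V coincides with the payoff (K - x)^+.
x = 0.3
assert abs(value(x) - (K - x)) < 1e-12
```

The same problem can alternatively be posed, in the spirit of the second approach, as a linear program over occupation and stopping measures; the finite-difference check above only verifies that the closed-form value function satisfies the dynamic-programming variational inequality.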
Cite this article
Stockbridge, R.H. Discussion of dynamic programming and linear programming approaches to stochastic control and optimal stopping in continuous time. Metrika 77, 137–162 (2014). https://doi.org/10.1007/s00184-013-0476-2