Numerical Methods for Continuous-Time Stochastic Control Problems
This expository article provides a brief review of numerical methods for stochastic control in continuous time. Written with a broad general audience in mind and omitting most technical details, it aims to serve as an introductory reference for researchers, practitioners, and students who wish to learn about numerical methods for stochastic control.
The study of stochastic control has witnessed tremendous progress in the last few decades; see, for example, Fleming and Rishel (1975), Fleming and Soner (1992), Kushner (1977), and Yong and Zhou (1999), among others, for the fundamentals of stochastic control as well as historical remarks. Much of this development has been driven by needs and advances in science, engineering, and finance. The problems that arise are typically highly nonlinear, so closed-form solutions are rarely obtainable. As a result, designing feasible numerical algorithms becomes vitally important. Among the many approximation methods,...
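One widely used approximation scheme of this kind is Kushner's Markov chain approximation method: the controlled diffusion is replaced by a controlled Markov chain on a grid whose transition probabilities satisfy local consistency, and the resulting discrete problem is solved by value (or policy) iteration. The following is a minimal sketch for a toy one-dimensional problem; the dynamics, cost, and all parameter values are illustrative choices, not taken from the article.

```python
import numpy as np

# Toy illustration: Markov chain approximation with value iteration for
# the scalar controlled diffusion
#     dx = u dt + sigma dW,   cost  E ∫ e^{-beta t} (x^2 + u^2) dt,
# on [-1, 1] with a stay-put (reflecting-type) boundary.  All parameter
# values below are illustrative.
sigma, beta, h = 0.5, 2.0, 0.05
xs = np.linspace(-1.0, 1.0, 41)          # spatial grid with step h
us = np.linspace(-1.0, 1.0, 21)          # discretized control set

V = np.zeros_like(xs)
for _ in range(5000):                    # value iteration
    candidates = []
    for u in us:
        # Locally consistent transition probabilities and the associated
        # interpolation interval (Kushner's standard construction); note
        # that p_up + p_dn = 1 for every control u.
        denom = sigma**2 + h * abs(u)
        dt = h**2 / denom
        p_up = (sigma**2 / 2 + h * max(u, 0.0)) / denom
        p_dn = (sigma**2 / 2 + h * max(-u, 0.0)) / denom
        V_up = np.append(V[1:], V[-1])   # stay put at right boundary
        V_dn = np.append(V[0], V[:-1])   # stay put at left boundary
        candidates.append((xs**2 + u**2) * dt
                          + np.exp(-beta * dt) * (p_up * V_up + p_dn * V_dn))
    V_new = np.min(candidates, axis=0)   # Bellman update: minimize over u
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
```

Here `V` approximates the value function on the grid; under standard conditions, refining the grid (h → 0) yields convergence to the value function of the continuous-time problem.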
Keywords: Stochastic Approximation · Stochastic Control · Jump Diffusion · Policy Iteration · Local Consistency
Research of this author was supported in part by the Army Research Office under grant W911NF-12-1-0223.
- Chancelier P, Gomez C, Quadrat J-P, Sulem A, Blankenship GL, La Vigna A, MaCenary DC, Yan I (1986) An expert system for control and signal processing with automatic FORTRAN program generation. In: Mathematical systems symposium, Stockholm. Royal Institute of Technology, Stockholm
- Chancelier P, Gomez C, Quadrat J-P, Sulem A (1987) Automatic study in stochastic control. In: Fleming W, Lions PL (eds) IMA volume in mathematics and its applications, vol 10. Springer, Berlin
- Fleming WH, Soner HM (1992) Controlled Markov processes and viscosity solutions. Springer, New York
- Kushner HJ (1977) Probability methods for approximation in stochastic control and for elliptic equations. Academic, New York