
Approximating the Solution of Stochastic Optimal Control Problems and the Merton’s Portfolio Selection Model

Published in: Computational Economics

Abstract

In this paper, a numerical algorithm is presented for solving stochastic optimal control problems via the Markov chain approximation method. The approach is based on discretization of the state and time spaces, followed by a backward iteration technique. First, the original controlled process is approximated by an appropriate controlled Markov chain. Then, a cost functional appropriate to the approximating Markov chain is constructed. Finite difference approximations are used to build a locally consistent approximating Markov chain, so that the coefficients of the resulting discrete equation can be taken as the desired transition probabilities and interpolation interval. Finally, the performance of the presented algorithm is demonstrated on a test case with a well-known explicit solution, namely Merton's portfolio selection model.
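The scheme described in the abstract can be sketched numerically. The following is a minimal, self-contained illustration of a locally consistent Markov chain approximation for Merton's problem with power utility and no consumption; all market parameters, the wealth grid, and the discretized control set are illustrative assumptions, not values taken from the paper. Transition probabilities come from an explicit upwind finite-difference discretization of the controlled wealth dynamics dX = (r + u(mu - r))X dt + u*sigma*X dB, and the value function is computed by backward dynamic-programming iteration from the terminal utility.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the paper):
r, mu, sigma, gamma = 0.05, 0.11, 0.4, 0.5   # riskless rate, drift, volatility, CRRA exponent
T, h, dt = 0.5, 0.05, 1e-3                   # horizon, wealth step, time step (dt keeps probs valid)
x = np.linspace(h, 3.0, 60)                  # wealth grid (x = 0 excluded)
controls = np.linspace(0.0, 1.0, 21)         # fraction of wealth in the risky asset

# Locally consistent transition probabilities from upwind finite differences:
# p(x -> x+h) and p(x -> x-h) match the drift and variance of the wealth SDE
# over one interpolation interval dt; the remainder is the staying probability.
chains = []
for u in controls:
    b = (r + u * (mu - r)) * x               # controlled drift
    s2 = (u * sigma * x) ** 2                # squared diffusion coefficient
    pu = dt * (0.5 * s2 + h * np.maximum(b, 0.0)) / h**2
    pd = dt * (0.5 * s2 + h * np.maximum(-b, 0.0)) / h**2
    chains.append((pu, pd, 1.0 - pu - pd))

V = x**gamma / gamma                          # terminal utility U(x) = x^gamma / gamma
for _ in range(int(round(T / dt))):           # backward dynamic-programming iteration
    Vup = np.append(V[1:], V[-1])             # reflecting upper boundary
    Vdn = np.concatenate(([V[0]], V[:-1]))    # reflecting lower boundary
    V = np.max([pu * Vup + pd * Vdn + ps * V for pu, pd, ps in chains], axis=0)
```

For these assumed parameters the closed-form Merton solution invests the constant fraction (mu - r) / (sigma^2 (1 - gamma)) = 0.75 of wealth in the risky asset, a value contained in the discretized control set, so the computed V approximates the explicit value function up to discretization and boundary error.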


Figs. 1–5 (images not available in this extract).


Acknowledgements

The author is grateful to the Editor-in-Chief, Professor Hans Amman, and the referee(s) for their valuable comments, which led to improvements in the article. The author also wishes to thank Dr. A. Delavarkhalafi for his useful comments.

Author information


Corresponding author

Correspondence to Behzad Kafash.

Additional information

This research was supported in part by a grant from IPM (No. 93490052).


Cite this article

Kafash, B. Approximating the Solution of Stochastic Optimal Control Problems and the Merton’s Portfolio Selection Model. Comput Econ 54, 763–782 (2019). https://doi.org/10.1007/s10614-018-9852-3

