
On continuous-time discounted stochastic dynamic programming

Applied Mathematics and Optimization

Abstract

In this paper, a continuous-time discounted dynamic programming problem in a Markov decision model is investigated. In many cases it is difficult to search directly for an optimal solution of such a problem. We therefore introduce a Lagrangian-type programming problem associated with the original problem and show that, under suitable assumptions, a weak optimal solution exists for the Lagrangian problem. Moreover, we embed the original problem in a family of perturbed problems and develop the corresponding Lagrangian duality.
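
As a rough illustration of the construction described in the abstract, the following is a minimal sketch of a Lagrangian-type problem attached to a constrained discounted control problem, together with the weak duality it yields. The notation (value V, constraint functional C, reward r, cost c, discount rate α, constraint level θ, multiplier λ, state–action process (X_t, A_t)) is illustrative and need not match the paper's exact formulation.

\[
\text{(P)}\qquad \sup_{\pi}\; V(\pi) := \mathbb{E}^{\pi}\!\left[\int_0^{\infty} e^{-\alpha t}\, r(X_t, A_t)\,dt\right]
\quad\text{subject to}\quad
C(\pi) := \mathbb{E}^{\pi}\!\left[\int_0^{\infty} e^{-\alpha t}\, c(X_t, A_t)\,dt\right] \le \theta,
\]
\[
\text{(L)}\qquad L(\pi,\lambda) := V(\pi) + \lambda\bigl(\theta - C(\pi)\bigr), \qquad \lambda \ge 0,
\]
\[
\sup_{\pi}\,\inf_{\lambda \ge 0} L(\pi,\lambda) \;\le\; \inf_{\lambda \ge 0}\,\sup_{\pi} L(\pi,\lambda) \qquad\text{(weak duality)}.
\]

A policy–multiplier pair at which the two sides coincide plays the role of a weak optimal (saddle-point) solution of the Lagrangian problem, and letting the constraint level θ vary generates the perturbed problems through which the duality can be developed; the paper's precise definitions and assumptions may differ from this sketch.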




Cite this article

Lai, HC., Tanaka, K. On continuous-time discounted stochastic dynamic programming. Appl Math Optim 23, 155–169 (1991). https://doi.org/10.1007/BF01442395
