Controlled Markov Chains


Part of the book series: Applications of Mathematics ((SMAP,volume 24))

Abstract

The main computational techniques in this book require the approximation of the original continuous-time controlled process by appropriately chosen controlled finite-state Markov chains. In this chapter, we define some of the canonical control problems for the Markov chain models that will be used in the sequel as "approximating processes," and we define the associated cost functions. By exploiting the Markov property, we obtain the functional equations satisfied by these cost functions under fixed controls, as well as the functional equations satisfied by the optimal cost functions (the dynamic programming or Bellman equation), and we show the uniqueness of their solutions under appropriate conditions. These are the equations that must be solved in order to obtain the required approximate solutions to the original control or optimal control problem. The simplest case, where there is no control or where the control is fixed, is dealt with in Section 2.1, where the recursive equations satisfied by the cost functionals are obtained. A similar method yields the recursive equations for the optimal value functions of the controlled problems. The optimal stopping problem is treated in Section 2.2. This is a relatively simple control problem, because the only decision to be made is the choice of the moment at which the process is to be stopped; it illustrates the basic ideas of dynamic programming for Markov chains and introduces the fundamental principle of optimality in a simple way. Section 2.3 concerns the general discounted cost problem. Section 2.4 deals with the optimization problem when control stops at the first moment of reaching a target or stopping set.
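The discounted-cost dynamic programming equation described above can be illustrated by a small numerical sketch. The following is not taken from the book: it applies plain value iteration to the Bellman equation V(x) = min_u [ c(x,u) + β Σ_y p(x,y|u) V(y) ] on a hypothetical 3-state, 2-action chain whose transition matrices and costs are invented for illustration; uniqueness of the fixed point follows from the discount factor β < 1 making the Bellman operator a contraction.

```python
import numpy as np

# Hypothetical controlled finite-state Markov chain (illustrative data only).
beta = 0.9  # discount factor; beta < 1 makes the Bellman operator a contraction

# p[u] is the transition matrix of the chain under control action u.
p = np.array([
    [[0.8, 0.2, 0.0],   # action 0
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]],
    [[0.5, 0.5, 0.0],   # action 1
     [0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5]],
])
# c[x, u] is the running cost of using action u in state x.
c = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [0.0, 1.0]])

V = np.zeros(3)
for _ in range(1000):
    # Q[x, u] = c(x, u) + beta * E[V(next state) | current state x, action u]
    Q = c + beta * np.einsum('uxy,y->xu', p, V)
    V_new = Q.min(axis=1)          # Bellman update: minimize over controls
    done = np.max(np.abs(V_new - V)) < 1e-10
    V = V_new
    if done:
        break

policy = Q.argmin(axis=1)  # a minimizing (stationary) control in each state
print(V, policy)
```

At convergence, V satisfies the Bellman equation to numerical tolerance, and `policy` records an optimal action per state; this is the kind of fixed-point computation the functional equations of Section 2.3 call for.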



Copyright information

© 1992 Springer-Verlag New York, Inc.

About this chapter

Cite this chapter

Kushner, H.J., Dupuis, P.G. (1992). Controlled Markov Chains. In: Numerical Methods for Stochastic Control Problems in Continuous Time. Applications of Mathematics, vol 24. Springer, New York, NY. https://doi.org/10.1007/978-1-4684-0441-8_3

  • DOI: https://doi.org/10.1007/978-1-4684-0441-8_3

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4684-0443-2

  • Online ISBN: 978-1-4684-0441-8
