# Controlled Markov Chains

• Harold J. Kushner
• Paul Dupuis
Chapter
Part of the Stochastic Modelling and Applied Probability book series (SMAP, volume 24)

## Abstract

The main computational techniques in this book require the approximation of an original controlled process in continuous time by an appropriately chosen controlled finite state Markov chain. In this chapter, we define some of the canonical control problems for the Markov chain models which will be used in the sequel as “approximating processes,” and we define the associated cost functions. The functional equations satisfied by these cost functions for fixed controls, as well as the functional equations satisfied by the optimal cost functions (the dynamic programming or Bellman equation), are obtained by exploiting the Markov property, and the uniqueness of their solutions is shown under appropriate conditions. These are the equations which must be solved in order to obtain the required approximate solutions to the original control or optimal control problem. The simplest case, where there is no control or where the control is fixed, is dealt with in Section 2.1, where the recursive equations satisfied by the cost functional are obtained. A similar method is used to derive the recursive equations for the optimal value functions of the controlled problems. The optimal stopping problem is treated in Section 2.2. It is a relatively simple control problem, because the only decision to be made is the choice of the moment at which the process is to be stopped; it illustrates the basic ideas of dynamic programming for Markov chains and introduces the fundamental principle of optimality in a simple way. Section 2.3 concerns the general discounted cost problem. Section 2.4 deals with the optimization problem in which the control stops at the first moment of reaching a target or stopping set; the basic concept of a contraction map is introduced, and its role in the solution of the functional equations for the costs is emphasized. Section 2.5 gives the results for the case where the process is of interest over a finite time interval only.
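To make the optimal stopping problem of Section 2.2 concrete, the following is a minimal sketch for a finite state chain: with running cost c(x), stopping cost g(x), and discount factor β < 1, the value function satisfies V(x) = min{ g(x), c(x) + β Σ_y p(x, y) V(y) }, and iterating this operator converges to the unique solution. The particular chain, costs, and discount factor below are illustrative assumptions, not taken from the book.

```python
import numpy as np

# Illustrative 3-state chain (hypothetical data, for demonstration only).
P = np.array([[0.50, 0.50, 0.00],   # transition probabilities p(x, y)
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
c = np.array([1.0, 1.0, 1.0])       # running cost c(x), paid while continuing
g = np.array([5.0, 2.0, 0.0])       # stopping cost g(x), paid once on stopping
beta = 0.95                          # discount factor, beta < 1

def stopping_operator(V):
    # At each state, take the cheaper of stopping now (g) or
    # continuing one more step (c + discounted expected future cost).
    return np.minimum(g, c + beta * P @ V)

# Successive approximation: iterate the operator to its fixed point.
V = g.copy()
for _ in range(1000):
    V_new = stopping_operator(V)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new
# Stop in the states where stopping is at least as cheap as continuing.
stop_set = np.flatnonzero(g <= c + beta * P @ V + 1e-9)
```

The only decision at each state is stop or continue, which is why the minimization is over just two alternatives; the stopping set read off at the end is an optimal policy.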
The chapter contains only a brief outline. Further information concerning controlled or uncontrolled Markov chain models can be found in the standard references [11, 54, 84, 88, 126, 151, 155].
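The discounted cost problem of Section 2.3 and the contraction property stressed in Section 2.4 can likewise be sketched in a few lines: for a finite state chain with controls u, one-step costs c(x, u), transition probabilities p(x, y | u), and discount β < 1, the Bellman operator T V(x) = min_u [ c(x, u) + β Σ_y p(x, y | u) V(y) ] is a contraction in the sup norm with modulus β, so value iteration converges to its unique fixed point. The data below are hypothetical and purely illustrative.

```python
import numpy as np

# Illustrative 3-state, 2-control model (hypothetical numbers).
# P[u] is the transition matrix under control u; c[x, u] is the one-step cost.
P = np.array([
    [[0.9, 0.1, 0.0],   # control 0
     [0.2, 0.7, 0.1],
     [0.0, 0.3, 0.7]],
    [[0.5, 0.5, 0.0],   # control 1
     [0.1, 0.5, 0.4],
     [0.2, 0.2, 0.6]],
])
c = np.array([[1.0, 2.0],
              [0.5, 0.3],
              [2.0, 1.0]])
beta = 0.9  # discount factor; beta < 1 makes the operator a contraction

def bellman(V):
    # Q[x, u] = c(x, u) + beta * sum_y p(x, y | u) V(y); minimize over u.
    Q = c + beta * np.einsum('uxy,y->xu', P, V)
    return Q.min(axis=1)

# Value iteration: repeated application of the contraction converges
# geometrically (rate beta) to the unique fixed point V*.
V = np.zeros(3)
for _ in range(500):
    V_new = bellman(V)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
```

The contraction bound ||T V1 - T V2||∞ ≤ β ||V1 - V2||∞ is what guarantees both the uniqueness of the solution of the functional equation and the convergence of this iteration.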

© Springer Science+Business Media New York 2001

## Authors and Affiliations

• Harold J. Kushner¹
• Paul Dupuis¹

1. Division of Applied Mathematics, Brown University, Providence, USA