Abstract
We begin in Section 6.2 by introducing the control model (CM) we will be dealing with, together with some general assumptions. In Section 6.3 we consider non-adaptive CMs. We start with a non-recursive procedure and then show how it can be made recursive; furthermore, in both cases, recursive and non-recursive, we obtain asymptotically discount optimal (ADO) control policies. The results in Section 6.3 can be seen as "discretized" versions of the Nonstationary Value Iteration (NVI) schemes NVI-1 and NVI-2 in Section 2.4. Next, in Section 6.4, the discretizations in Section 6.3 are extended to adaptive CMs and, in particular, we obtain "discretized" forms of the Principle of Estimation and Control (PEC) and the NVI adaptive policies in Section 2.5. The proofs of the theorems in Sections 6.3 and 6.4 have many common arguments, and therefore all the proofs are collected in Section 6.5. We close in Section 6.6 with some comments on relevant references.
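To fix ideas, the following is a minimal, hypothetical sketch of the kind of discretized value iteration the abstract alludes to; the function name, grid sizes, and discount factor are all illustrative assumptions and do not come from the chapter itself. The state space is replaced by a finite grid and the discounted dynamic-programming operator is iterated until the sup-norm change is small.

```python
import numpy as np

def value_iteration_on_grid(reward, transition, beta=0.9,
                            tol=1e-8, max_iter=10_000):
    """Illustrative sketch (not the chapter's scheme): iterate
    V <- max_a [ r(x, a) + beta * sum_y P(y | x, a) V(y) ]
    on a finite grid of states until the sup-norm change < tol.

    reward:     array of shape (n_states, n_actions)
    transition: array of shape (n_states, n_actions, n_states),
                row-stochastic in its last axis
    """
    n_states = reward.shape[0]
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Q[x, a] = r(x, a) + beta * E[V(next state) | x, a]
        Q = reward + beta * np.einsum('xay,y->xa', transition, V)
        V_new = Q.max(axis=1)  # Bellman optimality update
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V
```

Because the discounted Bellman operator is a beta-contraction in the sup norm, the iterates converge geometrically, which is what makes finite-grid approximations of this kind tractable.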
Copyright information
© 1989 Springer-Verlag Berlin Heidelberg
Cite this chapter
Hernández-Lerma, O. (1989). Discretization Procedures. In: Adaptive Markov Control Processes. Applied Mathematical Sciences, vol 79. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-8714-3_6
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4612-6454-5
Online ISBN: 978-1-4419-8714-3