Abstract
As noted in Remark 4.7(b), the solution \(x(\cdot )\) of the (deterministic) ordinary differential equation (4.0.1) can be interpreted as a Markov control process (MCP), also known as a controlled Markov process. In this chapter we introduce some facts on general continuous–time MCPs, which allow us to give a unified presentation of related control problems. We begin below with some comments on (noncontrolled) continuous–time Markov processes. (We only wish to motivate some concepts, so our presentation is not very precise. For further details, see the bibliographical notes at the end of this chapter.)
Notes
1. The probability measure \(\mu \) is said to be “invariant” or a “stationary measure” for \(\mathcal {X}\) because if the initial state x(0) has distribution \(\mu \), then the state x(t) has distribution \(\mu \) for all \(t \ge 0\). A Markov process is called ergodic if it has a unique invariant probability measure.
2. The requirement that c is bounded simplifies the presentation, but it is not necessary. (See the references in Exercise 5.9.)
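The invariance condition in Note 1 can be checked numerically. The following sketch, for an illustrative two-state chain (the transition matrix `P` is a made-up example, not from the text), finds a distribution \(\mu\) with \(\mu P = \mu\) as the left eigenvector of \(P\) for eigenvalue 1:

```python
import numpy as np

# Transition matrix of a hypothetical two-state Markov chain
# (values chosen purely for illustration).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# An invariant (stationary) distribution mu satisfies mu @ P = mu,
# i.e. mu is a left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
mu = np.real(vecs[:, np.isclose(vals, 1)].flatten())
mu = mu / mu.sum()  # normalize (and fix the eigenvector's sign)

# If x(0) has distribution mu, then x(t) has distribution mu for all t.
assert np.allclose(mu @ P, mu)
print(mu)  # → approximately [0.5714, 0.4286], i.e. (4/7, 3/7)
```

Since this chain has a unique invariant probability measure, it is ergodic in the sense of Note 1.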
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Hernández-Lerma, O., Laura-Guarachi, L.R., Mendoza-Palacios, S., González-Sánchez, D. (2023). Continuous–Time Markov Control Processes. In: An Introduction to Optimal Control Theory. Texts in Applied Mathematics, vol 76. Springer, Cham. https://doi.org/10.1007/978-3-031-21139-3_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-21138-6
Online ISBN: 978-3-031-21139-3