Continuous–Time Markov Control Processes

  • Chapter
An Introduction to Optimal Control Theory

Abstract

As noted in Remark 4.7(b), the solution \(x(\cdot )\) of the (deterministic) ordinary differential equation (4.0.1) can be interpreted as a Markov control process (MCP), also known as a controlled Markov process. In this chapter we introduce some facts on general continuous–time MCPs, which allow us to give a unified presentation of related control problems. We begin below with some comments on (noncontrolled) continuous–time Markov processes. (We only wish to motivate some concepts, so our presentation is not very precise. For further details, see the bibliographical notes at the end of this chapter.)
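
To indicate how this interpretation works, here is a minimal sketch in generic notation; equation (4.0.1) is not reproduced on this page, so the control system \(\dot{x}(t) = F(x(t), a(t))\) and the operator \(L^{a}\) below are assumed stand-ins. For a smooth function \(v\), the chain rule gives

\[
\frac{d}{dt}\, v(x(t)) = F(x(t), a(t)) \cdot \nabla v(x(t)),
\]

so, under a fixed action \(a\), the flow of the ODE is a degenerate (noise-free) Markov process whose extended generator is the first-order operator \((L^{a} v)(x) = F(x, a) \cdot \nabla v(x)\), a special case of the generators used for general continuous–time MCPs.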

Notes

  1. The probability measure \(\mu \) is said to be “invariant” or a “stationary measure” for \(\mathcal {X}\) because if the initial state x(0) has distribution \(\mu \), then the state x(t) has distribution \(\mu \) for all \(t \ge 0\) (this condition is written out after these notes). A Markov process is called ergodic if it has a unique invariant probability measure.

  2. The requirement that c is bounded simplifies the presentation, but it is not necessary. (See the references in Exercise 5.9.)
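
A standard way to express the invariance in Note 1, as a sketch in generic notation (the chapter's transition kernels are not reproduced on this page, so the kernel \(P_t(x, B)\), the probability that the process started at x lies in the Borel set B at time t, is an assumed placeholder):

\[
\mu (B) = \int P_t(x, B)\, \mu (dx) \qquad \text{for all } t \ge 0 \text{ and all Borel sets } B,
\]

so that \(x(0) \sim \mu \) implies \(x(t) \sim \mu \) for every \(t \ge 0\).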

Author information

Corresponding author

Correspondence to Onésimo Hernández-Lerma.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Hernández-Lerma, O., Laura-Guarachi, L.R., Mendoza-Palacios, S., González-Sánchez, D. (2023). Continuous–Time Markov Control Processes. In: An Introduction to Optimal Control Theory. Texts in Applied Mathematics, vol 76. Springer, Cham. https://doi.org/10.1007/978-3-031-21139-3_5
