Partially Observed Markov Decision Processes

A Markov decision process (MDP) in which the state of the system cannot be fully or precisely observed, e.g., only part of the state is known and/or the state observation has some error. In principle, such a model can be converted to a fully observed MDP by introducing an “information” or “belief” state that may be infinite dimensional, corresponding to a probability distribution over the original state.

See

Dynamic Programming

Markov Decision Processes


Copyright information

© 2013 Springer Science+Business Media New York

About this entry

Cite this entry

(2013). Partially Observed Markov Decision Processes. In: Gass, S.I., Fu, M.C. (eds) Encyclopedia of Operations Research and Management Science. Springer, Boston, MA. https://doi.org/10.1007/978-1-4419-1153-7_200580

  • Print ISBN: 978-1-4419-1137-7

  • Online ISBN: 978-1-4419-1153-7
