Partially Observed Control Problems with Multiplicative Cost

Part of the book series: Systems & Control: Foundations & Applications (SCFA)

Abstract

In this paper the risk-sensitive optimal control problem for discrete-time partially observed systems on an infinite horizon is considered. Defining an appropriate information state, we formulate an equivalent control problem with completely observed state dynamics, which is solved using dynamic games and dynamic programming methods.
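
For orientation, the following is a minimal sketch of a standard formulation of this type of problem; the notation (x_k, u_k, y_k, θ, c, P^u, q, σ_k) is illustrative and the chapter's own model and assumptions may differ. A controlled, partially observed Markov chain with hidden state x_k, control u_k, and observation y_k is run under an exponential-of-sum (multiplicative) cost with risk parameter θ > 0:

\[
J^{\theta}(u) \;=\; \limsup_{N \to \infty} \frac{1}{\theta N}\,
\log \mathbb{E}\!\left[\exp\!\Big(\theta \sum_{k=0}^{N-1} c(x_k, u_k)\Big)\right].
\]

In such formulations the information state is an unnormalized, cost-biased filter σ_k over the hidden state. For a finite-state model with controlled transition probabilities P^u(x, x') and observation densities q(y | x), it can be taken to evolve recursively as

\[
\sigma_{k+1}(x') \;=\; \sum_{x} q(y_{k+1} \mid x')\, P^{u_k}(x, x')\, e^{\theta c(x, u_k)}\, \sigma_k(x),
\qquad \sigma_0 \;=\; \text{initial distribution},
\]

and the expected exponential cost over a horizon N can be expressed through the total mass ⟨σ_N, 1⟩ = Σ_x σ_N(x). Since σ_k is computable from past observations and controls, the partially observed problem becomes a completely observed one with state σ_k, to which dynamic programming (and, for the infinite-horizon limit, dynamic game arguments) can be applied.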

Copyright information

© 1999 Springer Science+Business Media New York

About this chapter

Cite this chapter

Hernández-Hernández, D. (1999). Partially Observed Control Problems with Multiplicative Cost. In: McEneaney, W.M., Yin, G.G., Zhang, Q. (eds) Stochastic Analysis, Control, Optimization and Applications. Systems & Control: Foundations & Applications. Birkhäuser, Boston, MA. https://doi.org/10.1007/978-1-4612-1784-8_3

  • DOI: https://doi.org/10.1007/978-1-4612-1784-8_3

  • Publisher Name: Birkhäuser, Boston, MA

  • Print ISBN: 978-1-4612-7281-6

  • Online ISBN: 978-1-4612-1784-8

  • eBook Packages: Springer Book Archive
