
Markov Decision Processes with a Constraint on the Asymptotic Failure Rate

Published in Methodology and Computing in Applied Probability

Abstract

In this paper, we introduce a Markov decision model with absorbing states and a constraint on the asymptotic failure rate. The objective is to find a stationary policy that minimizes the infinite-horizon expected average cost, conditioned on the system never failing. Using the Perron-Frobenius theory of non-negative matrices and spectral analysis, we show that the problem reduces to a linear programming problem. Finally, we apply this method to a real aeronautical system.
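The reduction to linear programming can be illustrated with the classical occupation-measure LP for average-cost MDPs (Derman-style), augmented with one extra linear constraint. The sketch below is only illustrative: the paper's actual reduction goes through a spectral analysis of the sub-stochastic transition matrix conditioned on non-failure, which is not reproduced here, and all transition probabilities, costs, and the constraint threshold are made-up toy data.

```python
# Occupation-measure LP for an average-cost MDP with one extra linear
# constraint (here: a bound on the long-run frequency of a hypothetical
# "risky" action). Toy data, for illustration only.
import numpy as np
from scipy.optimize import linprog

S, A = 2, 2                       # number of states, actions
# P[s, a, s2] = probability of moving from s to s2 under action a
P = np.array([[[0.9, 0.1], [0.5, 0.5]],
              [[0.2, 0.8], [0.7, 0.3]]])
c = np.array([[1.0, 0.2],         # c[s, a] = one-step cost
              [2.0, 0.5]])

# Decision variables x[s, a] = long-run state-action frequencies.
n = S * A
cost = c.ravel()                  # objective: sum_{s,a} c(s,a) x(s,a)

# Balance equations: for each s2,
#   sum_a x(s2, a) - sum_{s,a} P(s, a, s2) x(s, a) = 0
A_eq = np.zeros((S + 1, n))
for s2 in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[s2, s * A + a] = (s == s2) - P[s, a, s2]
A_eq[S, :] = 1.0                  # normalisation: frequencies sum to 1
b_eq = np.zeros(S + 1)
b_eq[S] = 1.0

# Extra constraint: long-run frequency of action 1 ("risky") <= 0.3
A_ub = np.zeros((1, n))
A_ub[0, [s * A + 1 for s in range(S)]] = 1.0
b_ub = np.array([0.3])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n)
x = res.x.reshape(S, A)
# Recover a stationary randomized policy: pi(a|s) proportional to x[s, a]
pi = x / x.sum(axis=1, keepdims=True)
print(res.fun)
print(pi)
```

An optimal solution of this LP yields state-action frequencies from which a stationary (possibly randomized) policy is read off row by row; the paper's constraint on the asymptotic failure rate plays the role of the extra linear constraint in this scheme.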




Cite this article

Boussemart, M., Bickard, T. & Limnios, N. Markov Decision Processes with a Constraint on the Asymptotic Failure Rate. Methodology and Computing in Applied Probability 3, 199–214 (2001). https://doi.org/10.1023/A:1012209311286
