
Target-sensitive control of Markov and semi-Markov processes

  • Technical Notes and Correspondence
  • Published in: International Journal of Control, Automation and Systems (2011)

Abstract

We develop the theory of Markov and semi-Markov control, via dynamic programming and reinforcement learning, in which a form of semi-variance that measures the variability of rewards below a pre-specified target is penalized. The objective is to optimize a function of the rewards and the risk in which the risk is penalized. Penalizing the variance, a popular approach in the literature, has drawbacks that semi-variance avoids.
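To make the risk measure concrete: a standard definition of target semi-variance from the downside-risk literature (the paper's exact objective may differ in detail) penalizes only shortfalls below a target tau, i.e., SV_tau(R) = E[(max(tau - R, 0))^2], so upside variability carries no penalty, unlike ordinary variance. A minimal sketch in Python, assuming rewards collected from a simulated trajectory (the reward values and risk weight below are illustrative, not from the paper):

```python
import numpy as np

def target_semivariance(rewards, target):
    """Empirical semi-variance of rewards below a target.

    Only shortfalls below `target` contribute, so upside
    variability is not penalized, unlike ordinary variance.
    """
    shortfalls = np.maximum(target - np.asarray(rewards, dtype=float), 0.0)
    return float(np.mean(shortfalls ** 2))

# Hypothetical rewards from one simulated trajectory, target = 10.
rewards = [12.0, 8.0, 15.0, 9.5, 11.0]
sv = target_semivariance(rewards, target=10.0)

# A target-sensitive objective of the general kind discussed here
# trades expected reward against downside risk, e.g.
# score = E[R] - lam * SV_tau(R) for a risk-aversion weight lam.
lam = 0.5  # illustrative risk-aversion weight (an assumption)
score = float(np.mean(rewards)) - lam * sv
print(sv, score)
```

This sketch illustrates only the risk measure itself; the paper develops dynamic programming and reinforcement learning methods that optimize such a target-penalized objective over Markov and semi-Markov control policies.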



Author information

Correspondence to Abhijit Gosavi.

Additional information

Recommended by Editor Young Il Lee. The author acknowledges support from NSF grant ECCS-0841055, which partially funded this research.

Abhijit Gosavi received his B.E. in Mechanical Engineering from Jadavpur University in 1992, an M.Tech. in Mechanical Engineering from the Indian Institute of Technology, Madras, in 1995, and a Ph.D. in Industrial Engineering from the University of South Florida. His research interests include Markov decision processes, simulation, and applied operations research. He joined the Missouri University of Science and Technology in 2008 as an Assistant Professor.



Cite this article

Gosavi, A. Target-sensitive control of Markov and semi-Markov processes. Int. J. Control Autom. Syst. 9, 941–951 (2011). https://doi.org/10.1007/s12555-011-0515-6

