Bounded Aggregation for Continuous Time Markov Decision Processes

Part of the Lecture Notes in Computer Science book series (LNPSE, volume 10497)

Abstract

Markov decision processes suffer from two problems: the so-called state space explosion, which may lead to long computation times, and the memoryless property of states, which limits their modeling power with respect to real systems. In this paper we combine existing state aggregation and optimization methods into a new aggregation-based optimization method. More specifically, we compute reward bounds on an aggregated model, trading state space size for uncertainty. We propose an approach for continuous-time Markov decision models with discounted or average reward measures.
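
As background for the discounted-reward CTMDP setting, the sketch below shows the standard uniformization construction, which reduces a continuous-time MDP to an equivalent discrete-time MDP that can be solved by value iteration. The 2-state, 2-action generator and reward rates are hypothetical illustrative numbers, not taken from the paper:

```python
import numpy as np

# Hypothetical 2-state, 2-action CTMDP: Q[a] is the generator matrix under
# action a, r[a, s] is the reward rate in state s under action a.
Q = np.array([[[-3.0, 3.0], [2.0, -2.0]],
              [[-1.0, 1.0], [4.0, -4.0]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])
beta = 0.1  # continuous-time discount rate

# Uniformization: pick Lambda >= every exit rate, set P[a] = I + Q[a]/Lambda.
Lam = max(-Q[a, s, s] for a in range(2) for s in range(2))
P = np.eye(2) + Q / Lam
gamma = Lam / (Lam + beta)  # equivalent discrete-time discount factor

# Value iteration on the uniformized discrete-time MDP:
# v(s) = max_a [ r(a,s)/(Lambda+beta) + gamma * sum_s' P[a](s,s') v(s') ]
v = np.zeros(2)
for _ in range(10000):
    v_new = np.max(r / (Lam + beta) + gamma * (P @ v), axis=0)
    if np.max(np.abs(v_new - v)) < 1e-10:
        v = v_new
        break
    v = v_new
```

This is only a reminder of the classical reduction (cf. uniformization for CTMDPs); the paper's contribution concerns bounding such reward values on an aggregated model rather than solving the full model.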

The approach starts with a partitioned state space consisting of blocks that represent an abstract, high-level view of the state space. The sojourn time in each block can then be represented by a phase-type distribution (PHD). Using known properties of PHDs, we bound the sojourn time in each block, as well as the reward accumulated during each sojourn, by constraining the set of possible initial vectors; this yields tighter bounds for the sojourn times and, ultimately, for the average or discounted reward measures. Furthermore, given a fixed policy for the CTMDP, the initial vector can be constrained further, which improves the reward bounds. The aggregation approach is illustrated on randomly generated models.
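
The PHD-based bounding step can be sketched as follows (the 3-phase subgenerator and the restricted set of entry phases are hypothetical, chosen only to show the mechanics). For a PHD with subgenerator T, the vector m = (-T)^{-1}·1 holds the mean time to absorption from each phase, so the mean sojourn time under initial vector α is α·m; when the admissible initial vectors form a sub-simplex, the bounds are attained at its extreme points:

```python
import numpy as np

# Hypothetical 3-phase PH subgenerator T for one block (row sums <= 0,
# the deficit is the rate of leaving the block, i.e. absorption).
T = np.array([[-5.0, 3.0, 1.0],
              [0.0, -4.0, 2.0],
              [1.0, 0.0, -6.0]])

# Mean time to absorption starting from each phase: solve (-T) m = 1.
m = np.linalg.solve(-T, np.ones(3))

# Over all initial vectors alpha in the full simplex, E[sojourn] = alpha @ m,
# so the unit vectors (extreme points) give the loosest bounds.
lower, upper = m.min(), m.max()

# Constraining alpha to a subset of entry phases (here phases 0 and 1, a
# hypothetical restriction as derived from the aggregation) tightens them.
entry = [0, 1]
lower_c, upper_c = m[entry].min(), m[entry].max()
```

The same idea extends to accumulated rewards by replacing the all-ones vector with block reward rates; the paper additionally exploits a fixed policy to shrink the admissible set of initial vectors further.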

Keywords

  • Markov Decision Process
  • Aggregation
  • Discounted reward
  • Average reward
  • Bounds

Figures 1, 2 and 3 appear in the full chapter.


Author information

Corresponding author: Dimitri Scheftelowitsch.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Buchholz, P., Dohndorf, I., Frank, A., Scheftelowitsch, D. (2017). Bounded Aggregation for Continuous Time Markov Decision Processes. In: Reinecke, P., Di Marco, A. (eds.) Computer Performance Engineering. EPEW 2017. Lecture Notes in Computer Science, vol. 10497. Springer, Cham. https://doi.org/10.1007/978-3-319-66583-2_2


  • DOI: https://doi.org/10.1007/978-3-319-66583-2_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-66582-5

  • Online ISBN: 978-3-319-66583-2

  • eBook Packages: Computer Science, Computer Science (R0)