Bounded Aggregation for Continuous Time Markov Decision Processes
Markov decision processes suffer from two problems: the so-called state space explosion, which may lead to long computation times, and the memoryless property of states, which limits the modeling power with respect to real systems. In this paper we combine existing state aggregation and optimization methods into a new aggregation-based optimization method. More specifically, we compute reward bounds on an aggregated model, trading state space size for uncertainty. We propose an approach for continuous-time Markov decision models with discounted or average reward measures.
The approach starts from a partitioned state space whose blocks represent an abstract, high-level view of the state space. The sojourn time in each block can then be represented by a phase-type distribution (PHD). Using known properties of PHDs, we bound the sojourn time in each block and the reward accumulated during each sojourn; by constraining the set of possible initial vectors we derive tighter bounds for the sojourn times and, ultimately, for the average or discounted reward measures. Furthermore, given a fixed policy for the CTMDP, the initial vector can be constrained further, which improves the reward bounds. The aggregation approach is illustrated on randomly generated models.
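To illustrate the kind of bound involved, the sketch below computes the expected sojourn time of a PHD with a hypothetical sub-generator S (not taken from the paper). For a PHD with initial vector α and sub-generator S, the standard moment formula gives E[T] = α(−S)⁻¹𝟙; since this is linear in α, letting α range over all distributions on the phases yields elementary lower and upper bounds, and restricting the set of admissible initial vectors can only tighten them.

```python
import numpy as np

# Hypothetical 3-phase PHD sub-generator S (rows sum to <= 0; the
# deficit in each row is the absorption rate, i.e. the rate of
# leaving the block from that phase).
S = np.array([
    [-3.0,  1.0,  0.5],
    [ 0.0, -2.0,  1.0],
    [ 0.5,  0.0, -1.5],
])

# m[i] = expected time to absorption when starting in phase i,
# obtained from the PHD moment formula m = (-S)^{-1} * 1.
m = np.linalg.solve(-S, np.ones(3))

# For any initial distribution alpha, E[T] = alpha @ m is a convex
# combination of the entries of m, so the extreme values over all
# initial vectors are simply:
lower, upper = m.min(), m.max()
print(lower, upper)
```

Constraining the initial vector to a subset of distributions (as the paper does, e.g. using a fixed policy) replaces the min/max over unit vectors by a min/max over that smaller set, which can only narrow the interval [lower, upper].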
Keywords: Markov decision process · Aggregation · Discounted reward · Average reward · Bounds