Model Minimization in Hierarchical Reinforcement Learning
When applied to real-world problems, Markov Decision Processes (MDPs) often exhibit considerable implicit redundancy, especially when the problem has symmetries. In this article we present an MDP minimization framework based on homomorphisms, which exploits redundancy and symmetry to derive smaller but equivalent models of the problem. We then apply these minimization ideas to the options framework to derive relativized options: options defined without an absolute frame of reference. We demonstrate empirically that relativized options are useful even when the minimization criteria are not met exactly.