We present new theoretical results on planning within the framework of temporally abstract reinforcement learning (Precup & Sutton, 1997; Sutton, 1995). Temporal abstraction is a key step in any decision making system that involves planning and prediction. In temporally abstract reinforcement learning, the agent is allowed to choose among “options”, whole courses of action that may be temporally extended, stochastic, and contingent on previous events. Examples of options include closed-loop policies such as picking up an object, as well as primitive actions such as joint torques. Knowledge about the consequences of options is represented by special structures called multi-time models. In this paper we focus on the theory of planning with multi-time models. We define new Bellman equations that are satisfied for sets of multi-time models. As a consequence, multi-time models can be used interchangeably with models of primitive actions in a variety of well-known planning methods including value iteration, policy improvement and policy iteration.
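The Bellman equation over option models described in the abstract can be sketched in code: value iteration where each backup uses an option's multi-time model instead of a one-step action model. This is a minimal, hypothetical illustration only; the state space, option names, and model numbers below are invented (not from the paper), and discounting is folded into the transition part of each model so that rows sum to less than one.

```python
import numpy as np

n = 3        # tiny tabular state space (illustrative)
gamma = 0.9  # discount factor

# Multi-time model of each option: a reward part r_o (expected
# discounted reward accumulated while the option runs) and a
# transition part P_o (discounted terminal-state probabilities,
# with gamma**duration folded in). Numbers are invented for the demo.
options = {
    "stay":  (np.array([0.0, 0.0, 1.0]),
              gamma * np.eye(n)),                       # 1-step option
    "shift": (np.array([0.2, 0.5, 0.0]),
              gamma**2 * np.roll(np.eye(n), 1, axis=1)),  # 2-step option
}

def value_iteration(options, n, tol=1e-10):
    """Bellman backup over option models: V(s) = max_o [r_o(s) + (P_o V)(s)]."""
    V = np.zeros(n)
    while True:
        V_new = np.max([r + P @ V for r, P in options.values()], axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V = value_iteration(options, n)
# Greedy (deterministic) policy over options, read off from the models.
greedy = {s: max(options, key=lambda o: options[o][0][s] + options[o][1][s] @ V)
          for s in range(n)}
print(V, greedy)
```

Because each discounted transition matrix has row sums bounded by a constant below one, the backup is a contraction and the iteration converges; this is what lets option models be used interchangeably with primitive-action models in value iteration.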
Dimitri P. Bertsekas. Dynamic Programming: Deterministic and Stochastic Models. Prentice Hall, Englewood Cliffs, NJ, 1987.
Peter Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5:613–624, 1993.
Peter Dayan and Geoffrey E. Hinton. Feudal reinforcement learning. In Advances in Neural Information Processing Systems, volume 5, pages 271–278, Cambridge, MA, 1993. MIT Press.
Thomas G. Dietterich. Hierarchical reinforcement learning with MAXQ value function decomposition. Technical report, Computer Science Department, Oregon State University, 1997.
Manfred Huber and Roderic A. Grupen. Learning to coordinate controllers — reinforcement learning on a control basis. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, IJCAI-97, San Francisco, CA, 1997. Morgan Kaufmann.
Leslie P. Kaelbling. Hierarchical learning in stochastic domains: Preliminary results. In Proceedings of the Tenth International Conference on Machine Learning ICML'93, pages 167–173, San Mateo, CA, 1993. Morgan Kaufmann.
Richard E. Korf. Learning to Solve Problems by Searching for Macro-Operators. Pitman Publishing Ltd, London, 1985.
John E. Laird, Paul S. Rosenbloom, and Allen Newell. Chunking in SOAR: The anatomy of a general learning mechanism. Machine Learning, 1:11–46, 1986.
Sridhar Mahadevan and Jonathan Connell. Automatic programming of behavior-based robots using reinforcement learning. Artificial Intelligence, 55(2–3):311–365, 1992.
Amy McGovern, Richard S. Sutton, and Andrew H. Fagg. Roles of macro-actions in accelerating reinforcement learning. In Grace Hopper Celebration of Women in Computing, pages 13–18, 1997.
Andrew W. Moore and Chris G. Atkeson. Prioritized sweeping: Reinforcement learning with less data and less real time. Machine Learning, 13:103–130, 1993.
Ronald Parr and Stuart Russell. Reinforcement learning with hierarchies of machines. In Advances in Neural Information Processing Systems, volume 10, Cambridge, MA, 1998. MIT Press.
Jing Peng and Ronald J. Williams. Efficient learning and planning within the Dyna framework. Adaptive Behavior, 4:323–334, 1993.
Doina Precup and Richard S. Sutton. Multi-time models for temporally abstract planning. In Advances in Neural Information Processing Systems, volume 10, Cambridge, MA, 1998. MIT Press.
Martin L. Puterman. Markov Decision Processes. Wiley-Interscience, New York, NY, 1994.
Earl D. Sacerdoti. A Structure for Plans and Behavior. Elsevier North-Holland, New York, NY, 1977.
Satinder P. Singh. Scaling reinforcement learning by learning variable temporal resolution models. In Proceedings of the Ninth International Conference on Machine Learning ICML'92, pages 202–207, San Mateo, CA, 1992. Morgan Kaufmann.
Richard S. Sutton. Integrating architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning ICML'90, pages 216–224, San Mateo, CA, 1990. Morgan Kaufmann.
Richard S. Sutton. TD models: Modeling the world as a mixture of time scales. In Proceedings of the Twelfth International Conference on Machine Learning ICML'95, pages 531–539, San Mateo, CA, 1995. Morgan Kaufmann.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
Richard S. Sutton and Brian Pinette. The learning of world models by connectionist networks. In Proceedings of the Seventh Annual Conference of the Cognitive Science Society, pages 54–64, 1985.
Christopher J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, Cambridge University, 1989.
© 1998 Springer-Verlag Berlin Heidelberg
Precup, D., Sutton, R.S., Singh, S. (1998). Theoretical results on reinforcement learning with temporally abstract options. In: Nédellec, C., Rouveirol, C. (eds) Machine Learning: ECML-98. ECML 1998. Lecture Notes in Computer Science, vol 1398. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0026709
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64417-0
Online ISBN: 978-3-540-69781-7