Discrete Event Dynamic Systems

Volume 13, Issue 4, pp 341–379

Recent Advances in Hierarchical Reinforcement Learning

  • Andrew G. Barto
  • Sridhar Mahadevan

DOI: 10.1023/A:1025696116075

Cite this article as:
Barto, A.G. & Mahadevan, S. Discrete Event Dynamic Systems (2003) 13: 341. doi:10.1023/A:1025696116075

Abstract

Reinforcement learning is bedeviled by the curse of dimensionality: the number of parameters to be learned grows exponentially with the size of any compact encoding of a state. Recent attempts to combat the curse of dimensionality have turned to principled ways of exploiting temporal abstraction, where decisions are not required at each step, but rather invoke the execution of temporally-extended activities which follow their own policies until termination. This leads naturally to hierarchical control architectures and associated learning algorithms. We review several approaches to temporal abstraction and hierarchical organization that machine learning researchers have recently developed. Common to these approaches is a reliance on the theory of semi-Markov decision processes, which we emphasize in our review. We then discuss extensions of these ideas to concurrent activities, multiagent coordination, and hierarchical memory for addressing partial observability. Concluding remarks address open challenges facing the further development of reinforcement learning in a hierarchical setting.
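The semi-Markov decision process machinery the abstract refers to can be illustrated with SMDP Q-learning over temporally-extended activities ("options"): executing activity o from state s for tau primitive steps, accruing discounted reward r, yields the update Q(s,o) += alpha * (r + gamma**tau * max_o' Q(s',o') - Q(s,o)). The sketch below is a minimal illustration of that update, not code from the paper; the chain environment, the two "sweep-until-wall-or-goal" options, and all parameter values are assumptions chosen for brevity.

```python
import random

GAMMA, ALPHA = 0.9, 0.1
N_STATES = 5                      # toy chain: states 0..4, reward on reaching 4

def step(state, action):
    """One primitive step in the chain MDP; action is -1 or +1."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def run_option(state, direction):
    """Execute a 'move until wall or goal' option.

    Returns the terminating state s', the discounted reward r accumulated
    during execution, and the duration tau in primitive steps.
    """
    total, discount, tau = 0.0, 1.0, 0
    while True:
        nxt, r = step(state, direction)
        total += discount * r
        discount *= GAMMA
        tau += 1
        if nxt == state or nxt == N_STATES - 1:   # hit a wall or the goal
            return nxt, total, tau
        state = nxt

OPTIONS = [-1, +1]                # two options: sweep left, sweep right
Q = {(s, o): 0.0 for s in range(N_STATES) for o in OPTIONS}

random.seed(0)
for episode in range(200):
    s = random.randrange(N_STATES - 1)
    while s != N_STATES - 1:
        # epsilon-greedy choice among options, then the SMDP Q-learning update
        o = random.choice(OPTIONS) if random.random() < 0.2 else \
            max(OPTIONS, key=lambda a: Q[(s, a)])
        s2, r, tau = run_option(s, o)
        target = r + GAMMA**tau * max(Q[(s2, a)] for a in OPTIONS)
        Q[(s, o)] += ALPHA * (target - Q[(s, o)])
        s = s2
```

Note that, exactly as the abstract describes, decisions are made only when an option terminates, and the discount gamma**tau accounts for the variable duration tau of each activity.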

Keywords: reinforcement learning · Markov decision processes · semi-Markov decision processes · hierarchy · temporal abstraction

Copyright information

© Kluwer Academic Publishers 2003

Authors and Affiliations

  • Andrew G. Barto (1)
  • Sridhar Mahadevan (1)

  1. Autonomous Learning Laboratory, Department of Computer Science, University of Massachusetts, Amherst