Unified Inter and Intra Options Learning Using Policy Gradient Methods

  • Kfir Y. Levy
  • Nahum Shimkin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7188)

Abstract

Temporally extended actions (or macro-actions) have proven useful for speeding up planning and learning, adding robustness, and building prior knowledge into AI systems. The options framework, as introduced in Sutton, Precup and Singh (1999), provides a natural way to incorporate macro-actions into reinforcement learning. In the subgoals approach, learning is divided into two phases: first learning each option with a prescribed subgoal, and then learning to compose the learned options together. In this paper we offer a unified framework for concurrent inter- and intra-options learning. To that end, we propose a modular parameterization of intra-option policies together with option termination conditions and the option-selection policy (inter options), and show that these three decision components may be viewed as a unified policy over an augmented state-action space, to which standard policy gradient algorithms may be applied. We identify the basis functions that apply to each of these decision components, and show that they possess a useful orthogonality property that allows the natural gradient to be computed independently for each component. We further outline the extension of the suggested framework to several levels of options hierarchy, and conclude with a brief illustrative example.
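
The abstract describes a modular parameterization in which the intra-option policies, the termination conditions, and the option-selection policy each own a separate parameter block, and their composition is treated as one policy over an augmented state (the environment state paired with the currently active option). As a rough illustration only, the sketch below shows one plausible way such a unified policy could be parameterized, assuming a call-and-return option model with softmax option/action choices, sigmoid terminations, and linear features; the class name, the feature vector phi, and all parameter shapes are illustrative assumptions, not the authors' actual construction.

    # A minimal sketch (not the paper's implementation) of a unified policy over
    # the augmented state (s, o): the current option o either continues or
    # terminates, a new option may then be selected, and a primitive action is drawn.
    # Softmax/sigmoid parameterizations and linear features are assumptions.
    import numpy as np

    def softmax(x):
        z = x - x.max()
        e = np.exp(z)
        return e / e.sum()

    class UnifiedOptionPolicy:
        def __init__(self, n_options, n_actions, n_features, seed=0):
            self.rng = np.random.default_rng(seed)
            # Modular parameter blocks, one per decision component.
            self.theta_intra = np.zeros((n_options, n_actions, n_features))  # intra-option policies pi_o(a|s)
            self.theta_term = np.zeros((n_options, n_features))              # termination conditions beta_o(s)
            self.theta_inter = np.zeros((n_options, n_features))             # option-selection policy mu(o|s)

        def act(self, phi, o):
            """Sample (o', a) from the unified policy given features phi and current option o."""
            beta = 1.0 / (1.0 + np.exp(-(self.theta_term[o] @ phi)))  # termination probability
            if self.rng.random() < beta:                              # option terminates:
                probs_o = softmax(self.theta_inter @ phi)             # select a new option
                o = self.rng.choice(len(probs_o), p=probs_o)
            probs_a = softmax(self.theta_intra[o] @ phi)              # intra-option action choice
            a = self.rng.choice(len(probs_a), p=probs_a)
            return o, a

    # Example usage (hypothetical sizes):
    # policy = UnifiedOptionPolicy(n_options=3, n_actions=4, n_features=8)
    # o, a = policy.act(np.ones(8), o=0)

Because each component owns a disjoint block of parameters, the log-likelihood gradient of such a modular parameterization decomposes over the three blocks, which is consistent with the abstract's statement that the natural gradient can be computed independently for each component.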

Keywords

Reinforcement Learning, Multiagent System, Markov Decision Process, Inverted Pendulum, Orthogonality Property

References

  1. Comanici, G., Precup, D.: Optimal policy switching algorithms for reinforcement learning. In: Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, pp. 709–714 (2010)
  2. Ghavamzadeh, M., Mahadevan, S.: Hierarchical policy gradient algorithms. In: ICML, pp. 226–233 (2003)
  3. Neumann, G., Maass, W., Peters, J.: Learning complex motions by sequencing simpler motion templates. In: ICML (2009)
  4. Sutton, R.S., Precup, D., Singh, S.: Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112, 181–211 (1999)
  5. Simsek, O., Barto, A.: Using relative novelty to identify useful temporal abstractions in reinforcement learning. In: ICML, vol. 21, p. 751 (2004)
  6. Menache, I., Mannor, S., Shimkin, N.: Q-Cut - Dynamic Discovery of Sub-goals in Reinforcement Learning. In: Elomaa, T., Mannila, H., Toivonen, H. (eds.) ECML 2002. LNCS (LNAI), vol. 2430, pp. 295–306. Springer, Heidelberg (2002)
  7. Sutton, R.S., McAllester, D., Singh, S., Mansour, Y.: Policy gradient methods for reinforcement learning with function approximation. In: Advances in Neural Information Processing Systems, vol. 12 (2000)
  8. Peters, J., Schaal, S.: Natural actor-critic. Neurocomputing 71(7-9), 1180–1190 (2008)
  9. Bhatnagar, S., Sutton, R.S., Ghavamzadeh, M., Lee, M.: Natural actor-critic algorithms. Automatica 45, 2471–2482 (2009)
  10. Richter, S., Aberdeen, D., Yu, J.: Natural actor-critic for road traffic optimisation. In: Advances in Neural Information Processing Systems, vol. 19, p. 1169 (2007)
  11. Buffet, O., Dutech, A., Charpillet, F.: Shaping multi-agent systems with gradient reinforcement learning. In: Autonomous Agents and Multi-Agent Systems (2007)
  12. Kakade, S.: A natural policy gradient. In: Advances in Neural Information Processing Systems, vol. 14, pp. 1531–1538 (2002)
  13. Bagnell, J., Schneider, J.: Covariant policy search. In: International Joint Conference on Artificial Intelligence, vol. 18, pp. 1019–1024 (2003)
  14. Boyan, J.A.: Technical update: Least-squares temporal difference learning. Machine Learning 49, 233–246 (2002)
  15. Nedić, A., Bertsekas, D.: Least squares policy evaluation algorithms with linear function approximation. Discrete Event Dynamic Systems 13 (2003)
  16. Yoshimoto, J., Nishimura, M., Tokita, Y., Ishii, S.: Acrobot control by learning the switching of multiple controllers. Artificial Life and Robotics 9 (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Kfir Y. Levy (1)
  • Nahum Shimkin (1)
  1. Faculty of Electrical Engineering, Technion, Haifa, Israel
