Hierarchical Reinforcement Learning with Options and United Neural Network Approximation

  • Vadim Kuzmin
  • Aleksandr I. Panov
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 874)

Abstract

The “curse of dimensionality” and environments with sparse, delayed rewards are among the main challenges in reinforcement learning (RL). To tackle these problems we can use hierarchical reinforcement learning (HRL), which provides abstraction over both the actions and the states of the environment. This work proposes an algorithm that combines the hierarchical approach to RL with the ability of neural networks to serve as universal function approximators. To build the hierarchy of actions, the options framework is used, whose main idea is to utilize macro-actions (sequences of simpler actions). The state of the environment is the input to a convolutional neural network that plays the role of a Q-function, estimating the utility of every possible action and skill in the given state. We learn each option separately using a different neural network and then combine the results into one architecture with a top-level approximator. We compare the performance of the proposed algorithm with the deep Q-network (DQN) algorithm in an environment where the aim of a magnet-arm robot is to build a tower from bricks.
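As a rough illustration only (this sketch is not taken from the paper), a top-level convolutional Q-network whose outputs cover both primitive actions and separately trained options might be arranged as below in PyTorch. All module names, layer sizes, input shapes, and the option-termination handling are assumptions made for the example.

import torch
import torch.nn as nn

class ConvQNet(nn.Module):
    """DQN-style convolutional Q-network over an image observation."""
    def __init__(self, in_channels, n_outputs):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(n_outputs)  # infers the flattened size on first call

    def forward(self, x):
        return self.head(self.features(x))

n_primitive, n_options = 4, 2
# Each option (skill) has its own Q-network, trained separately beforehand.
option_nets = [ConvQNet(in_channels=4, n_outputs=n_primitive) for _ in range(n_options)]
# The top-level approximator scores primitive actions and options jointly.
top_net = ConvQNet(in_channels=4, n_outputs=n_primitive + n_options)

state = torch.zeros(1, 4, 84, 84)            # dummy observation batch
choice = top_net(state).argmax(dim=1).item()
if choice < n_primitive:
    action = choice                           # execute a primitive action directly
else:
    # An option was selected: delegate primitive-action selection to that
    # option's own network (in practice, until its termination condition fires).
    action = option_nets[choice - n_primitive](state).argmax(dim=1).item()

The design choice sketched here (independent option networks combined under one top-level Q-function) mirrors the abstract's description of learning each option separately and then composing them with a top-level approximator.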

Keywords

Hierarchical reinforcement learning · Options · Neural network · DQN · Deep neural network · Q-learning

Notes

Acknowledgements

This work was supported by the Russian Science Foundation (Project No. 18-71-00143).

References

  1. Bacon, P.-L., Harb, J., Precup, D.: The option-critic architecture. arXiv:1609.05140v2 (2016)
  2. Bai, A., Russell, S.: Efficient reinforcement learning with hierarchies of machines by leveraging internal transitions. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Main track, pp. 1418–1424 (2017)
  3. Botvinick, M.M.: Hierarchical reinforcement learning and decision making. Curr. Opin. Neurobiol. 22, 956–962 (2012)
  4. Dietterich, T.G.: Hierarchical reinforcement learning with the MAXQ value function decomposition. arXiv:cs/9905014 (1999)
  5. Kulkarni, T.D., Narasimhan, K.R., Saeedi, A., Tenenbaum, J.B.: Hierarchical deep reinforcement learning: integrating temporal abstraction and intrinsic motivation. arXiv:1604.06057 (2016)
  6. Mnih, V., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015)
  7. Parr, R., Russell, S.: Reinforcement learning with hierarchies of machines. In: Advances in Neural Information Processing Systems: Proceedings of the 1997 Conference. MIT Press, Cambridge (1998)
  8. Sutton, R.S., Precup, D., Singh, S.: Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning. Artif. Intell. 112(1–2), 181–211 (1999)
  9. Vezhnevets, A.S., et al.: Strategic attentive writer for learning macro-actions. In: Proceedings of NIPS (2016)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. National Research University Higher School of Economics, Moscow, Russia
  2. Moscow Institute of Physics and Technology, Moscow, Russia
  3. Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Moscow, Russia
