Exploration Bonuses Based on Upper Confidence Bounds for Sparse Reward Games

  • Naoki Mizukami
  • Jun Suzuki
  • Hirotaka Kameko
  • Yoshimasa Tsuruoka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10664)

Abstract

Recent deep reinforcement learning (RL) algorithms have achieved super-human-level performance in many Atari games. However, a closer look at their performance reveals that the algorithms fall short of humans in games where rewards are only obtained occasionally. One solution to this sparse reward problem is to incorporate an explicit and more sophisticated exploration strategy in the agent’s learning process. In this paper, we present an effective exploration strategy that explicitly considers the progress of training using exploration bonuses based on Upper Confidence Bounds (UCB). Our method also includes a mechanism to separate exploration bonuses from rewards, thereby avoiding the problem of interfering with the original learning objective. We evaluate our method on Atari 2600 games with sparse rewards, and achieve significant improvements over the vanilla asynchronous advantage actor-critic (A3C) algorithm.
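
The abstract describes the method only at a high level. As a rough illustration of the general idea of a count-based, UCB-style exploration bonus of the form beta * sqrt(log t / N(s)), the Python sketch below tracks visit counts over discretised states and returns a bonus that decays as a state is revisited. The class name, the rounding-based discretisation, and the coefficient beta are illustrative assumptions and are not taken from the paper; the paper additionally keeps such bonuses separate from the environment reward rather than folding them into it.

```python
# A minimal sketch of a count-based, UCB-style exploration bonus.
# This is not the authors' exact formulation: the class name, the
# rounding-based state discretisation, and the coefficient `beta`
# are illustrative assumptions only.

import math
from collections import defaultdict


class UCBExplorationBonus:
    def __init__(self, beta=0.1):
        self.beta = beta                 # scale of the bonus (assumed hyperparameter)
        self.counts = defaultdict(int)   # visit counts per discretised state
        self.total_steps = 0             # total number of observed states

    def _key(self, observation):
        # Discretise the observation so that visit counts are meaningful.
        # A real Atari agent would hash frames or learned features
        # (e.g. with locality-sensitive hashing) instead of rounding.
        return hash(tuple(round(x, 1) for x in observation))

    def bonus(self, observation):
        """Return a UCB-style bonus that shrinks as a state is revisited."""
        self.total_steps += 1
        key = self._key(observation)
        self.counts[key] += 1
        n = self.counts[key]
        # +1 inside the log avoids a zero bonus at the very first step.
        return self.beta * math.sqrt(math.log(self.total_steps + 1.0) / n)


# Usage: the bonus is kept separate from the environment reward
# (for example, by training a dedicated value estimate on it) rather
# than being added directly to the game score.
explorer = UCBExplorationBonus(beta=0.1)
intrinsic_reward = explorer.bonus([0.32, -1.05, 0.7])
```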

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Naoki Mizukami (1)
  • Jun Suzuki (2)
  • Hirotaka Kameko (1)
  • Yoshimasa Tsuruoka (1)

  1. Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
  2. NTT Communication Science Laboratories, NTT Corporation, Kyoto, Japan