Averaged-A3C for Asynchronous Deep Reinforcement Learning

  • Song Chen
  • Xiao-Fang Zhang
  • Jin-Jin Wu
  • Di Liu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11303)

Abstract

In recent years, Deep Reinforcement Learning (DRL) has achieved unprecedented success in tasks with high-dimensional and large-scale state spaces. However, the instability and high variance of DRL algorithms still strongly affect their performance. To alleviate this problem, the Asynchronous Advantage Actor-Critic (A3C) algorithm uses the advantage function to update the policy and value networks, but the advantage function itself still exhibits considerable variance. Aiming to reduce this variance, we propose a new A3C variant called Averaged Asynchronous Advantage Actor-Critic (Averaged-A3C). Averaged-A3C extends A3C by averaging previously learned state-value estimates when computing the advantage function, which leads to a more stable training procedure and improved performance. We evaluate the new algorithm on several games in the Atari 2600 and MuJoCo environments. Experimental results show that, compared to the original A3C algorithm, Averaged-A3C effectively improves both the agent's performance and the stability of the training process.
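
The core idea, averaging previously learned state-value estimates before forming the advantage, can be illustrated with a short sketch. The Python snippet below is a minimal illustration of one plausible reading of that idea, not the paper's reference implementation; the helper name value_snapshots, the discount gamma, and the choice to use the averaged value both as the bootstrap target and as the baseline are assumptions made for this example.

    import numpy as np

    def averaged_advantage(rewards, states, value_snapshots, gamma=0.99):
        """Sketch of an n-step advantage computed against the average of the
        K most recent state-value estimates (hypothetical helper, not the
        authors' code).

        rewards         : rewards r_t, ..., r_{t+n-1} from one rollout segment
        states          : states  s_t, ..., s_{t+n} (one element longer)
        value_snapshots : K callables, each mapping a state to a value V(s)
        """
        # Average the previously learned value estimates for a state.
        def v_avg(s):
            return float(np.mean([v(s) for v in value_snapshots]))

        n = len(rewards)
        advantages = np.zeros(n)
        # Bootstrap the n-step return from the averaged value of the last state.
        ret = v_avg(states[-1])
        for t in reversed(range(n)):
            ret = rewards[t] + gamma * ret
            # Advantage = n-step return minus the averaged baseline V(s_t).
            advantages[t] = ret - v_avg(states[t])
        return advantages

The intuition is that averaging several recent estimates smooths out the noise of any single value network, which is the variance-reduction effect the abstract describes.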

Keywords

Deep reinforcement learning · Asynchronous Advantage Actor-Critic · Advantage function · Average

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Song Chen¹
  • Xiao-Fang Zhang¹
  • Jin-Jin Wu¹
  • Di Liu¹

  1. School of Computer Science and Technology, Soochow University, Suzhou, China
