Deep Q-Network Using Reward Distribution

  • Yuta Nakaya
  • Yuko Osana
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10841)

Abstract

In this paper, we propose a Deep Q-Network using reward distribution. The Deep Q-Network is based on the convolutional neural network, a representative method of Deep Learning, and Q Learning, a representative method of reinforcement learning. In the Deep Q-Network, when the game screen (observation) is given as input to the convolutional neural network, the Q Learning action value for each action is output. This method can learn to achieve scores equal to or higher than those of humans in multiple games. Q Learning updates with the maximum action value in the next state, so positive rewards are propagated through learning. However, because negative rewards are rarely that maximum value, they are not propagated. Therefore, by distributing negative rewards in the same way as Profit Sharing, the proposed method learns not to take wrong actions. Computer experiments were carried out, and they confirmed that the proposed method learns with almost the same speed and accuracy as the conventional Deep Q-Network. Moreover, they confirmed that introducing reward distribution allows the proposed method to learn so as not to acquire negative rewards.
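
The mechanism described in the abstract can be summarized in a minimal sketch. The snippet below is illustrative only: it uses a tabular Q function rather than the convolutional network of the actual method, and the learning rate, discount factor, credit-decay rate, and all function names are assumptions rather than details taken from the paper. It contrasts the standard Q Learning update, which bootstraps only from the maximum next action value, with a Profit Sharing style step that spreads a negative reward back over the state-action pairs that preceded it.

# Hypothetical sketch of the idea described in the abstract: a standard
# Q Learning update plus a Profit Sharing style distribution of negative
# rewards over the recent state-action history. The tabular setting,
# hyperparameters, and function names are assumptions for illustration;
# the paper applies the idea inside a Deep Q-Network, not a table.

from collections import defaultdict

ALPHA = 0.1    # learning rate (assumed)
GAMMA = 0.99   # discount factor (assumed)
DECAY = 0.5    # credit decay per step back in time (assumed)

Q = defaultdict(float)   # Q[(state, action)] -> action value
history = []             # (state, action) pairs of the current episode


def q_learning_update(s, a, r, s_next, actions):
    """Standard Q Learning: bootstrap from the best next action,
    so positive rewards propagate backwards through the max."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])


def distribute_negative_reward(r):
    """Profit Sharing style step: spread a negative reward over the
    state-action pairs that led to it, with geometrically decaying
    credit, so that bad outcomes also shape earlier decisions."""
    credit = r
    for s, a in reversed(history):
        Q[(s, a)] += ALPHA * credit
        credit *= DECAY


def step(s, a, r, s_next, actions):
    history.append((s, a))
    q_learning_update(s, a, r, s_next, actions)
    if r < 0:
        distribute_negative_reward(r)

Under these assumptions, a large penalty at the end of an episode lowers the values of the earlier actions that led to it, which is the behaviour the proposed reward distribution is intended to achieve.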

Keywords

Deep Q-Network · Reward distribution · Profit Sharing

References

  1. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
  2. Hinton, G.E., Osindero, S., Teh, Y.: A fast learning algorithm for deep belief nets. Neural Comput. 18, 1527–1544 (2006)
  3. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. The MIT Press, Cambridge (1998)
  4. Mnih, V., et al.: Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015)
  5. Watkins, C.J.C.H., Dayan, P.: Technical note: Q-learning. Mach. Learn. 8, 55–68 (1992)
  6. Grefenstette, J.J.: Credit assignment in rule discovery systems based on genetic algorithms. Mach. Learn. 3, 225–245 (1988)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Tokyo University of Technology, Hachioji, Japan
