Abstract
This paper describes a novel reinforcement learning system for learning to play the game of Tron. The system combines Q-learning, multi-layer perceptrons, vision grids, opponent modelling, and Monte Carlo rollouts. By learning an opponent model, Monte Carlo rollouts can be applied effectively to generate state trajectories for all possible actions, from which improved action-value estimates can be computed. This extends experience replay by making it possible to update the state-action values of all actions in a given game state simultaneously. The results show that experience replay that updates the Q-values of all actions simultaneously strongly outperforms conventional experience replay, which updates only the Q-value of the performed action. The results also show that using short or long rollout horizons during training leads to similarly good performance against two fixed opponents.
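To make the mechanism concrete, the following is a minimal, self-contained Python sketch of the core idea, not the authors' implementation: the paper trains a multi-layer perceptron on vision-grid inputs, whereas this toy uses a Q-table, a placeholder environment, and a random stand-in for the learned opponent model (ToyEnv, opponent_model, and all other names here are illustrative assumptions). It shows how rollouts guided by an opponent model produce a return estimate for every action in a state, so one replay step can update all of that state's Q-values at once.

```python
import random
from collections import defaultdict

# Illustrative sketch only, not the authors' code: Monte Carlo rollouts with
# an opponent model yield a return estimate for EVERY action in a state, so a
# single replay step can update all of that state's Q-values, not just the
# Q-value of the action that was actually performed.

GAMMA, ALPHA, HORIZON = 0.95, 0.1, 5           # discount, learning rate, rollout depth
ACTIONS = ["left", "right", "up", "down"]      # Tron-style moves

class ToyEnv:
    """Trivial deterministic stand-in for the Tron simulator."""
    def step(self, state, action, opp_action):
        nxt = (state * 31 + ACTIONS.index(action) * 7
               + ACTIONS.index(opp_action)) % 1000
        reward = 1.0 if nxt % 7 == 0 else 0.0  # placeholder reward signal
        return nxt, reward, nxt % 13 == 0      # next state, reward, terminal?

def opponent_model(state):
    """Placeholder for the learned model of the opponent's policy."""
    return random.choice(ACTIONS)

q = defaultdict(float)                         # tabular stand-in for the MLP

def greedy(state):
    return max(ACTIONS, key=lambda a: q[(state, a)])

def rollout_return(env, state, first_action):
    """Discounted return of one rollout forced to start with `first_action`;
    opponent moves are sampled from the opponent model, and the value at the
    horizon is bootstrapped from the current Q estimates."""
    total, discount, action = 0.0, 1.0, first_action
    for _ in range(HORIZON):
        state, reward, done = env.step(state, action, opponent_model(state))
        total += discount * reward
        if done:
            return total
        discount *= GAMMA
        action = greedy(state)                 # continue greedily after step one
    return total + discount * max(q[(state, a)] for a in ACTIONS)

def replay_update_all_actions(env, state):
    """The key idea: one replayed state updates the Q-values of ALL actions."""
    for action in ACTIONS:
        target = rollout_return(env, state, action)
        q[(state, action)] += ALPHA * (target - q[(state, action)])

env = ToyEnv()
for stored_state in range(20):                 # replay over stored game states
    replay_update_all_actions(env, stored_state)
print({a: round(q[(0, a)], 3) for a in ACTIONS})
```

With a real Tron simulator, vision-grid features, and an MLP in place of the Q-table, the same loop corresponds to the replay scheme the abstract describes.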