
The Application of AlphaZero to Wargaming

  • Glenn Moy
  • Slava Shekh
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11919)

Abstract

In this paper, we explore the process of automatically learning to play wargames using AlphaZero deep reinforcement learning. We consider a simple wargame, Coral Sea, which is a turn-based game played on a hexagonal grid between two players. We explore the differences between Coral Sea and traditional board games, where the successful use of AlphaZero has been demonstrated. Key differences include problem representation, wargame asymmetry, limited strategic depth, and the requirement for significant hardware resources. We demonstrate how bootstrapping AlphaZero with supervised learning can overcome these challenges. In the context of Coral Sea, this enables AlphaZero to learn optimal play and outperform the supervised examples on which it was trained.
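The bootstrapping described in the abstract corresponds to standard imitation-learning pretraining: train the policy/value network on expert (state, move, outcome) examples before self-play begins. The sketch below illustrates this idea in PyTorch; the network architecture, the board-encoding sizes, and the names CoralSeaNet and pretrain_step are illustrative assumptions, not the paper's implementation.

# A minimal sketch, assuming an AlphaZero-style policy/value network is
# pretrained on expert examples before self-play. CoralSeaNet, the plane
# and move counts, and the dummy batch are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD_PLANES, BOARD_H, BOARD_W, N_MOVES = 8, 9, 9, 128  # assumed encoding sizes

class CoralSeaNet(nn.Module):
    """Small convolutional trunk with separate policy and value heads."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(BOARD_PLANES, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.policy_head = nn.Linear(64 * BOARD_H * BOARD_W, N_MOVES)
        self.value_head = nn.Linear(64 * BOARD_H * BOARD_W, 1)

    def forward(self, x):
        h = self.trunk(x).flatten(1)
        return self.policy_head(h), torch.tanh(self.value_head(h))

def pretrain_step(net, opt, states, expert_moves, outcomes):
    """One supervised update: imitate expert moves, regress game outcomes."""
    logits, value = net(states)
    loss = (F.cross_entropy(logits, expert_moves)
            + F.mse_loss(value.squeeze(1), outcomes))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

net = CoralSeaNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
# Dummy batch standing in for recorded or scripted Coral Sea games.
states = torch.randn(32, BOARD_PLANES, BOARD_H, BOARD_W)
expert_moves = torch.randint(0, N_MOVES, (32,))
outcomes = torch.empty(32).uniform_(-1.0, 1.0)  # result from the mover's view
print(pretrain_step(net, opt, states, expert_moves, outcomes))

After such a supervised phase, the usual AlphaZero self-play loop (MCTS-guided games generating fresh policy and value targets) would resume from the pretrained weights rather than from a random initialisation.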

Keywords

Wargaming · Deep reinforcement learning · AlphaZero

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Defence Science and Technology Group, Edinburgh, Australia
