
Playing Catan with Cross-Dimensional Neural Network

  • Conference paper
  • First Online:
Neural Information Processing (ICONIP 2020)

Abstract

Catan is a strategic board game with many interesting properties: multiple players, imperfect information, stochasticity, a complex state space structure (a hexagonal board where each vertex, edge, and face has its own features, cards for each player, etc.), and a large action space (including trading). It is therefore challenging to build AI agents with Reinforcement Learning (RL) and without domain knowledge or heuristics. In this paper, we introduce cross-dimensional neural networks to handle a mixture of information sources and a wide variety of outputs, and we empirically demonstrate that the network dramatically improves RL in Catan. We also show that, for the first time, an RL agent can outperform jsettler, the best heuristic agent available.
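
To make the abstract concrete, the following is a minimal sketch (in PyTorch, assumed here as the framework; the class name, layer sizes, feature counts, and action-space size are all illustrative assumptions, not the cross-dimensional architecture defined in the paper) of how a network can fuse board-structured features with per-player card vectors and produce both policy logits over a large action space and a value estimate:

    import torch
    import torch.nn as nn

    class CatanPolicyValueNet(nn.Module):
        """Illustrative fusion of board-grid features and per-player vectors
        (hypothetical sizes; not the paper's cross-dimensional network)."""
        def __init__(self, board_channels=16, player_features=32,
                     num_players=4, num_actions=512, hidden=128):
            super().__init__()
            # Encode the board as a 2D grid of per-location features
            # (an assumed axial-coordinate embedding of the hexagonal board).
            self.board_encoder = nn.Sequential(
                nn.Conv2d(board_channels, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            # Encode all players' cards as one flat vector.
            self.player_encoder = nn.Sequential(
                nn.Linear(player_features * num_players, hidden),
                nn.ReLU(),
            )
            # Fuse both sources, then branch into policy and value heads.
            self.policy_head = nn.Linear(2 * hidden, num_actions)
            self.value_head = nn.Linear(2 * hidden, 1)

        def forward(self, board, players):
            # board: (batch, board_channels, H, W)
            # players: (batch, num_players * player_features)
            b = self.board_encoder(board).flatten(1)
            p = self.player_encoder(players)
            fused = torch.cat([b, p], dim=1)
            return self.policy_head(fused), self.value_head(fused)

    # Dummy forward pass with random tensors, just to show the shapes.
    net = CatanPolicyValueNet()
    logits, value = net(torch.randn(2, 16, 7, 7), torch.randn(2, 4 * 32))

At play time the policy head would still need a legality mask over the action space; the point here is only the combination of heterogeneous inputs and multiple outputs that a cross-dimensional network is designed to handle.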


Notes

  1. Previously named The Settlers of Catan, renamed for the 5th Edition (2015).

  2. AMD Ryzen Threadripper 2990WX.

References

  1. Bowling, M., Burch, N., Johanson, M., Tammelin, O.: Heads-up limit hold’em poker is solved. Science 347(6218), 145–149 (2015). https://doi.org/10.1126/science.1259433

  2. Catan Studio and Catan GmbH: Catan base game rules & almanac 3/4 players (5th edition). https://www.catan.com/service/game-rules (2020)

  3. Cuayáhuitl, H., Keizer, S., Lemon, O.: Strategic dialogue management via deep reinforcement learning. CoRR (2015). http://arxiv.org/abs/1511.08099

  4. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, June 2016. https://doi.org/10.1109/CVPR.2016.90

  5. Li, J., et al.: Suphx: mastering mahjong with deep reinforcement learning. CoRR (2020). http://arxiv.org/abs/2003.13590

  6. Mnih, V., et al.: Playing Atari with deep reinforcement learning. In: NIPS Deep Learning Workshop (2013)

  7. Mnih, V., et al.: Asynchronous methods for deep reinforcement learning. In: The 33rd International Conference on Machine Learning, pp. 1928–1937 (2016)

  8. Monin, J., and contributors: JSettlers2 release-2.2.00. https://github.com/jdmonin/JSettlers2/releases/tag/release-2.2.00 (2020)

  9. Pfeiffer, M.: Reinforcement learning of strategies for Settlers of Catan. In: Proceedings of the International Conference on Computer Games: Artificial Intelligence, Design and Education (2004)

  10. Silver, D., et al.: A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science 362(6419), 1140–1144 (2018). https://doi.org/10.1126/science.aar6404

  11. Sutton, R.S., Barto, A.G.: Introduction to Reinforcement Learning, 2nd edn. MIT Press, Cambridge, MA, USA (2018)

  12. Szita, I., Chaslot, G., Spronck, P.: Monte-Carlo tree search in Settlers of Catan. In: van den Herik, H.J., Spronck, P. (eds.) Advances in Computer Games, pp. 21–32. Springer, Berlin, Heidelberg (2010)

  13. Vinyals, O., et al.: Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575, 350–354 (2019)

  14. Williams, R.J.: Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. 8(3–4), 229–256 (1992)

  15. Xenou, K., Chalkiadakis, G., Afantenos, S.: Deep reinforcement learning in strategic board game environments. In: Slavkovik, M. (ed.) Multi-Agent Systems, pp. 233–248. Springer International Publishing, Cham (2019)


Author information

Corresponding author

Correspondence to Quentin Gendre.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Gendre, Q., Kaneko, T. (2020). Playing Catan with Cross-Dimensional Neural Network. In: Yang, H., Pasupa, K., Leung, A.C.S., Kwok, J.T., Chan, J.H., King, I. (eds.) Neural Information Processing. ICONIP 2020. Lecture Notes in Computer Science, vol. 12533. Springer, Cham. https://doi.org/10.1007/978-3-030-63833-7_49

  • DOI: https://doi.org/10.1007/978-3-030-63833-7_49

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-63832-0

  • Online ISBN: 978-3-030-63833-7

  • eBook Packages: Computer Science (R0)
