
Monte-Carlo Tree Search in Settlers of Catan

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 6048))

Abstract

Games are considered important benchmarks for artificial intelligence research. Modern strategic board games can typically be played by three or more players, which makes them suitable test beds for investigating multi-player strategic decision making. Monte-Carlo Tree Search (MCTS) is a recently published family of algorithms that has achieved strong results in classical two-player, perfect-information games such as Go. In this paper we apply MCTS to the multi-player, non-deterministic board game Settlers of Catan. We implemented an agent that is able to play against computer-controlled and human players. We show that MCTS can be adapted successfully to multi-agent environments, and present two approaches to providing the agent with a limited amount of domain knowledge. Our results show that the agent plays considerably more strongly than the existing heuristic players of the game implementation. We therefore conclude that MCTS is a suitable tool for building a strong Settlers of Catan player.
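The abstract refers to the four canonical phases of MCTS: selection (typically via the UCB1 bandit formula, as in UCT), expansion, simulation, and backpropagation. As an illustration only, here is a minimal sketch of those phases. It is not the authors' implementation and does not model Settlers of Catan; it uses the toy game of Nim (take 1 to 3 stones, the player taking the last stone wins) so the example stays self-contained, and all names in it are illustrative.

```python
import math
import random

class Node:
    """One node in the search tree; rewards are stored from the
    perspective of the player who moved into this node (the parent's
    player), which is the standard UCT bookkeeping convention."""
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones            # stones remaining (game state)
        self.player = player            # player to move here: 0 or 1
        self.parent = parent
        self.move = move                # move that led to this node
        self.children = []
        self.visits = 0
        self.wins = 0.0
        self.untried = [m for m in (1, 2, 3) if m <= stones]

    def uct_child(self, c=1.4):
        # UCB1: average reward plus an exploration bonus for
        # rarely visited children.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, player):
    """Simulation phase: play uniformly random moves to the end and
    return the winner (the player who takes the last stone)."""
    while stones > 0:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        player = 1 - player
    return 1 - player  # the player who just moved took the last stone

def mcts(root_stones, root_player, iterations=1000):
    root = Node(root_stones, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend with UCT while fully expanded.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one previously untried child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 1 - node.player,
                         parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout (terminal nodes need none).
        if node.stones > 0:
            winner = rollout(node.stones, node.player)
        else:
            winner = 1 - node.player  # last mover took the final stone
        # 4. Backpropagation: credit wins from each parent's viewpoint.
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move, a common robust choice.
    return max(root.children, key=lambda ch: ch.visits).move
```

From three stones the winning move is to take all three, and the sketch finds it reliably; the paper's contribution lies in adapting this scheme to a multi-player, non-deterministic game and in injecting domain knowledge, which this toy deliberately omits.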




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Szita, I., Chaslot, G., Spronck, P. (2010). Monte-Carlo Tree Search in Settlers of Catan. In: van den Herik, H.J., Spronck, P. (eds) Advances in Computer Games. ACG 2009. Lecture Notes in Computer Science, vol 6048. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12993-3_3


  • DOI: https://doi.org/10.1007/978-3-642-12993-3_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-12992-6

  • Online ISBN: 978-3-642-12993-3

  • eBook Packages: Computer Science, Computer Science (R0)
