AIs for Dominion Using Monte-Carlo Tree Search

  • Robin Tollisen
  • Jon Vegard Jansen
  • Morten Goodwin
  • Sondre Glimsdal
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9101)

Abstract

Dominion is a complex game with hidden information and stochastic elements, which makes creating an artificial intelligence (AI) for it challenging. To date, there is little work in the literature on AI for Dominion, and existing approaches rely on carefully tuned finite-state solutions.

This paper presents two novel AIs for Dominion based on Monte-Carlo Tree Search (MCTS), employing Upper Confidence Bounds (UCB) and Upper Confidence Bounds applied to Trees (UCT). The proposed solutions are notably better than existing work: the strongest wins 67% of games played against a known, strong finite-state solution, even when the finite-state solution has the unfair advantage of starting the game.
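For context, UCT applies the standard UCB1 selection rule at each node of the search tree. The formula below is only the generic rule, given as a sketch; the abstract does not state the exploration constant or the Dominion-specific adaptations (such as the handling of hidden information) used by the proposed AIs. At a node visited N times, UCT descends to the child j maximizing

\[
\bar{X}_j + C \sqrt{\frac{\ln N}{n_j}},
\]

where \(\bar{X}_j\) is the average reward observed for child \(j\), \(n_j\) is that child's visit count, and \(C\) is an exploration constant balancing exploitation against exploration.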

Keywords

Dominion · MCTS · UCB · UCT

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Robin Tollisen ¹
  • Jon Vegard Jansen ¹
  • Morten Goodwin ¹
  • Sondre Glimsdal ¹

  ¹ Department of ICT, University of Agder, Grimstad, Norway
