Some Variations of Upper Confidence Bound for General Game Playing

  • Iván Francisco-Valencia
  • José Raymundo Marcial-Romero
  • Rosa María Valdovinos-Rosas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11524)

Abstract

Monte Carlo Tree Search (MCTS) is the most widely used method in General Game Playing, an area of Artificial Intelligence whose main goal is to develop agents capable of playing any board game without prior knowledge. MCTS builds a tree that represents the states and moves of the board game, which is visited and expanded iteratively. To traverse the tree, MCTS requires a selection policy that determines which node is visited at each level. Nowadays, Upper Confidence Bound (UCB) is the most popular selection policy in MCTS due to its simplicity and efficiency. This policy was proposed for the Multi-Armed Bandit Problem (MABP), which consists of a set of slot machines, each with a certain probability of giving a reward; the goal is to maximize the cumulative reward obtained when a machine is played over a series of rounds. Another policy proposed for MCTS is Upper Confidence Bound\(_{\sqrt{.}}\) (UCB\(_{\sqrt{.}}\)), whose goal is to identify the machine with the highest probability of giving a reward. This paper presents a comparison between five modifications of UCB and one of UCB\(_{\sqrt{.}}\). The aim of the comparison is to find a policy that identifies the optimal machine as quickly as possible; in MCTS, this is equivalent to identifying the node with the highest probability of leading to a victory. The results show that some policies find the optimal machine earlier than UCB does; however, after 10,000 rounds UCB is the policy that plays the optimal machine most often.
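For reference, the standard form of the UCB policy discussed here (UCB1) plays, at each round, the machine that maximizes its empirical mean reward plus an exploration bonus that shrinks as the machine is played more often. The following is a minimal sketch of that standard rule on a toy bandit, not the specific modifications evaluated in the paper; the function name ucb1_select, the exploration constant, and the toy success probabilities are illustrative assumptions.

```python
import math
import random

def ucb1_select(counts, rewards, c=math.sqrt(2)):
    """Pick the arm maximizing mean reward plus an exploration bonus (standard UCB1).

    counts[i]  -- number of times arm i has been played
    rewards[i] -- cumulative reward obtained from arm i
    c          -- exploration constant (illustrative choice)
    """
    # Play every arm once before applying the bound.
    for i, n in enumerate(counts):
        if n == 0:
            return i
    total = sum(counts)
    scores = [
        rewards[i] / counts[i] + c * math.sqrt(math.log(total) / counts[i])
        for i in range(len(counts))
    ]
    return max(range(len(counts)), key=lambda i: scores[i])

# Toy bandit: three machines with hidden success probabilities (assumed values).
probs = [0.3, 0.5, 0.7]
counts = [0, 0, 0]
rewards = [0.0, 0.0, 0.0]
for _ in range(10_000):
    arm = ucb1_select(counts, rewards)
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < probs[arm] else 0.0
print(counts)  # the optimal machine (index 2) should accumulate most of the plays
```

In the MCTS setting, the same rule is applied at each tree node, treating each child move as an arm and using simulation outcomes as rewards.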

Keywords

General game playing · Selection policy · Upper confidence bound


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Iván Francisco-Valencia (1)
  • José Raymundo Marcial-Romero (1)
  • Rosa María Valdovinos-Rosas (1)

  1. Facultad de Ingeniería, Universidad Autónoma del Estado de México, Toluca, Mexico