
Challenges and Progress on Using Large Lossy Endgame Databases in Chinese Checkers

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 614)

Abstract

A common evaluation function for playing Chinese Checkers with two or more players has been the single-agent distance across the board. This is only an abstraction of a perfect heuristic, because it ignores the interactions between the players in the game. Previous work has studied these heuristics for smaller versions of the game, including 6-piece data for boards with 49 and 81 locations, which have 13.98 million and 324.5 million combinations, respectively. The full game of Chinese Checkers has 81 locations and 10 pieces per player; the single-agent solution therefore covers 1.88 trillion possible positions and is stored using 500 GB of disk space. In this paper we report results from a preliminary study on how best to use this data to improve the play of a Chinese Checkers program.
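
The position counts above can be reproduced directly, since single-agent positions are just combinations of board locations. The snippet below is an illustrative check only (not code from the paper), assuming positions are counted as unordered placements of indistinguishable pieces on distinct board locations.

    from math import comb

    # Single-agent positions = ways to place k indistinguishable pieces
    # on n distinct board locations, i.e. C(n, k).
    print(comb(49, 6))    # 13,983,816        ~ 13.98 million
    print(comb(81, 6))    # 324,540,216       ~ 324.5 million
    print(comb(81, 10))   # 1,878,392,407,320 ~ 1.88 trillion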

Keywords

Chinese Checkers · Endgame Databases · Monte Carlo Tree Search (MCTS) · MCTS Tree · Single-agent Data

Acknowledgments

This paper benefited from research by a summer student, Evan Boucher, who worked on the problem of determining the true distance of a state from the goal given the modulo distance.
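
To make the modulo-distance idea concrete, here is a minimal sketch (hypothetical, not the paper's or Boucher's method): assuming the database stores only d mod m for each state, and the search already knows the exact distance of a neighboring state that differs by at most one move, the true distance can be disambiguated.

    # Hypothetical sketch of recovering a true distance from a stored
    # modulo distance; the function name and parameters are illustrative only.
    def recover_distance(mod_value, neighbor_distance, m, max_step=1):
        """Return the true distance d with d % m == mod_value, assuming a
        known neighbor satisfies |d - neighbor_distance| <= max_step < m / 2."""
        # The values congruent to mod_value (mod m) nearest the neighbor's distance.
        base = neighbor_distance - (neighbor_distance % m) + mod_value
        for candidate in (base - m, base, base + m):
            if candidate >= 0 and abs(candidate - neighbor_distance) <= max_step:
                return candidate
        raise ValueError("no distance consistent with the stored modulo value")

    # Example: stored value 2 with m = 4, neighbor known to be 17 moves from the goal.
    assert recover_distance(2, 17, 4) == 18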


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Department of Computer Science, University of Denver, Denver, USA
