Training Neural Networks to Play Backgammon Variants Using Reinforcement Learning

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6624)


Backgammon is a board game that has been studied considerably by computer scientists. Apart from standard backgammon, several largely unexplored variants of the game exist; they use the same board, number of checkers, and dice, but may differ in the rules for moving the checkers, the starting positions, and the direction of movement. This paper studies two variants popular in Greece and neighboring countries, named Fevga and Plakoto. Using reinforcement learning with neural-network function approximation, we train agents that learn a game position evaluation function for these games. We show that the resulting agents significantly outperform the open-source program Tavli3D.
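The training approach the abstract describes (self-play reinforcement learning with a neural-network position evaluator, in the TD-Gammon lineage) can be sketched as a TD(λ) update on a small feedforward network. The network size, board encoding, and hyperparameters below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

class TDEvaluator:
    """Minimal TD(lambda) position evaluator: one tanh hidden layer,
    sigmoid output in [0, 1] interpreted as the win probability of a
    position. Sizes and learning rates are illustrative only."""

    def __init__(self, n_inputs, n_hidden=40, alpha=0.1, lam=0.7, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
        self.W2 = rng.normal(0.0, 0.1, (1, n_hidden))
        self.alpha, self.lam = alpha, lam
        self.reset_traces()

    def reset_traces(self):
        # Eligibility traces, cleared at the start of each game
        self.e1 = np.zeros_like(self.W1)
        self.e2 = np.zeros_like(self.W2)

    def value(self, x):
        h = np.tanh(self.W1 @ x)                       # hidden activations
        v = 1.0 / (1.0 + np.exp(-(self.W2 @ h)[0]))    # sigmoid output
        return v, h

    def update(self, x, target):
        """One TD step: 'target' is the value estimate of the successor
        position (or the final game outcome at the terminal step)."""
        v, h = self.value(x)
        delta = target - v
        dv = v * (1.0 - v)                             # sigmoid derivative
        g2 = dv * h[np.newaxis, :]                     # dV/dW2
        g1 = (dv * self.W2[0] * (1.0 - h ** 2))[:, np.newaxis] @ x[np.newaxis, :]
        # Accumulate traces, then move weights along delta * trace
        self.e2 = self.lam * self.e2 + g2
        self.e1 = self.lam * self.e1 + g1
        self.W2 += self.alpha * delta * self.e2
        self.W1 += self.alpha * delta * self.e1
        return delta
```

In self-play training, each move is chosen by evaluating all legal successor positions with `value()` and picking the best; `update()` is then called with the successor's estimated value as the target, and with the actual game outcome (win/loss) at the end of the game.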


Keywords: Neural Network · Reinforcement Learning · Input Unit · Board Game · Training Neural Network
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  1. Department of Applied Informatics, University of Macedonia, Thessaloniki, Greece
