The Neural MoveMap Heuristic in Chess

  • Levente Kocsis
  • Jos W. H. M. Uiterwijk
  • Eric Postma
  • Jaap van den Herik
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2883)


The efficiency of alpha-beta search algorithms heavily depends on the order in which the moves are examined. This paper investigates a new move-ordering heuristic in chess, namely the Neural MoveMap (NMM) heuristic. The heuristic uses a neural network to estimate the likelihood of a move being the best in a given position; moves considered more likely to be the best are examined first. We develop an enhanced approach for applying the NMM heuristic during the search, using a weighted combination of the neural-network scores and the history-heuristic scores. Moreover, we analyse the influence of existing game databases and opening theory on the design of the training patterns. The NMM heuristic is tested on middle-game chess positions using the program Crafty. The experimental results indicate that the NMM heuristic outperforms the existing move ordering, especially when the weighted-combination approach is chosen.
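The weighted-combination idea described above can be sketched as follows. This is an illustrative reconstruction, not the paper's published code: the function and variable names (`order_moves`, `nn_scores`, `history_table`, `WEIGHT`) and the specific weight value are assumptions. The network is taken to output, for each legal move, an estimate of the probability that the move is best, and the history table holds the usual cutoff counters; moves are then searched in decreasing order of the blended score.

```python
# Sketch of move ordering via a weighted combination of neural-network
# scores and history-heuristic scores. All names and the weight value
# are illustrative assumptions, not taken from the paper.

WEIGHT = 0.5  # relative weight of the neural-network score (assumed)

def order_moves(moves, nn_scores, history_table):
    """Return moves sorted so that likely-best moves are searched first.

    nn_scores:      dict move -> network estimate in [0, 1] that the
                    move is best in the current position.
    history_table:  dict move -> history-heuristic counter (incremented
                    whenever the move causes a beta cutoff).
    """
    # Normalise history counters to [0, 1] so the two scores are comparable.
    max_hist = max(history_table.values(), default=1) or 1

    def score(move):
        nn = nn_scores.get(move, 0.0)
        hist = history_table.get(move, 0) / max_hist
        return WEIGHT * nn + (1.0 - WEIGHT) * hist

    return sorted(moves, key=score, reverse=True)

moves = ["e2e4", "d2d4", "g1f3"]
nn = {"e2e4": 0.7, "d2d4": 0.2, "g1f3": 0.1}
hist = {"d2d4": 100, "g1f3": 10}
print(order_moves(moves, nn, hist))  # d2d4 leads: 0.5*0.2 + 0.5*1.0 = 0.6
```

Blending rather than replacing the history heuristic lets the network's static judgement and the search-dependent cutoff statistics compensate for each other's blind spots, which matches the paper's finding that the weighted combination performs best.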




References

  1. Marsland, T.: A review of game-tree pruning. International Computer Chess Association Journal 9, 3–19 (1986)
  2. Schaeffer, J.: The history heuristic. International Computer Chess Association Journal 6, 16–19 (1983)
  3. Greer, K.: Computer chess move-ordering schemes using move influence. Artificial Intelligence 120, 235–250 (2000)
  4. Kocsis, L., Uiterwijk, J., van den Herik, J.: Move ordering using neural networks. In: Monostori, L., Váncza, J., Ali, M. (eds.) IEA/AIE 2001. LNCS (LNAI), vol. 2070, pp. 45–50. Springer, Heidelberg (2001)
  5. Tesauro, G.: Connectionist learning of expert preferences by comparison training. In: Touretzky, D. (ed.) Advances in Neural Information Processing Systems 1, pp. 99–106. Morgan Kaufmann, San Francisco (1989)
  6. Utgoff, P., Clouse, J.: Two kinds of training information for evaluation function learning. In: Ninth National Conference of the American Association for Artificial Intelligence (AAAI 1991), pp. 596–600. AAAI Press, Menlo Park (1991)
  7. Enderton, H.: The Golem Go program. Technical Report CMU-CS-92-101, School of Computer Science, Carnegie-Mellon University (1991)
  8. Winands, M., Uiterwijk, J., van den Herik, J.: The quad heuristic in Lines of Action. International Computer Games Association Journal 24, 3–15 (2001)
  9. Winands, M.: Personal communication (2002)
  10. Hyatt, R., Newborn, M.: Crafty goes deep. International Computer Chess Association Journal 20, 79–86 (1997)
  11. Matanović, A. (ed.): Encyclopaedia of Chess Openings, Volume A–E. Chess Informant (2003)
  12. Riedmiller, M., Braun, H.: A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In: IEEE International Conference on Neural Networks, pp. 586–591 (1993)
  13. Kocsis, L., Uiterwijk, J., van den Herik, J.: Search-independent forward pruning. In: Belgium-Netherlands Conference on Artificial Intelligence, pp. 159–166 (2001)
  14. Björnsson, Y., Marsland, T.: Learning search control in adversary games. In: van den Herik, J., Monien, B. (eds.) Advances in Computer Games 9, Universiteit Maastricht, pp. 157–174 (2001)

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Levente Kocsis
  • Jos W. H. M. Uiterwijk
  • Eric Postma
  • Jaap van den Herik

All authors: Department of Computer Science, Institute for Knowledge and Agent Technology, Universiteit Maastricht, The Netherlands
