Monte-Carlo Go Developments

  • B. Bouzy
  • B. Helmstetter
Part of the IFIP — The International Federation for Information Processing book series (IFIPAICT, volume 135)

Abstract

We describe two Go programs, Olga and Oleg, developed with a Monte-Carlo approach that is simpler than Bruegmann's (1993). Our method is based on Abramson (1990). We performed experiments to assess ideas on (1) progressive pruning, (2) the all-moves-as-first heuristic, (3) temperature, (4) simulated annealing, and (5) depth-two tree search within the Monte-Carlo framework. Progressive pruning and the all-moves-as-first heuristic are good speed-up enhancements that do not weaken the program too much. Furthermore, using a constant temperature is a simple and adequate heuristic that performs about as well as simulated annealing. The depth-two heuristic currently gives disappointing results. The results of our Monte-Carlo programs against knowledge-based programs on 9x9 boards are promising. Finally, the ever-increasing power of computers leads us to think that Monte-Carlo approaches are worth considering for computer Go in the future.
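
As a concrete illustration of the underlying mechanism, the sketch below shows an Abramson-style expected-outcome evaluation: each candidate move is scored by the average result of many games completed with uniformly random moves. This is a minimal sketch, not the authors' implementation; the game interface (legal_moves, play, is_over, score) and the Nim-like toy game used to exercise it are hypothetical stand-ins for a real Go board and its scoring.

    import random

    def expected_outcome(state, legal_moves, play, is_over, score, n_games=100):
        # Estimate each legal move's value as the mean terminal score of
        # n_games games completed with uniformly random moves
        # (Abramson's expected-outcome idea).
        values = {}
        for move in legal_moves(state):
            total = 0.0
            for _ in range(n_games):
                s = play(state, move)
                while not is_over(s):
                    s = play(s, random.choice(legal_moves(s)))
                total += score(s)
            values[move] = total / n_games
        return values

    # Hypothetical toy game standing in for a Go position: the side to move
    # takes 1, 2 or 3 stones; whoever takes the last stone wins. Scores are
    # from the first player's (player 0's) point of view.
    def legal_moves(state):
        stones, _ = state
        return [n for n in (1, 2, 3) if n <= stones]

    def play(state, n):
        stones, player = state
        return (stones - n, 1 - player)

    def is_over(state):
        return state[0] == 0

    def score(state):
        _, to_move = state  # the player who just moved took the last stone
        return 1.0 if to_move == 1 else -1.0

    if __name__ == "__main__":
        values = expected_outcome((10, 0), legal_moves, play, is_over, score, n_games=500)
        print(values, "-> best first move:", max(values, key=values.get))

Roughly speaking, the enhancements named above modify this loop: progressive pruning discards moves whose statistics are clearly inferior before all playouts are spent, and the all-moves-as-first heuristic lets a single random game update the statistics of several moves at once.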

Keywords

Monte-Carlo approach, computer Go, heuristics

References

  1. Abramson, B. (1990). Expected-outcome: a general model of static evaluation. IEEE Transactions on PAMI, Vol. 12, pp. 182–193.
  2. Billings, D., Davidson, A., Schaeffer, J., and Szafron, D. (2002). The challenge of poker. Artificial Intelligence, Vol. 134, pp. 201–240.
  3. Bouzy, B. (2002). Indigo home page. http://www.math-info.univ-paris5.fr/bouzy/INDIGO.html.
  4. Bouzy, B. (2003). The move decision process of Indigo. ICGA Journal, Vol. 26, No. 1, pp. 14–27.
  5. Bouzy, B. and Cazenave, T. (2001). Computer Go: an AI oriented survey. Artificial Intelligence, Vol. 132, pp. 39–103.
  6. Bruegmann, B. (1993). Monte Carlo Go. ftp://www.joy.ne.jp/welcome/igs/Go/computer/mcgo.tex.Z.
  7. Bump, D. (2003). Gnugo home page. http://www.gnu.org/software/gnugo/devel.html.
  8. Chen, K. and Chen, Z. (1999). Static analysis of life and death in the game of Go. Information Sciences, Vol. 121, Nos. 1–2, pp. 113–134.
  9. Fishman, G.S. (1996). Monte-Carlo: Concepts, Algorithms, Applications. Springer-Verlag, Berlin, Germany.
  10. Fotland, D. (2002). Static Eye in “The Many Faces of Go”. ICGA Journal, Vol. 25, No. 4, pp. 203–210.
  11. Junghanns, A. (1998). Are there Practical Alternatives to Alpha-Beta? ICCA Journal, Vol. 21, No. 1, pp. 14–32.
  12. Kaminski, P. (2003). Vegos home page. http://www.ideanest.com/vegos/.
  13. Kirkpatrick, S., Gelatt, C.D., and Vecchi, M.P. (1983). Optimization by Simulated Annealing. Science, Vol. 220, No. 4598, pp. 671–680.
  14. Rivest, R. (1988). Game-tree searching by min-max approximation. Artificial Intelligence, Vol. 34, No. 1, pp. 77–96.
  15. Sheppard, B. (2002). World-championship-caliber Scrabble. Artificial Intelligence, Vol. 134, Nos. 1–2, pp. 241–275.
  16. Tesauro, G. (2002). Programming backgammon using self-teaching neural nets. Artificial Intelligence, Vol. 134, Nos. 1–2, pp. 181–199.

Copyright information

© IFIP International Federation for Information Processing 2004

Authors and Affiliations

  • B. Bouzy (1)
  • B. Helmstetter (2)
  1. UFR de mathematiques et d’informatique, C.R.I.P.5, Université Paris 5, Paris Cedex 06, France
  2. Laboratoire d’Intelligence Artificielle, Université Paris 8, Saint-Denis Cedex, France
