# Solving MasterMind using GAs and simulated annealing: A case of dynamic constraint optimization

• J. L. Bernier
• C. Ilia Herráiz
• J. J. Merelo
• S. Olmeda
• A. Prieto
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1141)

## Abstract

MasterMind is a game in which the player must discover, by making guesses, a hidden combination of colors set by the opponent. The player plays a combination of colors, and the opponent replies with the number of positions guessed correctly (black tokens) and the number of correct colors placed in the wrong position (white tokens). The problem can be formulated as follows: the target of the game is to find a string composed of l symbols, drawn from an alphabet of cardinality c, using as constraints the hints, which restrict the search space. The partial objective of the search is to find a string that meets all the constraints made so far; the final objective is to find the hidden string. The problem can also be placed within the class of constrained optimization, although in this case not all the constraints are known in advance; hence its dynamic nature. Three algorithms playing MasterMind have been evaluated with respect to the number of guesses each makes and the number of combinations examined before finding the solution: a random-search-with-constraints algorithm, simulated annealing (SA), and a genetic algorithm (GA). The random search and the genetic algorithm play at each step the optimal combination, i.e., one that is consistent with all the constraints made so far, while simulated annealing plays the best combination found within certain time constraints. This paper shows that the algorithms following the optimal strategy behave similarly, finding the correct combination in roughly the same number of guesses; between them, the GA examines fewer combinations, and this advantage grows with the size of the search space, while SA is much faster (around two orders of magnitude) and still gives a good enough answer.
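The scoring rule and the consistency test described above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the function names `score` and `consistent` and the use of strings for combinations are our own assumptions.

```python
from collections import Counter

def score(secret, guess):
    """Return (black, white): exact position matches, and correct
    colors in the wrong position, for a MasterMind guess."""
    black = sum(s == g for s, g in zip(secret, guess))
    # Colors common to both strings regardless of position; subtracting
    # the exact matches leaves the misplaced-color (white) count.
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black

def consistent(candidate, history):
    """A candidate is consistent if, were it the hidden string, every
    past guess in history would have received the same (black, white)
    reply that the opponent actually gave."""
    return all(score(candidate, g) == reply for g, reply in history)
```

The algorithms that follow the optimal strategy only ever play candidates for which `consistent` holds over all hints received so far; each new reply shrinks this feasible set, which is what gives the problem its dynamic-constraint character.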

## References

1. Melle Koning. Mastermind. Newsgroup comp.ai.games, archived at http://www.krl.caltech.edu/~brown/alife/news/ai-games-html/0370.html.
2. Tim Cooper. Mastermind. Newsgroup comp.ai.games, archived at http://www.krl.caltech.edu/~brown/alife/news/ai-games-html/0313.html.
3. Donald E. Knuth. The computer as Master Mind. J. Recreational Mathematics, (9):1–6, 1976–77.
4. Kenji Koyama; T. W. Lai. An optimal Mastermind strategy. J. Recreational Mathematics, 1994.
5. E. Neuwirth. Some strategies for Mastermind. Zeitschrift für Operations Research, Serie B, 26(8):B257–B278, 1982.
6. D. Viaud. Une stratégie générale pour jouer au Mastermind. RAIRO-Recherche Opérationnelle, 21(1):87–100, 1987.
7. D. Viaud. Une formalisation du jeu de Mastermind. RAIRO-Recherche Opérationnelle, 13(3):307–321, 1979.
8. Leon Sterling; Ehud Shapiro. The Art of Prolog: Advanced Programming Techniques. MIT Press, 1986.
9. V. Chvátal. Mastermind. Combinatorica, 3(3–4):325–329, 1983.
10. Risto Widenius. Mastermind. Newsgroup comp.ai.games, archived at http://www.krl.caltech.edu/~brown/alife/news/ai-games-html/1311.html.
11. Zbigniew Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, 2nd edition, 1994.
12. Marc Schoenauer; Spyros Xanthakis. Constrained GA optimization. In Forrest [25], pages 573–580.
13. John J. Grefenstette. Genetic algorithms for changing environments. In R. Männer; B. Manderick, editors, Parallel Problem Solving from Nature, 2, pages 137–144. Elsevier Science Publishers B. V., 1992.
14. Helen G. Cobb; John J. Grefenstette. Genetic algorithms for tracking changing environments. In Forrest [25], pages 523–530.
15. J. Maresky; Y. Davidor; D. Gitler; A. Barak; G. Aharoni. Selective destructive re-start. In Eshelman [26], pages 144–150.
16. David E. Goldberg. Zen and the art of genetic algorithms. In J. David Schaffer, editor, Procs. of the 3rd Int. Conf. on Genetic Algorithms, ICGA89, pages 80–85. Morgan Kaufmann, 1989.
17. Conor Ryan. Advances in Genetic Programming, chapter Pygmies and Civil Servants. MIT Press, 1992.
18. J. J. Merelo. Genetic Mastermind, a case of dynamic constraint optimization. GeNeura Technical Report G-96-1, Universidad de Granada, 1996.
19. Bryant A. Julstrom. What have you done for me lately? Adapting operator probabilities in a steady-state genetic algorithm. In Eshelman [26], pages 81–87.
20. John H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, 1975.
21. D. B. Fogel. Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. IEEE Press, 1995.
22. E. Aarts; J. Korst. Simulated Annealing and Boltzmann Machines. John Wiley & Sons, 1989.
23. C. H. Papadimitriou; K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, New York, 1982.
24. S. W. Mahfoud; D. E. Goldberg. Parallel recombinative simulated annealing: a genetic algorithm. IlliGAL Report no. 92002, Dept. of General Engineering, UIUC, 1992.
25. Stephanie Forrest, editor. Proceedings of the 5th International Conference on Genetic Algorithms. Morgan Kaufmann, 1993.
26. Larry J. Eshelman, editor. Proceedings of the 6th International Conference on Genetic Algorithms. Morgan Kaufmann, 1995.

## Authors and Affiliations

• J. L. Bernier (1)
• C. Ilia Herráiz (1)
• J. J. Merelo (1)
• S. Olmeda (1)
• A. Prieto (1)

1. Department of Electronics and Computer Technology, University of Granada, Spain