Abstract
A theoretical model for a system that learns from its experience has been conceived and developed. The model, called "LEFEX" (LEarning From EXperience), operates in the domain of board games played on a rectangular board between two contestants who alternately move pieces on the board. The model is game-independent and may be applied to any game of this type. All knowledge of specific game details is hidden from the learning portion of the system in three external routines: INITPOS gives the initial positioning of pieces on the board; MOVEGEN generates all legal moves in a given board position; BASIC-SEF performs an approximate evaluation of the worth of a given position (e.g., material balance). By replacing these routines, the system may be applied to different games such as Chess, Checkers, Go, etc.
Revisions of this work were completed while the second author was a visitor at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY.
© 1990 Springer-Verlag New York, Inc.
Ben-Porat, Z., Golumbic, M.C. (1990). Learning from Experience in Board Games. In: Golumbic, M.C. (eds) Advances in Artificial Intelligence. Springer, New York, NY. https://doi.org/10.1007/978-1-4613-9052-7_1
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4613-9054-1
Online ISBN: 978-1-4613-9052-7
eBook Packages: Springer Book Archive