Learning from Experience in Board Games

Abstract

A theoretical model for a system that learns from its experience has been conceived and developed. The model, titled “LEFEX” (LEarning From EXperience), operates in the domain of board games played on a rectangular board between two contestants who alternately move pieces on the board. The model is “game-independent” and may be applied to any game of this type. All knowledge of specific game details is “hidden” from the learning portion of the system in three external routines: INITPOS gives the initial positioning of pieces on the board, MOVEGEN generates all legal moves in a given board position, and BASIC-SEF performs an approximate evaluation of the worth of a given position (e.g., material balance). By replacing these routines, the system may be applied to different games such as Chess, Checkers, Go, etc.

Revisions of this work were completed while the second author was a visitor at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY.
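
The chapter itself does not reproduce code; the Python sketch below shows one way the game-independent interface described in the abstract could be expressed. Only the routine names INITPOS, MOVEGEN, and BASIC-SEF come from the text; the types, signatures, and the one-ply greedy move chooser are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical types for the sketch: a board is a grid of piece symbols;
# a move is left opaque because its encoding is game-specific.
Board = Tuple[Tuple[str, ...], ...]
Move = object

@dataclass
class GameRoutines:
    """The three game-specific routines named in the abstract.
    The signatures below are assumptions made for illustration."""
    initpos: Callable[[], Board]                               # initial placement of pieces
    movegen: Callable[[Board, int], List[Tuple[Move, Board]]]  # all legal moves and resulting boards
    basic_sef: Callable[[Board, int], float]                   # rough static evaluation (e.g., material balance)

def choose_move(game: GameRoutines, board: Board, player: int) -> Tuple[Move, Board]:
    """Game-independent move selection: a one-ply greedy placeholder that
    picks the successor position BASIC-SEF rates best for `player`.
    The learning component described in the chapter would refine this;
    nothing here depends on which game the routines implement."""
    successors = game.movegen(board, player)
    if not successors:
        raise ValueError("no legal moves in this position")
    return max(successors, key=lambda move_board: game.basic_sef(move_board[1], player))
```

Swapping in a different GameRoutines instance (Chess, Checkers, Go) leaves choose_move untouched, which is the sense in which the abstract calls the model game-independent.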

Copyright information

© 1990 Springer-Verlag New York, Inc.

About this chapter

Cite this chapter

Ben-Porat, Z., Golumbic, M.C. (1990). Learning from Experience in Board Games. In: Golumbic, M.C. (eds) Advances in Artificial Intelligence. Springer, New York, NY. https://doi.org/10.1007/978-1-4613-9052-7_1

  • DOI: https://doi.org/10.1007/978-1-4613-9052-7_1

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4613-9054-1

  • Online ISBN: 978-1-4613-9052-7

  • eBook Packages: Springer Book Archive
