
Learning features by experimentation in chess

  • Eduardo Morales
Part 8: Applications
Part of the Lecture Notes in Computer Science book series (LNCS, volume 482)

Abstract

There are two main issues to consider in an inductive learning system: 1) its search through the hypothesis space, and 2) the amount of information that must be provided for the system to work. In this paper we use a constrained relative least-general-generalisation (RLGG) algorithm as the generalisation method to organise the search space, and an automatic example generator to reduce the user's intervention and to guide the learning process. Initial results on learning a restricted form of Horn clause concepts in chess are presented. The main limitations of the learning system and the example generator are pointed out, and conclusions and future research directions are indicated.
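As a concrete illustration of the generalisation step, the following is a minimal sketch of Plotkin's least-general-generalisation (LGG) of two first-order terms, the operation on which RLGG is built (RLGG additionally generalises relative to background knowledge, which this sketch omits). The tuple encoding of terms, the variable-naming scheme and the chess predicate used in the example are illustrative assumptions, not the representation used in the paper.

    # Minimal LGG sketch (assumed encoding): terms are nested tuples
    # ("functor", arg1, ...), constants are plain strings, and generated
    # variables are the strings "X1", "X2", ...  A pair of differing
    # subterms is mapped to the same variable wherever it recurs, as in
    # Plotkin's definition.

    def lgg(t1, t2, table=None, counter=None):
        """Least general generalisation of two first-order terms."""
        if table is None:
            table = {}       # maps a pair of differing subterms to one variable
        if counter is None:
            counter = [0]
        if t1 == t2:
            return t1
        if (isinstance(t1, tuple) and isinstance(t2, tuple)
                and t1[0] == t2[0] and len(t1) == len(t2)):
            # Same functor and arity: generalise argument by argument.
            return (t1[0],) + tuple(lgg(a, b, table, counter)
                                    for a, b in zip(t1[1:], t2[1:]))
        # Mismatched subterms: replace the pair by one variable, reused consistently.
        if (t1, t2) not in table:
            counter[0] += 1
            table[(t1, t2)] = "X%d" % counter[0]
        return table[(t1, t2)]

    # Example with a hypothetical chess predicate pos(Piece, File, Rank):
    # lgg(pos(king, a, 1), pos(king, b, 2)) = pos(king, X1, X2)
    print(lgg(("pos", "king", "a", "1"), ("pos", "king", "b", "2")))

On the two assumed positions pos(king, a, 1) and pos(king, b, 2), the sketch returns pos(king, X1, X2), i.e. the least general term of which both positions are instances.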

Keywords

LGG, experimentation, chess, Horn clause



Copyright information

© Springer-Verlag Berlin Heidelberg 1991

Authors and Affiliations

  • Eduardo Morales
  1. The Turing Institute, Glasgow
