The GRL System: Learning Board Game Rules with Piece-Move Interactions

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 614)


Many real-world systems can be represented as formal state transition systems. Constructing these models by hand, however, is a time-consuming and error-prone activity. To counter these difficulties, various communities have made efforts to learn the models from input data. One approach is to learn models from example transition sequences. Learning state transition systems in this way is helpful in many situations: for example, when no formal description of a transition system yet exists, or when translating between different formalisms.
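As an illustrative sketch (not the paper's actual algorithm, and with hypothetical names throughout), learning a deterministic transition system from example sequences can be as simple as accumulating observed (state, action) → next-state triples into a transition table, flagging any conflicting observations:

```python
def learn_transitions(sequences):
    """Infer a deterministic transition table from example sequences.

    Each sequence is a list of (state, action) pairs, where the final
    pair carries action None to mark the end of the trace. Returns a
    dict mapping (state, action) -> next_state, raising if two traces
    disagree (i.e. the observations are non-deterministic).
    """
    delta = {}
    for seq in sequences:
        # Pair each step with its successor to read off transitions.
        for (state, action), (next_state, _) in zip(seq, seq[1:]):
            key = (state, action)
            if key in delta and delta[key] != next_state:
                raise ValueError(f"non-deterministic observation at {key}")
            delta[key] = next_state
    return delta

# Two observed traces over the same (hypothetical) system.
traces = [
    [('s0', 'a'), ('s1', 'b'), ('s2', None)],
    [('s0', 'a'), ('s1', 'c'), ('s0', None)],
]
delta = learn_transitions(traces)
print(delta[('s0', 'a')])  # s1
```

A real learner must also generalise beyond the observed states; this sketch only memorises the transitions it has seen.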

In this work, we study the problem of learning formal models of the rules of board games, using as input only example sequences of the moves made in playing those games. Our work is distinguished from previous work in this area in that we learn the interactions between the pieces in the games. We extend a previous game rule acquisition system by allowing pieces to be added to and removed from the board during play, and we use a planning domain model acquisition system to encode the relationships between the pieces that interact during a move.
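To make the idea of piece-move interactions concrete, the following hedged sketch (field and function names are assumptions for illustration, not the GRL system's representation) turns raw observed moves into interaction events, such as a capture that removes another piece from the board, of the kind a planning domain model learner could then generalise over:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Move:
    piece: str                      # piece that moves, e.g. 'white_pawn'
    src: str                        # origin square
    dst: str                        # destination square
    captured: Optional[str] = None  # piece removed from dst, if any

def interaction_events(moves):
    """Flatten observed moves into (actor, kind, ...) event tuples.

    Every move yields a 'move' event; a capture additionally yields a
    'capture' event recording which piece was removed and where, so the
    piece-piece interaction is explicit in the trace.
    """
    events = []
    for m in moves:
        events.append((m.piece, 'move', m.src, m.dst))
        if m.captured is not None:
            events.append((m.piece, 'capture', m.captured, m.dst))
    return events

# A short hypothetical play trace ending in a pawn capture.
game = [
    Move('white_pawn', 'e2', 'e4'),
    Move('black_pawn', 'd7', 'd5'),
    Move('white_pawn', 'e4', 'd5', captured='black_pawn'),
]
for event in interaction_events(game):
    print(event)
```

Making the capture an explicit event, rather than leaving it implicit in the board state, is what lets a domain model acquisition system observe that two pieces participated in a single move.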


Keywords: Game State · Board Game · Deterministic Finite Automaton · Game Rule · State Transition System



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Digital Futures Institute, Teesside University, Middlesbrough, UK
  2. School of Computer Science, Reykjavik University, Reykjavik, Iceland
