
Incremental Learning of Relational Action Models in Noisy Environments

  • Conference paper
Inductive Logic Programming (ILP 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6489)

Abstract

In the Relational Reinforcement Learning framework, we propose an algorithm that learns an action model (an approximation of the transition function) in order to predict the state resulting from an action in a given situation. The algorithm incrementally learns a set of first-order rules in a noisy environment, following a data-driven loop: each time a new example contradicts the current action model, the model is revised by generalization and/or specialization. In contrast to a previous version of our algorithm, which operates in a noise-free context, we introduce here a number of indicators attached to each rule that allow us to evaluate whether a revision should take place immediately or should be delayed. We provide an empirical evaluation on usual RRL benchmarks.
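The revision loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual algorithm: the `Rule` fields, the match/contradiction counters used as per-rule indicators, the reliability threshold, and the decision to simply discard an unreliable rule (rather than apply a proper specialization operator) are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of a data-driven revision loop with per-rule noise
# indicators: a rule is only revised (here: discarded) once contradicting
# examples outweigh a noise tolerance, so isolated noisy examples do not
# trigger an immediate revision.
from dataclasses import dataclass


@dataclass
class Rule:
    precondition: frozenset   # literals the rule requires in the state
    effect: frozenset         # predicted resulting literals
    matches: int = 0          # examples the rule predicted correctly
    contradictions: int = 0   # examples that contradicted the rule

    def covers(self, state):
        return self.precondition <= state

    def reliability(self):
        total = self.matches + self.contradictions
        return self.matches / total if total else 0.0


def revise(rules, state, observed_effect, noise_tolerance=0.2):
    """Process one example; delay revision of a covering rule until its
    contradiction rate exceeds the noise tolerance."""
    kept = []
    for rule in rules:
        if not rule.covers(state):
            kept.append(rule)
        elif rule.effect == observed_effect:
            rule.matches += 1
            kept.append(rule)
        else:
            rule.contradictions += 1
            # Delay: keep the rule while the evidence against it is weak.
            if rule.reliability() >= 1.0 - noise_tolerance:
                kept.append(rule)
    # If no kept rule explains the example, add a maximally specific rule.
    if not any(r.covers(state) and r.effect == observed_effect for r in kept):
        kept.append(Rule(frozenset(state), frozenset(observed_effect)))
    return kept
```

The design point being illustrated is the delayed revision itself: in the noise-free setting every contradicting example triggers an immediate revision, whereas here the per-rule counters let a well-supported rule survive occasional noisy contradictions.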




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rodrigues, C., Gérard, P., Rouveirol, C. (2011). Incremental Learning of Relational Action Models in Noisy Environments. In: Frasconi, P., Lisi, F.A. (eds) Inductive Logic Programming. ILP 2010. Lecture Notes in Computer Science, vol 6489. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21295-6_24


  • DOI: https://doi.org/10.1007/978-3-642-21295-6_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21294-9

  • Online ISBN: 978-3-642-21295-6

  • eBook Packages: Computer Science, Computer Science (R0)
