
Handling Ambiguous Effects in Action Learning

  • Boris Lesner
  • Bruno Zanuttini
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7188)

Abstract

We study the problem of learning stochastic actions in propositional, factored environments, and more precisely, the problem of identifying STRIPS-like effects from transitions in which they are ambiguous. We give an unbiased, maximum likelihood approach, and show that maximally likely actions can be computed efficiently from observations. We also discuss how this study can be used to extend an RL approach for actions with independent effects to one for actions with correlated effects.
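As a rough illustration of the ambiguity referred to above (a minimal sketch, not the authors' algorithm; the function names consistent_effects and likelihood are hypothetical), the snippet below enumerates the STRIPS-like effects compatible with a single observed transition over propositional variables, and evaluates the likelihood of a candidate stochastic action, taken here as a distribution over effects, on a set of transitions.

    # Hypothetical sketch, not the paper's implementation.
    # A state is a dict {variable: bool}; an effect is a frozenset of
    # literals (variable, value) that are set by the action.
    from itertools import chain, combinations
    from math import prod

    def consistent_effects(s, s_next):
        """All effects that could have produced the transition s -> s_next.
        A literal on a variable whose value changed MUST be in the effect;
        a literal that merely re-asserts an unchanged value MAY be in it,
        which is exactly why the effect is ambiguous."""
        must = {(v, s_next[v]) for v in s if s[v] != s_next[v]}
        may = [(v, s_next[v]) for v in s if s[v] == s_next[v]]
        subsets = chain.from_iterable(
            combinations(may, r) for r in range(len(may) + 1))
        return [frozenset(must) | frozenset(sub) for sub in subsets]

    def likelihood(action, transitions):
        """Likelihood of a candidate stochastic action, given as a dict
        {effect: probability}, on observed transitions: each transition
        contributes the total probability of its consistent effects."""
        return prod(
            sum(p for eff, p in action.items()
                if eff in set(consistent_effects(s, s2)))
            for s, s2 in transitions
        )

    # Toy example with two propositional variables x and y.
    s = {"x": False, "y": True}
    s2 = {"x": True, "y": True}       # x flips, y is unchanged
    print(consistent_effects(s, s2))  # both {(x,True)} and {(x,True),(y,True)} fit

    candidate = {frozenset({("x", True)}): 0.7, frozenset(): 0.3}
    print(likelihood(candidate, [(s, s2), (s, s)]))  # 0.7 * 0.3 ≈ 0.21

Computing the maximally likely action over all candidates, rather than scoring one candidate as above, is the part the paper shows can be done efficiently.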

Keywords

stochastic action · maximum likelihood · factored MDP



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Boris Lesner (1)
  • Bruno Zanuttini (1)
  1. GREYC, Université de Caen Basse-Normandie, CNRS UMR 6072, ENSICAEN, France
