Machine Learning of Plan Robustness Knowledge About Instances

  • Sergio Jiménez
  • Fernando Fernández
  • Daniel Borrajo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3720)


Classical planning domain representations assume that all objects of a given type behave identically. In real-world systems, however, the execution of a plan that theoretically solves a problem can fail because the initial representation does not capture the special features of an object. We propose to capture this uncertainty about the world with an architecture that integrates planning, execution and learning. In this paper, we describe the PELA system (Planning-Execution-Learning Architecture). This system generates plans, executes those plans in the real world, and automatically acquires knowledge about the behaviour of objects to strengthen future execution.
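The paper does not specify PELA's internals here, but the planning-execution-learning loop it describes can be illustrated with a minimal sketch. Everything below is hypothetical: the object names, the `RobustnessModel` class, and the toy "planner" that simply prefers the object estimated to be most reliable are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

class RobustnessModel:
    """Tracks per-object success statistics learned from execution."""
    def __init__(self):
        # object -> [successes, trials]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, obj, success):
        s = self.stats[obj]
        s[1] += 1
        if success:
            s[0] += 1

    def robustness(self, obj):
        succ, trials = self.stats[obj]
        # Laplace smoothing: an unseen object is assumed moderately reliable
        return (succ + 1) / (trials + 2)

def plan(candidate_objects, model):
    """Toy 'planner': choose the object currently estimated most robust."""
    return max(candidate_objects, key=model.robustness)

def execute(obj, true_reliability):
    """Simulated real-world execution: can fail despite a correct plan."""
    return random.random() < true_reliability[obj]

# Two objects of the same planning type whose real behaviour differs
# (hypothetical reliabilities, unknown to the planner)
reliability = {"gripper-a": 0.95, "gripper-b": 0.40}
model = RobustnessModel()

random.seed(0)
for _ in range(200):
    obj = plan(reliability.keys(), model)
    model.record(obj, execute(obj, reliability))

# After learning, the planner favours the more reliable object
print(plan(reliability.keys(), model))
```

The point of the sketch is only the feedback loop: execution outcomes feed a per-object model that later planning consults, so two objects that are indistinguishable in the classical domain representation end up being treated differently.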





Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Sergio Jiménez¹
  • Fernando Fernández¹
  • Daniel Borrajo¹
  1. Departamento de Informática, Universidad Carlos III de Madrid, Leganés (Madrid), Spain
