Machine Learning of Plan Robustness Knowledge About Instances
Classical planning domain representations assume that all objects of a given type are identical. However, when solving problems in real-world systems, the execution of a plan that theoretically solves a problem can fail because the initial representation does not properly capture the particular features of an object. We propose to capture this uncertainty about the world with an architecture that integrates planning, execution and learning. In this paper, we describe the PELA system (Planning-Execution-Learning Architecture). This system generates plans, executes them in the real world, and automatically acquires knowledge about the behaviour of objects to strengthen future execution processes.
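The plan-execute-learn cycle described above can be sketched in a few lines. This is a minimal illustrative sketch only, not the authors' implementation: all names (`generate_plan`, `execute_action`, the per-object `robustness` table, and the toy objects `crane1`/`crane2`) are hypothetical, and real-world execution is stubbed with a biased coin flip per object.

```python
import random

random.seed(0)  # deterministic for the example

def generate_plan(robustness):
    """Toy planner: prefer objects with higher observed success rates."""
    objects = sorted(robustness, key=robustness.get, reverse=True)
    return [("use", obj) for obj in objects]

def execute_action(action):
    """Stub for real-world execution: each object has a hidden
    success probability the planner does not know a priori."""
    hidden_success_prob = {"crane1": 0.9, "crane2": 0.4}
    return random.random() < hidden_success_prob[action[1]]

def pela_episode(robustness, counts):
    """One planning-execution-learning cycle: plan, execute each
    action, and update per-object robustness statistics."""
    for action in generate_plan(robustness):
        obj = action[1]
        ok = execute_action(action)
        succ, total = counts[obj]
        counts[obj] = (succ + (1 if ok else 0), total + 1)
        robustness[obj] = counts[obj][0] / counts[obj][1]

# All objects start with the same assumed robustness, mirroring the
# classical assumption that objects of one type behave identically.
robustness = {"crane1": 0.5, "crane2": 0.5}
counts = {"crane1": (0, 0), "crane2": (0, 0)}
for _ in range(200):
    pela_episode(robustness, counts)
```

After enough episodes, the learned robustness table separates the reliable object from the unreliable one, so future plans favour the former even though both objects share a type in the domain representation.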