Abstract
Intelligent agents solving problems in the real world require domain models that encode widespread knowledge of the world. Domain models can be encoded by human experts or learned automatically by observing existing plans (behaviours). Encoding a domain model manually from experience and intuition is a complex and time-consuming task, even for domain experts. This chapter investigates classical and state-of-the-art methods proposed by researchers for automatically learning domain models from training data. This concerns the learning and representation of knowledge about the operator schemas, discrete or continuous resources, processes, and events involved in a planning domain model. The taxonomy and ordering of the methods reflect their standing and frequency of use in past research. Our intended contribution in this chapter is a broader perspective on the range of techniques in the domain model learning area which underpin the design decisions of the learning tools.
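To make the idea of learning operator schemas from observed behaviour concrete, the following is a minimal sketch (not any specific tool from this chapter) of trace-based operator learning: given several observed applications of the same action, the preconditions, add effects, and delete effects of a STRIPS-style operator can be estimated from the state transitions. The action name, fact names, and data format here are hypothetical illustrations.

```python
def learn_operator(observations):
    """Infer a STRIPS-style operator schema from (state_before, state_after) pairs.

    Preconditions:   facts present in every observed pre-state.
    Add effects:     facts that appear after the action in every observation.
    Delete effects:  facts that disappear after the action in every observation.
    """
    pre = set.intersection(*(set(before) for before, _ in observations))
    adds = set.intersection(*(set(after) - set(before)
                              for before, after in observations))
    dels = set.intersection(*(set(before) - set(after)
                              for before, after in observations))
    return pre, adds, dels


# Two observed applications of a hypothetical "pickup(a)" action.
traces = [
    ({"handempty", "ontable_a", "clear_a"},
     {"holding_a", "clear_a"}),
    ({"handempty", "ontable_a", "clear_a", "clear_b"},
     {"holding_a", "clear_a", "clear_b"}),
]

pre, adds, dels = learn_operator(traces)
print(sorted(pre))   # facts required before the action
print(sorted(adds))  # facts the action adds
print(sorted(dels))  # facts the action deletes
```

Real systems surveyed in this chapter must additionally lift ground facts to parameterised schemas and cope with noisy or partially observable traces; this intersection-based sketch assumes complete, noise-free observations.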
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this chapter
Jilani, R. (2020). Automated Domain Model Learning Tools for Planning. In: Vallati, M., Kitchin, D. (eds) Knowledge Engineering Tools and Techniques for AI Planning. Springer, Cham. https://doi.org/10.1007/978-3-030-38561-3_2
Print ISBN: 978-3-030-38560-6
Online ISBN: 978-3-030-38561-3