Automated Domain Model Learning Tools for Planning

Chapter in: Knowledge Engineering Tools and Techniques for AI Planning

Abstract

Intelligent agents solving problems in the real world require domain models that encode broad knowledge of the world. Domain models can be written by human experts or learned automatically from observations of existing plans (behaviours). Encoding a domain model manually from experience and intuition is a complex and time-consuming task, even for domain experts. This chapter surveys classical and state-of-the-art methods for learning domain models automatically from training data. This concerns the learning and representation of knowledge about operator schemas, discrete or continuous resources, and the processes and events involved in a planning domain model. The taxonomy and ordering of the methods follow their standing and frequency of use in past research. Our intended contribution in this chapter is to provide a broader perspective on the range of techniques in the domain-model learning area that underpin the design decisions behind learning tools.
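To make the idea of learning operator schemas from observed plans concrete, the sketch below shows a minimal, illustrative approach (not taken from the chapter): given observations of states before and after each action application, preconditions are approximated as the facts present in every observed pre-state, and add/delete effects as the state differences. The action name, facts, and data are hypothetical.

```python
# Minimal sketch of STRIPS-style action-model induction from plan traces.
# Each observation is (state_before, action_name, state_after), with states
# represented as sets of ground facts. This is an illustrative assumption,
# not the method of any specific tool surveyed in the chapter.
from collections import defaultdict

def learn_action_models(observations):
    pre_states = defaultdict(list)
    adds = defaultdict(set)
    dels = defaultdict(set)
    for before, action, after in observations:
        pre_states[action].append(set(before))
        adds[action] |= set(after) - set(before)   # facts the action asserted
        dels[action] |= set(before) - set(after)   # facts the action retracted
    models = {}
    for action, states in pre_states.items():
        # Preconditions: facts common to every observed pre-state.
        models[action] = {
            "pre": set.intersection(*states),
            "add": adds[action],
            "del": dels[action],
        }
    return models

# Two observed applications of a hypothetical 'unstack' action:
obs = [
    ({"on(a,b)", "clear(a)", "handempty"}, "unstack",
     {"holding(a)", "clear(b)"}),
    ({"on(a,b)", "clear(a)", "handempty", "on(b,c)"}, "unstack",
     {"holding(a)", "clear(b)", "on(b,c)"}),
]
model = learn_action_models(obs)["unstack"]
```

With more traces, the intersection of pre-states shrinks toward the true preconditions; real tools additionally lift ground facts to parameterised schemas and must cope with noisy or partial observations, which this sketch ignores.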


Author information

Correspondence to Rabia Jilani.


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Jilani, R. (2020). Automated Domain Model Learning Tools for Planning. In: Vallati, M., Kitchin, D. (eds) Knowledge Engineering Tools and Techniques for AI Planning. Springer, Cham. https://doi.org/10.1007/978-3-030-38561-3_2

  • DOI: https://doi.org/10.1007/978-3-030-38561-3_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-38560-6

  • Online ISBN: 978-3-030-38561-3
