HTN-Style Planning in Relational POMDPs Using First-Order FSCs

  • Felix Müller
  • Susanne Biundo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7006)


This paper presents a novel approach to hierarchical planning under partial observability in relational domains. It combines hierarchical task network (HTN) planning with the finite state controller (FSC) policy representation for partially observable Markov decision processes (POMDPs). Based on a new first-order generalization of FSCs, action hierarchies are defined as in traditional hierarchical planning, so that planning corresponds to finding the best plan in a given decomposition hierarchy of predefined, partially abstract FSCs. Finally, we propose an algorithm for solving planning problems in this setting. Our approach offers a practical way of dealing with real-world partial-observability planning problems: it avoids the complexity originating from the dynamic programming backup operation required by many present-day policy generation algorithms.
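To make the FSC policy representation concrete, the following is a minimal sketch of how a (propositional) finite state controller executes in a POMDP: each controller node is labelled with an action, and an observation-driven transition function moves the controller between nodes. All names and the toy domain below are illustrative assumptions, not taken from the paper; the paper's contribution is a first-order generalization of this structure together with HTN-style decomposition over partially abstract controllers.

```python
# Sketch of a finite state controller (FSC) policy for a POMDP.
# A controller is a set of nodes, each labelled with an action,
# plus a transition function keyed on (node, observation).

class FSC:
    def __init__(self, action_of, transition, start):
        self.action_of = action_of    # node -> action label
        self.transition = transition  # (node, observation) -> next node
        self.node = start

    def step(self, observation=None):
        """Advance on the last observation and return the next action."""
        if observation is not None:
            self.node = self.transition[(self.node, observation)]
        return self.action_of[self.node]

# Hypothetical two-node controller: keep listening until the sensor
# reports "clear", then move; a later "blocked" reading sends the
# controller back to its cautious node.
controller = FSC(
    action_of={"cautious": "listen", "go": "move"},
    transition={
        ("cautious", "clear"): "go",
        ("cautious", "blocked"): "cautious",
        ("go", "clear"): "go",
        ("go", "blocked"): "cautious",
    },
    start="cautious",
)

actions = [controller.step()]
for obs in ["blocked", "clear", "clear", "blocked"]:
    actions.append(controller.step(obs))
print(actions)  # ['listen', 'listen', 'move', 'move', 'listen']
```

Note that the controller's memory is entirely its current node, which is what lets FSC-based methods sidestep explicit belief-state tracking; in the paper's setting, abstract controller nodes are additionally refined via predefined decomposition methods, as in HTN planning.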







Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Felix Müller (1)
  • Susanne Biundo (1)
  1. Institute of Artificial Intelligence, Ulm University, Ulm, Germany
