Addressing Uncertainty in Hierarchical User-Centered Planning

  • Felix Richter
  • Susanne Biundo
Chapter
Part of the Cognitive Technologies book series (COGTECH)

Abstract

Companion-Systems need to reason about dynamic properties of their users, e.g., their emotional state, as well as the current state of the environment. The values of these properties are often not directly accessible; hence, information about them must be pieced together from indirect, noisy, or partial observations. To enable a probability-based treatment of partial observability at the planning level, planning problems can be modeled as Partially Observable Markov Decision Processes (POMDPs).
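
For reference, the standard textbook POMDP formulation (general notation, not taken from this chapter): a POMDP is a tuple

    \langle S, A, O, T, Z, R, b_0 \rangle

where S, A, and O are the state, action, and observation sets, T(s' \mid s, a) is the transition model, Z(o \mid s', a) the observation model, R(s, a) the reward function, and b_0 the initial belief. Since the agent cannot observe the state directly, it maintains a belief b over states, updated after executing action a and receiving observation o via

    b'(s') \propto Z(o \mid s', a) \sum_{s \in S} T(s' \mid s, a) \, b(s)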

While POMDPs can model the relevant planning problems, solving them is algorithmically difficult. A starting point for mitigating this is that many domains exhibit hierarchical structure: plans consist of a number of higher-level activities, each of which can be implemented in different ways that are known a priori. We show how to exploit such structure in POMDPs using the Partially Observable HTN (POHTN) planning approach by developing a POHTN action hierarchy for an example domain derived from an existing deterministic demonstration domain, as sketched below.
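
To make this concrete, here is a minimal Python sketch of an HTN-style action hierarchy; all class and task names (Primitive, Abstract, connect_tv, etc.) are hypothetical illustrations, and the chapter's actual POHTN domain encoding may differ:

    from dataclasses import dataclass
    from typing import Tuple, Union

    @dataclass(frozen=True)
    class Primitive:
        """A directly executable POMDP action."""
        name: str

    @dataclass(frozen=True)
    class Abstract:
        """A higher-level activity; each method is one a-priori-known
        way to implement it as a sequence of subtasks."""
        name: str
        methods: Tuple[Tuple["Task", ...], ...]

    Task = Union[Primitive, Abstract]

    # Two alternative, pre-modeled ways to realize one activity.
    plug_hdmi = Primitive("plug_hdmi")
    plug_scart = Primitive("plug_scart")
    check_signal = Primitive("check_signal")

    connect_tv = Abstract(
        "connect_tv",
        methods=(
            (plug_hdmi, check_signal),   # method 1: via HDMI
            (plug_scart, check_signal),  # method 2: via SCART
        ),
    )

A hierarchical planner chooses among such pre-modeled methods rather than among all primitive action sequences, which restricts the search to plans a domain expert considered sensible.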

We then apply Monte-Carlo Tree Search to POHTNs to generate plans, and empirically evaluate both the developed domain and the POHTN approach.
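
For context, the following UCT-style skeleton sketches the general Monte-Carlo Tree Search technique; it is not the chapter's POHTN-specific algorithm (which searches over method decompositions rather than primitive actions), and Node, uct_child, choices, and rollout are hypothetical placeholder names:

    import math

    class Node:
        def __init__(self):
            self.children = {}  # choice (e.g., a decomposition method) -> Node
            self.visits = 0
            self.value = 0.0    # sum of sampled returns

    def uct_child(node, c=1.4):
        # UCB1: trade off mean return against exploring rarely tried choices.
        def score(child):
            if child.visits == 0:
                return float("inf")
            return child.value / child.visits + c * math.sqrt(
                math.log(node.visits) / child.visits)
        return max(node.children.values(), key=score)

    def search(root, budget, choices, rollout):
        """choices(node): applicable choices; rollout(node): one sampled return."""
        for _ in range(budget):
            node, path = root, [root]
            while node.children:                       # 1. selection
                node = uct_child(node)
                path.append(node)
            if node.visits > 0:                        # 2. expansion
                node.children = {ch: Node() for ch in choices(node)}
                if node.children:
                    node = uct_child(node)
                    path.append(node)
            ret = rollout(node)                        # 3. simulation
            for n in path:                             # 4. backpropagation
                n.visits += 1
                n.value += ret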

Acknowledgements

This work was done within the Transregional Collaborative Research Centre SFB/TRR 62 “Companion-Technology for Cognitive Technical Systems” funded by the German Research Foundation (DFG).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

1. Institute of Artificial Intelligence, Ulm University, Ulm, Germany
