A POMDP Design Framework for Decision Making in Assistive Robots

  • Ioannis Kostavelis
  • Dimitrios Giakoumis
  • Sotiris Malassiotis
  • Dimitrios Tzovaras
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10271)

Abstract

This paper proposes a theoretical framework that determines the high-level cognitive functions required by multipurpose assistive service robots to complete their tasks autonomously. It encompasses a probabilistic POMDP-based decision-making strategy that provides continuous situation awareness about the human and the environment by associating the robot's awareness of the user with specific clusters of robotic actions. To achieve this, a method for designing POMDP models is presented, capable of defining decision-making policies suitable for resolving assistive tasks through a series of robotic actions. The proposed POMDP design methodology compensates for the partial and noisy input acquired from the robot's sensors by foreseeing mitigation strategies for the robot's decisions when a software component fails. The theoretical work presented herein is assessed over well-defined robotic tasks and proved capable of operating in realistic assistive robotic scenarios.
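The decision-making strategy described above rests on the standard POMDP formalism, in which the robot maintains a belief (a probability distribution) over hidden states of the user and environment and updates it from noisy observations. The following minimal sketch, not the paper's actual models, illustrates one such Bayes-filter belief update; the two hypothetical states, the transition matrix and the observation matrix are invented placeholders for illustration only.

```python
import numpy as np

# Hypothetical hidden states of the user (not from the paper).
states = ["user_ok", "user_needs_help"]

# T[s, s'] = P(s' | s, a) for one fixed robot action (assumed values).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Z[s', o] = P(o | s', a): how likely each observation is in each state (assumed values).
Z = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def belief_update(belief, obs_idx):
    """One belief update step: predict with T, then correct with the observation model Z."""
    predicted = belief @ T            # sum_s T(s'|s,a) * b(s)
    updated = predicted * Z[:, obs_idx]  # multiply by P(o | s', a)
    return updated / updated.sum()    # normalise to a probability distribution

b = np.array([0.5, 0.5])              # uniform initial belief over the two states
b = belief_update(b, obs_idx=1)       # a "help needed" cue was observed
print(dict(zip(states, b)))           # belief mass shifts toward user_needs_help
```

A decision-making policy of the kind the paper designs would then map such beliefs to clusters of robotic actions, rather than acting on a single assumed state.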

Keywords

Decision making · Assistive robots · POMDP · Partial observability · Robot situation awareness

Notes

Acknowledgments

This work has been supported by the EU Horizon 2020 funded project "Robotic Assistant for MCI Patients at home (RAMCIP)", under grant agreement no. 643433.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Ioannis Kostavelis (1)
  • Dimitrios Giakoumis (1)
  • Sotiris Malassiotis (1)
  • Dimitrios Tzovaras (1)
  1. Centre for Research and Technology Hellas, Information Technologies Institute, Thermi, Thessaloniki, Greece