Multi-modal Intention Prediction with Probabilistic Movement Primitives

  • Oriane Dermy (corresponding author)
  • François Charpillet
  • Serena Ivaldi
Conference paper
Part of the Springer Proceedings in Advanced Robotics book series (SPAR, volume 7)

Abstract

This paper proposes a method for multi-modal intention prediction based on a probabilistic description of movement primitives and goals. We target dyadic interaction between a human and a robot in a collaborative scenario. The robot acquires multi-modal models of collaborative action primitives that combine gaze cues from the human partner with kinetic information about the manipulation primitives of its arm. We show that if the partner guides the robot with gaze cues, the robot recognizes the intended action primitive even when the actions are ambiguous. Furthermore, this prior knowledge acquired through gaze greatly improves the prediction of the intended future trajectory during physical interaction. Results with the humanoid robot iCub are presented and discussed.
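To make the pipeline concrete: the approach can be read as Bayesian fusion of a gaze-based prior over action primitives with the likelihood of the observed partial motion under each Probabilistic Movement Primitive (ProMP), followed by Gaussian conditioning of the recognized ProMP to predict the remainder of the trajectory. The sketch below is a minimal single-DoF illustration of that idea, with assumed basis functions, noise levels, and toy primitives; it is not the authors' implementation.

```python
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized Gaussian basis functions over the movement phase t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width))
    return phi / phi.sum(axis=1, keepdims=True)

class ProMP:
    """One primitive: trajectory y(t) = Phi(t) w with weights w ~ N(mu_w, Sigma_w)."""

    def __init__(self, mu_w, Sigma_w, sigma_y=1e-2):
        self.mu_w, self.Sigma_w, self.sigma_y = mu_w, Sigma_w, sigma_y

    def loglik(self, t_obs, y_obs):
        """Marginal log-likelihood of a partially observed trajectory."""
        Phi = rbf_features(t_obs)
        S = Phi @ self.Sigma_w @ Phi.T + self.sigma_y * np.eye(len(t_obs))
        diff = y_obs - Phi @ self.mu_w
        _, logdet = np.linalg.slogdet(S)
        return -0.5 * (diff @ np.linalg.solve(S, diff) + logdet
                       + len(t_obs) * np.log(2.0 * np.pi))

    def condition(self, t_obs, y_obs):
        """Gaussian conditioning of the weight distribution on the observations."""
        Phi = rbf_features(t_obs)
        S = Phi @ self.Sigma_w @ Phi.T + self.sigma_y * np.eye(len(t_obs))
        K = self.Sigma_w @ Phi.T @ np.linalg.inv(S)
        mu = self.mu_w + K @ (y_obs - Phi @ self.mu_w)
        return ProMP(mu, self.Sigma_w - K @ Phi @ self.Sigma_w, self.sigma_y)

def recognize(promps, gaze_prior, t_obs, y_obs):
    """Posterior over primitives: gaze-based prior times motion likelihood (Bayes)."""
    logp = np.log(gaze_prior) + np.array([p.loglik(t_obs, y_obs) for p in promps])
    logp -= logp.max()                      # shift for numerical stability
    post = np.exp(logp)
    return post / post.sum()

# Toy demo: two primitives whose beginnings are nearly identical (ambiguous).
t = np.linspace(0.0, 1.0, 100)
Phi_full = rbf_features(t)
w_a = np.linalg.lstsq(Phi_full, np.sin(np.pi * t), rcond=None)[0]       # "give"
w_b = np.linalg.lstsq(Phi_full, np.sin(np.pi * t) * t, rcond=None)[0]   # "place"
promps = [ProMP(w_a, 0.05 * np.eye(10)), ProMP(w_b, 0.05 * np.eye(10))]

t_obs, y_obs = t[:15], np.sin(np.pi * t[:15])   # early, ambiguous observations
gaze_prior = np.array([0.8, 0.2])               # gaze points toward the first goal
post = recognize(promps, gaze_prior, t_obs, y_obs)
best = promps[int(post.argmax())].condition(t_obs, y_obs)
predicted_traj = Phi_full @ best.mu_w           # predicted continuation of motion
```

In the full system, the gaze prior would be derived from the estimated gaze direction toward candidate goal objects, and the conditioning would run over the multi-dimensional trajectory of the robot's arm rather than a single scalar signal.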

Keywords

Multi-modality · Probabilistic movement primitives · Human-robot interaction · Collaboration

Acknowledgements

The authors wish to thank Olivier Rochel, Alexandros Paraschos, Marco Ewerton, Waldez Azevedo Gomes Junior and Pauline Maurice for their help and feedback.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  • Oriane Dermy (1)
  • François Charpillet (1)
  • Serena Ivaldi (1)

  1. INRIA, Villers-lès-Nancy, France
