On-the-Fly Detection of User Engagement Decrease in Spontaneous Human–Robot Interaction Using Recurrent and Deep Neural Networks

  • Atef Ben-Youssef
  • Giovanna Varni
  • Slim Essid
  • Chloé Clavel

Abstract

In this paper we consider the detection of a decrease of engagement in users spontaneously interacting with a socially assistive robot in a public space. We first describe the UE-HRI dataset, which gathers spontaneous human–robot interactions collected following the guidelines provided by the affective computing research community for acquiring data "in the wild". We then analyze the users' behaviors, focusing on proxemics, gaze, head motion, facial expressions and speech during the interactions with the robot. Finally, we investigate the use of deep learning techniques (recurrent and deep neural networks) to detect a decrease in user engagement in real time. The results of this work highlight, in particular, the relevance of taking the temporal dynamics of a user's behavior into account: allowing a buffer delay of 1–2 s before taking a decision improves the detection of user engagement decrease.
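As an illustration of the detection setup summarized above, the sketch below shows how a recurrent model could score a short buffered window of fused multimodal features for a decrease of engagement. It is a minimal, hypothetical example and not the authors' implementation: the window length, feature dimension, layer sizes and the use of the Keras API are assumptions made purely for illustration.

```python
# Minimal sketch, not the authors' implementation: a recurrent classifier that
# scores a buffered window of fused multimodal features (proxemics, gaze, head
# motion, facial expressions, speech) for a decrease of user engagement.
# Window length, feature dimension and layer sizes are illustrative choices.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 20      # hypothetical: ~2 s of frames, i.e. the 1-2 s decision buffer
N_FEATURES = 64  # hypothetical dimension of the fused multimodal feature vector

model = keras.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.GRU(32),                         # recurrent layer models temporal dynamics
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of an engagement decrease
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[keras.metrics.AUC()])

# At run time only the most recent WINDOW frames are scored, so a decision
# becomes available once the short buffer has been filled.
window_of_features = np.zeros((1, WINDOW, N_FEATURES), dtype="float32")
p_decrease = float(model.predict(window_of_features, verbose=0)[0, 0])
```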

Keywords

User engagement decrease · Socially assistive robot · HRI in public space · Real-time detection

Notes

Acknowledgements

This work was supported by the European H2020 project ANIMATAS (ITN 7659552) and by a grant overseen by the French National Research Agency (ANR-17-MAOI). The authors would like to thank Nicolas Rollet and Christian Licoppe for useful discussions on pre-closing, and Rodolphe Gelin, Angelica Lim, Marine Chanoux and Myriam Bilac from SoftBank Robotics for their help in recording the UE-HRI dataset.

Funding

This work was supported by SoftBank Robotics.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. LTCI, Télécom Paris, Institut polytechnique de Paris, Paris, France
