Designing an Emotion Detection System for a Socially Intelligent Human-Robot Interaction

  • Clément Chastagnol
  • Céline Clavel
  • Matthieu Courgeon
  • Laurence Devillers
Conference paper

Abstract

The long-term goal of this work is to build an assistive robot for elderly and disabled people, as part of the French ANR ARMEN project. Subjects interact with a mobile robot controlled by a virtual character. To build this system, we collected interactions between patients from several medical centers and a Wizard-of-Oz-operated virtual character, following scenarios written with physicians and functional therapists. The spoken human-robot interaction consisted mainly of small talk with the patients, with no real task to perform; for specific tasks such as “finding a remote control,” keyword recognition is performed. The main focus of this article is the construction of an emotion detection system that will be used to control the dialog and the answer strategy of the virtual character. The article presents the Wizard-of-Oz system used to collect the audio corpus on which the emotion detection module is trained. We analyze the audio data at the segmental level, using annotated measures of acoustically perceived emotion, and at the interaction level, using global objective measures such as the amount of speech and emotion. We also report the results of a questionnaire qualifying the interaction and the agent, and compare objective and subjective measures.
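
The emotion detection module described in the abstract is trained on annotated audio segments. As a purely illustrative sketch (not the authors' implementation), the snippet below shows a common baseline for such a module: an RBF-kernel SVM classifying per-segment acoustic feature vectors into a small set of emotion classes. The feature dimensionality, the four-class label set, and the synthetic data standing in for the annotated corpus are all assumptions made for the example; in a real system the feature vectors would be low-level acoustic descriptors (pitch, energy, spectral statistics) extracted from the collected corpus.

```python
# Hypothetical sketch of a segment-level speech emotion classifier.
# Synthetic data stands in for the annotated audio corpus.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Stand-in for per-segment acoustic descriptors (e.g. F0, energy,
# MFCC statistics); 40 features per segment is an arbitrary choice.
n_segments, n_features = 400, 40
X = rng.normal(size=(n_segments, n_features))
# Illustrative 4-class label set (e.g. anger / sadness / positive / neutral).
y = rng.integers(0, 4, size=n_segments)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Feature standardisation followed by an RBF-kernel SVM, a standard
# baseline pipeline for speech emotion recognition.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)

# With random features this reports roughly chance-level accuracy;
# real acoustic features would carry the emotional information.
print(classification_report(y_test, clf.predict(X_test)))
```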

Acknowledgements

This work is funded by the French ANR (http://projet_armen.byethost4.com). The authors wish to thank the association APPROCHE for its help during the data collection, as well as the SME Voxler, a member of the project.


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Clément Chastagnol (1)
  • Céline Clavel (1)
  • Matthieu Courgeon (1)
  • Laurence Devillers (2)

  1. Department of Human-Machine Interaction, LIMSI-CNRS, University of Orsay 11, Orsay, France
  2. Department of Human-Machine Interaction, LIMSI-CNRS, University Paris-Sorbonne 4, Orsay cedex, France