A Conversational Agent that Reacts to Vocal Signals

  • Daniel Formolo
  • Tibor Bosse
Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 178)

Abstract

Conversational agents are increasingly being used for social skills training. One of their most important benefits is their ability to interact naturally with humans. This work proposes to extend those benefits by analysing the emotion conveyed by the user's speech. To this end, we developed a new system that captures emotions from the human voice and, combined with the context of the particular situation, uses them to influence the internal state of the agent and change its behaviour. An example of the system's use is shown, and its limitations and advantages are discussed, together with the internal workflow of the system.
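The workflow the abstract describes (detected vocal emotion, weighted by situational context, updates the agent's internal state, which in turn selects a behaviour) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the emotion labels, the valence/arousal state representation, and all parameter names (`context_weight`, `alpha`) are assumptions introduced here.

```python
from dataclasses import dataclass

# Hypothetical emotion labels; the actual categories used by the
# system are not specified in the abstract.
EMOTION_TARGETS = {
    "neutral": (0.0, 0.0),
    "happy":   (0.8, 0.5),
    "angry":   (-0.8, 0.8),
    "sad":     (-0.6, -0.4),
}

@dataclass
class AgentState:
    # Internal affective state on a [-1, 1] valence/arousal scale
    # (a common representation in affect modelling; an assumption here).
    valence: float = 0.0
    arousal: float = 0.0

def update_state(state: AgentState, emotion: str,
                 context_weight: float = 0.5, alpha: float = 0.3) -> AgentState:
    """Blend the detected user emotion into the agent's internal state.

    `context_weight` scales how strongly the situational context lets
    the detected emotion affect the agent (hypothetical parameter).
    """
    target_v, target_a = EMOTION_TARGETS[emotion]
    step = alpha * context_weight
    # Move the state a fraction of the way toward the detected emotion,
    # so repeated signals accumulate gradually rather than jumping.
    state.valence += step * (target_v - state.valence)
    state.arousal += step * (target_a - state.arousal)
    return state

def choose_behaviour(state: AgentState) -> str:
    """Select a response style from the current internal state."""
    if state.valence < -0.3 and state.arousal > 0.3:
        return "de-escalate"   # user sounds angry: calm, slower speech
    if state.valence > 0.3:
        return "reinforce"     # positive tone: encourage the user
    return "neutral"
```

For example, a sustained run of angry-sounding utterances gradually pushes the agent's state into the negative-valence, high-arousal region, at which point it switches to a de-escalating behaviour.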

Keywords

Virtual agents · Social skills training · Speech analysis · Vocal signals · Emotions

Acknowledgments

This research was supported by the Brazilian scholarship program Science without Borders - CNPq (scholarship reference 233883/2014-2).

Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2017

Authors and Affiliations

  1. Department of Computer Science, VU University Amsterdam, Amsterdam, The Netherlands
