
Natural Language Dialog System Considering Speaker’s Emotion Calculated from Acoustic Features

  • Takumi Takahashi
  • Kazuya Mera
  • Tang Ba Nhat
  • Yoshiaki Kurosawa
  • Toshiyuki Takezawa
Chapter
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 427)

Abstract

With the development of Interactive Voice Response (IVR) systems, people can not only operate computer systems through task-oriented conversation but also enjoy non-task-oriented conversation with a computer. When an IVR system generates a response, it usually refers only to the verbal information in the user’s utterance. However, when a person gloomily says “I’m fine,” people respond not with “That’s wonderful” but with “Really?” or “Are you OK?” because we consider both verbal and non-verbal information such as tone of voice, facial expressions, and gestures. In this article, we propose an intelligent IVR system that considers not only verbal but also non-verbal information. To estimate the speaker’s emotion (positive, negative, or neutral), 384 acoustic features extracted from the speaker’s utterance are fed to a support vector machine (SVM) classifier. Artificial Intelligence Markup Language (AIML)-based response-generation rules are expanded so that they can take the speaker’s emotion into account. In our experiment, subjects felt that the proposed dialog system was more likable and enjoyable and that its reactions were less machine-like.
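To make the pipeline concrete, the following Python sketch (illustrative only, not the authors’ implementation) trains an SVM on 384-dimensional acoustic feature vectors and then selects a response template keyed on both the recognized text and the estimated emotion, analogous to the emotion-expanded AIML rules described above. All function names, response templates, and training data here are hypothetical placeholders.

    # Illustrative sketch (not the authors' code): classify speaker emotion
    # from acoustic features with an SVM, then pick an emotion-aware response.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Assume X holds one 384-dimensional acoustic feature vector per
    # utterance (e.g., extracted with an audio feature extractor) and y
    # holds labels: "positive", "negative", or "neutral".
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 384))          # placeholder training features
    y = rng.choice(["positive", "negative", "neutral"], size=300)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)

    # Emotion-conditioned response rules, analogous to AIML categories
    # expanded with an emotion condition (hypothetical templates).
    RULES = {
        ("I'm fine", "positive"): "That's wonderful!",
        ("I'm fine", "negative"): "Really? Are you OK?",
        ("I'm fine", "neutral"):  "Good to hear.",
    }

    def respond(text: str, features: np.ndarray) -> str:
        """Return a response considering both words and estimated emotion."""
        emotion = clf.predict(features.reshape(1, -1))[0]
        return RULES.get((text, emotion), "I see.")

    print(respond("I'm fine", rng.normal(size=384)))

The key design point is that the response lookup is keyed on the (pattern, emotion) pair rather than on the pattern alone, so the same words can yield different replies depending on how they were spoken.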

Keywords

Interactive Voice Response System (IVR) · Acoustic features · Emotion · Support Vector Machine (SVM) · Artificial Intelligence Markup Language (AIML)

Acknowledgements

This research was supported by JSPS KAKENHI Grant Number 26330313 and by the Center of Innovation Program of the Japan Science and Technology Agency (JST).

Copyright information

© Springer Science+Business Media Singapore 2017

Authors and Affiliations

  • Takumi Takahashi (1)
  • Kazuya Mera (1)
  • Tang Ba Nhat (2)
  • Yoshiaki Kurosawa (1)
  • Toshiyuki Takezawa (1)

  1. Graduate School of Information Sciences, Hiroshima City University, Asa-minami-ku, Japan
  2. FPT Software, Tokyo, Japan
