Merging Intention and Emotion to Develop Adaptive Dialogue Systems

  • Zoraida Callejas
  • David Griol
  • Ramón López-Cózar Delgado
Part of the Communications in Computer and Information Science book series (CCIS, volume 328)

Abstract

In this paper we propose a method for merging intentional and emotional information in spoken dialogue systems in order to make dialogue managers more efficient and adaptive. The user's intention and emotional state are predicted for each user turn by a module conceived as an intermediate phase between natural language understanding and dialogue management in the architecture of these systems. We have applied and evaluated our method in the UAH system; the evaluation results show that merging both sources of information improves system performance as well as its perceived quality.
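To illustrate the kind of processing such an intermediate module could perform, the sketch below shows a minimal, hypothetical fusion step in Python. All names (TurnAnalysis, merge_intention_and_emotion, the feature keys and the emotion labels) are illustrative assumptions only; they do not reproduce the paper's actual model, features, or the UAH system.

    # Hypothetical sketch of a fusion module placed between natural language
    # understanding (NLU) and dialogue management (DM). Names and thresholds
    # are illustrative assumptions, not the method described in the paper.
    from dataclasses import dataclass


    @dataclass
    class TurnAnalysis:
        intention: str      # predicted user intention (e.g. a dialogue-act label)
        emotion: str        # predicted emotional state (e.g. "neutral", "angry")
        confidence: float   # confidence in the merged prediction


    def merge_intention_and_emotion(nlu_frame: dict,
                                    acoustic_features: dict) -> TurnAnalysis:
        """Combine the semantic frame from NLU with emotion cues for one user turn."""
        intention = nlu_frame.get("dialogue_act", "unknown")
        # Placeholder emotion decision: a real system would use a trained
        # classifier over acoustic/prosodic and contextual features.
        emotion = "angry" if acoustic_features.get("energy", 0.0) > 0.8 else "neutral"
        confidence = min(nlu_frame.get("confidence", 1.0), 0.9)
        return TurnAnalysis(intention, emotion, confidence)


    def adapt_dialogue_strategy(analysis: TurnAnalysis) -> str:
        """Let the dialogue manager adapt its next action to the merged prediction."""
        if analysis.emotion == "angry":
            return "apologize_and_simplify"   # e.g. offer help, reduce task complexity
        return f"continue_task:{analysis.intention}"

The point of the sketch is only the data flow: one structure per user turn carrying both intention and emotion, consumed by the dialogue manager when selecting its next action.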

Keywords

Spoken Dialogue Systems · Emotion Processing

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Zoraida Callejas (1)
  • David Griol (2)
  • Ramón López-Cózar Delgado (1)
  1. Dept. of Languages and Computer Systems, University of Granada, CITIC-UGR, Granada, Spain
  2. Computer Science Department, Universidad Carlos III de Madrid, Leganés, Spain
