Towards Conversational Agents That Attend to and Adapt to Communicative User Feedback

  • Hendrik Buschmeier
  • Stefan Kopp
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)


Successful dialogue is based on collaborative efforts of the interactants to ensure mutual understanding. This paper presents work towards making conversational agents ‘attentive speakers’ that continuously attend to the communicative feedback given by their interlocutors and adapt their ongoing and subsequent communicative behaviour to their needs. A comprehensive conceptual and architectural model for this is proposed and first steps of its realisation are described. Results from a prototype implementation are presented.


Keywords: Communicative feedback · attentive speaker agents · feedback elicitation · feedback interpretation · attributed listener state · adaptation





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Hendrik Buschmeier (1)
  • Stefan Kopp (1)
  1. Sociable Agents Group, CITEC, Bielefeld University, Bielefeld, Germany
