Fluid Semantic Back-Channel Feedback in Dialogue: Challenges and Progress

  • Gudny Ragna Jonsdottir
  • Jonathan Gratch
  • Edward Fast
  • Kristinn R. Thórisson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4722)


Participation in natural, real-time dialogue calls for behaviors supported by perception-action cycles of around 100 msec and up. Generating certain kinds of such behaviors, namely envelope feedback, has been possible since the early 1990s. Real-time backchannel feedback related to the content of a dialogue has been more difficult to achieve. In this paper we describe our progress in allowing virtual humans to give rapid within-utterance content-specific feedback in real-time dialogue. We present results from human-subject studies of content feedback, which show that feedback to a particular phrase or word in human-human dialogue comes 560-2500 msec after the phrase's onset, 1 second on average. We also describe a system that produces such feedback with an autonomous agent in limited topic domains, present performance data for this agent in human-agent interaction experiments, and discuss technical challenges in light of the observed human-subject data.
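The timing constraints reported in the abstract can be made concrete with a small sketch. The following is an illustrative simulation, not the paper's actual implementation: it schedules a content-feedback event relative to a phrase's onset, sampling a latency around the reported 1-second mean and clamping it to the observed 560-2500 msec human range. The distribution parameters (a Gaussian with a 300 msec spread) are an assumption for illustration only.

```python
import random

# Human-subject timing from the abstract: content feedback arrives
# 560-2500 msec after phrase onset, ~1 second on average.
FEEDBACK_MIN_MS = 560
FEEDBACK_MAX_MS = 2500
FEEDBACK_MEAN_MS = 1000
FEEDBACK_SPREAD_MS = 300  # assumed spread, not from the paper


def schedule_feedback(onset_ms, rng=random):
    """Return a time (in msec) at which to emit feedback for a
    phrase that started at onset_ms."""
    # Sample a latency near the 1 s mean, then clamp to the
    # observed human range so the agent never responds implausibly
    # early or late.
    latency = rng.gauss(FEEDBACK_MEAN_MS, FEEDBACK_SPREAD_MS)
    latency = max(FEEDBACK_MIN_MS, min(FEEDBACK_MAX_MS, latency))
    return onset_ms + latency


if __name__ == "__main__":
    for onset in (0, 1200, 5000):
        print(onset, schedule_feedback(onset))
```

A real agent would of course gate such feedback on perceived content (the hard problem the paper addresses), not on timing alone; the clamp simply keeps any candidate response inside the human-plausible window.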


Keywords: face-to-face dialogue, real-time envelope feedback, content feedback, interactive virtual agent




  1. Gratch, J., Okhmatovskaia, A., Lamothe, F., Marsella, S., Morales, M., van der Werf, R., et al.: Virtual Rapport. In: Gratch, J., Young, M., Aylett, R., Ballin, D., Olivier, P. (eds.) IVA 2006. LNCS (LNAI), vol. 4133, Springer, Heidelberg (2006)
  2. Thórisson, K.R.: Dialogue Control in Social Interface Agents. In: InterCHI Adjunct Proceedings, Conference on Human Factors in Computing Systems, Amsterdam (1993)
  3. Thórisson, K.R.: Communicative Humanoids: A Computational Model of Psycho-Social Dialogue Skills. Unpublished Ph.D. thesis, Massachusetts Institute of Technology (1996)
  4. Tosa, N.: Neurobaby. In: ACM SIGGRAPH, pp. 212–213 (1993)
  5. Thórisson, K.R.: Natural Turntaking Needs No Manual: Computational Theory and Model, From Perception to Action. In: Granström, B., House, D., Karlsson, I. (eds.) Multimodality in Language and Speech Systems, pp. 173–207. Kluwer Academic Publishers, Dordrecht, The Netherlands (2002)
  6. Bavelas, J.B., Coates, L., Johnson, T.: Listeners as Co-narrators. Journal of Personality and Social Psychology 79(6), 941–952 (2000)
  7. Tickle-Degnen, L., Rosenthal, R.: The Nature of Rapport and its Nonverbal Correlates. Psychological Inquiry 1(4), 285–293 (1990)
  8. Gratch, J., Wang, N., Okhmatovskaia, A., Lamothe, F., Morales, M., van der Werf, R., et al.: Can virtual humans be more engaging than real ones? In: Jacko, J. (ed.) HCII 2007. LNCS, vol. 4552, pp. 286–297. Springer, Heidelberg (2007)
  9. Scherer, K.R., Ellgring, H.: Are facial expressions of emotion produced by categorical affect programs or dynamically driven by appraisal? Emotion (2007)
  10. Ward, N., Tsukahara, W.: Prosodic features which cue back-channel responses in English and Japanese. Journal of Pragmatics 23, 1177–1207 (2000)
  11. Morency, L.-P., Sidner, C., Lee, C., Darrell, T.: Contextual Recognition of Head Gestures. In: Proceedings of the 7th International Conference on Multimodal Interfaces, Trento, Italy (2005)
  12. Kopp, S., Krenn, B., Marsella, S., Marshall, A., Pelachaud, C., Pirker, H., et al.: Towards a common framework for multimodal generation in ECAs: The behavior markup language. In: Gratch, J., Young, M., Aylett, R., Ballin, D., Olivier, P. (eds.) IVA 2006. LNCS (LNAI), vol. 4133, Springer, Heidelberg (2006)
  13. Gratch, J., Marsella, S.: A domain independent framework for modeling emotion. Journal of Cognitive Systems Research 5(4), 269–306 (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Gudny Ragna Jonsdottir (1)
  • Jonathan Gratch (2)
  • Edward Fast (2)
  • Kristinn R. Thórisson (1)

  1. CADIA / Department of Computer Science, Reykjavik University, Ofanleiti 2, IS-103 Reykjavik, Iceland
  2. Institute for Creative Technologies, University of Southern California, 12374 Fiji Way, Marina del Rey, CA 90292, USA
