Autonomous Agents and Multi-Agent Systems

Volume 20, Issue 1, pp 70–84

A probabilistic multimodal approach for predicting listener backchannels

  • Louis-Philippe Morency
  • Iwan de Kok
  • Jonathan Gratch

DOI: 10.1007/s10458-009-9092-y

Cite this article as:
Morency, LP., de Kok, I. & Gratch, J. Auton Agent Multi-Agent Syst (2010) 20: 70. doi:10.1007/s10458-009-9092-y

Abstract

During face-to-face interactions, listeners use backchannel feedback such as head nods as a signal to the speaker that the communication is working and that they should continue speaking. Predicting these backchannel opportunities is an important milestone for building engaging and natural virtual humans. In this paper we show how sequential probabilistic models (e.g., Hidden Markov Models or Conditional Random Fields) can automatically learn from a database of human-to-human interactions to predict listener backchannels using the speaker's multimodal output features (e.g., prosody, spoken words and eye gaze). The main challenges addressed in this paper are the automatic selection of relevant features and the optimal feature representation for probabilistic models. For prediction of visual backchannel cues (i.e., head nods), our prediction model shows a statistically significant improvement over a previously published approach based on hand-crafted rules.
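To make the setup concrete, the sketch below frames backchannel prediction as sequence labelling with a linear-chain CRF, one of the model families named in the abstract. It is an illustrative sketch only, not the authors' code: the sklearn-crfsuite library stands in for the paper's CRF implementation, and the feature names (pause, low_pitch, gaze_at_listener), labels ("BC"/"O") and toy data are hypothetical placeholders for the speaker features and listener backchannel annotations described in the paper.

# Illustrative sketch (not the authors' code): listener-backchannel prediction
# as sequence labelling with a linear-chain CRF via sklearn-crfsuite.
# Feature names and the toy data are hypothetical placeholders.
import sklearn_crfsuite

# Two toy "interactions": each frame records a few speaker features and
# whether the listener produced a backchannel ("BC") at that point ("O" = none).
toy_interactions = [
    (
        [  # speaker features per frame
            {"pause": False, "low_pitch": False, "gaze_at_listener": False},
            {"pause": True,  "low_pitch": True,  "gaze_at_listener": True},
            {"pause": True,  "low_pitch": True,  "gaze_at_listener": True},
        ],
        ["O", "BC", "BC"],  # listener backchannel labels per frame
    ),
    (
        [
            {"pause": False, "low_pitch": True,  "gaze_at_listener": False},
            {"pause": False, "low_pitch": False, "gaze_at_listener": True},
            {"pause": True,  "low_pitch": True,  "gaze_at_listener": True},
        ],
        ["O", "O", "BC"],
    ),
]

X_train = [frames for frames, _ in toy_interactions]
y_train = [labels for _, labels in toy_interactions]

# Linear-chain CRF trained with L-BFGS and L1/L2 regularisation.
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

# Predict backchannel opportunities for a new speaker feature sequence.
new_sequence = [
    {"pause": False, "low_pitch": False, "gaze_at_listener": False},
    {"pause": True,  "low_pitch": True,  "gaze_at_listener": True},
]
print(crf.predict([new_sequence]))  # e.g. [['O', 'BC']]

In this framing, each per-frame feature dictionary plays the role of the encoded multimodal speaker features, and the feature selection and representation questions raised in the abstract correspond to choosing which cues enter these dictionaries and how they are encoded.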

Keywords

Listener backchannel feedback · Nonverbal behavior prediction · Sequential probabilistic model · Conditional random field · Head nod · Multimodal

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Louis-Philippe Morency (1)
  • Iwan de Kok (2)
  • Jonathan Gratch (1)

  1. Institute for Creative Technologies, University of Southern California, Marina del Rey, USA
  2. Human Media Interaction Group, University of Twente, Enschede, The Netherlands
