
Multimodal Backchannels for Embodied Conversational Agents

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6356)

Abstract

One of the most desirable characteristics of an Embodied Conversational Agent (ECA) is the capability of interacting with users in a human-like manner. While listening to a user, an ECA should be able to provide backchannel signals through visual and acoustic modalities. In this work we propose an improvement of our previous system for generating multimodal backchannel signals across the visual and acoustic modalities. A perceptual study has been performed to understand how users interpret context-free multimodal backchannels.
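The abstract's core idea, triggering a listener backchannel and realising it on one or both modalities, can be sketched as a minimal rule-based selector. This is an illustration only, not the authors' system (which builds on the Greta agent and the SEMAINE API); the cue names, signal catalogues, and probabilities below are hypothetical:

```python
import random
from typing import Optional

# Hypothetical catalogue of context-free backchannel signals.
# Visual signals would be rendered on the agent's face/head;
# acoustic signals are short vocalisations produced by a TTS engine.
VISUAL = ["head_nod", "smile", "raise_eyebrows"]
ACOUSTIC = ["mm-hm", "yeah", "ok"]

def pick_backchannel(pause_detected: bool, pitch_drop: bool,
                     rng: random.Random) -> Optional[dict]:
    """Return a multimodal backchannel when speech cues suggest a
    listener response is appropriate, otherwise None."""
    if not (pause_detected or pitch_drop):
        return None
    signal = {"visual": rng.choice(VISUAL)}
    # Combine modalities only part of the time, so the agent does
    # not vocalise at every backchannel opportunity.
    if rng.random() < 0.5:
        signal["acoustic"] = rng.choice(ACOUSTIC)
    return signal

rng = random.Random(0)
print(pick_backchannel(pause_detected=True, pitch_drop=False, rng=rng))
```

In a full system the boolean cues would come from real-time prosodic analysis of the user's speech, and the chosen signal would be passed to the agent's animation and speech-synthesis pipeline.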




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bevacqua, E., Pammi, S., Hyniewska, S.J., Schröder, M., Pelachaud, C. (2010). Multimodal Backchannels for Embodied Conversational Agents. In: Allbeck, J., Badler, N., Bickmore, T., Pelachaud, C., Safonova, A. (eds) Intelligent Virtual Agents. IVA 2010. Lecture Notes in Computer Science (LNAI), vol 6356. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15892-6_21


  • DOI: https://doi.org/10.1007/978-3-642-15892-6_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15891-9

  • Online ISBN: 978-3-642-15892-6
