Backchannels: Quantity, Type and Timing Matters

  • Ronald Poppe
  • Khiet P. Truong
  • Dirk Heylen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)

Abstract

In a perception experiment, we systematically varied the quantity, type and timing of backchannels. Participants viewed stimuli of a real speaker side-by-side with an animated listener and rated how human-like they perceived the latter's backchannel behavior. In addition, we obtained measures of appropriateness and optionality for each backchannel from keystrokes. This approach allowed us to analyze the influence of each factor both on entire fragments and on individual backchannels. The originally performed type and timing of a backchannel appeared more human-like than a switched type or random timing. In addition, we found that nods are more often judged appropriate than vocalizations. For quantity, both too few and too many backchannels per minute appeared to reduce the perceived quality of the behavior. These findings are important for the design of algorithms that automatically generate backchannel behavior for artificial listeners.
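To make the three manipulated factors concrete, the sketch below shows one way such stimulus conditions could be generated programmatically. This is purely illustrative: the function name, event format and parameters are our own assumptions, not the authors' actual stimulus-generation pipeline.

```python
import random

# Illustrative sketch of varying the quantity, type and timing of
# backchannel events, mirroring the three factors in the experiment.
# All names, parameters and the (time, type) event format are
# hypothetical assumptions, not the authors' code.

TYPES = ("nod", "vocalization")

def make_condition(original_events, duration_s,
                   rate_per_min=None, switch_type=False, random_timing=False):
    """Return a list of (time_s, type) backchannel events for one stimulus.

    original_events : (time_s, type) events as performed by the listener
    duration_s      : length of the speaker fragment in seconds
    rate_per_min    : if set, resample events to this quantity per minute
    switch_type     : replace each nod with a vocalization and vice versa
    random_timing   : redistribute events uniformly over the fragment
    """
    events = list(original_events)

    # Quantity: adjust the number of backchannels per minute.
    if rate_per_min is not None:
        target = max(1, round(rate_per_min * duration_s / 60))
        events = random.sample(events, min(target, len(events)))
        while len(events) < target:  # too few originals: add random extras
            events.append((random.uniform(0, duration_s), random.choice(TYPES)))

    # Type: switch nod <-> vocalization.
    if switch_type:
        events = [(t, TYPES[1 - TYPES.index(k)]) for t, k in events]

    # Timing: replace the original placement with random placement.
    if random_timing:
        events = [(random.uniform(0, duration_s), k) for _, k in events]

    return sorted(events)

# Example: a 60 s fragment with two performed backchannels, rendered
# in a "switched type, random timing" condition.
performed = [(12.3, "nod"), (41.7, "vocalization")]
print(make_condition(performed, 60.0, switch_type=True, random_timing=True))
```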

Keywords

Original Timing, Random Timing, Timing Matter, Virtual Agent, Perception Experiment

Supplementary material

978-3-642-23974-8_25_MOESMa_ESM.avi: Electronic supplementary material (2.6 MB)
978-3-642-23974-8_25_MOESMb_ESM.avi: Electronic supplementary material (2.6 MB)


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Ronald Poppe (1)
  • Khiet P. Truong (1)
  • Dirk Heylen (1)

  1. Human Media Interaction Group, University of Twente, Enschede, The Netherlands
