Virtual Rapport

  • Jonathan Gratch
  • Anna Okhmatovskaia
  • Francois Lamothe
  • Stacy Marsella
  • Mathieu Morales
  • R. J. van der Werf
  • Louis-Philippe Morency
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4133)

Abstract

Effective face-to-face conversations are highly interactive. Participants respond to each other, engaging in nonconscious behavioral mimicry and backchanneling feedback. Such behaviors produce a subjective sense of rapport and are correlated with effective communication, greater liking and trust, and greater influence between participants. Creating rapport requires a tight sense-act loop that has been traditionally lacking in embodied conversational agents. Here we describe a system, based on psycholinguistic theory, designed to create a sense of rapport between a human speaker and virtual human listener. We provide empirical evidence that it increases speaker fluency and engagement.
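The tight sense-act loop described above, sensing the speaker's behavior and reacting with listener feedback in real time, can be illustrated with a minimal rule-based trigger. The sketch below is an assumption for illustration only, not the paper's actual system: it supposes that a drop in the speaker's pitch followed by a pause cues a listener head nod, with a refractory period to suppress rapid repeats. The frame format, function names, and thresholds are all invented for this example.

```python
# Illustrative sketch of a listener agent's sense-act loop for
# backchannel feedback. Assumption: a pitch drop followed by a pause
# triggers a head nod. Names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Frame:
    t: float          # time in seconds
    pitch: float      # fundamental frequency in Hz (0.0 when silent)
    speaking: bool    # voice-activity flag

def backchannel_times(frames, low_pitch=120.0, min_gap=1.0):
    """Return times at which the listener agent should nod.

    A nod is scheduled when the speaker's pitch dips below `low_pitch`
    and speech then pauses; `min_gap` suppresses rapid repeat nods.
    """
    nods = []
    last_nod = float("-inf")
    saw_low_pitch = False
    for f in frames:
        if f.speaking and 0.0 < f.pitch < low_pitch:
            saw_low_pitch = True            # candidate cue: pitch drop
        elif not f.speaking and saw_low_pitch:
            if f.t - last_nod >= min_gap:   # refractory period
                nods.append(f.t)
                last_nod = f.t
            saw_low_pitch = False
        elif f.speaking:
            saw_low_pitch = False           # pitch recovered; reset cue
    return nods

stream = [
    Frame(0.0, 200.0, True),
    Frame(0.5, 110.0, True),   # pitch drops below threshold
    Frame(1.0, 0.0, False),    # pause after pitch drop -> nod
    Frame(1.5, 190.0, True),
    Frame(2.0, 0.0, False),    # pause without pitch drop -> no nod
]
print(backchannel_times(stream))  # -> [1.0]
```

The point of the sketch is latency: because the rule fires on low-level prosodic features rather than on understanding the speech, feedback can be produced within the tight timing window that rapport appears to require.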

Keywords

Nonverbal Behavior, Speech Rate, Conversational Agent, Speech Fluency, Conversational Partner

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Jonathan Gratch (1)
  • Anna Okhmatovskaia (1)
  • Francois Lamothe (2)
  • Stacy Marsella (1)
  • Mathieu Morales (2)
  • R. J. van der Werf (3)
  • Louis-Philippe Morency (4)
  1. University of Southern California
  2. Ecole Spéciale Militaire de St-Cyr
  3. University of Twente
  4. Massachusetts Institute of Technology