
Towards More Comprehensive Listening Behavior: Beyond the Bobble Head

  • Zhiyang Wang
  • Jina Lee
  • Stacy Marsella
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)

Abstract

Realizing effective listening behavior in virtual humans has become a key area of research, especially as research has sought to realize more complex social scenarios involving multiple participants and bystanders. A human listener’s nonverbal behavior is conditioned by a variety of factors, from the current speaker’s behavior to the listener’s role in the conversation, desire to participate, and unfolding comprehension of what the speaker is saying. Similarly, we seek to create virtual humans able to provide feedback based on their participatory goals and their partial understanding of, and reaction to, the relevance of what the speaker is saying as the speaker speaks. Based on a survey of the psychological literature as well as recent technological advances in the recognition and partial understanding of natural language, we describe a model that integrates these factors into a virtual human whose behavior is consistent with its goals. We then discuss how the model is implemented in a virtual human architecture and present an evaluation of the behaviors used in the model.

Keywords

artificial intelligence, listener feedback, context-based feedback, nonverbal behavior



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Zhiyang Wang¹
  • Jina Lee¹
  • Stacy Marsella¹

  1. Institute for Creative Technologies, University of Southern California, Playa Vista, USA
