Individualized Gesture Production in Embodied Conversational Agents

  • Stefan Kopp
  • Kirsten Bergmann
Part of the Studies in Computational Intelligence book series (SCI, volume 396)

Abstract

Gesturing behavior varies greatly across situations, individuals, and cultures. These variations make gestures difficult to study and model systematically. Nevertheless, both gesture research on real humans and modeling approaches with virtual agents have made significant progress in recent years. In this chapter we review the state of research and present results from an extensive empirical study of human iconic gestures in direction-giving dialogues. We describe how machine learning methods can be employed to extract individual speakers' gesturing styles and to generate individualized language and gestures in embodied conversational agents (ECAs). Evaluations show that human observers rate virtual agents as more competent, human-like, and likable when the agents produce a consistent individual gesture style.
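The chapter's learning-based approach (GNetIc, refs. 5 and 7) learns Bayesian decision networks over corpus annotations to capture speaker-specific gesture choices. As a loose illustration only — not the authors' actual model, annotation scheme, or feature set — the following sketch estimates a per-speaker conditional distribution over gesture techniques given a referent's shape and samples from it. All speaker IDs, shape labels, and technique labels are hypothetical.

```python
import random
from collections import Counter, defaultdict

# Toy annotated corpus: (speaker, referent_shape, gesture_technique).
# Labels are invented for illustration; the real GNetIc corpus is far richer.
corpus = [
    ("S1", "round", "shaping"), ("S1", "round", "shaping"),
    ("S1", "longish", "drawing"), ("S1", "round", "drawing"),
    ("S2", "round", "drawing"), ("S2", "longish", "drawing"),
    ("S2", "longish", "posturing"), ("S2", "round", "drawing"),
]

def learn_style(corpus, speaker):
    """Estimate P(technique | shape) for one speaker by relative frequency."""
    counts = defaultdict(Counter)
    for spk, shape, technique in corpus:
        if spk == speaker:
            counts[shape][technique] += 1
    return {
        shape: {t: n / sum(c.values()) for t, n in c.items()}
        for shape, c in counts.items()
    }

def generate(style, shape, rng=random.Random(0)):
    """Sample a gesture technique from the speaker-specific distribution."""
    dist = style[shape]
    techniques, probs = zip(*sorted(dist.items()))
    return rng.choices(techniques, weights=probs, k=1)[0]

s1_style = learn_style(corpus, "S1")
technique = generate(s1_style, "round")
```

In this toy data, speaker S1 mostly uses shaping gestures for round referents while S2 prefers drawing, so sampling from the learned tables already yields distinguishable "styles" — the same intuition that, in the chapter, is realized with full Bayesian decision networks rather than a single conditional probability table.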

Keywords

Noun Phrase, Decision Node, Virtual Agent, Virtual Human, Conversational Agent
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Ball, G., Breese, J.: Emotion and personality in a conversational agent. In: Cassell, J., Sullivan, J., Prevost, S., Churchill, E. (eds.) Embodied Conversational Agents, pp. 189–219. MIT Press, Cambridge (2000)
  2. Bavelas, J., Gerwing, J., Sutton, C., Prevost, D.: Gesturing on the telephone: Independent effects of dialogue and visibility. Journal of Memory and Language 58, 495–520 (2008)
  3. Bavelas, J., Kenwood, C., Johnson, T., Philips, B.: An experimental study of when and how speakers use gestures to communicate. Gesture 2(1), 1–17 (2002)
  4. Bente, G., Leuschner, H., Al Issa, A., Blascovich, J.: The others: Universals and cultural specificities in the perception of status and dominance from nonverbal behavior. Consciousness and Cognition 19(3), 762–777 (2010)
  5. Bergmann, K., Kopp, S.: GNetIc – Using Bayesian decision networks for iconic gesture generation. In: Ruttkay, Z., Kipp, M., Nijholt, A., Vilhjálmsson, H.H. (eds.) IVA 2009. LNCS, vol. 5773, pp. 76–89. Springer, Heidelberg (2009)
  6. Bergmann, K., Kopp, S.: Increasing expressiveness for virtual agents – Autonomous generation of speech and gesture in spatial description tasks. In: Decker, K., Sichman, J., Sierra, C., Castelfranchi, C. (eds.) Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems, IFAAMAS, Budapest, Hungary, pp. 361–368 (2009)
  7. Bergmann, K., Kopp, S.: Modelling the production of co-verbal iconic gestures by learning Bayesian decision networks. Applied Artificial Intelligence 24(6), 530–551 (2010)
  8. Bergmann, K., Kopp, S., Eyssel, F.: Individualized gesturing outperforms average gesturing – Evaluating gesture production in virtual humans. In: Allbeck, J., Badler, N., Bickmore, T., Pelachaud, C., Safonova, A. (eds.) Proceedings of the 10th Conference on Intelligent Virtual Agents, pp. 104–117. Springer, Berlin (2010)
  9. Cassell, J., Stone, M., Douville, B., Prevost, S., Achorn, B., Steedman, M., Badler, N., Pelachaud, C.: Modeling the interaction between speech and gesture. In: Ram, A., Eiselt, K. (eds.) Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, pp. 153–158. Lawrence Erlbaum Associates (1994)
  10. Chi, D., Costa, M., Zhao, L., Badler, N.: The EMOTE model for effort and shape. In: Akeley, K. (ed.) Proceedings of SIGGRAPH 2000, pp. 173–182. Addison-Wesley Longman (2000)
  11. Foster, M., Oberlander, J.: Corpus-based generation of head and eyebrow motion for an embodied conversational agent. Language Resources and Evaluation 41, 305–323 (2007)
  12. Hartmann, B., Mancini, M., Pelachaud, C.: Implementing expressive gesture synthesis for embodied conversational agents. In: Gibet, S., Courty, N., Kamp, J.-F. (eds.) GW 2005. LNCS (LNAI), vol. 3881, pp. 188–199. Springer, Heidelberg (2006)
  13. Hostetter, A., Alibali, M.: Raise your hand if you're spatial – Relations between verbal and spatial skills and gesture production. Gesture 7(1), 73–95 (2007)
  14. Howard, R., Matheson, J.: Influence diagrams. Decision Analysis 2(3), 127–143 (2005)
  15. Kendon, A.: Gesture – Visible Action as Utterance. Cambridge University Press (2004)
  16. Kimbara, I.: On gestural mimicry. Gesture 6(1), 39–61 (2006)
  17. Kita, S.: How representational gestures help speaking. In: McNeill, D. (ed.) Language and Gesture, pp. 162–185. Cambridge University Press, Cambridge (2000)
  18. Kopp, S.: Social resonance and embodied coordination in face-to-face conversation with artificial interlocutors. Speech Communication 52, 587–597 (2010)
  19. Kopp, S., Bergmann, K., Wachsmuth, I.: Multimodal communication from multimodal thinking – Towards an integrated model of speech and gesture production. International Journal of Semantic Computing 2(1), 115–136 (2008)
  20. Kopp, S., Tepper, P., Ferriman, K., Striegnitz, K., Cassell, J.: Trading spaces: How humans and humanoids use speech and gesture to give directions. In: Nishida, T. (ed.) Conversational Informatics, pp. 133–160. John Wiley, New York (2007)
  21. Kopp, S., Wachsmuth, I.: Synthesizing multimodal utterances for conversational agents. Computer Animation and Virtual Worlds 15(1), 39–52 (2004)
  22. Lauritzen, S.L.: The EM algorithm for graphical association models with missing data. Computational Statistics and Data Analysis 19, 191–201 (1995)
  23. Madsen, A., Jensen, F., Kjærulff, U., Lang, M.: HUGIN – The tool for Bayesian networks and influence diagrams. International Journal of Artificial Intelligence Tools 14(3), 507–543 (2005)
  24. McNeill, D.: Gesture and Thought. Univ. of Chicago Press, Chicago (2005)
  25. Melinger, A., Levelt, W.: Gesture and the communicative intention of the speaker. Gesture 4(2), 119–141 (2004)
  26. Müller, C.: Redebegleitende Gesten: Kulturgeschichte – Theorie – Sprachvergleich. Berlin Verlag, Berlin (1998)
  27. Nass, C., Isbister, K., Lee, E.-J.: Truth is beauty: Researching embodied conversational agents. In: Cassell, J., Sullivan, J., Prevost, S., Churchill, E. (eds.) Embodied Conversational Agents, pp. 374–402. MIT Press, Cambridge (2000)
  28. Neff, M., Kipp, M., Albrecht, I., Seidel, H.-P.: Gesture modeling and animation based on a probabilistic re-creation of speaker style. ACM Transactions on Graphics 27(1), 1–24 (2008)
  29. Ruttkay, Z.: Presenting in style by virtual humans. In: Esposito, A., Faundez-Zanuy, M., Keller, E., Marinaro, M. (eds.) COST Action 2102. LNCS (LNAI), vol. 4775, pp. 23–36. Springer, Heidelberg (2007)
  30. Steck, H., Tresp, V.: Bayesian belief networks for data mining. In: Proceedings of the 2nd Workshop on Data Mining and Data Warehousing (1999)
  31. Stone, M., DeCarlo, D., Oh, I., Rodriguez, C., Stere, A., Lees, A., Bregler, C.: Speaking with hands: Creating animated conversational characters from recordings of human performance. In: Proceedings of SIGGRAPH 2004, pp. 506–513 (2004)
  32. Stone, M., Doran, C., Webber, B., Bleam, T., Palmer, M.: Microplanning with communicative intentions: The SPUD system. Computational Intelligence 19(4), 311–381 (2003)
  33. Streeck, J.: Depicting by gesture. Gesture 8(3), 285–301 (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Stefan Kopp (1)
  • Kirsten Bergmann (2)
  1. Sociable Agents Group, CITEC, Faculty of Technology, Bielefeld University, Bielefeld, Germany
  2. SFB 673 Alignment in Communication, Bielefeld University, Bielefeld, Germany