Collection and Analysis of Multimodal Interaction in Direction-Giving Dialogues: Towards an Automatic Gesture Selection Mechanism for Metaverse Avatars

  • Takeo Tsukamoto
  • Yumi Muroya
  • Masashi Okamoto
  • Yukiko Nakano
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7471)

Abstract

With the aim of building a spatial gesture generation mechanism for Metaverse avatars, we report on an empirical study of multimodal direction-giving dialogues and propose a prototype gesture generation system. First, we conducted an experiment in which a direction receiver asked for directions to a place on a university campus and a direction giver provided them. Then, using a machine learning technique, we automatically annotated the direction giver’s right-hand gestures and analyzed the distribution of gesture directions. Based on this analysis, we proposed four types of proxemics and found that the distribution of gesture directions differs depending on the proxemic relationship between the conversational participants. Finally, we implemented the gesture generation mechanism in a Metaverse application and demonstrated it with an example.

Keywords

Gesture · Direction giving · Proxemics · Empirical study · Metaverse



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Takeo Tsukamoto¹
  • Yumi Muroya²
  • Masashi Okamoto²
  • Yukiko Nakano²

  1. Graduate School of Science and Technology, Seikei University, Tokyo, Japan
  2. Faculty of Science and Technology, Seikei University, Tokyo, Japan
