A Framework for Motion Based Bodily Enaction with Virtual Characters

  • Roberto Pugliese
  • Klaus Lehtonen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6895)

Abstract

We propose a novel methodology for authoring interactive behaviors of virtual characters. Our approach is based on enaction, understood as a continuous, two-directional loop of bodily interaction. We have implemented a scenario with two characters, one human and one virtual, who are separated by a glass wall and can interact only through bodily motion. Animations for the virtual character are built from captured motion segments together with style descriptors that are automatically calculated from the motion data. We also present a rule authoring system used to generate behaviors for the virtual character. Preliminary results from an enaction experiment, followed by interviews, show that participants experienced the different interaction rules as different behaviors or attitudes of the virtual character.
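The abstract states that style descriptors are computed automatically from the captured motion data but does not name them. As a rough illustration only, the following minimal Python sketch assumes descriptors such as quantity of motion, smoothness, and spatial extent; these particular features, and the function name, are assumptions for the example and are not confirmed by the paper.

```python
# Hypothetical sketch: the actual descriptors used in the paper are not
# specified in the abstract; these three are assumed for illustration.
import numpy as np

def style_descriptors(positions, fps=120.0):
    """Compute simple style descriptors for one motion segment.

    positions: array of shape (frames, joints, 3) with joint positions.
    Returns a dict of scalar descriptors.
    """
    vel = np.diff(positions, axis=0) * fps   # per-joint velocities
    acc = np.diff(vel, axis=0) * fps         # per-joint accelerations
    speed = np.linalg.norm(vel, axis=-1)     # (frames-1, joints)

    return {
        # Overall quantity of motion: mean joint speed over the segment.
        "quantity_of_motion": float(speed.mean()),
        # Smoothness proxy: mean acceleration magnitude (lower = smoother).
        "smoothness": float(np.linalg.norm(acc, axis=-1).mean()),
        # Spatial extent: volume of the box swept by all joints.
        "spatial_extent": float(np.prod(positions.max(axis=(0, 1))
                                        - positions.min(axis=(0, 1)))),
    }

# Example: descriptors for a random 2-second segment with 20 joints.
segment = np.random.rand(240, 20, 3)
print(style_descriptors(segment))
```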

Keywords

Enaction, motion capture, bodily interaction, authoring behaviors

Supplementary material

978-3-642-23974-8_18_MOESMa_ESM.mp4 (28.4 MB): Electronic Supplementary Material

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Roberto Pugliese (1)
  • Klaus Lehtonen (1)
  1. Department of Media Technology, School of Science, Aalto University, Espoo, Finland
