Authoring Rules for Bodily Interaction: From Example Clips to Continuous Motions

  • Klaus Förger
  • Tapio Takala
  • Roberto Pugliese
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7502)


We explore motion capture as a means for generating expressive bodily interaction between humans and virtual characters. Recorded interactions between humans are used as examples from which rules are formed that control the reactions of a virtual character to human actions. The author of the rules selects segments considered important and features that best describe the desired interaction. These features are motion descriptors that can be calculated in real time, such as quantity of motion or distance between the interacting characters. The rules are authored as mappings from observed descriptors of a human to the desired descriptors of the responding virtual character. Our method enables a straightforward process of authoring continuous and natural interaction. It can be used in games and interactive animations to produce dramatic and emotional effects. Our approach requires fewer example motions than previous machine learning methods and enables manual editing of the produced interaction rules.
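The core idea above, mapping real-time motion descriptors of the human to desired descriptors for the virtual character, can be sketched in a few lines. The descriptor names (quantity of motion, distance) come from the abstract, but the formulas and the example rule below are illustrative assumptions, not the authors' exact definitions:

```python
import numpy as np

def quantity_of_motion(positions):
    """Mean per-frame joint displacement over a window of poses.

    `positions` is a (frames, joints, 3) array of joint positions.
    This is one plausible realization of the 'quantity of motion'
    descriptor, assumed here for illustration.
    """
    frame_deltas = np.diff(positions, axis=0)            # (frames-1, joints, 3)
    joint_speeds = np.linalg.norm(frame_deltas, axis=2)  # (frames-1, joints)
    return float(np.mean(joint_speeds))

def example_rule(human_qom, human_distance):
    """Hypothetical authored rule: observed human descriptors in,
    desired character descriptors out. A motion synthesizer would
    then select or blend clips matching the desired descriptors."""
    desired_qom = 0.8 * human_qom                 # react slightly calmer
    desired_distance = max(1.0, human_distance)   # keep at least 1 m away
    return {"qom": desired_qom, "distance": desired_distance}
```

A downstream animation system would evaluate such rules every frame and drive clip selection so the character's measured descriptors track the desired ones.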


Keywords: animation · motion capture · bodily interaction · continuous interaction · authoring behavior





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Klaus Förger¹
  • Tapio Takala¹
  • Roberto Pugliese¹
  1. Department of Media Technology, School of Science, Aalto University, Espoo, Finland
