Action Capture: A VR-Based Method for Character Animation

  • Bernhard Jung
  • Heni Ben Amor
  • Guido Heumer
  • Arnd Vitzthum
Chapter

Abstract

This contribution describes a Virtual Reality (VR) based method for character animation that extends conventional motion capture by tracking not only an actor’s movements but also his or her interactions with the objects of a virtual environment. Rather than merely replaying the actor’s movements, the idea is that virtual characters learn to imitate the actor’s goal-directed behavior while interacting with the virtual scene. Following Arbib’s equation action = movement + goal, we call this approach Action Capture. To this end, the VR user’s body movements are analyzed and transformed into a multi-layered action representation. Behavioral animation techniques are then applied to synthesize animations that closely resemble the demonstrated action sequences. As an advantage, captured actions can often be applied naturally to virtual characters of different sizes and body proportions, thus avoiding the retargeting problems of conventional motion capture.
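
To illustrate the idea of a multi-layered action representation that separates goal-level information (which object is acted on and how) from movement-level data (the actor's raw trajectories), the following minimal Python sketch shows one possible structure. The class and field names (CapturedAction, GoalLayer, MovementLayer, retarget) and the two-layer split are illustrative assumptions for this example, not the chapter's actual data format or animation system.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class MovementLayer:
    """Low-level layer: raw tracked data, specific to the demonstrating actor."""
    wrist_trajectory: List[Vec3]             # sampled wrist positions in world space
    finger_joint_angles: List[List[float]]   # per-frame hand posture


@dataclass
class GoalLayer:
    """High-level layer: what was achieved, independent of body proportions."""
    action_type: str    # e.g. "grasp", "turn", "press"
    target_object: str  # scene object the action is directed at
    grasp_type: str     # e.g. a grasp-taxonomy label such as "cylindrical"


@dataclass
class CapturedAction:
    """One demonstrated action = movement + goal (after Arbib)."""
    goal: GoalLayer
    movement: MovementLayer


def retarget(action: CapturedAction, character: str) -> str:
    """Hypothetical goal-level replay: instead of copying the raw trajectory,
    a behavioral controller of the target character would plan its own reach
    and grasp toward the recorded goal."""
    g = action.goal
    return f"{character}: {g.action_type} {g.target_object} with {g.grasp_type} grasp"


if __name__ == "__main__":
    demo = CapturedAction(
        goal=GoalLayer("grasp", "lever_1", "cylindrical"),
        movement=MovementLayer(
            wrist_trajectory=[(0.0, 1.1, 0.4)],
            finger_joint_angles=[[0.2] * 20],
        ),
    )
    print(retarget(demo, "tall_character"))
```

Because the goal layer carries no actor-specific joint data, it can in principle be replayed on characters of different sizes, which is the retargeting advantage the abstract refers to.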

References

  1. [Arb02]
    M. A. Arbib. The mirror system, imitation, and the evolution of language. In Dautenhahn and Nehaniv, editors, Imitation in Animals and Artifacts. MIT Press, Cambridge, 2002.
  2. [BAHJV08]
    H. B. Amor, G. Heumer, B. Jung, and A. Vitzthum. Grasp synthesis from low-dimensional probabilistic grasp models. Computer Animation and Virtual Worlds, 19(3–4):445–454, 2008.
  3. [BAWHJ07]
    H. B. Amor, M. Weber, G. Heumer, and B. Jung. Coordinate system transformations for imitation of goal-directed trajectories in virtual humans. In Virtual Environments 2007. IPT EGVE 2007. 13th Eurographics Symposium on Virtual Environments, Short Papers and Posters, 2007.
  4. [BBA+00]
    N. I. Badler, R. Bindiganavale, J. Allbeck, W. Schuler, L. Zhao, and M. Palmer. Parameterized action representation for virtual human agents. In Embodied Conversational Agents, pages 256–284. MIT Press, Cambridge, 2000.
  5. [BK96]
    P. Bakker and Y. Kuniyoshi. Robot see, Robot do: An Overview of Robot Imitation. In AISB96 Workshop: Learning in Robots and Animals, pages 3–11, 1996.
  6. [BPW93]
    N. I. Badler, C. B. Phillips, and B. L. Webber. Simulating Humans: Computer Graphics, Animation, and Control. Oxford University Press, New York, 1993.
  7. [BS04]
    A. Billard and R. Siegwart, editors. Special Issue on Robot Learning from Demonstration, volume 47 of Robotics and Autonomous Systems, 2004.
  8. [BWKE91]
    N. I. Badler, B. L. Webber, J. Kalita, and J. Esakov. Animation from instructions. In Making Them Move: Mechanics, Control, and Animation of Articulated Figures, pages 51–93. Morgan Kaufmann, San Francisco, 1991.
  9. [DLR77]
    A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1–38, 1977.
  10. [DN02]
    K. Dautenhahn and C. Nehaniv, editors. Imitation in Animals and Artifacts. MIT Press, Cambridge, 2002.
  11. [GHJV09]
    F. Gommlich, G. Heumer, B. Jung, and A. Vitzthum. Simulation of Articulated Standard Control Actuators in Dynamic Virtual Environments. In Proceedings of IEEE Virtual Reality 2009, pages 269–270. IEEE, 2009.
  12. [Gle98]
    M. Gleicher. Retargetting Motion to New Characters. In SIGGRAPH ’98 Conference Proceedings, Computer Graphics Annual Conference Series, pages 33–42. ACM, 1998.
  13. [HBAJ08]
    G. Heumer, H. B. Amor, and B. Jung. Grasp recognition for uncalibrated data gloves: A machine learning approach. Presence: Teleoperators & Virtual Environments, 17(2):121–142, 2008.
  14. [HBAWJ07]
    G. Heumer, H. B. Amor, M. Weber, and B. Jung. Grasp Recognition with Uncalibrated Data Gloves – A Comparison of Classification Methods. In Proceedings of IEEE Virtual Reality Conference, VR ’07, pages 19–26, March 2007.
  15. [HS98]
    H. Heuer and J. Sangals. Task-dependent mixtures of coordinate systems in visuomotor transformations. Experimental Brain Research, 119(2):224–236, 1998.
  16. [JR97]
    W. L. Johnson and J. Rickel. Steve: an animated pedagogical agent for procedural training in virtual environments. SIGART Bulletin, 8(1–4):16–21, 1997.
  17. [Ken04]
    A. Kendon. Gesture: Visible Action as Utterance. Cambridge University Press, Cambridge, 2004.
  18. [KL00]
    J. Kuffner and J. Latombe. Interactive Manipulation Planning for Animated Characters. In Proceedings of Pacific Graphics, 2000.
  19. [KT99]
    M. Kallmann and D. Thalmann. Direct 3D Interaction with Smart Objects. In Proceedings of ACM VRST ’99, London, 1999.
  20. [KT02]
    M. Kallmann and D. Thalmann. Modeling behaviors of interactive objects for real-time virtual environments. Journal of Visual Languages and Computing, 13(2):177–195, 2002.
  21. [Mel96]
    A. N. Meltzoff. The human infant as imitative generalist: a 20-year progress report on infant imitation with implications for comparative psychology. In Social Learning in Animals: The Roots of Culture, pages 347–370, 1996.
  22. [MF96]
    A. Moon and M. Farsi. Grasp Quality Measures in the Control of Dextrous Robot Hands. In IEE Colloquium on Physical Modelling as a Basis for Control (Digest No: 1996/042), pages 6/1–6/4, 1996.
  23. [MF10]
    M. Möhring and B. Fröhlich. Enabling Functional Validation of Virtual Cars through Natural Interaction Metaphors. In Proceedings of IEEE Virtual Reality Conference, VR 2010, 2010.
  24. [MI94]
    C. L. MacKenzie and T. Iberall. The Grasping Hand. Elsevier/North-Holland, 1994.
  25. [MTT04]
    N. Magnenat-Thalmann and D. Thalmann, editors. Handbook of Virtual Humans. Wiley, 2004.
  26. [ND02]
    C. Nehaniv and K. Dautenhahn. The Correspondence Problem. In Dautenhahn and Nehaniv [DN02], pages 41–61.
  27. [ND07]
    C. Nehaniv and K. Dautenhahn, editors. Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions. Cambridge University Press, Cambridge, 2007.
  28. [PZ05]
    N. S. Pollard and V. B. Zordan. Physically based Grasping Control from Example. In SCA ’05: Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 311–318. ACM, New York, 2005.
  29. [RS00]
    S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
  30. [RSM04]
    R. Rao, A. P. Shon, and A. N. Meltzoff. A Bayesian model of imitation in infants and robots. In Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions. Cambridge University Press, Cambridge, 2004.
  31. [Sch19]
    G. Schlesinger. Der Mechanische Aufbau der Künstlichen Glieder. In M. Borchardt et al., editors, Ersatzglieder und Arbeitshilfen für Kriegsbeschädigte und Unfallverletzte, pages 321–661. Springer, Berlin, 1919.
  32. [TdSL00]
    J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
  33. [Tho98]
    E. L. Thorndike. Animal intelligence: an experimental study of the associative processes in animals. Psychological Review Monographs, 8, 1898.
  34. [Tom05]
    B. Tomlinson. From linear to interactive animation: how autonomous characters change the process and product of animating. ACM Computers in Entertainment, 3(1), 2005.
  35. [VAHB09]
    A. Vitzthum, H. B. Amor, G. Heumer, and B. Jung. Action description for animation of virtual characters. In 6. Workshop Virtuelle und Erweiterte Realität. GI-Fachgruppe VR/AR, 2009.
  36. [WHAJ06]
    M. Weber, G. Heumer, H. B. Amor, and B. Jung. An animation system for imitation of object grasping in virtual reality. In ICAT, pages 65–76, 2006.
  37. [Wor08]
    World Wide Web Consortium. Synchronized Multimedia Integration Language (SMIL 3.0), 2008.
  38. [YKH04]
    K. Yamane, J. J. Kuffner, and J. K. Hodgins. Synthesizing animations of human manipulation tasks. ACM Trans. Graph., 23(3):532–539, 2004.

Copyright information

© Springer-Verlag/Wien 2011

Authors and Affiliations

  • Bernhard Jung (1)
  • Heni Ben Amor (1)
  • Guido Heumer (1)
  • Arnd Vitzthum (1)
  1. VR and Multimedia Group, Institute of Informatics, TU Bergakademie Freiberg, Freiberg, Germany