Interactive Demonstration of Pointing Gestures for Virtual Trainers

  • Yazhou Huang
  • Marcelo Kallmann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5611)

Abstract

While interactive virtual humans are becoming widely used in education, training and the delivery of instructions, building the animations required by such interactive characters in a given scenario remains a complex and time-consuming task. One key problem is that most systems controlling virtual humans rely mainly on pre-defined animations, which must be rebuilt by skilled animators specifically for each scenario. To address this limitation, this paper proposes a framework based on the direct demonstration of motions via a simplified, easy-to-wear set of motion capture sensors. The proposed system integrates motion segmentation, clustering and interactive motion blending in order to provide a seamless interface for programming motions by demonstration.
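The interactive motion blending mentioned above can be illustrated with a minimal sketch: given a few demonstrated pointing motions, each annotated with the 3D target it points at, a new pointing motion toward an unseen target is synthesized as a weighted combination of the examples. The function names, the inverse-distance (Shepard) weighting scheme, and the array layout below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def blend_weights(targets, query, power=2.0, eps=1e-8):
    """Inverse-distance (Shepard) blend weights over example pointing targets.

    targets: (n_examples, 3) array of pointing targets, one per demonstrated
    motion; query: (3,) target for the new motion. Returns weights summing to 1.
    (Illustrative assumption -- not the interpolation scheme from the paper.)"""
    d = np.linalg.norm(targets - query, axis=1)
    if np.any(d < eps):
        w = (d < eps).astype(float)  # query coincides with an example target
    else:
        w = 1.0 / d**power           # closer examples dominate the blend
    return w / w.sum()

def blend_motions(motions, weights):
    """Weighted average of time-aligned joint-angle trajectories.

    motions: (n_examples, n_frames, n_dofs), assumed already time-warped to a
    common frame count; weights: (n_examples,). Returns (n_frames, n_dofs)."""
    return np.tensordot(weights, motions, axes=1)

# Toy usage: two example motions pointing at x=0 and x=2; a query at x=1
# lies midway, so both examples contribute equally.
targets = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
w = blend_weights(targets, np.array([1.0, 0.0, 0.0]))
motions = np.zeros((2, 30, 4))
motions[1] = 1.0
new_motion = blend_motions(motions, w)
```

Note that averaging joint angles componentwise is only a rough approximation; rotations represented as quaternions would need proper spherical interpolation.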

Keywords

virtual humans, motion capture, interactive demonstration



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Yazhou Huang (1)
  • Marcelo Kallmann (1)
  1. University of California, Merced, US
