3D Research

2017, 8:18

Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments

  • Christos Mousas
  • Christos-Nikolaos Anagnostopoulos
3DR Express

Abstract

This paper presents a hybrid character control interface that can synthesize a variety of actions in real time based on the user's performance capture. The proposed methodology provides three performance interaction modules: performance animation control, which maps the user's pose directly onto the character; a motion controller, which synthesizes the character's desired motion based on an activity recognition methodology; and hybrid control, which lies between performance animation and the motion controller. With the presented methodology, the user has the freedom to interact within the virtual environment, as well as the ability to manipulate the character and to trigger a variety of actions that he or she cannot perform directly but that the system synthesizes. The user is therefore able to interact with the virtual environment in a more sophisticated fashion. This paper presents example scenarios based on each of the three full-body character control methodologies.
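
The three control modes described above can be pictured as a per-frame dispatch between the user's tracked pose and synthesized motion. The following Python sketch is purely illustrative and is not the authors' implementation; every name in it (PoseFrame, recognize_activity, synthesize_motion, the choice of upper-body joints) is a hypothetical stand-in for components the abstract only names.

    # Illustrative sketch of the three control modes; all names and
    # signatures are hypothetical stand-ins, not the authors' code.
    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Dict

    Quaternion = tuple  # placeholder joint-rotation type, e.g. (w, x, y, z)

    @dataclass
    class PoseFrame:
        """One captured frame: joint name -> joint rotation."""
        joints: Dict[str, Quaternion]

    class Mode(Enum):
        PERFORMANCE = auto()  # map the user's pose directly onto the character
        CONTROLLER = auto()   # synthesize motion from a recognized activity
        HYBRID = auto()       # mix tracked and synthesized body parts

    def recognize_activity(pose: PoseFrame) -> str:
        """Stand-in for the activity-recognition step (e.g. 'walk', 'reach')."""
        return "walk"

    def synthesize_motion(activity: str) -> PoseFrame:
        """Stand-in for data-driven synthesis of the recognized activity."""
        return PoseFrame(joints={})

    # Hypothetical split: joints that stay under direct user control in hybrid mode.
    UPPER_BODY = {"spine", "head", "l_shoulder", "l_elbow", "r_shoulder", "r_elbow"}

    def control_step(mode: Mode, captured: PoseFrame) -> PoseFrame:
        """Produce the character's pose for one frame under the chosen mode."""
        if mode is Mode.PERFORMANCE:
            return captured  # pure performance animation: direct mapping
        synthesized = synthesize_motion(recognize_activity(captured))
        if mode is Mode.CONTROLLER:
            return synthesized  # pure motion controller: fully synthesized
        # HYBRID: keep tracked upper-body joints, take the rest from synthesis.
        mixed = dict(synthesized.joints)
        mixed.update({j: q for j, q in captured.joints.items() if j in UPPER_BODY})
        return PoseFrame(joints=mixed)

In this reading, "hybrid" simply means that some degrees of freedom follow the performance capture while others follow the synthesized motion; the actual partition and blending used in the paper may differ.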

Keywords

Character animation · Hybrid controller · Navigation · Object manipulation · Virtual reality interaction

Supplementary material

Supplementary material 1 (mp4 21354 KB)

Copyright information

© 3D Research Center, Kwangwoon University and Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  • Christos Mousas (1)
  • Christos-Nikolaos Anagnostopoulos (2)

  1. Department of Computer Science, Southern Illinois University, Carbondale, USA
  2. Department of Cultural Technology and Communication, University of the Aegean, Mytilene, Greece