Natural User Interfaces for Virtual Character Full Body and Facial Animation in Immersive Virtual Worlds
In recent years, networked virtual environments have steadily grown to become a frontier in social computing. Such virtual cyberspaces are usually accessed by multiple users through their 3D avatars. Recent scientific activity has resulted in the release of both hardware and software components that enable users at home to animate their virtual persona through natural body and facial performance. Building on 3D computer graphics methods and vision-based motion tracking algorithms, these techniques aspire to reinforce the sense of autonomy and telepresence within the virtual world. In this paper we present two distinct frameworks for avatar animation driven by the user's natural motion input. We specifically target full-body avatar control using a Kinect sensor via a simple, networked skeletal joint retargeting pipeline, as well as a facial animation pipeline based on 3D reconstruction for rendering highly realistic facial puppets of the user. Furthermore, we present a common networked architecture that enables multiple remote clients to capture and render any number of 3D animated characters within a shared virtual environment.
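The skeletal joint retargeting step described above can be illustrated with a minimal sketch: captured Kinect joint orientations are mapped onto the avatar rig's bone names and serialised for network transmission to remote rendering clients. The joint-name mapping and the JSON wire format below are illustrative assumptions, not the paper's actual protocol; real rig names depend on the character model.

```python
import json

# Hypothetical mapping from Kinect skeleton joints to avatar rig bones.
# A production pipeline would cover all tracked joints and account for
# differing bone rest poses; this sketch only shows the mechanism.
KINECT_TO_AVATAR = {
    "SpineBase": "Hips",
    "SpineMid": "Spine",
    "Neck": "Neck",
    "Head": "Head",
    "ShoulderLeft": "LeftArm",
    "ElbowLeft": "LeftForeArm",
}

def retarget(kinect_frame):
    """Map one frame of Kinect joint orientations (quaternions as
    (x, y, z, w) tuples) onto the corresponding avatar bones,
    skipping any joints the sensor failed to track this frame."""
    return {
        avatar_bone: kinect_frame[kinect_joint]
        for kinect_joint, avatar_bone in KINECT_TO_AVATAR.items()
        if kinect_joint in kinect_frame
    }

def encode_frame(avatar_frame):
    """Serialise a retargeted pose to JSON so a networked client
    can apply it to its local copy of the character."""
    return json.dumps(avatar_frame, sort_keys=True)

# Example: a sparse frame with two tracked joints.
frame = {"Neck": (0.0, 0.0, 0.0, 1.0), "Head": (0.1, 0.0, 0.0, 0.995)}
pose = retarget(frame)
wire = encode_frame(pose)
```

Keeping the per-frame payload to a flat bone-to-quaternion dictionary keeps the network message small, which matters when many clients exchange poses for multiple characters in the shared environment.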
Keywords: Virtual character animation · Markerless performance capture · Face animation · Kinect-based interfaces
The research leading to this work has received funding from the European Community’s Horizon 2020 Framework Programme under grant agreement no. 644204 (ProsocialLearn project).