Consumer-Level Virtual Reality Motion Capture

  • Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 693)

Abstract

Virtual Reality (VR) is creating a new paradigm in human communication. Today, we can enter a virtual environment and interact with each other through 3D characters. However, VR headsets occlude the user’s face, limiting the Motion Capture (MoCap) of facial expressions and, thus, the introduction of this non-verbal component. The only solution currently available is not suitable for consumer-level applications, as it relies on complex hardware and calibration. In this work, we deliver consumer-level methods for facial MoCap in VR environments. We develop an occlusion-support method compatible with generic facial MoCap systems. We then extract facial features and deploy Random Forest algorithms that accurately estimate the emotions and upper-face movements occluded by the headset. Our VR MoCap methods are validated and a facial animation use case is provided. With our novel methods, both calibration and hardware requirements are reduced, making face-to-face communication possible in VR environments.
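
As a concrete illustration of the estimation step described in the abstract, the sketch below trains a Random Forest on flattened lower-face landmark coordinates (the region a VR headset leaves visible) to predict a per-frame emotion label. This is not the authors’ implementation: the feature layout, label set, hyperparameters, and synthetic training data are assumptions introduced only to show the shape of such a pipeline, using Python with NumPy and scikit-learn.

    # Hypothetical sketch: Random Forest emotion estimation from lower-face landmarks.
    # Features are flattened (x, y) coordinates of mouth/jaw/nose-base landmarks,
    # i.e. the area left unoccluded by the headset. Data and labels are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_landmarks = 600, 20

    X = rng.normal(size=(n_samples, n_landmarks * 2))  # stand-in feature vectors
    y = rng.integers(0, 6, size=n_samples)             # stand-in labels: six basic emotions

    clf = RandomForestClassifier(n_estimators=100, random_state=0)

    # Cross-validated accuracy as a rough estimate of prediction error.
    scores = cross_val_score(clf, X, y, cv=5)
    print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

    # Fit on all frames, then classify the feature vector of a new video frame.
    clf.fit(X, y)
    new_frame = rng.normal(size=(1, n_landmarks * 2))
    print("predicted emotion class:", int(clf.predict(new_frame)[0]))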

Acknowledgements

This work is supported by Instituto de Telecomunicações (Project Incentivo ref: Projeto Incentivo/EEI/LA0008/2014 and project UID ref: UID/EEA/5008/2013) and the University of Porto. The authors would like to thank Elena Kokkinara from Trinity College Dublin and Pedro Mendes from the University of Porto for their support at the beginning of the project.

Author information

Corresponding author

Correspondence to Catarina Runa Miranda.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Miranda, C.R., Orvalho, V.C. (2017). Consumer-Level Virtual Reality Motion Capture. In: Braz, J., et al. Computer Vision, Imaging and Computer Graphics Theory and Applications. VISIGRAPP 2016. Communications in Computer and Information Science, vol 693. Springer, Cham. https://doi.org/10.1007/978-3-319-64870-5_18

  • DOI: https://doi.org/10.1007/978-3-319-64870-5_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-64869-9

  • Online ISBN: 978-3-319-64870-5

  • eBook Packages: Computer Science, Computer Science (R0)
