
3D Research, 6:16

A Low-cost System for Generating Near-realistic Virtual Actors

  • Mahmoud Afifi
  • Khaled F. Hussain
  • Hosny M. Ibrahim
  • Nagwa M. Omar
3DR Express

Abstract

Generating virtual actors is one of the most challenging problems in computer graphics. The reconstruction of realistic virtual actors has received considerable attention from both academic research and the film industry, to the point that many films feature human-like virtual actors that audiences cannot distinguish from real ones. Synthesizing a realistic virtual actor, however, remains a complex process: although many techniques exist for generating realistic virtual actors, they usually require expensive hardware. In this paper, a low-cost system that generates near-realistic virtual actors is presented. The facial features of the real actor are blended with a virtual head that is attached to the actor’s body. Compared with other techniques for generating virtual actors, the proposed system is low-cost, requiring only a single camera to record the scene and no expensive hardware. The results show that the system generates convincing near-realistic virtual actors suitable for many applications.
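The blending step described in the abstract, compositing a real actor's facial features onto a virtual head, is the kind of task commonly handled with gradient-domain (Poisson) compositing. As an illustration only, not the authors' implementation, the sketch below solves the discrete Poisson equation with Jacobi iterations in plain NumPy, blending a source patch into a target image inside a binary mask:

```python
import numpy as np

def poisson_blend(source, target, mask, iterations=400):
    """Gradient-domain compositing sketch: solve the discrete Poisson
    equation inside `mask`, using the source's Laplacian as the guidance
    field and the target's pixels as the Dirichlet boundary condition.
    Assumes the mask does not touch the image border (np.roll wraps)."""
    src = source.astype(np.float64)
    res = target.astype(np.float64).copy()
    inner = mask.astype(bool)
    # Guidance field: 4-neighbour Laplacian of the source image.
    lap = (4.0 * src
           - np.roll(src, 1, 0) - np.roll(src, -1, 0)
           - np.roll(src, 1, 1) - np.roll(src, -1, 1))
    for _ in range(iterations):
        # Jacobi update: each masked pixel becomes the average of its
        # four neighbours plus the source Laplacian term.
        nb = (np.roll(res, 1, 0) + np.roll(res, -1, 0)
              + np.roll(res, 1, 1) + np.roll(res, -1, 1))
        res[inner] = (nb[inner] + lap[inner]) / 4.0
    return np.clip(res, 0.0, 255.0)
```

Pixels outside the mask are never updated, so the composite keeps the target's values there, while the solve inside the mask preserves the source's gradients subject to the target's boundary values, which is what makes the seam between real face and virtual head invisible.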

Graphical Abstract

Keywords

Virtual actor · Facial animation · Digital face · Computer animation

Notes

Acknowledgments

The authors thank the Multimedia Lab (http://www.aun.edu.eg/multimedia/), Faculty of Computers and Information, Assiut University for providing the green-screen studio. Many thanks to Mohammed Fouad, Ali Hussain, and Mostafa Kamel for participating as video subjects, and Mohammed Ashour and Mazen Refaat for recording the videos that are used in the experiments of the proposed system.

Supplementary material

Supplementary material 1 (WMV 158097 kb)


Copyright information

© 3D Research Center, Kwangwoon University and Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. Department of Information Technology, Assiut University, Asyut, Egypt
  2. Department of Computer Science, Assiut University, Asyut, Egypt
