An Automatic Base Expression Selection Algorithm Based on Local Blendshape Model

  • Ziqi Tu
  • Dongdong Weng
  • Dewen Cheng
  • Yihua Bao
  • Bin Liang
  • Le Luo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11902)

Abstract

A good blendshape model is needed to give a virtual human rich and realistic facial expressions in film production. However, selecting and capturing the base expressions for a blendshape model requires considerable manual work, time, and effort, and the resulting model can still lack expressiveness. This paper proposes a method for automatically selecting a set of base expressions from a sequence of facial motions. Procrustes analysis is used to estimate the difference between face meshes and to determine the composition of the base expression set, and the selected base expressions are used to build a local blendshape model that enhances expressiveness. Results of reconstructing facial expressions with the local blendshape model are presented. With this method, the base expressions can be selected automatically from the expression sequence, reducing manual operation.
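
As a rough illustration of the selection step, here is a minimal Python sketch, assuming each face mesh is an (N, 3) NumPy array of corresponding vertices. The greedy selection loop, the threshold value, and all function names are illustrative assumptions, not the paper's actual implementation; only the use of Procrustes analysis as the mesh-difference measure comes from the abstract.

```python
# Illustrative sketch only: the paper's actual selection procedure is not
# public. Meshes are assumed to be (N, 3) arrays of corresponding vertices.
import numpy as np

def procrustes_distance(mesh_a: np.ndarray, mesh_b: np.ndarray) -> float:
    """Residual dissimilarity between two meshes after removing
    translation, uniform scale, and rotation (ordinary Procrustes)."""
    # Remove translation by centering both vertex sets.
    a = mesh_a - mesh_a.mean(axis=0)
    b = mesh_b - mesh_b.mean(axis=0)
    # Remove scale by normalizing to unit Frobenius norm.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    # Optimal rotation via SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(a.T @ b)
    r = u @ vt
    # Residual after alignment; small values mean similar expressions.
    return float(np.linalg.norm(a @ r - b))

def select_base_expressions(frames: list[np.ndarray],
                            threshold: float = 0.05) -> list[int]:
    """Greedy selection: keep a frame as a new base expression if it is
    sufficiently different from every base selected so far. The threshold
    value here is made up for illustration, not taken from the paper."""
    base_ids: list[int] = [0]  # start from the first (e.g., neutral) frame
    for i, frame in enumerate(frames[1:], start=1):
        if all(procrustes_distance(frames[j], frame) > threshold
               for j in base_ids):
            base_ids.append(i)
    return base_ids
```

A second sketch shows how the selected base expressions might drive facial expression reconstruction, assuming a standard delta-blendshape formulation solved with non-negative least squares (a common choice for blendshape fitting, not stated in the paper); the local, region-based aspect of the model is omitted here.

```python
# Illustrative delta-blendshape reconstruction; the paper's local model
# partitions the face into regions, which this global sketch omits.
import numpy as np
from scipy.optimize import nnls

def reconstruct_weights(neutral: np.ndarray,
                        bases: list[np.ndarray],
                        target: np.ndarray) -> np.ndarray:
    """Solve for blend weights approximating the target mesh."""
    # Stack each base expression as a delta from the neutral mesh.
    B = np.stack([(b - neutral).ravel() for b in bases], axis=1)  # (3N, K)
    t = (target - neutral).ravel()                                # (3N,)
    # Non-negative weights, a common constraint in blendshape fitting.
    w, _ = nnls(B, t)
    return w

# The reconstructed mesh is then neutral + sum_k w[k] * (bases[k] - neutral):
# reconstruction = neutral + (B @ w).reshape(neutral.shape)
```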

Keywords

Base expression selection · Local blendshape model · Facial expression reconstruction


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ziqi Tu (1)
  • Dongdong Weng (1, 2) (corresponding author)
  • Dewen Cheng (1)
  • Yihua Bao (2)
  • Bin Liang (1)
  • Le Luo (1)

  1. Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
  2. AICFVE of Beijing Film Academy, Beijing, China
