Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12350)

Abstract

Traditional methods for image-based 3D face reconstruction and facial motion retargeting fit a 3D morphable model (3DMM) to the face, which has limited modeling capacity and fails to generalize well to in-the-wild data. Using deformation transfer or a multilinear tensor as a personalized 3DMM for blendshape interpolation does not address the fact that facial expressions produce different local and global skin deformations in different people. Moreover, existing methods learn a single albedo per user, which is not enough to capture expression-specific skin reflectance variations. We propose an end-to-end framework that jointly learns a personalized face model per user and per-frame facial motion parameters from a large corpus of in-the-wild videos of user expressions. Specifically, we learn user-specific expression blendshapes and dynamic (expression-specific) albedo maps by predicting personalized corrections on top of a 3DMM prior. We introduce novel training constraints to ensure that the corrected blendshapes retain their semantic meanings and the reconstructed geometry is disentangled from the albedo. Experimental results show that our personalization accurately captures fine-grained facial dynamics in a wide range of conditions and efficiently decouples the learned face model from facial motion, resulting in more accurate face reconstruction and facial motion retargeting compared to state-of-the-art methods.
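The core idea of the abstract — a linear 3DMM whose generic expression blendshapes are augmented with learned per-user corrections — can be sketched as follows. This is a minimal illustration, not the authors' implementation: all dimensions, variable names, and the random "learned" tensors are hypothetical placeholders standing in for a 3DMM prior and network-predicted corrections.

```python
import numpy as np

# Hypothetical dimensions: V mesh vertices, n_id identity basis vectors,
# n_exp expression blendshapes. Values are illustrative, not from the paper.
V, n_id, n_exp = 5023, 80, 46
rng = np.random.default_rng(0)

mean_shape  = rng.standard_normal((V, 3))           # 3DMM mean face geometry
id_basis    = rng.standard_normal((n_id, V, 3))     # identity shape basis (prior)
blendshapes = rng.standard_normal((n_exp, V, 3))    # generic expression blendshapes (prior)
# Per-user blendshape corrections; in the paper these are predicted by a network,
# here they are random placeholders.
corrections = 0.1 * rng.standard_normal((n_exp, V, 3))

def reconstruct(alpha, beta):
    """Personalized linear face model:
    geometry = mean + identity basis @ alpha + (prior blendshapes + user corrections) @ beta.
    alpha: identity coefficients (fixed per user); beta: per-frame expression weights."""
    identity   = np.tensordot(alpha, id_basis, axes=1)                  # (V, 3)
    expression = np.tensordot(beta, blendshapes + corrections, axes=1)  # (V, 3)
    return mean_shape + identity + expression

alpha = 0.01 * rng.standard_normal(n_id)   # one user's identity coefficients
beta = np.zeros(n_exp)
beta[3] = 0.7                              # activate one expression blendshape
verts = reconstruct(alpha, beta)
print(verts.shape)  # (5023, 3)
```

Because the corrections are added to the shared blendshape basis rather than replacing it, the per-frame expression weights `beta` keep their semantic meaning across users — which is what makes retargeting motion from one person's face model to another's straightforward in this formulation.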

Keywords

3D face reconstruction · Face modeling · Face tracking · Facial motion retargeting

Notes

Acknowledgements:

We thank the anonymous reviewers for their constructive feedback, Muscle Wu, Wenbin Zhu, and Zeyu Chen for their help, and Alex Colburn for valuable discussions.

Supplementary material

Supplementary material 1 (mp4, 69,062 KB)

Supplementary material 2 (pdf, 3,585 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. University of Washington, Seattle, USA
  2. Microsoft Cloud and AI, Redmond, USA