
The Visual Computer

Volume 30, Issue 6–8, pp. 649–659

Dynamic 3D facial expression modeling using Laplacian smooth and multi-scale mesh matching

  • Jing Chi
  • Changhe Tu
  • Caiming Zhang
Original Article

Abstract

We propose a novel algorithm for high-resolution modeling of dynamic 3D facial expressions from a sequence of unstructured face point clouds captured at video rate. The algorithm reconstructs not only the global facial deformations caused by muscular movements, but also the expressional details generated by local skin deformations. It consists of two parts: extraction of expressional details and reconstruction of expressions. In the extraction part, we extract subtle expressional details such as wrinkles and folds from each point cloud with a Laplacian smoothing operator. In the reconstruction part, we match a multi-scale deformable mesh model to each point cloud to reconstruct the time-varying expressions. In each matching, we first use the low-scale mesh to match the global deformations of the point cloud obtained after filtering out the expressional details, and then use the high-scale mesh to match the extracted expressional details. Compared with many existing non-rigid ICP-based algorithms that match the mesh model directly to the entire point cloud, our algorithm avoids the large errors that can occur where local sharp deformations are matched, because it extracts the expressional details for separate matching; it can therefore produce a high-resolution dynamic model reflecting time-varying expressions. In addition, the multi-scale mesh model makes our algorithm fast because it reduces the number of iterative optimizations required in matching. Experiments demonstrate the efficiency of our algorithm.
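The detail-extraction step described above can be illustrated with a minimal sketch: uniform Laplacian smoothing moves each point toward the centroid of its graph neighbors, and the detail layer (wrinkles, folds) is the residual between the original and the smoothed geometry. The function names, the uniform (umbrella) neighbor weighting, and the explicit neighbor lists are our illustrative assumptions; the paper's operator and neighborhood construction may differ.

```python
import numpy as np

def laplacian_smooth(points, neighbors, iterations=10, lam=0.5):
    """Uniform Laplacian smoothing: repeatedly move each point a
    fraction `lam` of the way toward the centroid of its neighbors.
    `neighbors[i]` is the list of indices adjacent to point i."""
    p = points.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([p[nbrs].mean(axis=0) for nbrs in neighbors])
        p += lam * (centroids - p)
    return p

def extract_detail(points, neighbors, **kw):
    """Split geometry into a low-frequency (smoothed) base and a
    high-frequency detail layer; base + detail reproduces the input."""
    smoothed = laplacian_smooth(points, neighbors, **kw)
    return smoothed, points - smoothed
```

On a point set with a single sharp bump, the smoothed base flattens the bump while the detail layer concentrates there, which is the separation the reconstruction stage exploits: the low-scale mesh is matched to the base, the high-scale mesh to the detail.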

Keywords

Expression modeling · Laplacian smooth · Mesh matching · Point clouds

Notes

Acknowledgments

We would like to thank the authors of [6] and [25] for sharing the face data used in our experiments. The work is supported by the National Natural Science Foundation of China under Grants 61303088, 61020106001, 61332015, 61272242, and U1201258, the Natural Science Foundation of Shandong Province under Grant BS2013DX039, and the Sci-tech Development Project of Jinan City under Grant 201303021.

References

  1. Igarashi, T., Nishino, K., Nayar, S.K.: The appearance of human skin. Tech. Rep. CUCS-024-05 (2005)
  2. Joshi, P., Tien, W.C., Desbrun, M., Pighin, F.: Learning controls for blendshape based realistic facial animation. In: SCA '03, pp. 187–192 (2003)
  3. Dellepiane, M., Pietroni, N., Tsingos, N., Asselot, M., Scopigno, R.: Reconstructing head models from photographs for individualized 3D-audio processing. In: PG, pp. 1719–1727 (2008)
  4. Goldenstein, S., Vogler, C., Metaxas, D.: 3D facial tracking from corrupted movie sequences. In: CVPR, pp. 1880–1885 (2004)
  5. Huang, X., Zhang, S., Wang, Y., Metaxas, D., Samaras, D.: A hierarchical framework for high resolution facial expression tracking. In: CVPRW '04, pp. 22–29 (2004)
  6. Zhang, L., Snavely, N., Curless, B., Seitz, S.M.: Spacetime faces: high resolution capture for modeling and animation. ACM Trans. Graph. 23(3), 548–558 (2004)
  7. Amberg, B., Romdhani, S., Vetter, T.: Optimal step nonrigid ICP algorithms for surface registration. In: CVPR, pp. 1–8 (2007)
  8. Minoi, J.L., Gillies, D.: 3D facial expression analysis and deformation. In: The 4th Symposium on Applied Perception in Graphics and Visualization, pp. 138–138 (2007)
  9. Schneider, D.C., Eisert, P.: Fast nonrigid mesh registration with a data-driven deformation prior. In: ICCV Workshops, pp. 304–311 (2009)
  10. Hyneman, W., Itokazu, H., Williams, L., Zhao, X.: Human face project. In: ACM SIGGRAPH, Article 5 (2005)
  11. Oat, C.: Animated wrinkle maps. In: ACM SIGGRAPH, pp. 33–37 (2007)
  12. Bickel, B., Botsch, M., Angst, R., Matusik, W., Otaduy, M., Pfister, H., Gross, M.: Multi-scale capture of facial geometry and motion. ACM Trans. Graph. 26(3), 33 (2007)
  13. Huang, H., Chai, J., Tong, X., Wu, H.: Leveraging motion capture and 3D scanning for high-fidelity facial performance acquisition. In: ACM SIGGRAPH, Article 74 (2011)
  14. Huang, H., Yin, K., Zhao, L., Qi, Y., Yu, Y., Tong, X.: Detail-preserving controllable deformation from sparse examples. IEEE Trans. Vis. Comput. Graph. 18(8), 1215–1227 (2012)
  15. Furukawa, Y., Ponce, J.: Dense 3D motion capture for human faces. In: CVPR, pp. 1674–1681 (2009)
  16. Le, B.H., Zhu, M., Deng, Z.: Marker optimization for facial motion acquisition and deformation. IEEE Trans. Vis. Comput. Graph. 19(11), 1859–1871 (2013)
  17. Li, H., Adams, B., Guibas, L.J., Pauly, M.: Robust single-view geometry and motion reconstruction. ACM Trans. Graph. 28(5), Article 175 (2009)
  18. Zeng, Y., Wang, C., Wang, Y., Gu, D., Samaras, D., Paragios, N.: Intrinsic dense 3D surface tracking. In: CVPR, pp. 1225–1232 (2011)
  19. Süßmuth, J., Winter, M., Greiner, G.: Reconstructing animated meshes from time-varying point clouds. Comput. Graph. Forum 27(5), 1469–1476 (2008)
  20. Wand, M., Adams, B., Ovsjanikov, M., Berner, A., Bokeloh, M., Jenke, P., Guibas, L., Seidel, H.-P., Schilling, A.: Efficient reconstruction of nonrigid shape and motion from real-time 3D scanner data. ACM Trans. Graph. 28(2), Article 15 (2009)
  21. Popa, T., South-Dickinson, I., Bradley, D., Sheffer, A., Heidrich, W.: Globally consistent space-time reconstruction. Comput. Graph. Forum 29(5), 1633–1642 (2010)
  22. Li, H., Luo, L., Vlasic, D., Peers, P., Popović, J., Pauly, M., Rusinkiewicz, S.: Temporally coherent completion of dynamic shapes. ACM Trans. Graph. 31(1), Article 2 (2012)
  23. Wang, Y., Gupta, M., Zhang, S., Wang, S., Gu, X.F., Samaras, D., Huang, P.S.: High resolution tracking of non-rigid motion of densely sampled 3D data using harmonic maps. Int. J. Comput. Vis. 76(3), 283–300 (2008)
  24. Bradley, D., Heidrich, W., Popa, T., Sheffer, A.: High-resolution passive facial performance capture. In: ACM SIGGRAPH, Article 41 (2010)
  25. Beeler, T., Hahn, F., Bradley, D., Bickel, B., Beardsley, P., Gotsman, C., Gross, M.: High-quality passive facial performance capture using anchor frames. ACM Trans. Graph. 30(4), Article 75 (2011)
  26. Sibbing, D., Habbecke, M., Kobbelt, L.: Markerless reconstruction and synthesis of dynamic facial expression. Comput. Vis. Image Underst. 115(5), 668–680 (2011)
  27. Huang, Y., Zhang, X., Fan, Y., Yin, L., Seversky, L., Allen, J., Lei, T., Dong, W.: Reshaping 3D facial scans for facial appearance modeling and 3D facial expression analysis. Image Vis. Comput. 30(10), 750–761 (2012)
  28. Chi, J., Zhang, C.: Automated capture of real-time 3D facial geometry and motion. Comput. Aided Des. Appl. 8(6), 859–871 (2011)

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. Department of Computer Science and Technology, Shandong University of Finance and Economics, Ji'nan, China
  2. Shandong University, Ji'nan, China
  3. Shandong Provincial Key Laboratory of Digital Media Technology, Ji'nan, China
