Machine Vision and Applications, Volume 25, Issue 4, pp 859–879

Fully automatic expression-invariant face correspondence

  • Augusto Salazar
  • Stefanie Wuhrer
  • Chang Shu
  • Flavio Prieto
Original Paper


Abstract

We consider the problem of computing accurate point-to-point correspondences among a set of human face scans with varying expressions. Our fully automatic approach does not require manually placed markers on the scans. Instead, it learns the locations of a set of landmarks from a database and uses this knowledge to automatically predict the locations of these landmarks on a newly available scan. The predicted landmarks are then used to compute point-to-point correspondences between a template model and the scan. To fit the expression of the template accurately to the expression of the scan, we use a blendshape model as the template. Our algorithm was tested on a database of human faces from different ethnic groups with strongly varying expressions. Experimental results show that the obtained point-to-point correspondence is both highly accurate and consistent for most of the tested 3D face models.
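The core of a blendshape template is a linear combination of expression shapes added to a neutral face. As a minimal illustration of how such a model can be matched to predicted landmarks, the sketch below solves for blendshape weights in a least-squares sense. The function name and interface are hypothetical, and this is only the landmark-matching idea, not the paper's full energy-minimization pipeline.

```python
import numpy as np

def fit_blendshape_weights(neutral, blendshapes, landmark_idx, target_landmarks):
    """Fit weights w so that the deformed template
        neutral + sum_i w[i] * (blendshapes[i] - neutral)
    best matches the target landmark positions in a least-squares sense.

    neutral:          (V, 3) neutral template vertices
    blendshapes:      (K, V, 3) expression shapes of the template
    landmark_idx:     (L,) vertex indices of the template landmarks
    target_landmarks: (L, 3) predicted landmark positions on the scan
    """
    # Displacement of each blendshape at the landmark vertices, flattened to (K, 3L)
    deltas = (blendshapes - neutral)[:, landmark_idx, :].reshape(len(blendshapes), -1)
    # Residual between the scan landmarks and the neutral template landmarks
    residual = (target_landmarks - neutral[landmark_idx]).reshape(-1)
    # Minimize || deltas.T @ w - residual ||^2
    w, *_ = np.linalg.lstsq(deltas.T, residual, rcond=None)
    return w
```

In a pipeline like the one described above, such weights would set the expression of the template before a finer non-rigid registration establishes per-vertex correspondences.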


Keywords

Non-rigid 3D registration · Automatic landmark prediction · Facial expression-invariant · Blendshape model · Energy minimization



Acknowledgements

This work was supported by the program “Créditos condonables para estudiantes de Doctorado” from COLCIENCIAS—Colombia, by the program “Convocatoria de apoyo a tesis de posgrado—Doctorados” from Dirección de Investigaciones de Manizales—National University of Colombia, and by the Cluster of Excellence Multimodal Computing and Interaction within the Excellence Initiative of the German Federal Government. We thank Timo Bolkart for help in conducting the comparison to 3DMM, and Jonathan Boisvert, Timo Bolkart, Alan Brunton, and Pengcheng Xi for helpful discussions.


References

  1. Dryden, I., Mardia, K.: Statistical Shape Analysis. Wiley, New York (2002)
  2. Blanz, V., Vetter, T.: A morphable model for the synthesis of 3D faces. In: Conference on Computer Graphics and Interactive Techniques, pp. 187–194 (1999)
  3. Jiang, D., Hu, Y., Yan, S., Zhang, L., Zhang, H., Gao, W.: Efficient 3D reconstruction for face recognition. Pattern Recognit. 38(6), 787–798 (2005)
  4. Romdhani, S., Blanz, V., Vetter, T.: Face identification by fitting a 3D morphable model using linear shape and texture error functions. In: European Conference on Computer Vision, pp. 3–19 (2002)
  5. Romdhani, S., Vetter, T.: Efficient, robust and accurate fitting of a 3D morphable model. In: IEEE International Conference on Computer Vision, vol. 1, pp. 59–66 (2003)
  6. Brunton, A., Shu, C., Lang, J., Dubois, E.: Wavelet model-based stereo for fast, robust face reconstruction. In: Canadian Conference on Computer and Robot Vision, pp. 347–354 (2011)
  7. van Kaick, O., Zhang, H., Hamarneh, G., Cohen-Or, D.: A survey on shape correspondence. Comput. Graph. Forum 30(6), 1681–1707 (2011)
  8. Xi, P., Shu, C.: Consistent parameterization and statistical analysis of human head scans. Vis. Comput. 25(9), 863–871 (2009)
  9. Li, H., Weise, T., Pauly, M.: Example-based facial rigging. ACM Trans. Graph. (SIGGRAPH) 29(4), 32:1–32:6 (2010)
  10. Learned-Miller, E.: Data driven image models through continuous joint alignment. IEEE Trans. Pattern Anal. Mach. Intell. 28(2), 236–250 (2006)
  11. Cox, M., Sridharan, S., Lucey, S., Cohn, J.: Least squares congealing for unsupervised alignment of images. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)
  12. Tong, Y., Liu, X., Wheeler, F., Tu, P.: Semi-supervised facial landmark annotation. Comput. Vis. Image Underst. 116, 922–935 (2012)
  13. Cootes, T., Edwards, G., Taylor, C.: Active appearance models. IEEE Trans. Pattern Anal. Mach. Intell. 23(6), 681–685 (2001)
  14. Mehryar, S., Martin, K., Plataniotis, K., Stergiopoulos, S.: Automatic landmark detection for 3D face image processing. In: IEEE Congress on Evolutionary Computation, pp. 1–7 (2010)
  15. Vezzetti, E., Marcolin, F.: 3D human face description: landmarks measures and geometrical features. Image Vis. Comput. 30(10), 750–761 (2012)
  16. Ben Azouz, Z., Shu, C., Mantel, A.: Automatic locating of anthropometric landmarks on 3D human models. In: International Symposium on 3D Data Processing, Visualization, and Transmission, pp. 750–757 (2006)
  17. Berretti, S., Ben Amor, B., Daoudi, M., del Bimbo, A.: 3D facial expression recognition using SIFT descriptors of automatically detected keypoints. Vis. Comput. 27(11), 1021–1036 (2011)
  18. Creusot, C., Pears, N., Austin, J.: 3D face landmark labelling. In: Proceedings of the ACM Workshop on 3D Object Retrieval, pp. 27–32 (2010)
  19. Segundo, M., Silva, L., Pereira, O., Queirolo, C.: Automatic face segmentation and facial landmark detection in range images. IEEE Trans. Syst. Man Cybern. Part B 40(5), 1319–1330 (2010)
  20. Perakis, P., Theoharis, T., Passalis, G., Kakadiaris, I.: Automatic 3D facial region retrieval from multi-pose facial datasets. In: Eurographics Workshop on 3D Object Retrieval, pp. 37–44 (2009)
  21. Perakis, P., Passalis, G., Theoharis, T., Kakadiaris, I.: 3D facial landmark detection & face registration: a 3D facial landmark model & 3D local shape descriptors approach. Technical Report TP-2010-01, Computer Graphics Laboratory, University of Athens (2010)
  22. Nair, P., Cavallaro, A.: 3-D face detection, landmark localization, and registration using a point distribution model. IEEE Trans. Multimed. 11(4), 611–623 (2009)
  23. Lu, X., Jain, A.: Automatic feature extraction for multiview 3D face recognition. In: International Conference on Automatic Face and Gesture Recognition, pp. 585–590 (2006)
  24. Lu, X., Jain, A., Colbry, D.: Matching 2.5D face scans to 3D models. IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 31–43 (2006)
  25. Guo, J., Mei, X., Tang, K.: Automatic landmark annotation and dense correspondence registration for 3D human facial images. BMC Bioinform. 14, 232 (2013)
  26. Sun, Y., Abidi, M.: Surface matching by 3D point’s fingerprint. In: IEEE International Conference on Computer Vision, vol. 2, pp. 263–269 (2001)
  27. Elad, A., Kimmel, R.: On bending invariant signatures for surfaces. IEEE Trans. Pattern Anal. Mach. Intell. 25(10), 1285–1295 (2003)
  28. Chang, K., Bowyer, K., Flynn, P.: Multiple nose region matching for 3D face recognition under varying facial expression. IEEE Trans. Pattern Anal. Mach. Intell. 28(10), 1695–1700 (2006)
  29. Passalis, G., Perakis, P., Theoharis, T., Kakadiaris, I.: Using facial symmetry to handle pose variations in real-world 3D face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 33(10), 1938–1951 (2011)
  30. Kakadiaris, I., Passalis, G., Toderici, G., Murtuza, M., Yunliang, L., Karampatziakis, N., Theoharis, T.: Three-dimensional face recognition in the presence of facial expressions: an annotated deformable model approach. IEEE Trans. Pattern Anal. Mach. Intell. 29(4), 640–649 (2007)
  31. Mpiperis, I., Malassiotis, S., Strintzis, M.: Bilinear models for 3D face and facial expression recognition. IEEE Trans. Inf. Forensics Secur. 3(3), 498–511 (2008)
  32. Huang, Y., Zhang, X., Fan, Y., Yin, L., Seversky, L., Allen, J., Lei, T., Dong, W.: Reshaping 3D facial scans for facial appearance modeling and 3D facial expression analysis. Image Vis. Comput. 30(10), 681–796 (2012)
  33. Lu, X., Jain, A.: Deformation modeling for robust 3D face matching. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1377–1383 (2006)
  34. Basso, C., Paysan, P., Vetter, T.: Registration of expressions data using a 3D morphable model. In: International Conference on Automatic Face and Gesture Recognition, pp. 205–210 (2006)
  35. Amberg, B., Knothe, R., Vetter, T.: Expression invariant 3D face recognition with a morphable model. In: IEEE International Conference on Automatic Face and Gesture Recognition, pp. 1–6 (2008)
  36. Allen, B., Curless, B., Popović, Z.: The space of human body shapes: reconstruction and parametrisation from range scans. ACM Trans. Graph. (SIGGRAPH) 22(3), 587–594 (2003)
  37. Wuhrer, S., Shu, C., Xi, P.: Landmark-free posture invariant human shape correspondence. Vis. Comput. 27(9), 843–852 (2011)
  38. Bronstein, A., Bronstein, M., Kimmel, R.: Generalized multidimensional scaling: a framework for isometry-invariant partial surface matching. Proc. Natl. Acad. Sci. 103(5), 1168–1172 (2006)
  39. Bronstein, A., Bronstein, M., Kimmel, R.: Expression-invariant representations of faces. IEEE Trans. Image Process. 16(1), 188–197 (2007)
  40. Weise, T., Bouaziz, S., Li, H., Pauly, M.: Realtime performance-based facial animation. ACM Trans. Graph. (SIGGRAPH) 30(4), 77:1–77:10 (2011)
  41. Wuhrer, S., Ben Azouz, Z., Shu, C.: Semi-automatic prediction of landmarks on human models in varying poses. In: Canadian Conference on Computer and Robot Vision, pp. 136–142 (2010)
  42. Cox, T., Cox, M.: Multidimensional Scaling, 2nd edn. Chapman & Hall/CRC, Boca Raton (2001)
  43. Yedidia, J., Freeman, W., Weiss, Y.: Understanding belief propagation and its generalizations. In: Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann, San Francisco (2003)
  44. Han, J., Kamber, M.: Data Mining: Concepts and Techniques, 2nd edn. Morgan Kaufmann, Burlington (2006)
  45. Cazals, F., Pouget, M.: Smooth surfaces, umbilics, lines of curvatures, foliations, ridges and the medial axis: a concise overview. Technical Report RR-5138, INRIA (2004)
  46. Li, H., Adams, B., Guibas, L., Pauly, M.: Robust single-view geometry and motion reconstruction. ACM Trans. Graph. (SIGGRAPH Asia) 28(5), 175:1–175:10 (2009)
  47. Liu, D., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45, 503–528 (1989)
  48. Yin, L., Wei, X., Wang, J., Sun, Y., Rosato, M.: A 3D facial expression database for facial behavior research. In: IEEE International Conference on Automatic Face and Gesture Recognition, pp. 211–216 (2006)
  49. Gao, Y.: Efficiently comparing face images using a modified Hausdorff distance. In: IEE Conference on Vision, Image and Signal Processing, pp. 346–350 (2003)
  50. Rabiu, H., Saripan, M., Mashohor, S., Marhaban, M.: 3D facial expression recognition using maximum relevance minimum redundancy geometrical features. EURASIP J. Adv. Signal Process. 2012, 213 (2012)
  51. Duin, R., Juszczak, P., Paclik, P., Pekalska, E., de Ridder, D., Tax, D., Verzakov, S.: PRTools4.1, A Matlab Toolbox for Pattern Recognition. Delft University of Technology, Delft (2007)
  52. Vlasic, D., Brand, M., Pfister, H., Popović, J.: Face transfer with multilinear models. ACM Trans. Graph. (SIGGRAPH) 24(3), 426–433 (2005)
  53. Savran, A., Alyüz, N., Dibeklioğlu, H., Çeliktutan, O., Gökberk, B., Sankur, B., Akarun, L.: Bosphorus database for 3D face analysis. In: European Workshop on Biometrics and Identity Management, pp. 47–56 (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Augusto Salazar (1)
  • Stefanie Wuhrer (2)
  • Chang Shu (3)
  • Flavio Prieto (4)
  1. Perception and Intelligent Control Research Group, National University of Colombia, Manizales, Caldas, Colombia
  2. Cluster of Excellence Multimodal Computing and Interaction, Saarland University, Saarbrücken, Germany
  3. National Research Council of Canada, Montreal, Canada
  4. GAUNAL Research Group, National University of Colombia, Bogotá, Colombia