The Visual Computer, Volume 35, Issue 4, pp 535–548

Emotion information visualization through learning of 3D morphable face model

  • Hai Jin
  • Xun Wang
  • Yuanfeng Lian
  • Jing Hua
Original Article


Analysis and visualization of human facial expressions and their applications are useful but challenging. This paper presents a novel approach to analyzing facial expressions from images through the learning of a 3D morphable face model, together with a quantitative information visualization scheme for exploring this type of visual data. More specifically, a 3D face database with various facial expressions is employed to build a nonnegative matrix factorization (NMF) part-based morphable 3D face model. From an input image, a 3D face with expression can be reconstructed iteratively by using the NMF morphable 3D face model as a priori knowledge, from which basis parameters and a displacement map are extracted as features for facial emotion analysis and visualization. Based on these features, two support vector regressions are trained to determine the fuzzy valence–arousal (VA) values that quantify the emotions. Continuously changing emotion status can be intuitively analyzed by visualizing the VA values in VA space. Our emotion analysis and visualization system, based on the 3D NMF morphable face model, detects expressions robustly across various head poses, face sizes, and lighting conditions, and fully automatically computes VA values from images or video sequences with various facial expressions. To evaluate the novel method, we test our system on publicly available databases and assess the emotion analysis and visualization results. We also apply our method to quantifying emotion changes during motivational interviews. These experiments and applications demonstrate the effectiveness and accuracy of our method.
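The pipeline the abstract describes — learn a part-based NMF basis from a 3D face database, encode each face by its basis parameters, then train two support vector regressions that map those parameters to valence and arousal — can be sketched at a high level with off-the-shelf components. The sketch below is a hypothetical, minimal illustration on toy data, not the authors' implementation: the face matrix, labels, and dimensions are stand-ins, and the real system fits the morphable model to an image iteratively before extracting features.

```python
# Hypothetical sketch of the NMF + SVR pipeline from the abstract (toy data).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stand-in for a 3D face database: each row is one face's flattened,
# nonnegative shape representation (the paper uses registered 3D meshes).
n_faces, n_dims, n_parts = 200, 60, 8
faces = rng.random((n_faces, n_dims))

# Part-based morphable model: faces ~ coeffs @ basis, all entries >= 0,
# so each basis vector acts like a localized face "part".
model = NMF(n_components=n_parts, init="nndsvda", max_iter=500, random_state=0)
coeffs = model.fit_transform(faces)   # per-face basis parameters (features)
basis = model.components_             # part-based shape basis

# Toy valence/arousal labels in [-1, 1]; real labels come from annotation.
valence = np.tanh(coeffs @ rng.normal(size=n_parts))
arousal = np.tanh(coeffs @ rng.normal(size=n_parts))

# Two SVRs, one per affect dimension, as the abstract describes.
svr_v = SVR(kernel="rbf").fit(coeffs, valence)
svr_a = SVR(kernel="rbf").fit(coeffs, arousal)

# A new face would first be fit to the morphable model; here we simply
# encode a held-out toy face and map it to a point in VA space.
new_face = rng.random((1, n_dims))
new_coeffs = model.transform(new_face)
va_point = (svr_v.predict(new_coeffs)[0], svr_a.predict(new_coeffs)[0])
print(va_point)
```

A sequence of frames encoded this way traces a trajectory of VA points, which is the quantity the paper visualizes in VA space to show continuously changing emotion status.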


Keywords: 3D morphable face model · Facial expression analysis · Emotion visualization



We would like to thank the reviewers for their valuable suggestions, which helped improve this paper. This work is supported in part by the following grants: LZ16F020002 and NSF CNS-1647200.



Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Wayne State University, Detroit, USA
