
Facial Expression Recognition by Transfer Learning for Small Datasets

  • Jianjun Li
  • Siming Huang
  • Xin Zhang
  • Xiaofeng Fu
  • Ching-Chun Chang
  • Zhuo Tang
  • Zhenxing Luo
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 895)

Abstract

As a re-identification of facial attributes, facial expression recognition remains a challenging problem, and small datasets further exacerbate the task. Most previous works perform facial expression recognition by fine-tuning a network pre-trained on a related domain, and therefore inevitably inherit that domain's limitations. In this paper, we propose an optimal Feature Transfer Learning (FTL) algorithm that models the high-level neurons in a unified way. The proposed FTL structure builds on two models: it corrects the marginal distribution, matches the feature distributions between domains, and optimizes the entire network connection through a parameter-sharing method. Evaluation experiments on three of the most widely used public facial expression recognition datasets, CK+, Oulu-CASIA and MMI, show that the proposed method is comparable to or better than most state-of-the-art approaches in both recognition accuracy and model size. Furthermore, we demonstrate that our approach obtains more accurate results than alternatives such as directly fine-tuning a deeper network or training a shallower network from scratch.
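
The cross-domain distribution matching described above can be illustrated with a kernel two-sample (MMD-style) penalty between high-level features from the source and target domains. The sketch below, written in PyTorch for brevity, is not the authors' FTL implementation; the feature dimensions, the Gaussian bandwidth sigma, the weight lambda_mmd, and the placeholder classification loss are illustrative assumptions.

import torch


def gaussian_mmd(src_feat, tgt_feat, sigma=1.0):
    """Biased estimate of the squared MMD under a single Gaussian kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)        # pairwise squared Euclidean distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return (kernel(src_feat, src_feat).mean()
            + kernel(tgt_feat, tgt_feat).mean()
            - 2.0 * kernel(src_feat, tgt_feat).mean())


# Hypothetical training-step usage: the two feature batches stand in for the
# high-level activations of the source- and target-domain branches.
src = torch.randn(32, 256)      # source-domain (e.g. face recognition) features
tgt = torch.randn(32, 256)      # target-domain (facial expression) features
lambda_mmd = 0.1                # illustrative trade-off weight
cls_loss = torch.tensor(0.0)    # placeholder for the expression classification loss
total_loss = cls_loss + lambda_mmd * gaussian_mmd(src, tgt)

Minimizing such a penalty alongside the classification loss pulls the source and target feature distributions together, which is the general idea behind the distribution-matching step in the abstract.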

Keywords

Facial expression recognition · Transfer learning · Feature transfer

Acknowledgments

This work was supported by the National Natural Science Fund of China (No. 61871170 and No. 61672199) and the National Equipment Development Pre-research Fund (No. 6140137050202).

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Jianjun Li 2 (corresponding author)
  • Siming Huang 2
  • Xin Zhang 2
  • Xiaofeng Fu 2
  • Ching-Chun Chang 3
  • Zhuo Tang 1
  • Zhenxing Luo 1

  1. Science and Technology on Communication and Information Security Control Laboratory of the 36th Institute of China Electronics Technology Group Corporation, Jiaxing, China
  2. School of Computer Science and Engineering, Hangzhou Dianzi University, Hangzhou, China
  3. Department of Computer Science, University of Warwick, Coventry, UK