Recognition of Facial Expressions Based on Deep Conspicuous Net

  • João Paulo Canário
  • Luciano Oliveira
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9423)


Facial expression plays an important role in human interaction and non-verbal communication. Accordingly, applications that automatically recognize facial expressions are becoming pervasive in fields such as education, entertainment, psychology, human-computer interaction, and behavior monitoring, to cite just a few. In this paper, we present a new approach to facial expression recognition based on a so-called deep conspicuous neural network. The proposed method builds a conspicuous map of face regions and trains a deep network over it. Experiments on the extended Cohn-Kanade (CK+) data set achieved an average accuracy of 90% over seven basic expressions, outperforming four state-of-the-art methods.
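The conspicuity stage described above can be illustrated with a minimal center-surround sketch. This is not the authors' implementation: the function names and scale choices here are hypothetical, and it only approximates the Itti-Koch-style idea of contrasting a fine (center) scale against a coarse (surround) scale; the resulting map would then be fed to a deep network for classification.

```python
import numpy as np


def box_blur(img, k):
    """Box blur with window size k, computed via an integral image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral image allows O(1) window sums.
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)), mode="constant")
    h, w = img.shape
    sums = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w])
    return sums / (k * k)


def conspicuity_map(gray):
    """Center-surround difference: |fine scale - coarse scale|, normalized.

    Scale sizes (3 and 15) are illustrative assumptions, not the paper's.
    """
    center = box_blur(gray, 3)
    surround = box_blur(gray, 15)
    cmap = np.abs(center - surround)
    return cmap / (cmap.max() + 1e-8)
```

On a face image, such a map responds most strongly around high-contrast regions (eyes, brows, mouth), which is what makes it a plausible input representation for an expression classifier.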


Keywords: Conspicuity · Facial expression · Deep learning



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Intelligent Vision Research Lab, Federal University of Bahia (UFBA), Salvador, Brazil
