Facial expression identification using gradient local phase

  • Sonia Kherchaoui
  • Amrane Houacine


This paper presents an automatic facial expression recognition (FER) system based on an adapted gradient local phase quantization (LPQ) descriptor. Two methods for quantizing the local phase are proposed to improve conventional LPQ: "phase thresholding" and "phase LTP coding", the latter based on local ternary patterns. An experimental study of these methods is performed on the identification of both six and seven basic expressions: happiness, surprise, fear, disgust, sadness, anger, and the neutral state. The FER system consists of three main stages. The first detects the face, selects a region of interest, and normalizes this region. The second extracts features with the adapted gradient LPQ method. The third classifies the emotional states, using support vector machines (SVM) for this purpose. System performance is evaluated on the well-known JAFFE and Cohn–Kanade databases, with both six and seven facial expressions.
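To make the feature-extraction stage concrete, the sketch below computes a *conventional* LPQ descriptor in the spirit of Ojansivu and Heikkilä: the local Fourier phase is sampled at four low frequencies in a small window, quantized by the signs of the real and imaginary parts into an 8-bit code per pixel, and summarized as a 256-bin histogram. This is only a minimal illustration of baseline LPQ, not the authors' adapted gradient variant or their proposed "phase thresholding" and "phase LTP coding" quantizers; the window size, frequency choice, and function name are assumptions.

```python
import numpy as np

def lpq_descriptor(image, win=7):
    """Minimal sketch of conventional LPQ (baseline, not the adapted
    gradient variant proposed in the paper).

    The short-term Fourier transform is evaluated in a win x win window
    at four low frequencies; each response contributes two bits (signs of
    the real and imaginary parts), giving an 8-bit code per pixel. The
    descriptor is the normalized 256-bin histogram of these codes.
    """
    image = np.asarray(image, dtype=np.float64)
    a = 1.0 / win                         # lowest non-zero frequency (assumption)
    r = win // 2
    x = np.arange(-r, r + 1)

    # Separable 1-D STFT basis vectors
    w0 = np.ones_like(x, dtype=complex)   # DC along one axis
    w1 = np.exp(-2j * np.pi * a * x)      # frequency a
    w2 = np.conj(w1)                      # frequency -a

    def conv_sep(img, row, col):
        # Separable 2-D convolution, keeping only the 'valid' region
        tmp = np.apply_along_axis(lambda v: np.convolve(v, row, 'valid'), 1, img)
        return np.apply_along_axis(lambda v: np.convolve(v, col, 'valid'), 0, tmp)

    # Responses at the four frequencies (a,0), (0,a), (a,a), (a,-a)
    responses = [conv_sep(image, w1, w0),
                 conv_sep(image, w0, w1),
                 conv_sep(image, w1, w1),
                 conv_sep(image, w1, w2)]

    # Sign quantization of real and imaginary parts -> 8 bits per pixel
    code = np.zeros(responses[0].shape, dtype=np.uint8)
    for i, f in enumerate(responses):
        code |= (f.real > 0).astype(np.uint8) << (2 * i)
        code |= (f.imag > 0).astype(np.uint8) << (2 * i + 1)

    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist.astype(np.float64) / hist.sum()
```

In a full FER pipeline of the kind the paper describes, this histogram (or a variant computed on gradient images, as proposed here) would be extracted from the normalized face region and fed to an SVM classifier.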


Facial expression identification · Local phase quantization · Local ternary patterns · Support vector machines (SVM)



References

  1. Abbo AA, Jeanne V, Ouwerkerk M, Shan C, Braspenning R, Ganesh A, Corporaal H (2008) Mapping facial expression recognition algorithms on a low-power smart camera. In: Proc. 2nd ACM/IEEE International Conference on Distributed Smart Cameras, Stanford, CA, USA, pp 7–11
  2. Ahonen T, Hadid A, Pietikäinen M (2004) Face recognition with local binary patterns. In: Pajdla T, Matas J (eds) Computer Vision – ECCV 2004. Lecture Notes in Computer Science, vol 3021. Springer, Berlin, Heidelberg
  3. Arumugan D, Purushothaman S (2011) Emotion classification using facial expression. Int J Adv Comput Sci Appl 2(7):92–98
  4. Bartlett MS, Hager JC, Ekman P, Sejnowski TJ (1999) Measuring facial expressions by computer image analysis. Psychophysiology 36:253–263
  5. Cootes TF, Edwards GJ, Taylor CJ (2001) Active appearance models. IEEE Trans Pattern Anal Mach Intell 23(6):681–685
  6. Ekman P (1994) Strong evidence of universals in facial expressions: a reply to Russell's mistaken critique. Psychol Bull 115(2):268–287
  7. Ekman P, Friesen WV (1978) Facial action coding system: investigator's guide. Consulting Psychologists Press, Palo Alto
  8. Heikkilä J, Ojansivu V, Rahtu E (2010) Improved blur insensitivity for decorrelated local phase quantization. In: Proc. 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, pp 818–821
  9. Holder R, Tapamo J (2017) Improved gradient local ternary patterns for facial expression recognition. EURASIP J Image Video Process 2017:42
  10. Kanade T, Cohn JF, Tian Y (2000) Comprehensive database for facial expression analysis. In: Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pp 46–53
  11. Mahaboob ST, Reddy SN (2017) Comparative performance analysis of LBP and LTP based facial expression recognition. Int J Appl Eng Res 12(17):6897–6900
  12. Naik N, Rathna GN (2014) Real time face detection on GPU using OpenCL. In: Computer Science and Information Technology (CS & IT), pp 441–448
  13. Nguyen H-T (2014) Contributions to facial feature extraction for face recognition. PhD thesis, GIPSA Laboratory, University of Grenoble, France
  14. Ojansivu V, Heikkilä J (2008) Blur insensitive texture classification using local phase quantization. In: Elmoataz A, Lezoray O, Nouboud F, Mammass D (eds) Image and Signal Processing, ICISP 2008. Lecture Notes in Computer Science, vol 5099. Springer, Berlin
  15. Pantic M, Rothkrantz LJM (2000) Expert system for automatic analysis of facial expressions. Image Vis Comput 18:881–905
  16. Rowley H, Baluja S, Kanade T (1995) Human face detection in visual scenes. Technical Report CMU-CS-95-158R, School of Computer Science, Carnegie Mellon University
  17. Sung KK, Poggio T (1994) Example-based learning for view-based human face detection. Technical Report A.I. Memo 1521, CBCL Paper 112, MIT
  18. Tan X, Triggs B (2010) Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans Image Process 19:1635–1650
  19. Vapnik VN (1998) Statistical learning theory. Wiley, New York
  20. Viola P, Jones M (2001) Robust real time object detection. In: Second International Workshop on Statistical and Computational Theories of Vision: Modeling, Learning, Computing and Sampling, Vancouver, Canada
  21. Viola P, Jones MJ (2001) Rapid object detection using a boosted cascade of simple features. In: Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol 1, pp 511–518
  22. Yan C, Zhang Y, Xu J, Dai F, Zhang J, Dai Q, Wu F (2014) Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans Circuits Syst Video Technol 24(12):2077–2089
  23. Zhang Z (1999) Feature-based facial expression recognition: sensitivity analysis and experiments with a multi-layer perceptron. Int J Pattern Recognit Artif Intell 13(6):893–911

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. LCPTS, Faculty of Electronics and Computer Science, University of Sciences and Technology Houari Boumediene, Bab Ezzouar, Algeria
