
Dynamic Eyes and Mouth Reinforced LBP Histogram Descriptors Based on Emotion Classification in Video Sequences

  • Ithaya Rani Panneer Selvam
  • T. Hari Prasath
Chapter
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 374)

Abstract

Classifying emotions from face images is a challenging task in visual computing. Most recent work has focused on capturing signatures from the whole face, yet the mouth and eyes are the facial components most informative for emotion classification. This chapter proposes an approach to emotion classification that uses dynamic eye and mouth signatures to achieve high performance at low computational cost. First, the eye and mouth images extracted from the video sequence are partitioned into non-overlapping regions, and each region is further divided into small overlapping sub-regions. Dynamic reinforced local binary pattern (LBP) signatures are extracted from each eye and mouth sub-region across consecutive frames, capturing the temporal changes of the eyes and mouth, respectively. Within each sub-region, the dynamic signatures are Z-score normalized and then converted into binary signatures using threshold values. From the binary signatures obtained for every pixel in a region, a histogram signature is computed, and the histograms from all eye and mouth regions are concatenated into a single enhanced signature. Finally, these discriminative dynamic signatures are classified into seven emotions using a multi-class AdaBoost classifier.
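To make the pipeline concrete, the fragment below is a minimal Python sketch of the descriptor stages: a basic 8-neighbour LBP code image, per-region histogram pooling, Z-score normalization across frames, and binarization. The grid size, the zero threshold, and the function names (lbp_image, region_histograms, dynamic_binary_signature) are illustrative assumptions; the chapter's reinforced LBP variant and its exact region geometry are not reproduced here.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for every interior pixel."""
    c = gray[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    # The 8 neighbours, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = gray[1 + dy:gray.shape[0] - 1 + dy,
                 1 + dx:gray.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def region_histograms(gray, grid=(4, 4), bins=256):
    """Split an eye/mouth patch into non-overlapping regions and
    concatenate one LBP histogram per region."""
    codes = lbp_image(gray)
    h, w = codes.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                          j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            hists.append(hist)
    return np.concatenate(hists).astype(np.float64)

def dynamic_binary_signature(frames, grid=(4, 4)):
    """Z-score normalize the per-frame histograms over the sequence,
    then threshold to obtain a binary dynamic signature."""
    feats = np.stack([region_histograms(f, grid) for f in frames])
    z = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
    return (z > 0).astype(np.uint8)  # zero threshold is an assumption

# Toy usage on random frames standing in for a tracked eye/mouth patch.
frames = [np.random.default_rng(i).integers(0, 256, (32, 32), dtype=np.uint8)
          for i in range(10)]
print(dynamic_binary_signature(frames).shape)  # (10, 4096)
```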
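The classification stage can likewise be prototyped with an off-the-shelf multi-class AdaBoost. In the sketch below, scikit-learn's SAMME boosting of decision stumps stands in for the chapter's multi-class AdaBoost categorizer, and the random data merely demonstrates the interface.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.random((70, 4096))        # placeholder signatures, one row per video sequence
y = rng.integers(0, 7, size=70)   # the seven emotion labels

# Decision stumps are the default weak learners; SAMME is the multi-class
# boosting rule (parameter handling varies slightly across sklearn versions).
clf = AdaBoostClassifier(n_estimators=200, algorithm="SAMME")
clf.fit(X, y)
print(clf.predict(X[:5]))
```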

Keywords

Signature extraction · Classification · Normalization · Detection of facial components


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science and Engineering, Sethu Institute of Technology, Virudhunagar, India
  2. Department of Electrical and Electronics Engineering, Kamaraj College of Engineering and Technology, Virudhunagar, India
