Emotion Recognition from Images Under Varying Illumination Conditions

  • P. Suja
  • Sherin Mariam Thomas
  • Shikha Tripathi
  • V. K. Madan
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 357)

Abstract

Facial expressions are one of the most powerful and immediate means for human beings to communicate their emotions. Recognizing human emotion has a wide range of applications in humanoid robots, the animation industry, psychology, forensic analysis, medical aid, the automotive industry, etc. This work focuses on emotion recognition under various illumination conditions using images from the CMU Multi-PIE database. The database provides five basic expressions, namely neutral, happiness, anger, disgust and surprise, with varying pose and illumination. Experiments were conducted on images with varying illumination, first without pre-processing and then with a proposed ratio-based pre-processing method, followed by feature extraction and classification. The Dual-Tree Complex Wavelet Transform (DT-CWT) was applied to form the feature vectors, with K-Nearest Neighbour (KNN) as the classifier. The results show that pre-processed images yield better recognition accuracy than the original images. It is thus concluded that varying illumination affects emotion recognition and that the proposed pre-processing algorithm improves recognition accuracy. Future work may take a broader perspective by using body language and speech data for emotion recognition.
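
The paper itself does not include code; the following is a minimal sketch of the pipeline the abstract describes (illumination normalization, DT-CWT feature extraction, KNN classification), assuming the open-source `dtcwt` and `scikit-learn` Python packages. The ratio-based normalization shown (dividing the image by a smoothed estimate of its illumination) is a generic stand-in and not necessarily the authors' exact algorithm; helper names such as `extract_dtcwt_features` and the feature layout are introduced here purely for illustration.

```python
# Illustrative sketch: ratio-based normalization -> DT-CWT features -> KNN.
# Assumes the `dtcwt`, `scipy` and `scikit-learn` packages are installed.
import numpy as np
import dtcwt
from scipy.ndimage import gaussian_filter
from sklearn.neighbors import KNeighborsClassifier


def ratio_normalize(image, sigma=15.0, eps=1e-6):
    """Generic ratio-based illumination normalization: divide the image
    by a heavily smoothed estimate of its illumination field
    (assumption; not the paper's exact pre-processing method)."""
    image = image.astype(np.float64)
    illumination = gaussian_filter(image, sigma=sigma)
    return image / (illumination + eps)


def extract_dtcwt_features(image, nlevels=3):
    """Form a feature vector from the DT-CWT subbands
    (hypothetical feature layout, for illustration only)."""
    transform = dtcwt.Transform2d()
    pyramid = transform.forward(image, nlevels=nlevels)
    # Mean magnitude of each of the 6 oriented highpass subbands per level,
    # plus the flattened lowpass residual.
    feats = [np.abs(band).mean(axis=(0, 1)) for band in pyramid.highpasses]
    feats.append(pyramid.lowpass.ravel())
    return np.concatenate([f.ravel() for f in feats])


def train_and_predict(train_images, train_labels, test_images, k=3):
    """Usage sketch: inputs are assumed to be same-sized grayscale face
    crops and their emotion labels from the database."""
    X_train = np.array([extract_dtcwt_features(ratio_normalize(im))
                        for im in train_images])
    X_test = np.array([extract_dtcwt_features(ratio_normalize(im))
                       for im in test_images])
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, train_labels)
    return knn.predict(X_test)
```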

Keywords

Illumination normalization · Anisotropic smoothing · Ratio-based normalization · DT-CWT · KNN classifier

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • P. Suja (1)
  • Sherin Mariam Thomas (1)
  • Shikha Tripathi (1)
  • V. K. Madan (2)
  1. Amrita Robotic Research Centre, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India
  2. Bhabha Atomic Research Centre, Mumbai, India