Applied Intelligence, Volume 49, Issue 12, pp 4150–4174

Detecting facial emotions using normalized minimal feature vectors and semi-supervised twin support vector machines classifier

  • Manoj Prabhakaran Kumar
  • Manoj Kumar Rajagopal


In this paper, human facial emotions are detected from normalized minimal feature vectors using a semi-supervised Twin Support Vector Machine (TWSVM) classifier. Face detection and tracking are carried out with the Constrained Local Model (CLM), which provides 66 feature vectors in total. Following the Facial Animation Parameters (FAPs) definition, these entire feature vectors are the ones that visibly change with human emotion. This paper proposes that 13 minimal feature vectors, selected for their high variance among the entire feature vectors, are sufficient to identify the six basic emotions. Two normalized versions of the minimal feature vectors are formed using the Max-Min and Z-normalization techniques. The methodological novelty of this study is that the normalized minimal feature vectors are fed as input to a semi-supervised multi-class TWSVM classifier to classify human emotions. The macro facial expression data are drawn from a standard database and several real-time datasets. 10-fold and hold-out cross-validation are applied to the cross-database (combining the standard and real-time data). In the experiments, the 'One vs One' and 'One vs All' multi-class techniques with three kernel functions produce 36 trained models per emotion, and their validation parameters are calculated. The overall accuracy achieved is 93.42 ± 3.25% for 10-fold cross-validation and 92.05 ± 3.79% for hold-out cross-validation. The overall performance of the proposed model (precision, recall, F1-score, error rate, and computation time) is also reported. The proposed model is compared with existing methods, and the results indicate that it is more reliable than the existing models.
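The two normalization schemes named above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the array shapes, function names, and randomly generated data are all hypothetical, assuming only that the 13 minimal feature vectors are arranged as columns of a samples-by-features matrix.

```python
import numpy as np

def max_min_normalize(X):
    """Rescale each feature column to the [0, 1] range."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

def z_normalize(X):
    """Standardize each feature column to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Hypothetical data: 40 frames x 13 minimal feature vectors
rng = np.random.default_rng(seed=0)
X = rng.uniform(low=0.0, high=10.0, size=(40, 13))

X_mm = max_min_normalize(X)  # values bounded in [0, 1]
X_z = z_normalize(X)         # zero-mean, unit-variance features
```

Either normalized matrix could then be passed to a multi-class classifier; bounding or standardizing the features this way keeps any one landmark coordinate from dominating the kernel computation.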


Keywords: Semi-supervised learning · Minimal feature vectors · Twin support vector machines · Facial animation parameters · Human-computer interaction



Acknowledgements

The authors would like to thank the Internet of Things (IoT) Laboratory, SENSE, and their research colleagues at Vellore Institute of Technology, Chennai, India, for the real-time facial emotion dataset and for their support in carrying out this research work.


Funding

No funding was received for this work.

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
