
Multimedia Tools and Applications, Volume 77, Issue 1, pp 917–937

Evaluating real-life performance of the state-of-the-art in facial expression recognition using a novel YouTube-based dataset

  • Muhammad Hameed Siddiqi
  • Maqbool Ali
  • Mohamed Elsayed Abdelrahman Eldib
  • Asfandyar Khan
  • Oresti Banos
  • Adil Mehmood Khan
  • Sungyoung Lee
  • Hyunseung Choo

Abstract

Facial expression recognition (FER) is one of the most active areas of research in computer science, owing to its importance in a large number of application domains. Over the years, a great number of FER systems have been implemented, each surpassing the other in classification accuracy. However, one major weakness of previous studies is that they have all used standard datasets for their evaluations and comparisons. Although this serves the need for a fair comparison with existing systems, we argue that it does not sit well with the fact that these systems are built in the hope of eventually being deployed in the real world. These datasets assume a predefined camera setup, consist mostly of posed expressions collected in a controlled setting with fixed backgrounds and static ambient conditions, and exhibit little variation in face size and camera angle, none of which holds in a dynamic real-world environment. The contributions of this work are two-fold. First, using numerous online resources as well as our own setup, we have collected a rich FER dataset designed with the above problems in mind. Second, we have chosen eleven state-of-the-art FER systems, implemented them, and performed a rigorous evaluation using our dataset. The results confirm our hypothesis that even the most accurate existing FER systems are not ready to face the challenges of a dynamic real world. We hope that our dataset will become a benchmark for assessing the real-life performance of future FER systems.
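The article page itself contains no code, but as a rough illustration of the frame-level evaluation protocol described above, the following Python sketch shows how a pre-trained classifier exposing a scikit-learn-style predict() might be scored on face crops harvested from unconstrained video frames. All names here (EXPRESSIONS, extract_faces, evaluate, the per-expression directory layout) are hypothetical and not taken from the paper; only the OpenCV Haar-cascade calls are real library API.

```python
# Hypothetical evaluation sketch (not the authors' code): score a pre-trained
# facial expression classifier on frames extracted from unconstrained videos.
import os
import cv2          # OpenCV, used here for face detection and image handling
import numpy as np

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Standard frontal-face Haar cascade shipped with OpenCV; its fragility on
# in-the-wild frames is exactly the kind of weakness the paper highlights.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(frame_dir, size=(48, 48)):
    """Detect and crop the largest face in every frame under frame_dir/<expression>/."""
    faces, labels = [], []
    for label, expr in enumerate(EXPRESSIONS):
        for name in os.listdir(os.path.join(frame_dir, expr)):
            img = cv2.imread(os.path.join(frame_dir, expr, name),
                             cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue
            boxes = detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
            if len(boxes) == 0:
                continue                     # no face found: frame is skipped
            x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
            faces.append(cv2.resize(img[y:y + h, x:x + w], size).flatten())
            labels.append(label)
    return np.array(faces), np.array(labels)

def evaluate(model, frame_dir):
    """Report per-expression and overall accuracy for any model with predict()."""
    X, y = extract_faces(frame_dir)
    pred = model.predict(X)
    for label, expr in enumerate(EXPRESSIONS):
        mask = y == label
        if mask.any():
            print(f"{expr:10s}: {np.mean(pred[mask] == label):.3f}")
    print(f"overall   : {np.mean(pred == y):.3f}")
```

In a setup like this, each of the eleven evaluated systems could be wrapped behind the same predict() interface, so that per-expression accuracies on the same unconstrained frames are directly comparable.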

Keywords

Facial expressions · Classification · YouTube · Real-life scenarios

Notes

Acknowledgments

This research was supported by the MSIP, Korea, under the G-ITRC support program (IITP-2015-R6812-15-0001) supervised by the IITP, and by the Priority Research Centers Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2010-0020210).


Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  • Muhammad Hameed Siddiqi (1)
  • Maqbool Ali (2)
  • Mohamed Elsayed Abdelrahman Eldib (3)
  • Asfandyar Khan (4)
  • Oresti Banos (5)
  • Adil Mehmood Khan (6)
  • Sungyoung Lee (2)
  • Hyunseung Choo (1)

  1. Department of Computer Science and Engineering, Sungkyunkwan University, Suwon, Korea
  2. Department of Computer Engineering, Kyung Hee University, Suwon, Korea
  3. Department of Biomedical Engineering, Kyung Hee University, Suwon, Korea
  4. Department of Computer Science, University of Science & Technology, Bannu, Pakistan
  5. Center for Telematics and Information Technology, University of Twente, Enschede, Netherlands
  6. Department of Computer Science, Innopolis University, Kazan, Russia
