
Graph-based dynamic ensemble pruning for facial expression recognition

  • Danyang Li
  • Guihua Wen
  • Xu Li
  • Xianfa Cai

Abstract

Ensemble learning is an effective way to improve the recognition accuracy of facial expressions. Its performance depends on several factors, such as the accuracy of the individual members of the classifier pool and the diversity of the pool. Choosing the ensemble members sensibly can therefore maintain or improve the facial expression recognition rate. In this paper, we propose a novel dynamic ensemble pruning method, called graph-based dynamic ensemble pruning (GDEP), and apply it to facial expression recognition. GDEP's main aim is to address a weakness of dynamic ensemble pruning methods: the classifier selection process is highly sensitive to which samples fall into the test sample's neighborhood. Like other dynamic ensemble pruning methods, GDEP proceeds in three steps: 1) construct the neighborhood; 2) evaluate the classifiers' performance; 3) form the selected classifier subset according to the classifiers' ability to recognize the specific test image. To achieve this aim, in the first step GDEP chooses neighborhood members more carefully, using statistics of the classifiers' behavior to characterize the intensity and similarity of the emotions in the data samples, and using the geodesic distance to measure similarity between samples. In the second step, GDEP builds must-link and cannot-link graphs within the neighborhood to measure the classifiers' performance and to reduce the influence of inappropriate samples in the neighborhood. Experiments on the Fer2013, JAFFE and CK+ databases show the effectiveness of GDEP and demonstrate that it is competitive with many state-of-the-art methods.
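Since the abstract only outlines the three GDEP steps at a high level, the short Python sketch below illustrates how such a pipeline could be wired together under stated assumptions. All names (output_profiles, geodesic_neighbors, link_graph_scores, gdep_select), the tentative-label step, and the exact must-link/cannot-link scoring rule are illustrative assumptions, not the authors' implementation; only the overall structure (a behavior-based geodesic neighborhood, graph-based classifier scoring, per-test-image selection) follows the abstract.

# Minimal GDEP-style sketch (assumed names and scoring; not the paper's code).
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph


def output_profiles(classifiers, X):
    # Stack each classifier's class-probability outputs: a simple "behavior"
    # statistic describing how strongly/similarly each sample expresses emotions.
    return np.hstack([clf.predict_proba(X) for clf in classifiers])


def geodesic_neighbors(profiles_val, profile_test, k=7, graph_k=10):
    # Pick the k validation samples closest to the test sample along the
    # geodesic (graph shortest-path) distance computed on the output profiles.
    P = np.vstack([profiles_val, profile_test[None, :]])
    knn = kneighbors_graph(P, n_neighbors=graph_k, mode="distance")
    geo = shortest_path(knn, method="D", directed=False)  # Dijkstra distances
    d_test = geo[-1, :-1]                                 # test sample vs. validation samples
    return np.argsort(d_test)[:k]


def link_graph_scores(classifiers, X_nb, y_nb, y_test_guess):
    # Score each classifier with must-link / cannot-link style graphs:
    # reward correct predictions on neighbors that share the tentative test label
    # (must-link) and penalize agreement with other-label neighbors (cannot-link).
    must = (y_nb == y_test_guess)
    cannot = ~must
    scores = []
    for clf in classifiers:
        pred = clf.predict(X_nb)
        correct = (pred == y_nb)
        scores.append(correct[must].sum() - (pred[cannot] == y_test_guess).sum())
    return np.asarray(scores, dtype=float)


def gdep_select(classifiers, X_val, y_val, x_test, n_keep=5, k=7):
    # Dynamically prune the pool for one test image and return the kept classifiers.
    profiles_val = output_profiles(classifiers, X_val)
    profile_test = output_profiles(classifiers, x_test[None, :])[0]
    nb = geodesic_neighbors(profiles_val, profile_test, k=k)
    # Tentative test label: majority vote of the full pool on the test image.
    votes = [clf.predict(x_test[None, :])[0] for clf in classifiers]
    y_guess = max(set(votes), key=votes.count)
    scores = link_graph_scores(classifiers, X_val[nb], y_val[nb], y_guess)
    keep = np.argsort(scores)[::-1][:n_keep]
    return [classifiers[i] for i in keep]

In use, one would fit a pool of scikit-learn classifiers (e.g. on bootstrap samples of the training set), hold out a validation set as NumPy arrays, call gdep_select(pool, X_val, y_val, x_test) for every test image, and combine the returned subset by majority voting.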

Keywords

Facial expression recognition · Dynamic ensemble pruning · Classifier behavior · Geodesic distance

Acknowledgements

This work was supported by the PhD Start-up Fund of the Natural Science Foundations of Guangdong Province (2015A030310267) and the Research Projects of Introducing Talents in Guizhou University (703/702571183301).


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Guizhou University, Guiyang, China
  2. South China University of Technology, Guangzhou, China
  3. Guizhou Agricultural Information Center, Guiyang, China
  4. Guangdong Pharmaceutical University, Guangzhou, China
