Human Emotion Recognition from Distance-Shape-Texture Signature Trio

  • Paramartha Dutta
  • Asit Barman
Part of the Cognitive Intelligence and Robotics book series (CIR)


The previous Chaps. 2, 3, and 4 treated the feature descriptors individually: the distance signature, the shape signature, and the texture signature. Chapters 5, 6, and 7 then considered their pairwise combinations: distance and shape (D-S), distance and texture (D-T), and shape and texture (S-T). In the Distance-Shape-Texture (D-S-T) signature trio, the respective stability indices and statistical measures supplement the signature features with a view to enhancing facial expression classification performance. The inclusion of these supplementary features is justified through extensive study and analysis of the results obtained.
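As a rough illustration of the trio described above, the three signatures and their supplementary measures can be concatenated into a single feature vector before classification. The function name, the signature dimensions, and the split into stability indices and statistical measures are illustrative assumptions for this sketch, not specifics taken from the chapter:

```python
import numpy as np

def dst_trio_feature(distance_sig, shape_sig, texture_sig,
                     stability_indices, statistical_measures):
    """Concatenate the distance, shape, and texture signatures with
    their supplementary stability indices and statistical measures
    into one combined D-S-T feature vector (dimensions are assumed)."""
    return np.concatenate([distance_sig, shape_sig, texture_sig,
                           stability_indices, statistical_measures])

# Toy example with made-up dimensions for each component.
distance_sig = np.ones(10)
shape_sig = np.ones(8)
texture_sig = np.ones(59)
stability_indices = np.zeros(3)
statistical_measures = np.zeros(6)

feat = dst_trio_feature(distance_sig, shape_sig, texture_sig,
                        stability_indices, statistical_measures)
print(feat.shape)  # (86,)
```

The combined vector would then be passed to a classifier (the chapters cited use neural-network classifiers); that stage is omitted here.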



Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Department of Computer and Systems Sciences, Visva-Bharati University, Santiniketan, India
  2. Department of Computer Science and Engineering and Information Technology, Siliguri Institute of Technology, Siliguri, India
