Distance-Shape Signature Duo for Determination of Human Emotion

  • Paramartha Dutta
  • Asit Barman
Part of the Cognitive Intelligence and Robotics book series (CIR)


The previous three chapters presented three feature descriptors individually: the distance signature, the shape signature, and the texture signature. In the combined Distance-Shape (D-S) signature, the respective stability indices and statistical measures supplement the signature features in order to enhance the performance of facial expression classification. The inclusion of these supplementary features is justified through extensive study and analysis of the results obtained.
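The D-S signature construction summarized above might be sketched as follows. This is a minimal illustration only: the landmark set, the choice of reference point, the normalisation, and the particular statistical measures (mean, variance, skewness, kurtosis) are assumptions made for the sketch, not the book's exact formulation.

```python
import numpy as np

def ds_signature(landmarks, reference_idx=0):
    """Illustrative sketch of a Distance-Shape (D-S) feature vector.

    `landmarks` is an (N, 2) array of facial landmark coordinates; the
    actual landmark set and normalisation follow the earlier chapters
    and are assumed here.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    ref = landmarks[reference_idx]

    # Distance signature: Euclidean distances from a reference landmark,
    # normalised by the largest distance so the signature is scale-free.
    dists = np.linalg.norm(landmarks - ref, axis=1)
    dist_sig = dists / dists.max()

    # Shape signature: angles subtended at the reference landmark, used
    # here as a simple proxy for the shape of the landmark layout.
    diffs = landmarks - ref
    shape_sig = np.arctan2(diffs[:, 1], diffs[:, 0])

    # Statistical measures supplementing each signature (mean, variance,
    # skewness, kurtosis) -- illustrative choices for the "statistical
    # measures" the chapter refers to.
    def moments(x):
        m, s = x.mean(), x.std()
        z = (x - m) / s if s > 0 else np.zeros_like(x)
        return np.array([m, x.var(), (z ** 3).mean(), (z ** 4).mean()])

    # Final D-S feature vector: both signatures plus their supplements.
    return np.concatenate([dist_sig, shape_sig,
                           moments(dist_sig), moments(shape_sig)])
```

For N landmarks this yields a 2N + 8 dimensional vector, which could then be fed to any classifier; the book's actual pipeline additionally uses stability indices, which are omitted from this sketch.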


References

  1. G. Tzimiropoulos, M. Pantic, Optimization problems for fast AAM fitting in-the-wild, in Proceedings of the IEEE International Conference on Computer Vision (2013), pp. 593–600
  2. A. Barman, P. Dutta, Facial expression recognition using distance and shape signature features. Pattern Recognit. Lett. (2017)
  3. P. Lucey, J.F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, I. Matthews, The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression, in Computer Society Conference on Computer Vision and Pattern Recognition-Workshops (IEEE, 2010), pp. 94–101
  4. M.F. Valstar, M. Pantic, Induced disgust, happiness and surprise: an addition to the MMI facial expression database, in Proceedings of International Conference on Language Resources and Evaluation, Workshop on EMOTION, Malta, May (2010), pp. 65–70
  5. M. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, Coding facial expressions with Gabor wavelets, in Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition (IEEE, 1998), pp. 200–205
  6. N. Aifanti, C. Papachristou, A. Delopoulos, The MUG facial expression database, in Proceedings of the 11th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), Desenzano, Italy, April (2010), pp. 12–14
  7. L. Zhong, Q. Liu, P. Yang, J. Huang, D.N. Metaxas, Learning multiscale active facial patches for expression analysis. IEEE Trans. Cybern. 45(8), 1499–1510 (2015)
  8. S.L. Happy, A. Routray, Automatic facial expression recognition using features of salient facial patches. IEEE Trans. Affect. Comput. 6(1), 1–12 (2015)
  9. A. Poursaberi, H.A. Noubari, M. Gavrilova, S.N. Yanushkevich, Gauss–Laguerre wavelet textural feature fusion with geometrical information for facial expression identification. EURASIP J. Image Video Process. 2012(1), 1–13 (2012)
  10. L. Zhong, Q. Liu, P. Yang, B. Liu, J. Huang, D.N. Metaxas, Learning active facial patches for expression analysis, in 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2012), pp. 2562–2569
  11. L. Zhang, D. Tjondronegoro, Facial expression recognition using facial movement features. IEEE Trans. Affect. Comput. 2(4), 219–229 (2011)

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Department of Computer and Systems Sciences, Visva-Bharati University, Santiniketan, India
  2. Department of Computer Science and Engineering and Information Technology, Siliguri Institute of Technology, Siliguri, India
