Neural Network Classification of Photogenic Facial Expressions Based on Fiducial Points and Gabor Features

  • Luciana R. Veloso
  • João M. de Carvalho
  • Claudio S. V. C. Cavalcanti
  • Eduardo S. Moura
  • Felipe L. Coutinho
  • Herman M. Gomes
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4872)


This work reports a study on the use of Gabor coefficients and the coordinates of fiducial (landmark) points to represent facial features and to discriminate between photogenic and non-photogenic facial images using neural networks. Experiments were performed on 416 images from the Cohn-Kanade AU-Coded Facial Expression Database [1]. Fiducial point extraction and expression labelling were performed manually; the expression labels were obtained with the help of the Action Unit information available in the database. Various feature combinations were tested and evaluated. The best results were obtained with a weighted sum of two neural network classifiers, one using Gabor coefficients and the other using only the fiducial points, indicating that fiducial points are a very promising feature for this classification task.
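The two ingredients described above, Gabor coefficients as texture features and a weighted sum of two classifier outputs, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the kernel parameters, the fusion weight `w=0.5`, and the decision threshold `0.5` are all placeholder values.

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a Gaussian envelope modulating a
    cosine carrier oriented at angle theta (parameter values illustrative)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

def fused_score(gabor_score, fiducial_score, w=0.5):
    """Weighted sum of the two networks' outputs, as in the abstract;
    the weight w is a hypothetical value, not the one used in the paper."""
    return w * gabor_score + (1 - w) * fiducial_score

kernel = gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0)
score = fused_score(0.8, 0.4)  # per-class scores from the two networks
label = "photogenic" if score >= 0.5 else "non-photogenic"
```

In practice a bank of such kernels (several orientations and wavelengths) would be convolved with the image at the fiducial points to produce the Gabor coefficient vector fed to the first network.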


Keywords: facial expression analysis · Gabor coefficients · facial fiducial points · neural networks


References

  1. Cohn, J.F., Zlochower, A., Lien, J., Kanade, T.: Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding. Psychophysiology 36, 35–43 (1999)
  2. Pantic, M., Rothkrantz, L.: Automatic analysis of facial expressions: The state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(12), 1424–1445 (2000)
  3. Mehrabian, A.: Communication without words. Psychology Today 2(4), 53–56 (1968)
  4. Darwin, C.: The Expression of the Emotions in Man and Animals. Appleton and Company, New York (1872)
  5. Batista, L.B., Gomes, H.M., Carvalho, J.M.: Photogenic facial expression discrimination. In: International Conference on Computer Vision Theory and Applications, pp. 166–171 (2006)
  6. Essa, I.A., Pentland, A.P.: Coding, analysis, interpretation and recognition of facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(7), 757–763 (1997)
  7. Cohn, J., Zlochower, A., Lien, J., Kanade, T.: Feature-point tracking by optical flow discriminates subtle differences in facial expression. In: IEEE International Conference on Automatic Face and Gesture Recognition, pp. 396–401 (1998)
  8. Wang, J., Yin, L.: Static topographic modeling for facial expression recognition and analysis. Computer Vision and Image Understanding 108(1–2), 19–34 (2007)
  9. Lanitis, A., Taylor, C.J., Cootes, T.F.: Automatic interpretation and coding of face images using flexible models. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(7), 743–756 (1997)
  10. Zhang, Z., Lyons, M., Schuster, M., Akamatsu, S.: Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron. In: IEEE International Conference on Automatic Face and Gesture Recognition, pp. 454–461 (1998)
  11. Pantic, M., Rothkrantz, L.J.M.: Expert system for automatic analysis of facial expression. Image and Vision Computing 18, 881–905 (2000)
  12. Zhu, Z., Ji, Q.: Robust pose invariant facial feature detection and tracking in real-time. In: International Conference on Pattern Recognition, vol. 1, pp. 1092–1095 (2006)
  13. Tong, Y., Ji, Q.: Multiview facial feature tracking with a multi-modal probabilistic model. In: International Conference on Pattern Recognition, vol. 1, pp. 307–310 (2006)
  14. Bartlett, M., Littlewort, G., Braathen, B., Sejnowski, T., Movellan, J.: A prototype for automatic recognition of spontaneous facial actions. Advances in Neural Information Processing Systems 15, 1271–1278 (2002)
  15. Tian, Y.: Evaluation of face resolution for expression analysis. In: Computer Vision and Pattern Recognition Workshop, pp. 82–82 (2004)
  16. Lin, D.T., Yang, C.M.: Real-time eye detection using face-circle fitting and dark-pixel filtering. In: IEEE International Conference on Multimedia and Expo, vol. 2, pp. 1167–1170 (2004)
  17. Haykin, S.: Neural Networks: A Comprehensive Foundation, 2nd edn. Prentice-Hall, Englewood Cliffs (1998)
  18. Carvalho, J.M., Oliveira, J., Freitas, C.O.A., Sabourin, R.: Handwritten month word recognition using multiple classifiers. In: Brazilian Symposium on Computer Graphics and Image Processing, pp. 82–89 (2004)
  19. Cootes, T.F., Cooper, D.H., Graham, J.: Active shape models – their training and application. Computer Vision and Image Understanding 61(1), 38–59 (1995)
  20. Cootes, T.F., Taylor, C.J.: Statistical models of appearance for computer vision. Technical report, Imaging Science and Biomedical Engineering, University of Manchester, UK (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Luciana R. Veloso (1)
  • João M. de Carvalho (1)
  • Claudio S. V. C. Cavalcanti (1)
  • Eduardo S. Moura (1)
  • Felipe L. Coutinho (1)
  • Herman M. Gomes (1)

  1. Universidade Federal de Campina Grande, Av. Aprigio Veloso s/n, 58109-970, Campina Grande, PB
