Face Detection and Facial Expression Recognition Using a Novel Variational Statistical Framework

  • Wentao Fan
  • Nizar Bouguila
Part of the Communications in Computer and Information Science book series (CCIS, volume 287)

Abstract

In this paper, we propose a statistical Bayesian framework based on finite generalized Dirichlet (GD) mixture models and apply it to two challenging problems, namely face detection and facial expression recognition. The proposed Bayesian model is learned through a principled variational approach and performs clustering and feature selection simultaneously. Feature selection is handled by integrating a background density that accounts for irrelevant features, to which small weights are assigned. Moreover, a variational form of the Deviance Information Criterion (DIC) is incorporated within the proposed statistical framework to assess the model complexity (i.e., the number of mixture components and the number of relevant features). The effectiveness of the proposed model is illustrated through extensive empirical results on challenging real examples.
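
For intuition only, the sketch below shows the generalized Dirichlet density and the component responsibilities that underlie such a mixture model. It is a minimal NumPy/SciPy illustration under the standard Connor–Mosimann parameterization, not the authors' variational learning algorithm; the feature-selection background density and the variational DIC are omitted, and the function names (gd_log_pdf, mixture_responsibilities) are placeholders of ours.

    import numpy as np
    from scipy.special import gammaln

    def gd_log_pdf(x, alpha, beta):
        """Log-density of a D-dimensional generalized Dirichlet (GD) distribution.

        x, alpha, beta are length-D arrays with x_d > 0, sum(x) < 1,
        and alpha_d, beta_d > 0.
        """
        x, alpha, beta = (np.asarray(v, dtype=float) for v in (x, alpha, beta))
        # Exponents: gamma_d = beta_d - alpha_{d+1} - beta_{d+1} for d < D,
        # and gamma_D = beta_D - 1.
        gamma = beta - 1.0
        gamma[:-1] = beta[:-1] - alpha[1:] - beta[1:]
        partial = np.cumsum(x)                      # partial sums: sum_{j<=d} x_j
        log_norm = gammaln(alpha + beta) - gammaln(alpha) - gammaln(beta)
        return np.sum(log_norm + (alpha - 1.0) * np.log(x)
                      + gamma * np.log1p(-partial))

    def mixture_responsibilities(x, weights, alphas, betas):
        """Posterior probability that x was generated by each GD mixture component."""
        log_p = np.log(weights) + np.array(
            [gd_log_pdf(x, a, b) for a, b in zip(alphas, betas)])
        log_p -= log_p.max()                        # stabilise before exponentiating
        p = np.exp(log_p)
        return p / p.sum()

Working in the log domain and subtracting the maximum log-probability before exponentiating keeps the responsibilities numerically stable when individual component likelihoods are very small, which is common for the high-dimensional proportion vectors targeted by GD mixtures.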

Keywords

Generalized Dirichlet mixture · Variational learning · Face detection · Facial expression recognition

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Wentao Fan (1)
  • Nizar Bouguila (1)
  1. Concordia Institute for Information Systems Engineering, Concordia University, Canada
