Coupling Adaboost and Random Subspace for Diversified Fisher Linear Discriminant

  • Hui Kong
  • Jian-Gang Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4232)

Abstract

Fisher Linear Discriminant (FLD) is a popular method for feature extraction in face recognition. However, it often suffers from the small-sample-size, bias, and overfitting problems when dealing with high-dimensional face image data. In this paper, a framework of Ensemble Learning for Diversified Fisher Linear Discriminant (EnLDFLD) is proposed to improve current FLD-based face recognition algorithms. First, the classifier ensemble in EnLDFLD is composed of a set of diversified component FLD classifiers, selected deliberately by computing the diversity between the candidate component classifiers. Second, the candidate component classifiers are constructed by coupling the random subspace and AdaBoost methods; it is also shown that this coupling scheme yields more suitable component classifiers and thereby increases the generalization performance of EnLDFLD. Experiments on two common face databases verify the superiority of the proposed EnLDFLD over state-of-the-art algorithms in recognition accuracy.
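The core construction described in the abstract can be illustrated with a minimal sketch: a random-subspace ensemble of two-class FLD classifiers combined by majority voting. This is not the authors' EnLDFLD (which additionally couples AdaBoost and selects components by diversity); all function names, the regularization term, and the toy data below are illustrative assumptions.

```python
# Illustrative sketch only, NOT the paper's EnLDFLD: a random-subspace
# ensemble of two-class Fisher Linear Discriminant classifiers.
import numpy as np

rng = np.random.default_rng(0)

def fld_fit(X, y):
    """Fit a two-class FLD: projection direction w and decision threshold."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter, regularized so it stays invertible in
    # small-sample-size settings (the problem the abstract mentions).
    Sw = np.cov(X0.T) + np.cov(X1.T) + 1e-3 * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)          # Fisher direction Sw^{-1}(m1 - m0)
    thresh = w @ (m0 + m1) / 2.0              # midpoint of projected class means
    return w, thresh

def fld_predict(X, w, thresh):
    return (X @ w > thresh).astype(int)

def random_subspace_ensemble(X, y, n_classifiers=11, subspace_dim=5):
    """Train each component FLD on a random subset of the feature dimensions."""
    models = []
    for _ in range(n_classifiers):
        idx = rng.choice(X.shape[1], size=subspace_dim, replace=False)
        w, t = fld_fit(X[:, idx], y)
        models.append((idx, w, t))
    return models

def ensemble_predict(X, models):
    votes = np.stack([fld_predict(X[:, idx], w, t) for idx, w, t in models])
    return (votes.mean(axis=0) > 0.5).astype(int)   # majority vote

# Toy usage: two Gaussian classes in 20 dimensions (synthetic, not face data).
X = np.vstack([rng.normal(0.0, 1.0, (50, 20)), rng.normal(1.5, 1.0, (50, 20))])
y = np.array([0] * 50 + [1] * 50)
models = random_subspace_ensemble(X, y)
acc = (ensemble_predict(X, models) == y).mean()
```

Each component classifier sees only a low-dimensional random subspace, which sidesteps the singular within-class scatter of high-dimensional data; the ensemble vote then averages out the bias of any single subspace choice.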

Keywords

Training Sample, Face Recognition, Face Image, Face Database, Random Subspace



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Hui Kong (1)
  • Jian-Gang Wang (2)
  1. School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
  2. Department of Media, Institute for Infocomm Research, Singapore
