Experimental Study on Multiple LDA Classifier Combination for High Dimensional Data Classification

  • Xiaogang Wang
  • Xiaoou Tang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3077)


Multiple classifier systems provide an effective way to improve pattern recognition performance. In this paper, we use multiple classifier combination to improve LDA for high dimensional data classification. When dealing with high dimensional data, LDA often suffers from the small sample size problem, and the resulting classifier is biased and unstable. Although some approaches, such as PCA+LDA and Null Space LDA, have been proposed to address this problem, they all come at the cost of discarding some useful discriminative information. We propose an approach that generates multiple Principal Space LDA and Null Space LDA classifiers by random sampling of the feature vector and the training set. The two kinds of complementary classifiers are then integrated to preserve all the discriminative information in the feature space.
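The random-sampling ensemble idea described above can be sketched in a minimal form. The sketch below is an illustration, not the authors' method: it builds several two-class Fisher discriminants, each trained on a randomly sampled subset of the feature dimensions, and combines them by majority vote. All function names are hypothetical, the regularization term stands in for the paper's more careful Principal Space / Null Space treatment of a singular within-class scatter matrix, and only the random-subspace component (not the training-set resampling or the two complementary classifier types) is shown.

```python
import numpy as np

def fisher_direction(X, y):
    """Two-class Fisher discriminant: w = Sw^{-1} (mu1 - mu0)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter (sum of per-class scatter matrices).
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    # Small ridge term: Sw is singular in the small-sample-size regime,
    # which is exactly the problem the paper's ensemble is built to handle.
    Sw += 1e-6 * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2  # midpoint between projected class means
    return w, threshold

def random_subspace_lda(X, y, n_classifiers=11, subspace_dim=5, seed=None):
    """Train one Fisher discriminant per randomly sampled feature subset."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    models = []
    for _ in range(n_classifiers):
        feats = rng.choice(d, size=subspace_dim, replace=False)
        w, t = fisher_direction(X[:, feats], y)
        models.append((feats, w, t))
    return models

def predict(models, X):
    """Majority vote over the individual subspace classifiers."""
    votes = np.stack([(X[:, f] @ w > t).astype(int) for f, w, t in models])
    return (votes.mean(axis=0) > 0.5).astype(int)
```

Each base classifier sees only a low-dimensional random subspace, so its within-class scatter estimate is better conditioned than in the full space, and the vote aggregates the discriminative information that any single projection would discard.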


Keywords: Linear Discriminant Analysis, Face Image, Recognition Accuracy, Majority Vote, Null Space





Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Xiaogang Wang¹
  • Xiaoou Tang¹
  1. Department of Information Engineering, The Chinese University of Hong Kong
