Maximal local interclass embedding with application to face recognition

Original Paper

Abstract

Dimensionality reduction of high-dimensional data arises in many information-processing problems. This paper develops a new dimensionality reduction approach called maximal local interclass embedding (MLIE). MLIE can be viewed as a linear method within a multi-manifold learning framework, in which neighborhood information is integrated with local interclass relationships. MLIE constructs a local interclass graph and an intrinsic graph to find a set of projections that simultaneously maximize the local interclass scatter and the local intraclass compactness. This property makes MLIE more powerful than marginal Fisher analysis (MFA): MLIE retains all the advantages of MFA, while its computational complexity is lower. The proposed algorithm is applied to face recognition, with experiments on the Yale, AR, and ORL face image databases. The results show that, owing to its locally discriminating property, MLIE consistently outperforms the state-of-the-art methods MFA, Smooth MFA, neighborhood preserving embedding, and locality preserving projection in face recognition.
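
The abstract outlines an MFA-style graph-embedding recipe: build an intraclass (intrinsic) graph and a local interclass (penalty) graph, then seek linear projections that trade the two scatters off against each other. The sketch below is an illustrative interpretation of that recipe, not the authors' MLIE implementation; the neighborhood sizes k_intra and k_inter, the binary edge weights, the Rayleigh-quotient criterion, and the regularization term are assumptions introduced here to make the example runnable.

```python
# Minimal sketch of a graph-embedding projection in the spirit of MLIE/MFA.
# Assumptions (not from the paper): binary k-NN edge weights, a ratio-trace
# criterion solved as a generalized eigenproblem, and a small ridge term.
import numpy as np
from scipy.linalg import eigh


def graph_embedding_projection(X, y, k_intra=5, k_inter=5, n_dims=10):
    """X: (n_samples, n_features) data matrix; y: integer class labels."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)  # pairwise squared distances

    W_intra = np.zeros((n, n))   # intrinsic graph: same-class k-NN edges
    W_inter = np.zeros((n, n))   # penalty graph: nearest other-class neighbors
    for i in range(n):
        same = np.where(y == y[i])[0]
        same = same[same != i]
        diff = np.where(y != y[i])[0]
        for j in same[np.argsort(d2[i, same])[:k_intra]]:
            W_intra[i, j] = W_intra[j, i] = 1.0
        for j in diff[np.argsort(d2[i, diff])[:k_inter]]:
            W_inter[i, j] = W_inter[j, i] = 1.0

    L_intra = np.diag(W_intra.sum(1)) - W_intra   # graph Laplacians
    L_inter = np.diag(W_inter.sum(1)) - W_inter

    # Maximize local interclass scatter relative to intraclass spread:
    # solve X^T L_inter X w = lambda (X^T L_intra X + eps I) w.
    A = X.T @ L_inter @ X
    B = X.T @ L_intra @ X + 1e-6 * np.eye(X.shape[1])  # regularized for stability
    vals, vecs = eigh(A, B)
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]    # top eigenvectors as projection matrix
```

In this formulation the leading generalized eigenvectors separate local interclass neighbors while keeping same-class neighbors close, which mirrors the scatter/compactness trade-off the abstract describes; the paper's exact MLIE criterion may differ in how the two terms are combined.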

Keywords

Dimensionality reduction · Manifold learning · Graph embedding · Marginal Fisher analysis (MFA)

References

  1. Batur, A., Hayes, M.: Linear subspaces for illumination robust face recognition. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'01), vol. 2, pp. 296–301, Kauai, Hawaii, 8–14 Dec 2001
  2. Yang, J., Zhang, D., Yang, J.: Globally maximizing, locally minimizing: unsupervised discriminant projection with applications to face and palm biometrics. IEEE Trans. Pattern Anal. Mach. Intell. 29(4), 650–664 (2007)
  3. Jain, A.K., Duin, R.P.W., Mao, J.: Statistical pattern recognition: a review. IEEE Trans. Pattern Anal. Mach. Intell. 22(1), 4–37 (2000)
  4. Jolliffe, I.: Principal Component Analysis. Springer, Berlin (1986)
  5. Martinez, A.M., Kak, A.C.: PCA versus LDA. IEEE Trans. Pattern Anal. Mach. Intell. 23(2), 228–233 (2001)
  6. Turk, M., Pentland, A.: Face recognition using eigenfaces. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'91), pp. 586–591, Maui, Hawaii, 3–6 June 1991
  7. Fukunaga, K.: Introduction to Statistical Pattern Recognition. Academic Press, San Diego (1991)
  8. Ye, J., Janardan, R., Park, C., Park, H.: An optimization criterion for generalized discriminant analysis on under-sampled problems. IEEE Trans. Pattern Anal. Mach. Intell. 26(8), 982–994 (2004)
  9. Yu, H., Yang, J.: A direct LDA algorithm for high-dimensional data with application to face recognition. Pattern Recognit. 34(10), 2067–2070 (2001)
  10. Schölkopf, B., Smola, A., Müller, K.R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 10(5), 1299–1319 (1998)
  11. Baudat, G., Anouar, F.: Generalized discriminant analysis using a kernel approach. Neural Comput. 12(10), 2385–2404 (2000)
  12. An, S., Liu, W., Venkatesh, S.: Face recognition using kernel ridge regression. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'07), Minneapolis, Minnesota, 17–22 June 2007
  13. Tenenbaum, J.B., de Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000)
  14. Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290, 2323–2326 (2000)
  15. Belkin, M., Niyogi, P.: Laplacian eigenmaps and spectral techniques for embedding and clustering. In: Proceedings of the 15th Annual Conference on Neural Information Processing Systems, Vancouver, Canada, 3–6 Dec 2001
  16. He, X., Yan, S., Hu, Y., Niyogi, P., Zhang, H.: Face recognition using Laplacianfaces. IEEE Trans. Pattern Anal. Mach. Intell. 27(3), 328–340 (2005)
  17. He, X., Niyogi, P.: Locality preserving projections. In: Proceedings of the 17th Annual Conference on Neural Information Processing Systems, Vancouver and Whistler, Canada, 8–13 Dec 2003
  18. Yan, S., Xu, D., Zhang, B., Zhang, H.-J.: Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 40–51 (2007)
  19. Cai, D., He, X., Hu, Y., Han, J., Huang, T.: Learning a spatially smooth subspace for face recognition. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'07), Minneapolis, Minnesota, 17–22 June 2007
  20. Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins University Press, Baltimore (1996)
  21. He, X., Cai, D., Yan, S., Zhang, H.: Neighborhood preserving embedding. In: Proceedings of the 10th IEEE International Conference on Computer Vision, pp. 1208–1213, Beijing, China, 17–20 Oct 2005
  22. Martinez, A.M., Benavente, R.: The AR face database. CVC Technical Report #24, June 1998

Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  1. School of Computer Science, Nanjing University of Science and Technology, Nanjing, People's Republic of China
  2. Department of Physics and Electronics, Minjian College, Fuzhou, People's Republic of China