Face Subspace Learning

Abstract

In this chapter, we present three groups of dimension reduction algorithms for subspace-based face recognition: the general mean criteria and the max-min distance analysis (MMDA) algorithm; manifold learning algorithms, including discriminative locality alignment (DLA) and manifold elastic net (MEN); and the transfer subspace learning framework. Face recognition experiments are also provided.
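
For readers new to the general subspace pipeline the chapter builds on, the following is a minimal sketch of linear dimension reduction for face recognition: an eigenface-style PCA projection followed by nearest-neighbor classification in the learned subspace. This is an illustration only, not the chapter's MMDA, DLA, MEN, or transfer subspace learning methods; the function names and synthetic data are assumptions made for the example.

```python
import numpy as np

def learn_pca_subspace(X, k):
    """Learn a k-dimensional linear subspace from row-stacked face vectors X (n x d)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal directions via SVD of the centered data (equivalent to covariance eigenvectors).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                      # d x k projection matrix
    return mean, W

def project(X, mean, W):
    """Map face vectors into the low-dimensional subspace."""
    return (X - mean) @ W

def nearest_neighbor_label(z, Z_train, y_train):
    """Classify a projected face by its nearest training sample in the subspace."""
    dists = np.linalg.norm(Z_train - z, axis=1)
    return y_train[np.argmin(dists)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for vectorized face images: 3 subjects, 20 samples each, 1024-dim.
    centers = rng.normal(size=(3, 1024)) * 5
    X = np.vstack([c + rng.normal(size=(20, 1024)) for c in centers])
    y = np.repeat(np.arange(3), 20)

    mean, W = learn_pca_subspace(X, k=10)
    Z = project(X, mean, W)

    test = centers[1] + rng.normal(size=1024)
    z = project(test[None, :], mean, W)[0]
    print("predicted subject:", nearest_neighbor_label(z, Z, y))  # expected: 1
```

The methods discussed in the chapter replace the PCA step above with discriminative or manifold-preserving criteria, but the project-then-classify structure is the same.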

Copyright information

© Springer-Verlag London Limited 2011

Authors and Affiliations

Centre for Quantum Computation & Intelligent Systems, FEIT, University of Technology, Sydney, Australia