
Supervised Subspace Learning with Multi-class Lagrangian SVM on the Grassmann Manifold

  • Duc-Son Pham
  • Svetha Venkatesh
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7106)

Abstract

Learning robust subspaces that maximize class discrimination is challenging, and most existing methods treat dimensionality reduction and classifier design as only loosely connected steps. We propose an alternative framework that combines these two steps in a joint formulation, exploiting the direct connection between dimensionality reduction and classification. Specifically, we learn an optimal linear projection, represented as a point on the Grassmann manifold, while jointly minimizing the classification error of an SVM classifier. We minimize the regularized empirical risk over both the hypothesis space of functions underlying a new generalized multi-class Lagrangian SVM and the Grassmann manifold, and we propose an iterative algorithm that alternates between optimizing the classifier and optimizing the projection. Extensive numerical studies on challenging datasets show that the proposed scheme outperforms alternatives when training data is limited, verifying the advantage of the joint formulation.
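The alternating scheme sketched in the abstract can be illustrated with a simplified stand-in (not the authors' exact algorithm): a one-vs-rest hinge-loss linear SVM replaces the generalized multi-class Lagrangian SVM, and a QR re-orthonormalization serves as the retraction back onto the manifold after each gradient step on the projection. All names, step sizes, and the toy dataset below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 classes in 10 dimensions; class structure lives in the first 2 dims.
n_per, D, d, C = 40, 10, 2, 3
means = np.array([[6.0, 0.0], [0.0, 6.0], [-6.0, -6.0]])
X = rng.normal(0.0, 0.5, (C * n_per, D))
y = np.repeat(np.arange(C), n_per)
for c in range(C):
    X[y == c, :2] += means[c]

def train_ovr_svm(Z, y, n_classes, lam=1e-2, lr=1e-2, iters=300):
    """Projection fixed: fit one-vs-rest linear SVMs by hinge-loss subgradient descent."""
    n, k = Z.shape
    W, b = np.zeros((n_classes, k)), np.zeros(n_classes)
    for c in range(n_classes):
        t = np.where(y == c, 1.0, -1.0)  # +1 for class c, -1 for the rest
        for _ in range(iters):
            active = t * (Z @ W[c] + b[c]) < 1.0  # margin-violating samples
            W[c] -= lr * (lam * W[c] - (t[active, None] * Z[active]).sum(0) / n)
            b[c] -= lr * (-t[active].sum() / n)
    return W, b

def hinge_grad_U(X, y, U, W, b):
    """Classifier fixed: subgradient of the summed hinge losses w.r.t. the projection U."""
    n, G = X.shape[0], np.zeros_like(U)
    Z = X @ U
    for c in range(W.shape[0]):
        t = np.where(y == c, 1.0, -1.0)
        active = t * (Z @ W[c] + b[c]) < 1.0
        # d/dU of max(0, 1 - t_i (w_c^T U^T x_i + b_c)) is -t_i x_i w_c^T when active.
        G -= np.outer((t[active, None] * X[active]).sum(0), W[c]) / n
    return G

# Alternate between the two subproblems: train the classifier in the current
# subspace, then take a gradient step on U and retract via QR so its columns
# stay orthonormal (i.e. U remains a valid point on the manifold).
U = np.linalg.qr(rng.normal(size=(D, d)))[0]
for _ in range(15):
    W, b = train_ovr_svm(X @ U, y, C)
    U = np.linalg.qr(U - 0.1 * hinge_grad_U(X, y, U, W, b))[0]

acc = ((X @ U @ W.T + b).argmax(1) == y).mean()
```

Since a point on the Grassmann manifold is a subspace rather than a particular basis, the sign/column ambiguity introduced by the QR retraction is harmless: any orthonormal basis of the same subspace yields an equivalent projection.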

Keywords

Dimensionality Reduction · Reproducing Kernel Hilbert Space · Grassmann Manifold · Locality Preserving Projection · Joint Formulation
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Duc-Son Pham (1)
  • Svetha Venkatesh (1)
  1. Institute for Multi-sensor Processing and Content Analysis, Curtin University, Perth, Australia
