Sparse Manifold Subspace Learning

Chapter

Abstract

In this chapter, we introduce a new subspace learning framework called "Sparse Manifold Subspace Learning (SMSL)". In contrast to conventional methods that model the global data structure, e.g., PCA and LDA, SMSL aims at preserving the local neighborhood structure on the data manifold and provides a more accurate data representation via locality sparse coding. In addition, it removes a common concern of many local-structure-based subspace learning methods, e.g., Locally Linear Embedding (LLE) and Neighborhood Preserving Embedding (NPE): how to choose appropriate neighbors. SMSL adaptively selects neighbors based on their distances and importance, which makes it less sensitive to noise than NPE. Moreover, the dual sparse processes, i.e., locality sparse coding and sparse eigen-decomposition in graph embedding, yield a noise-tolerant framework. Finally, SMSL is learned in an inductive fashion and is therefore easily extended to unseen test data. We present experimental results on several databases that demonstrate the effectiveness of the proposed method.
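The pipeline sketched in the abstract — locality sparse coding to obtain adaptive reconstruction weights, followed by a graph-embedding eigenproblem that yields an inductive linear projection — can be illustrated roughly as follows. This is a minimal sketch, not the chapter's actual algorithm: it assumes an LLC-style analytical coding step (in the spirit of Wang et al., reference 14) and an NPE-style generalized eigenproblem (reference 6), and it uses a plain dense eigen-solver in place of the sparse eigen-decomposition the chapter describes. All function names and parameters are hypothetical.

```python
import numpy as np

def locality_sparse_codes(X, k=5, lam=1e-4):
    """Locality-constrained coding (LLC-style sketch): each sample is
    reconstructed from its k nearest neighbors, with a penalty that
    shrinks the weights of distant neighbors -- a soft, adaptive
    neighbor selection rather than a hard k-NN cut-off.
    X: (d, n) data matrix, columns are samples."""
    n = X.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[:, [i]], axis=0)
        d[i] = np.inf                            # exclude the sample itself
        nbrs = np.argsort(d)[:k]
        Z = X[:, nbrs] - X[:, [i]]               # shift neighbors to the origin
        G = Z.T @ Z + lam * np.diag(d[nbrs] ** 2)  # locality-regularized Gram
        w = np.linalg.solve(G, np.ones(k))
        W[nbrs, i] = w / w.sum()                 # weights sum to one per sample
    return W

def smsl_projection(X, W, dim=2):
    """NPE-style graph embedding: a linear projection P that preserves
    the locality-coding weights, found via a generalized eigenproblem
    (smallest eigenvalues of X M X^T p = lambda X X^T p)."""
    n = X.shape[1]
    E = np.eye(n) - W                            # reconstruction residual operator
    A = X @ (E @ E.T) @ X.T
    B = X @ X.T + 1e-6 * np.eye(X.shape[0])      # small ridge for stability
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    idx = np.argsort(vals.real)[:dim]            # keep the smallest eigenvalues
    return vecs.real[:, idx]                     # columns are projection directions
```

Because the learned projection is a linear map, a test sample `x` is embedded inductively as `P.T @ x`, with no need to re-run the coding step on the training set.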

Keywords

Subspace learning · Manifold learning · Sparse coding · Graph embedding · Sparse eigen-decomposition

Acknowledgments

This research is supported in part by the NSF CNS award 1314484, ONR award N00014-12-1-1028, ONR Young Investigator Award N00014-14-1-0484, and U.S. Army Research Office Young Investigator Award W911NF-14-1-0218.

References

  1. P. Belhumeur, J. Hespanha, D. Kriegman, Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE TPAMI 19(7), 711–720 (1997)
  2. M. Belkin, P. Niyogi, Laplacian eigenmaps and spectral techniques for embedding and clustering, in NIPS (2001)
  3. D.L. Donoho, C. Grimes, Hessian eigenmaps: new locally linear embedding techniques for high-dimensional data. Proc. Natl. Acad. Sci. U.S.A. 100, 5591–5596 (2003)
  4. E. Elhamifar, R. Vidal, Sparse manifold clustering and embedding, in NIPS (2011)
  5. M.A.T. Figueiredo, R.D. Nowak, S.J. Wright, Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Sign. Proces. 1(4), 586–597 (2007)
  6. X. He, D. Cai, S. Yan, H.J. Zhang, Neighborhood preserving embedding, in IEEE ICCV (2005)
  7. X. He, P. Niyogi, Locality preserving projections, in NIPS (2003)
  8. S.T. Roweis, L.K. Saul, Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500), 2323–2326 (2000)
  9. B. Shaw, T. Jebara, Minimum volume embedding, in AISTATS (2007)
  10. B. Shaw, T. Jebara, Structure preserving embedding, in ICML (2009)
  11. T. Sim, S. Baker, M. Bsat, The CMU pose, illumination, and expression (PIE) database, in IEEE FGR (2002)
  12. J.B. Tenenbaum, V. de Silva, J.C. Langford, A global geometric framework for nonlinear dimensionality reduction. Science 290(5500), 2319–2323 (2000)
  13. M. Turk, A. Pentland, Eigenfaces for recognition. J. Cogn. Neurosci. 3(1), 71–86 (1991)
  14. J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, Y. Gong, Locality-constrained linear coding for image classification, in IEEE CVPR (2010)
  15. K. Weinberger, L.K. Saul, Unsupervised learning of image manifolds by semidefinite programming, in IEEE CVPR (2004)
  16. S. Yan, D. Xu, B. Zhang, H. Zhang, Q. Yang, S. Lin, Graph embedding and extension: a general framework for dimensionality reduction. IEEE TPAMI 29(1), 40–51 (2007)
  17. A.Y. Yang, S.S. Sastry, A. Ganesh, Y. Ma, Fast ℓ1-minimization algorithms and an application in robust face recognition: a review, in IEEE ICIP (2010)
  18. K. Yu, T. Zhang, Y. Gong, Nonlinear learning using local coordinate coding, in NIPS (2009), pp. 2223–2231
  19. X. Yuan, T. Zhang, Truncated power method for sparse eigenvalue problems. Technical report (2011)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, Northeastern University, Boston, USA
  2. Department of Computer Science, The Graduate Center, CUNY, New York, USA
  3. Department of Electrical and Computer Engineering, College of Computer and Information Science (Affiliated), Northeastern University, Boston, USA