Multi-Subspace Representation and Discovery

  • Dijun Luo
  • Feiping Nie
  • Chris Ding
  • Heng Huang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6912)


This paper presents the multi-subspace discovery problem and provides a theoretical solution that is guaranteed to simultaneously recover the number of subspaces, the dimension of each subspace, and the membership of each data point. We further propose a data representation model to handle noisy real-world data, together with a novel optimization approach for learning this model that is guaranteed to converge to a global optimizer. As applications of our models, we first apply our solutions as preprocessing in a series of machine learning problems, including clustering, classification, and semi-supervised learning. We find that our method automatically obtains a robust data representation that preserves the affine subspace structures of high-dimensional data and yields more accurate results in these learning tasks. We also build a robust standalone classifier that directly utilizes our sparse and low-rank representation model. Experimental results indicate that our preprocessing improves data quality and that the standalone classifier outperforms several state-of-the-art learning approaches.
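To make the "sparse and low-rank representation" idea concrete, the following is a minimal sketch of a closely related decomposition, robust PCA solved by inexact augmented Lagrange multipliers: a data matrix X is split into a low-rank part L (the clean subspace structure) plus a sparse part S (gross noise). This is an illustration of the general technique, not the authors' multi-subspace algorithm; the function name `robust_pca` and all parameter defaults are assumptions chosen for this sketch.

```python
import numpy as np

def robust_pca(X, lam=None, tol=1e-7, max_iter=500):
    """Split X into low-rank L and sparse S by inexact ALM on
    min ||L||_* + lam * ||S||_1  subject to  X = L + S."""
    m, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))       # standard RPCA weight
    norm_X = np.linalg.norm(X)               # Frobenius norm, for stopping
    mu = 1.25 / max(np.linalg.norm(X, 2), 1e-12)  # spectral-norm init
    S = np.zeros_like(X)
    Y = np.zeros_like(X)                     # Lagrange multiplier
    for _ in range(max_iter):
        # L-step: singular value thresholding at level 1/mu
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: elementwise soft thresholding at level lam/mu
        T = X - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # dual update and penalty growth
        R = X - L - S
        Y += mu * R
        mu *= 1.6
        if np.linalg.norm(R) <= tol * norm_X:
            break
    return L, S
```

Used as preprocessing, L (or the representation it induces) feeds downstream clustering or classification in place of the raw, corrupted X.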


Keywords: Sparse Representation · Sparse Code · Robust Principal Component Analysis · Single Connected Component · Machine Learning Task



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Dijun Luo
  • Feiping Nie
  • Chris Ding
  • Heng Huang
  1. Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, USA
