
ℓ1 Graph Based on Sparse Coding for Feature Selection

  • Conference paper
Advances in Neural Networks – ISNN 2013 (ISNN 2013)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7951)

Abstract

In machine learning and pattern recognition, feature selection has long been an active research topic. Unsupervised feature selection is challenging because no labels are available to supply categorical information, so defining an appropriate metric becomes the key problem. In this paper, we propose a “filter” method for unsupervised feature selection based on the geometric properties of the ℓ1 graph, which is constructed through sparse coding. The graph establishes relations among feature subspaces, and the quality of each feature is evaluated by its locality-preserving ability. We compare our method with classic unsupervised feature selection methods (Laplacian score and Pearson correlation) and a supervised method (Fisher score) on benchmark data sets. Classification results with support vector machines, k-nearest neighbors, and multi-layer feed-forward networks demonstrate the efficiency and effectiveness of our method.
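
As a rough illustration of the pipeline the abstract outlines (sparse-code each sample over the remaining samples to obtain ℓ1-graph edge weights, then rank features by how well they preserve locality on that graph), the following Python sketch may help. It assumes scikit-learn's Lasso as the sparse coder; the regularization weight alpha, the symmetrization step, and the names l1_graph and locality_scores are illustrative choices, not the authors' exact procedure.

import numpy as np
from sklearn.linear_model import Lasso

def l1_graph(X, alpha=0.01):
    # Code each sample as a sparse combination of all other samples;
    # the absolute sparse codes become edge weights.
    # X has shape (n_samples, n_features).
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]   # dictionary: every other sample
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(X[idx].T, X[i])               # sparse code for x_i over that dictionary
        W[i, idx] = np.abs(coder.coef_)
    return np.maximum(W, W.T)                   # symmetrize into an affinity matrix

def locality_scores(X, W):
    # Laplacian-score-style criterion on the graph W:
    # a lower score means the feature preserves graph locality better.
    d = W.sum(axis=1)                           # vertex degrees
    L = np.diag(d) - W                          # unnormalized graph Laplacian
    scores = np.empty(X.shape[1])
    for r, f in enumerate(X.T):
        f = f - f.dot(d) / d.sum()              # remove the degree-weighted mean
        den = (f * d).dot(f)
        scores[r] = f.dot(L).dot(f) / den if den > 0 else np.inf
    return scores

# Example: rank the features of a toy data matrix and keep the best three.
X = np.random.RandomState(0).randn(40, 8)
ranking = np.argsort(locality_scores(X, l1_graph(X)))
selected = ranking[:3]

In this Laplacian-score-style criterion, a small score means the feature varies little across strongly connected samples, so the lowest-scoring features are the ones retained.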

References

  1. Mitra, P., Murthy, C.A., Pal, S.K.: Unsupervised feature selection using feature similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence 24, 301–312 (2002)

  2. Mutch, J., Lowe, D.G.: Multiclass object recognition with sparse, localized features. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 11–18 (2006)

  3. Xu, J., He, H., Man, H.: DCPE Co-Training for Classification. Neurocomputing 86, 75–85 (2012)

  4. Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing 15(12), 3736–3745 (2006)

  5. Xu, J., Yin, Y., Man, H., He, H.: Feature selection based on sparse imputation. In: The 2012 International Joint Conference on Neural Networks (IJCNN), pp. 1–7 (2012)

  6. Xu, J., Man, H.: Dictionary Learning Based on Laplacian Score in Sparse Coding. In: Perner, P. (ed.) MLDM 2011. LNCS, vol. 6871, pp. 253–264. Springer, Heidelberg (2011)

  7. Li, Y., Amari, S., Cichocki, A., Ho, D.W.C., Xie, S.: Underdetermined blind source separation based on sparse representation. IEEE Transactions on Signal Processing 54(2), 423–437 (2006)

  8. Chung, F.R.K.: Spectral Graph Theory. Regional Conference Series in Mathematics, vol. 92 (1997)

  9. Frank, A., Asuncion, A.: UCI Machine Learning Repository. University of California, School of Information and Computer Science, Irvine (2010), http://archive.ics.uci.edu/ml

  10. Guyon, I., Elisseeff, A.: An Introduction to Variable and Feature Selection. Journal of Machine Learning Research 3, 1157–1182 (2003)

  11. He, X., Cai, D., Niyogi, P.: Laplacian score for feature selection. In: Advances in Neural Information Processing Systems 18, Vancouver, Canada (2005)

  12. Candes, E., Tao, T.: Near optimal signal recovery from random projections and universal encoding strategies. IEEE Trans. Inform. Theory 52, 5406–5425 (2006)

  13. Kim, S., Koh, K., Lustig, M., Boyd, S., Gorinevsky, D.: An interior-point method for large-scale ℓ1-regularized least squares. IEEE Journal of Selected Topics in Signal Processing 1(4), 606–617 (2007)

  14. Elhamifar, E., Vidal, R.: Sparse Subspace Clustering. In: IEEE International Conference on Computer Vision and Pattern Recognition (2009)


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Xu, J., Yang, G., Man, H., He, H. (2013). ℓ1 Graph Based on Sparse Coding for Feature Selection. In: Guo, C., Hou, Z.-G., Zeng, Z. (eds) Advances in Neural Networks – ISNN 2013. ISNN 2013. Lecture Notes in Computer Science, vol 7951. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39065-4_71

  • DOI: https://doi.org/10.1007/978-3-642-39065-4_71

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-39064-7

  • Online ISBN: 978-3-642-39065-4

  • eBook Packages: Computer Science, Computer Science (R0)
