
Double sparse-representation feature selection algorithm for classification

Published in: Multimedia Tools and Applications

Abstract

Since large amounts of unlabeled, high-dimensional data must be preprocessed, unsupervised learning plays an increasingly important role in machine learning. This paper proposes a novel unsupervised feature selection algorithm that selects informative features from unlabeled data by combining two sparse-representation regularizers with a self-representation loss function in a unified framework. Specifically, the self-representation loss function represents each feature by the remaining features so as to minimize the reconstruction error, while an l2,1-norm regularization term and an l1-norm regularization term are imposed simultaneously to make the coefficient matrix sparse, filtering out redundant and irrelevant features: the l2,1-norm regularizer enforces group (row) sparsity, whereas the l1-norm regularizer enforces element-wise sparsity. Using both sparse-representation terms together allows representative features to be selected more accurately. Finally, the reduced data are fed into a support vector machine (SVM), whose classification accuracy serves as the main criterion for validating the algorithm's performance. Extensive experiments on synthetic and real-world datasets show that the proposed method outperforms widely used methods such as PCA and LPP.
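The objective described in the abstract can be sketched as minimizing ||X − XW||_F² + α||W||₂,₁ + β||W||₁, where each column of W reconstructs one feature from the others, and features are then ranked by the row norms of W. The following is a minimal illustrative sketch via proximal gradient descent, not the paper's actual solver; the function name, step-size choice, and zero-diagonal constraint are assumptions made for illustration.

```python
import numpy as np

def double_sparse_fs(X, alpha=0.1, beta=0.1, n_iter=500):
    """Rank features by self-representation with l2,1 + l1 sparsity (a sketch).

    Approximately minimizes ||X - XW||_F^2 + alpha*||W||_{2,1} + beta*||W||_1
    by proximal gradient descent. Rows of W with large l2-norm mark
    features that help reconstruct many other features.
    """
    n, d = X.shape
    W = np.zeros((d, d))
    # 1/L step size, L = Lipschitz constant of the smooth term's gradient
    L = 2.0 * np.linalg.norm(X, 2) ** 2 + 1e-12
    step = 1.0 / L
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ W - X)           # gradient of ||X - XW||_F^2
        W = W - step * grad
        # prox of beta*||.||_1: element-wise soft-thresholding (element sparsity)
        W = np.sign(W) * np.maximum(np.abs(W) - step * beta, 0.0)
        # prox of alpha*||.||_{2,1}: row-wise shrinkage (group sparsity)
        row_norms = np.linalg.norm(W, axis=1, keepdims=True)
        W = np.maximum(1.0 - step * alpha / (row_norms + 1e-12), 0.0) * W
        np.fill_diagonal(W, 0.0)                 # each feature uses only the others
    scores = np.linalg.norm(W, axis=1)           # row l2-norms score the features
    return np.argsort(scores)[::-1]              # indices, most informative first
```

A typical use, consistent with the pipeline in the abstract, would be to keep the top-k ranked columns of X and train an SVM on the reduced data (e.g. with scikit-learn's `SVC`).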




Acknowledgments

This work was supported in part by the China “1000-Plan” National Distinguished Professorship; the National Natural Science Foundation of China (Grants No. 61263035, 61573270 and 61672177); the China 973 Program (Grant No. 2013CB329404); the China Key Research Program (Grant No. 2016YFB1000905); the Guangxi Natural Science Foundation (Grant No. 2015GXNSFCB139011); the Innovation Project of Guangxi Graduate Education (Grant No. YCSZ2016046); the Guangxi High Institutions’ Program of Introducing 100 High-Level Overseas Talents; the Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing; and the Guangxi “Bagui” Teams for Innovation and Research.

Author information


Corresponding author

Correspondence to Xuejun Zhang.

Additional information

http://archive.ics.uci.edu/ml/; http://featureselection.asu.edu/old/datasets.php


About this article


Cite this article

Zhu, Y., Zhang, X., Wen, G. et al. Double sparse-representation feature selection algorithm for classification. Multimed Tools Appl 76, 17525–17539 (2017). https://doi.org/10.1007/s11042-016-4121-8

