Joint sparse representation and locality preserving projection for feature extraction

  • Original Article
  • Published:
International Journal of Machine Learning and Cybernetics


Traditional graph-based feature extraction methods use two separate procedures, graph learning and projection learning, to perform feature extraction. This makes the result highly dependent on the quality of the initial fixed graph, which may not be optimal for feature extraction. In this paper, we propose a novel unsupervised feature extraction method, joint sparse representation and locality preserving projection (JSRLPP), in which graph construction and feature extraction are carried out simultaneously. Specifically, we adaptively learn the similarity matrix by sparse representation and, at the same time, learn the projection matrix by preserving local structure. Compared with traditional feature extraction methods, our approach unifies graph learning and projection learning in a common framework and thus learns a graph better suited to feature extraction. Experiments on several public image data sets demonstrate the effectiveness of the proposed algorithm.
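To make the idea concrete, the pipeline the abstract describes can be sketched as an alternation between two steps: build a similarity graph by coding each sample sparsely over the others, then learn a locality preserving projection from that graph. This is a minimal illustrative sketch, not the authors' actual JSRLPP formulation or solver; the lasso penalty, the ISTA iterations, the symmetrization of the coefficient matrix, and the fixed two-pass alternation are all assumptions made for the example.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))  # n=20 samples (rows), d=50 features

def sparse_similarity(X, lam=0.1, n_iter=100):
    """Code each sample as a sparse combination of the others (lasso via ISTA)."""
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        D = X[idx].T                         # dictionary: other samples as columns
        x = X[i]
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
        s = np.zeros(n - 1)
        for _ in range(n_iter):
            g = D.T @ (D @ s - x)            # gradient of 0.5*||Ds - x||^2
            s = s - g / L
            s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)  # soft threshold
        S[i, idx] = s
    return (np.abs(S) + np.abs(S.T)) / 2     # symmetric similarity matrix

def lpp(X, W, k=2):
    """Locality preserving projection: smallest generalized eigenvectors of
    X^T L X p = lambda X^T D X p, where L = D - W is the graph Laplacian."""
    D = np.diag(W.sum(axis=1))
    Lap = D - W
    A = X.T @ Lap @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # regularize for stability
    _, vecs = eigh(A, B)                         # ascending eigenvalues
    return vecs[:, :k]

# Alternate: graph from the data, projection from the graph, then
# re-estimate the graph in the projected space.
W = sparse_similarity(X)
P = lpp(X, W, k=2)
for _ in range(2):
    W = sparse_similarity(X @ P)
    P = lpp(X, W, k=2)

Z = X @ P  # low-dimensional features
print(Z.shape)  # (20, 2)
```

In the paper's joint formulation the two subproblems share one objective rather than being interleaved heuristically as above, which is what removes the dependence on an initial fixed graph.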




Author information

Corresponding author

Correspondence to Xiaozhao Fang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work is supported in part by the National Natural Science Foundation of China under Grants 61702110, 61603100 and 61772141, by the Guangdong Provincial Natural Science Foundation under Grant 17ZK0422, by Guangdong Higher Education letter 2015[133], 2014[97], and by the Guangzhou Science and Technology Project under Grants 201508010067, 201604020145, 2016201604030034, 201604046017 and 201804010347.


Cite this article

Zhang, W., Kang, P., Fang, X. et al. Joint sparse representation and locality preserving projection for feature extraction. Int. J. Mach. Learn. & Cyber. 10, 1731–1745 (2019).
