Exclusive lasso-based k-nearest-neighbor classification

Abstract

Conventionally, k-nearest-neighbor (kNN) classification is implemented with Euclidean distance-based measures. Such measures capture only one-to-one similarity relationships and thus lose the connections among different samples. As a strategy to alleviate this issue, the coefficients coded by sparse representation (SR) have also been employed as similarity gauges for nearest-neighbor classification. Although SR coefficients enjoy a remarkably discriminative one-to-many relationship, SR carries out variable selection at the individual level, so any inherent group structure is ignored. In order to make the most of the information implied in the group structure, this paper employs the exclusive lasso (EL) strategy to perform the similarity evaluation in two novel nearest-neighbor classification methods. Experimental results on both benchmark data sets and the face recognition problem demonstrate that the EL-based kNN method outperforms certain state-of-the-art classification techniques and existing representation-based nearest-neighbor approaches, in terms of both the size of feature reduction and the classification accuracy.
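To make the mechanism behind the abstract concrete: the exclusive lasso replaces the plain \(\ell_1\) penalty of sparse representation with the squared within-group \(\ell_1\) norm, \(\frac{\lambda}{2}\sum_{g}\lVert \beta_{g}\rVert_{1}^{2}\), so coefficients compete within each group (here, each class) rather than across the whole dictionary. The sketch below illustrates one way such EL coefficients could serve as kNN similarities; it is a minimal illustration, not the authors' implementation. The plain coordinate-descent solver, the function names, and the default values of \(\lambda\) and k are assumptions made for this example.

```python
import numpy as np

def exclusive_lasso(X, y, groups, lam=0.1, n_iter=200, tol=1e-6):
    """Coordinate descent for
        min_b  0.5*||y - X b||^2 + 0.5*lam*sum_g ||b_g||_1^2,
    where column j of X (a training sample) belongs to group groups[j].
    Hypothetical helper; the update is the standard soft-thresholding
    step for the exclusive lasso penalty."""
    p = X.shape[1]
    beta = np.zeros(p)
    col_sq = np.sum(X ** 2, axis=0)   # ||x_j||^2 for each column
    resid = y.copy()                  # current residual y - X @ beta
    for _ in range(n_iter):
        max_step = 0.0
        for j in range(p):
            in_g = groups == groups[j]
            # l1 mass of the other coefficients in j's group
            s_g = np.abs(beta[in_g]).sum() - abs(beta[j])
            # partial correlation x_j' (y - X_{-j} b_{-j})
            rho = X[:, j] @ resid + col_sq[j] * beta[j]
            # soft-threshold at lam * s_g, then rescale
            new = np.sign(rho) * max(abs(rho) - lam * s_g, 0.0) / (col_sq[j] + lam)
            if new != beta[j]:
                resid += X[:, j] * (beta[j] - new)
                max_step = max(max_step, abs(new - beta[j]))
                beta[j] = new
        if max_step < tol:
            break
    return beta

def el_knn_predict(X_train, y_train, x_test, k=5, lam=0.1):
    """EL-based kNN sketch: code x_test over the training dictionary
    (columns of X_train are training samples, grouped by class label),
    treat |beta_i| as the similarity to training sample i, and take a
    majority vote over the k most similar samples."""
    beta = exclusive_lasso(X_train, x_test, y_train, lam=lam)
    neighbors = np.argsort(np.abs(beta))[-k:]
    labels, counts = np.unique(y_train[neighbors], return_counts=True)
    return labels[np.argmax(counts)]
```

Because the squared \(\ell_1\) penalty induces sparsity within each group while tending to retain a representative from every group, the selected neighbors are not dominated by a single over-represented class; this inter-class competition is the group-structure effect that a one-to-one Euclidean measure cannot express.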

Acknowledgements

This work was supported in part by the Innovation Support Plan for Dalian High-level Talents (No. 2018RQ70) and in part by two awards under the Sêr Cymru II COFUND Fellowship scheme, UK. The authors are grateful to the anonymous reviewers for their constructive comments, which have helped improve this work significantly.

Author information

Corresponding author

Correspondence to Yanpeng Qu.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Qiu, L., Qu, Y., Shang, C. et al. Exclusive lasso-based k-nearest-neighbor classification. Neural Comput & Applic 33, 14247–14261 (2021). https://doi.org/10.1007/s00521-021-06069-5

Keywords

  • Exclusive lasso
  • Sparse coefficient
  • kNN
  • Classification