
Robust SVM for Cost-Sensitive Learning

Published in Neural Processing Letters.

Abstract

Although the performance of the cost-sensitive support vector machine (CS-SVM) has been shown to approximate the cost-sensitive Bayes risk, previous CS-SVM methods still suffer from the influence of outlier samples and redundant features. Recently, a few studies have addressed these two issues separately using sparse theory. In this paper, we propose a new robust cost-sensitive support vector machine that solves both of them in a unified framework. To do this, we employ robust statistics and sparse theory, respectively, to take sample importance and feature importance into account, thereby avoiding the influence of outliers and redundant features. Furthermore, we propose a new optimization method to solve the primal problem of the proposed objective function. Experimental results on synthetic and real data sets show that the proposed method outperforms all comparison methods in terms of cost-sensitive classification.
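The abstract does not give the paper's exact objective, but the two ingredients it names, class-dependent misclassification costs and a sparsity-inducing regularizer for feature selection, can be illustrated with a minimal sketch. The code below is a hypothetical example, not the authors' method: it trains a linear CS-SVM by subgradient descent on a cost-weighted hinge loss with an l1 penalty standing in for the sparse feature-importance term (the paper additionally uses robust statistics to down-weight outliers, which this sketch omits). The function name `cs_svm_subgradient` and all hyperparameter defaults are illustrative assumptions.

```python
import numpy as np

def cs_svm_subgradient(X, y, c_pos=1.0, c_neg=1.0, lam=0.01,
                       lr=0.01, n_iter=500):
    """Linear cost-sensitive SVM fit by subgradient descent (illustrative).

    Minimizes  sum_i c_i * max(0, 1 - y_i (w.x_i + b)) + lam * ||w||_1,
    where c_i = c_pos for positive samples and c_neg for negatives.
    The l1 term is a simple stand-in for a sparsity (feature-selection)
    regularizer; it is not the regularizer used in the paper.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    costs = np.where(y > 0, c_pos, c_neg)
    for _ in range(n_iter):
        margins = y * (X @ w + b)
        active = margins < 1                       # samples violating the margin
        # Subgradient of the cost-weighted hinge loss plus l1 penalty.
        grad_w = -(costs[active] * y[active]) @ X[active] + lam * np.sign(w)
        grad_b = -np.sum(costs[active] * y[active])
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# Toy imbalanced problem: missing the rare positive class costs 10x more.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 1.0, (20, 2)),      # rare positives
               rng.normal(-2.0, 1.0, (200, 2))])   # common negatives
y = np.hstack([np.ones(20), -np.ones(200)])
w, b = cs_svm_subgradient(X, y, c_pos=10.0, c_neg=1.0)
print("positive-class recall:", np.mean(np.sign(X @ w + b)[:20] == 1))
```

Setting `c_pos` much larger than `c_neg` shifts the decision boundary toward the negative class, which is the basic mechanism by which cost-sensitive SVMs trade false negatives against false positives.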





Acknowledgements

This work was supported by the Key Program of the National Natural Science Foundation of China (Grant No. 61836016); the Natural Science Foundation of China (Grant No. 61876046); the Project of Guangxi Science and Technology (GuiKeAD17195062); the Guangxi Natural Science Foundation (Grant No. 2017GXNSFBA198221); the Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing; the Guangxi High Institutions Program of Introducing 100 High-Level Overseas Talents; and the Research Fund of Guangxi Key Lab of Multisource Information Mining & Security (18-A-01-01).

Author information

Corresponding author: Yangcai Xie.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Gan, J., Li, J. & Xie, Y. Robust SVM for Cost-Sensitive Learning. Neural Process Lett 54, 2737–2758 (2022). https://doi.org/10.1007/s11063-021-10480-3
