
A two-stage sparse SVM training

  • Ziqiang Li
  • Mingtian Zhou
  • Hao Lin
  • Haibo Pu
Original Article

Abstract

A small number of support vectors is a key factor in enabling an SVM to handle very large scale problems quickly. This paper proposes fitting each class of data with a plane through a new model that captures separability information between classes and can be solved by fast core-set methods. Training on the core sets of the fitting planes then yields a very sparse SVM classifier. The computational complexity of the proposed algorithm is upper bounded by \( O(1/\varepsilon ) \). Experimental results show that the new algorithm trains faster on average than both CVM and SVMperf, with comparable generalization performance.
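
The two-stage pipeline summarized above can be pictured with a short sketch: stage one extracts a small core set from each class via its fitting plane, and stage two trains a standard SVM on the union of those core sets, so the number of support vectors is bounded by the core-set sizes. The Python sketch below is illustrative only; the least-squares plane, the greedy (1+eps)-style selection rule, and the parameters eps and max_core are simplifying assumptions, not the paper's model or solver.

    import numpy as np
    from sklearn.svm import SVC


    def fit_plane(P):
        """Least-squares fitting plane w^T x = b for the points in P."""
        c = P.mean(axis=0)
        _, _, Vt = np.linalg.svd(P - c)
        w = Vt[-1]                       # direction of smallest spread
        return w, float(w @ c)


    def core_set_of_class(X, eps=0.05, max_core=200):
        """Greedily grow a core set whose fitting plane (1+eps)-covers the class."""
        d_to_mean = np.linalg.norm(X - X.mean(axis=0), axis=1)
        core = [int(np.argmin(d_to_mean)), int(np.argmax(d_to_mean))]
        while len(core) < max_core:
            w, b = fit_plane(X[core])
            resid = np.abs(X @ w - b)    # distance of every point to the plane
            far = int(np.argmax(resid))
            # Stop when no point is much farther from the plane than the core is.
            if resid[far] <= (1.0 + eps) * resid[core].max():
                break
            core.append(far)
        return np.array(core)


    def two_stage_sparse_svm(X, y, **svm_kwargs):
        """Stage 1: one core set per class; stage 2: SVM on the core sets only."""
        keep = []
        for c in np.unique(y):
            members = np.flatnonzero(y == c)
            keep.append(members[core_set_of_class(X[members])])
        keep = np.concatenate(keep)
        clf = SVC(**svm_kwargs).fit(X[keep], y[keep])
        return clf, keep

Called as, for example, clf, kept = two_stage_sparse_svm(X_train, y_train, C=10.0), the classifier is trained on at most len(kept) points, so its support vector count (and hence prediction cost) is bounded by the core-set sizes rather than by the full training set.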

Keywords

Fitting-plane · Sparsity · Core set · SVM

Acknowledgments

This work was supported by the Scientific Research Fund of the Sichuan Provincial Education Department under Grant No. 12ZA112 and by the National Natural Science Foundation of China (No. 61202256).

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. School of Information and Engineering, Sichuan Agricultural University, Yaan, People’s Republic of China
  2. School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, People’s Republic of China
  3. School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, People’s Republic of China
