First and Second Order SMO Algorithms for LS-SVM Classifiers

Published in: Neural Processing Letters

Abstract

Least squares support vector machine (LS-SVM) classifiers have traditionally been trained with conjugate gradient algorithms. In this work, completing the study by Keerthi et al., we explore the applicability of the SMO algorithm to the LS-SVM problem, comparing First Order and Second Order working set selections and concentrating on the RBF kernel, the most common choice in practice. It turns out that, over the whole range of possible hyperparameter values, Second Order working set selection is altogether more convenient than First Order. In any case, whichever selection scheme is used, the number of kernel operations performed by SMO appears to scale quadratically with the number of patterns. Moreover, asymptotic convergence to the optimum is proved, and the rate of convergence is shown to be linear for both selections.
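The two selection rules compared in the abstract can be illustrated on the standard LS-SVM dual. The sketch below is illustrative only, not the authors' implementation: it assumes the usual formulation in which the kernel matrix is regularized as K + I/γ, works in the variables β_i = α_i y_i (so the equality constraint is simply Σβ_i = 0), and takes an exact analytic step along each selected pair. First Order selection picks the maximal violating pair of gradient entries; Second Order re-picks the second index to maximize the quadratic decrease of the objective, as in Fan, Chen and Lin (reference 10). Function and variable names here (`smo_lssvm`, `rbf_kernel`) are our own.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """RBF (Gaussian) kernel matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def smo_lssvm(K, y, gamma=10.0, order=2, tol=1e-8, max_iter=100000):
    """Illustrative SMO for the LS-SVM dual
        min (1/2) beta' Kt beta - y' beta   s.t.  sum(beta) = 0,
    with Kt = K + I/gamma, in the variables beta_i = alpha_i * y_i.
    order=1: maximal violating pair; order=2: second-order gain selection.
    """
    n = len(y)
    Kt = K + np.eye(n) / gamma        # regularized kernel matrix (positive definite)
    beta = np.zeros(n)                # feasible start: sum(beta) = 0
    grad = -np.asarray(y, float)      # gradient Kt @ beta - y at beta = 0
    for it in range(max_iter):
        i = int(np.argmin(grad))
        j = int(np.argmax(grad))      # first-order: maximal violating pair
        if grad[j] - grad[i] < tol:   # at the optimum all gradient entries coincide
            break
        if order == 2:                # second-order: re-pick j by objective decrease
            diff = grad - grad[i]     # candidate violations against index i
            eta = Kt[i, i] + np.diag(Kt) - 2.0 * Kt[i, :]
            gain = np.where(diff > 0, diff**2 / np.maximum(eta, 1e-12), 0.0)
            j = int(np.argmax(gain))
        eta = Kt[i, i] + Kt[j, j] - 2.0 * Kt[i, j]
        t = (grad[j] - grad[i]) / eta # exact minimizer along beta + t*(e_i - e_j)
        beta[i] += t
        beta[j] -= t                  # update preserves sum(beta) = 0
        grad += t * (Kt[:, i] - Kt[:, j])
    return beta, it
```

Because the LS-SVM dual has no box constraints, the step along each pair is unclipped, and the pair update reduces to a one-dimensional exact line search; the result can be checked against the direct solution of the LS-SVM linear system.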


References

  1. Suykens JAK, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Process Lett 9(3): 293–300

  2. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3): 273–297

  3. Keerthi SS, Shevade SK, Bhattacharyya C, Murthy KRK (2000) A fast iterative nearest point algorithm for support vector machine classifier design. IEEE Trans Neural Netw 11(1): 124–136

  4. Suykens JAK, Lukas L, Van Dooren P, De Moor B, Vandewalle J (1999) Least squares support vector machine classifiers: a large scale algorithm. In: Proceedings of the European conference on circuit theory and design (ECCTD), pp 839–842

  5. Keerthi SS, Shevade SK (2003) SMO algorithm for least-squares SVM formulations. Neural Comput 15(2): 487–507

  6. Shalev-Shwartz S, Singer Y, Srebro N (2007) Pegasos: primal estimated sub-gradient solver for SVM. In: Proceedings of the 24th international conference on machine learning (ICML), pp 807–814

  7. Joachims T (2006) Training linear SVMs in linear time. In: Proceedings of the 12th international conference on knowledge discovery and data mining (SIGKDD), pp 217–226

  8. Fan R-E, Chang K-W, Hsieh C-J, Wang X-R, Lin C-J (2008) LIBLINEAR: a library for large linear classification. J Mach Learn Res 9: 1871–1874

  9. Zhou T, Tao D, Wu X (2010) NESVM: a fast gradient method for support vector machines. In: Proceedings of the 12th international conference on data mining (ICDM)

  10. Fan R-E, Chen P-H, Lin C-J (2005) Working set selection using second order information for training support vector machines. J Mach Learn Res 6: 1889–1918

  11. Chang C-C, Lin C-J (2001) LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm

  12. Platt J-C (1999) Fast training of support vector machines using sequential minimal optimization. In: Advances in kernel methods: support vector learning. MIT Press, Cambridge, pp 185–208

  13. Joachims T (1999) Making large-scale support vector machine learning practical. In: Advances in kernel methods: support vector learning. MIT Press, Cambridge, pp 169–184

  14. Keerthi SS, Shevade SK, Bhattacharyya C, Murthy KRK (2001) Improvements to Platt’s SMO algorithm for SVM classifier design. Neural Comput 13(3): 637–649

  15. López J, Barbero Á, Dorronsoro JR (2008) On the equivalence of the SMO and MDM algorithms for SVM training. In: Lecture notes in computer science: machine learning and knowledge discovery in databases, vol 5211. Springer, New York, pp 288–300

  16. Lin C-J (2001) Linear convergence of a decomposition method for support vector machines. Technical report

  17. Chen P-H, Fan R-E, Lin C-J (2006) A study on SMO-type decomposition methods for support vector machines. IEEE Trans Neural Netw 17: 893–908

  18. Rätsch G (2000) Benchmark repository. http://ida.first.fhg.de/projects/bench/benchmarks.htm

  19. Van Gestel T, Suykens JAK, Baesens B, Viaene S, Vanthienen J, Dedene G, De Moor B, Vandewalle J (2004) Benchmarking least squares support vector machine classifiers. Mach Learn 54(1): 5–32

  20. Guo XC, Yang JH, Wu CG, Wang CY, Liang YC (2008) A novel LS-SVMs hyper-parameter selection based on particle swarm optimization. Neurocomputing 71(16–18): 3211–3215

  21. Barbero Á, López J, Dorronsoro JR (2009) Cycle-breaking acceleration of SVM training. Neurocomputing 72(7–9): 1398–1406

Author information

Correspondence to Johan A. K. Suykens.

Cite this article

López, J., Suykens, J.A.K. First and Second Order SMO Algorithms for LS-SVM Classifiers. Neural Process Lett 33, 31–44 (2011). https://doi.org/10.1007/s11063-010-9162-9
