Neural Computing and Applications

Volume 29, Issue 9, pp 533–551

On a new approach for Lagrangian support vector regression

  • S. Balasundaram
  • Gagandeep Benipal
Original Article


Abstract

In this paper, a simplification of the necessary and sufficient Karush–Kuhn–Tucker (KKT) optimality conditions for Lagrangian support vector regression in 2-norm is proposed, resulting in a naïve root-finding problem for a system of equations in m variables, where m is the number of training examples. Since the resulting system contains the nonsmooth "plus" function, two approaches are considered: (i) reformulate the problem as an equivalent absolute value equation problem and solve it by functional iterative and generalized Newton methods, and (ii) solve the original root-finding problem directly by the generalized Newton method. Proofs of convergence and pseudo-codes of the iterative methods are given. Numerical experiments on a number of synthetic and real-world benchmark datasets show similar generalization performance with much faster learning speed in comparison with support vector regression (SVR), and performance as good as that of least squares SVR, the unconstrained Lagrangian SVR proposed in Balasundaram et al. (Neural Netw 51:67–79, 2014), twin SVR, and the extreme learning machine. These results clearly indicate the effectiveness and suitability of the proposed problem formulation solved by functional iterative and generalized Newton methods.
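The generalized Newton iteration for absolute value equations that the abstract refers to (Mangasarian [18]) can be sketched as follows. This is an illustrative implementation of the generic AVE solver Ax − |x| = b, not the paper's SVR-specific formulation; each step linearizes |x| through its generalized Jacobian diag(sign(x)) and solves a linear system:

```python
import numpy as np

def generalized_newton_ave(A, b, tol=1e-10, max_iter=100):
    """Generalized Newton method for the absolute value equation
    A x - |x| = b (Mangasarian, Optim Lett 3:101-108, 2009).

    At each step the nonsmooth term |x| is linearized via the
    generalized Jacobian D(x) = diag(sign(x)), and the update solves
    (A - D(x_k)) x_{k+1} = b.
    """
    x = np.linalg.solve(A, b)  # start from the solution of A x = b
    for _ in range(max_iter):
        D = np.diag(np.sign(x))
        x_new = np.linalg.solve(A - D, b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

When the singular values of A all exceed 1, the AVE has a unique solution for every b, and in practice the iteration terminates in a handful of steps on well-conditioned systems.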


Keywords: Absolute value equation · Functional iterative method · Generalized Jacobian · Generalized Newton method · Support vector regression



Acknowledgements

The authors are very thankful to the referees for their useful comments. Mr. Gagandeep Benipal acknowledges the financial assistance of the Maulana Azad National Fellowship, Government of India.


References

  1. Balasundaram S, Gupta D, Kapil (2014) Lagrangian support vector regression via unconstrained convex minimization. Neural Netw 51:67–79
  2. Balasundaram S, Gupta D (2014) Training Lagrangian twin support vector regression via unconstrained convex minimization. Knowl Based Syst 59:85–96
  3. Balasundaram S, Kapil (2011) Finite Newton method for implicit Lagrangian support vector regression. Int J Knowl Based Intell Eng Syst 15:203–214
  4. Balasundaram S, Kapil (2010) On Lagrangian support vector regression. Expert Syst Appl 37:8784–8792
  5. Chen S, Wang M (2005) Seeking multi-threshold directly from support vectors for image segmentation. Neurocomputing 67:335–344
  6. Cristianini N, Shawe-Taylor J (2000) An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press, Cambridge
  7. Demsar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30
  8. Ding S, Yu J, Qi B (2014) An overview on twin support vector machines. Artif Intell Rev 42(2):245–252
  9. Fung G, Mangasarian OL (2004) A feature selection Newton method for support vector machine classification. Comput Optim Appl 28:185–202
  10. Golub GH, Van Loan CF (1996) Matrix computations, 3rd edn. The Johns Hopkins University Press, Baltimore
  11. Gretton A, Doucet A, Herbrich R, Rayner PJW, Scholkopf B (2001) Support vector regression for black-box system identification. In: Proceedings of the 11th IEEE workshop on statistical signal processing, pp 341–344
  12. Guyon I, Weston J, Barnhill S, Vapnik V (2002) Gene selection for cancer classification using support vector machines. Mach Learn 46:389–422
  13. Huang H, Ding S, Shi Z (2013) Primal least squares twin support vector regression. J Zhejiang Univ Sci C 14(9):722–732
  14. Huang G-B, Zhou H, Ding X, Zhang R (2012) Extreme learning machine for regression and multiclass classification. IEEE Trans Syst Man Cybern Part B Cybern 42:513–528
  15. Huang G-B, Zhu Q-Y, Siew C-K (2006) Extreme learning machine: theory and applications. Neurocomputing 70:489–501
  16. Jayadeva, Khemchandani R, Chandra S (2007) Twin support vector machines for pattern classification. IEEE Trans Pattern Anal Mach Intell 29(5):905–910
  17. Joachims T (1998) Text categorization with support vector machines: learning with many relevant features. In: Nedellec C, Rouveirol C (eds) European conference on machine learning No. 10, Chemnitz, Germany, pp 137–142
  18. Mangasarian OL (2009) A generalized Newton method for absolute value equations. Optim Lett 3:101–108
  19. Mangasarian OL, Musicant DR (2001) Lagrangian support vector machines. J Mach Learn Res 1:161–177
  20. Mangasarian OL, Musicant DR (2001) Active set support vector machine classification. In: Leen TK, Dietterich TG, Tresp V (eds) Advances in neural information processing systems, vol 13. MIT Press, Cambridge, pp 577–586
  21. Murphy PM, Aha DW (1992) UCI repository of machine learning databases. University of California, Irvine
  22. Musicant DR, Feinberg A (2004) Active set support vector regression. IEEE Trans Neural Netw 15(2):268–275
  23. Osuna E, Freund R, Girosi F (1997) Training support vector machines: an application to face detection. In: IEEE conference on computer vision and pattern recognition, pp 130–136
  24. Peng X (2010) TSVR: an efficient twin support vector machine for regression. Neural Netw 23(3):365–372
  25. Sjoberg J, Zhang Q, Ljung L, Benveniste A, Delyon B, Glorennec P, Hjalmarsson H, Juditsky A (1995) Nonlinear black-box modeling in system identification: a unified overview. Automatica 31:1691–1724
  26. Souza LGM, Barreto GA (2006) Nonlinear system identification using local ARX models based on the self-organizing map. Learn Nonlinear Models Rev Soc Brasil Redes Neurais (SBRN) 4(2):112–123
  27. Suykens JAK, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Process Lett 9(3):293–300
  28. Vapnik VN (2000) The nature of statistical learning theory, 2nd edn. Springer, New York

Copyright information

© The Natural Computing Applications Forum 2016

Authors and Affiliations

  1. School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India
