Applied Intelligence, Volume 46, Issue 1, pp 124–134

A new approach for training Lagrangian twin support vector machine via unconstrained convex minimization

  • S. Balasundaram
  • Deepak Gupta
  • Subhash Chandra Prasad


In this paper, a novel unconstrained convex minimization formulation of the Lagrangian dual of the recently introduced twin support vector machine (TWSVM) is proposed, in a simpler form, for constructing binary classifiers. Since the objective functions of the modified minimization problems contain the non-smooth ‘plus’ function, we solve them by the Newton iterative method, either by considering their generalized Hessian matrices or by replacing the ‘plus’ function with a smooth approximation function. Numerical experiments were performed on a number of real-world benchmark data sets. The computational results clearly illustrate the effectiveness and applicability of the proposed approach, as comparable or better generalization performance with faster learning speed is obtained in comparison with SVM, least squares TWSVM (LS-TWSVM) and TWSVM.
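The two ingredients named in the abstract can be sketched concretely. The ‘plus’ function (x)₊ = max(x, 0) is non-smooth at zero, so one line of attack uses its generalized Hessian (a diagonal 0/1 indicator matrix) inside a finite Newton iteration; the other replaces (x)₊ by the smooth approximation p(x, α) = x + (1/α) log(1 + e^(−αx)), which tends to (x)₊ as α → ∞. The sketch below is illustrative only, not the authors' exact TWSVM formulation: it applies the generalized-Hessian Newton step to a simple unconstrained piecewise-quadratic convex problem f(u) = ½‖u‖² + (c/2)‖(e − Au)₊‖² of the same structural type; the function names and the toy objective are assumptions for illustration.

```python
import numpy as np

def plus(x):
    # non-smooth 'plus' function: (x)_+ = max(x, 0)
    return np.maximum(x, 0.0)

def smooth_plus(x, alpha=5.0):
    # smooth approximation p(x, alpha) = x + (1/alpha)*log(1 + exp(-alpha*x)),
    # written via logaddexp for numerical stability; -> (x)_+ as alpha -> inf
    return np.logaddexp(0.0, alpha * x) / alpha

def newton_generalized_hessian(A, e, c=1.0, tol=1e-8, max_iter=50):
    """Minimize f(u) = 0.5*||u||^2 + (c/2)*||(e - A u)_+||^2 by a finite
    Newton method, using the generalized Hessian of the 'plus' term."""
    m, n = A.shape
    u = np.zeros(n)
    for _ in range(max_iter):
        r = e - A @ u                        # residuals e - A u
        grad = u - c * (A.T @ plus(r))       # gradient of f at u
        if np.linalg.norm(grad) < tol:
            break
        D = np.diag((r > 0).astype(float))   # generalized Hessian indicator
        H = np.eye(n) + c * A.T @ D @ A      # positive definite Newton matrix
        u -= np.linalg.solve(H, grad)        # Newton step
    return u
```

Because f is piecewise quadratic and strongly convex, the Newton matrix H is always positive definite and the iteration typically terminates in a handful of steps; the smoothing route instead minimizes the everywhere-differentiable surrogate obtained by substituting `smooth_plus` for `plus`.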


Keywords: Generalized Hessian approach; smooth approximation formulation; twin support vector machine



The authors are thankful to the anonymous reviewers for their comments.



Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • S. Balasundaram (1)
  • Deepak Gupta (1)
  • Subhash Chandra Prasad (1)

  1. School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India
