
On Lagrangian twin support vector regression

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

In this paper, a simple and linearly convergent Lagrangian support vector machine algorithm is proposed for the dual of the twin support vector regression (TSVR). Although the algorithm requires matrix inverses at the outset, it is shown that each can be obtained by subtracting from the identity matrix a scalar multiple of the inverse of a positive semi-definite matrix that arises in the original TSVR formulation. The algorithm is easy to implement and requires no optimization packages. To demonstrate its effectiveness, experiments were performed on well-known synthetic and real-world datasets. The proposed method achieves similar or better generalization than standard support vector regression and TSVR in less training time, which clearly demonstrates its suitability and applicability.
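To convey the flavor of the approach, the following is a minimal, hypothetical NumPy sketch, not the paper's implementation: a Lagrangian fixed-point iteration of the kind introduced by Mangasarian and Musicant, applied to a toy nonnegativity-constrained quadratic program min_u ½uᵀQu − rᵀu with Q = GGᵀ + I/C, together with a Sherman–Morrison–Woodbury identity of the sort the abstract alludes to, expressing the required inverse through the inverse of a smaller positive-definite matrix. The matrix G, the parameter C, and all data are illustrative assumptions.

```python
import numpy as np

def smw_inverse(G, C):
    """Invert Q = G G' + I/C via the Sherman-Morrison-Woodbury identity
       Q^{-1} = C (I - G (G'G + I/C)^{-1} G'),
    so that only the smaller positive-definite matrix G'G + I/C is inverted."""
    m, n = G.shape
    small = np.linalg.inv(G.T @ G + np.eye(n) / C)
    return C * (np.eye(m) - G @ small @ G.T)

def lagrangian_qp(Q, Q_inv, r, tol=1e-12, max_iter=50000):
    """Solve  min_u 0.5 u'Qu - r'u  s.t.  u >= 0  by the Lagrangian
    fixed-point iteration  u <- Q^{-1}(r + ((Qu - r) - alpha*u)_+),
    which converges linearly when 0 < alpha < 2 * lambda_min(Q)."""
    alpha = 1.9 * np.linalg.eigvalsh(Q)[0]   # eigvalsh is ascending
    u = np.maximum(Q_inv @ r, 0.0)           # warm start
    for _ in range(max_iter):
        u_new = Q_inv @ (r + np.maximum(Q @ u - r - alpha * u, 0.0))
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u

# Hypothetical toy problem shaped like one TSVR dual subproblem
rng = np.random.default_rng(0)
G = rng.standard_normal((6, 3))
C = 10.0
Q = G @ G.T + np.eye(6) / C
Q_inv = smw_inverse(G, C)
r = rng.standard_normal(6)
u = lagrangian_qp(Q, Q_inv, r)

grad = Q @ u - r                                  # KKT residual
assert np.allclose(Q_inv @ Q, np.eye(6))          # SMW identity holds
assert np.all(u >= -1e-7) and np.all(grad >= -1e-6)   # feasibility
assert abs(u @ grad) < 1e-5                       # complementarity
```

Note that the iteration needs only matrix-vector products plus one inverse computed up front, which is what makes the scheme implementable without an external QP solver; the actual TSVR dual additionally carries box constraints, which the paper handles within the same framework.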

Fig. 1
Fig. 2



Acknowledgments

The authors are extremely thankful to the learned referees for their critical and constructive comments, which greatly improved the earlier version of the paper. Mr. Tanveer acknowledges the financial support provided as a scholarship by the Council of Scientific and Industrial Research, India.

Author information


Corresponding author

Correspondence to S. Balasundaram.


About this article

Cite this article

Balasundaram, S., Tanveer, M. On Lagrangian twin support vector regression. Neural Comput & Applic 22 (Suppl 1), 257–267 (2013). https://doi.org/10.1007/s00521-012-0971-9

