
Large-margin Distribution Machine-based regression

  • Original Article
  • Neural Computing and Applications

Abstract

This paper presents an efficient and robust Large-margin Distribution Machine formulation for regression. The proposed model, termed the ‘Large-margin Distribution Machine-based Regression’ (LDMR) model, is in the spirit of the Large-margin Distribution Machine (LDM) classification model (Zhang and Zhou, in: Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining, ACM, 2014). The LDM model optimizes the margin distribution instead of maximizing only a single (minimum) margin, as is done in the traditional SVM. The optimization problem of the LDMR model is derived mathematically from the optimization problem of the LDM model using a result of Bi and Bennett (Neurocomputing 55(1):79–108, 2003). The resulting LDMR formulation minimizes the \(\epsilon\)-insensitive loss function and the quadratic loss function simultaneously. Further, the successive over-relaxation technique (Mangasarian and Musicant, IEEE Trans Neural Netw 10(5):1032–1037, 1999) is applied to speed up the training procedure of the proposed LDMR model. Experimental results on artificial datasets, UCI datasets and time-series financial datasets show that the proposed LDMR model has better generalization ability than other existing models and is less sensitive to the presence of outliers.
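
To make the combined loss concrete, the following display is a minimal sketch of an LDMR-style primal objective; the trade-off weights \(\lambda\) and \(C\), the bias term \(b\), the feature map \(\phi\), and the exact way the quadratic and \(\epsilon\)-insensitive penalties are weighted are illustrative assumptions rather than the authors' exact formulation:

\[
\min_{w,\,b}\;\; \frac{1}{2}\lVert w\rVert^{2}
\;+\; \frac{\lambda}{2}\sum_{i=1}^{m}\bigl(y_i - w^{\top}\phi(x_i) - b\bigr)^{2}
\;+\; C\sum_{i=1}^{m}\max\bigl(0,\;\lvert y_i - w^{\top}\phi(x_i) - b\rvert - \epsilon\bigr)
\]

Here the quadratic term penalizes every residual (playing the role of the margin-distribution term inherited from LDM), while the \(\epsilon\)-insensitive term ignores residuals smaller than \(\epsilon\), as in standard support vector regression.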

References

  1. Cortes C, Vapnik V (1995) Support vector networks. Mach Learn 20(3):273–297

  2. Burges JC (1998) A tutorial on support vector machines for pattern recognition. Data Min Knowl Discov 2(2):121–167

  3. Cherkassky V, Mulier F (2007) Learning from data: concepts, theory and methods. Wiley, New York

  4. Vapnik V (1998) Statistical learning theory, vol 1. Wiley, New York

  5. Osuna E, Freund R, Girosi F (1997) Training support vector machines: an application to face detection. In: Proceedings of IEEE computer vision and pattern recognition, San Juan, Puerto Rico, pp 130–136

  6. Joachims T (1998) Text categorization with support vector machines: learning with many relevant features. In: European conference on machine learning. Springer, Berlin

  7. Schölkopf B, Tsuda K, Vert JP (2004) Kernel methods in computational biology. MIT Press, Cambridge

  8. Lal TN, Schroder M, Hinterberger T, Weston J, Bogdan M, Birbaumer N, Scholkopf B (2004) Support vector channel selection in BCI. IEEE Trans Biomed Eng 51(6):1003–1010

  9. Bradley P, Mangasarian OL (2000) Massive data discrimination via linear support vector machines. Optim Methods Softw 13(1):1–10

  10. Freund Y, Schapire RE (1995) A decision-theoretic generalization of on-line learning and an application to boosting. In: Proceedings of the 2nd European conference on computational learning theory, Barcelona, Spain, pp 23–37

  11. Zhou ZH (2012) Ensemble methods: foundations and algorithms. CRC Press, Boca Raton

  12. Breiman L (1999) Prediction games and arcing classifiers. Neural Comput 11(7):1493–1517

  13. Schapire RE, Freund Y, Bartlett PL, Lee WS (1998) Boosting the margin: a new explanation for the effectiveness of voting methods. Ann Stat 26(5):1651–1686

  14. Reyzin L, Schapire RE (2006) How boosting the margin can also boost classifier complexity. In: Proceedings of 23rd international conference on machine learning, Pittsburgh, PA, pp 753–760

  15. Wang L, Sugiyama M, Yang C, Zhou ZH, Feng J (2008) On the margin explanation of boosting algorithms. In: Proceedings of the 21st annual conference on learning theory, Helsinki, Finland, pp 479–490

  16. Gao W, Zhou ZH (2013) On the doubt about margin explanation of boosting. Artif Intell 199–200:22–44

  17. Zhang T, Zhou ZH (2014) Large margin distribution machine. In: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM

  18. Vapnik V, Golowich SE, Smola AJ (1997) Support vector method for function approximation, regression estimation and signal processing. In: Mozer M, Jordan M, Petsche T (eds) Advances in neural information processing systems. MIT Press, Cambridge, pp 281–287

  19. Drucker H, Burges CJ, Kaufman L, Smola AJ, Vapnik V (1997) Support vector regression machines. In: Mozer MC, Jordan MI, Petsche T (eds) Advances in neural information processing systems. MIT Press, Cambridge, pp 155–161

  20. Bi J, Bennett KP (2003) A geometric approach to support vector regression. Neurocomputing 55(1):79–108

  21. Suykens JAK, Lukas L, van Dooren P, De Moor B, Vandewalle J (1999) Least squares support vector machine classifiers: a large scale algorithm. In: Proceedings of European conference of circuit theory design, pp 839–842

  22. Suykens JAK, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Process Lett 9(3):293–300

  23. Shao YH, Zhang C, Yang Z, Deng N (2013) An \(\epsilon \)-twin support vector machine for regression. Neural Comput Appl 23(1):175–185

  24. Tanveer M, Mangal M, Ahmad I, Shao YH (2016) One norm linear programming support vector regression. Neurocomputing 173:1508–1518

  25. Mangasarian OL, Musicant DR (1999) Successive overrelaxation for support vector machines. IEEE Trans Neural Netw 10(5):1032–1037

  26. Luo ZQ, Tseng P (1993) Error bounds and convergence analysis of feasible descent methods: a general approach. Ann Oper Res 46(1):157–178

  27. Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol (TIST) 2(3):27

  28. Blake CL, Merz CJ (1998) UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html

  29. Huang X, Shi L, Suykens JA (2014) Support vector machine classifier with pinball loss. IEEE Trans Pattern Anal Mach Intell 36(5):984–997

  30. Hsu CW, Lin CJ (2002) A comparison of methods for multiclass support vector machines. IEEE Trans Neural Netw 13:415–425

  31. Duda RO, Hart PR, Stork DG (2001) Pattern classification, 2nd edn. Wiley, Hoboken

  32. Kohavi R (1995) A study of cross-validation and bootstrap for accuracy estimation and model selection. In: IJCAI, vol 14, no 2

Acknowledgements

We would like to thank the learned referees for their valuable comments and suggestions, which have substantially improved the content and presentation of the manuscript. We would also like to acknowledge the Ministry of Electronics and Information Technology, Government of India, which has funded this work under the Visvesvaraya Ph.D. Scheme for Electronics and IT, Order No. Phd-MLA/4(42)/2015-16.

Author information

Corresponding author

Correspondence to Reshma Rastogi.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Rastogi, R., Anand, P. & Chandra, S. Large-margin Distribution Machine-based regression. Neural Comput & Applic 32, 3633–3648 (2020). https://doi.org/10.1007/s00521-018-3921-3
