
A flexible support vector machine for regression

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

In this paper, a novel regression algorithm, coined flexible support vector regression, is proposed. We first model the insensitive zone of classic support vector regression by its up- and down-bound functions, and then introduce a generalized parametric insensitive loss function (GPILF). Based on GPILF, we propose an optimization criterion under which the unknown regressor and its up- and down-bound functions are found simultaneously by solving a single quadratic programming problem. Experimental results on several publicly available benchmark data sets and on time series prediction tasks show the feasibility and effectiveness of the proposed method.
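To make the idea concrete, the following is a minimal sketch (not the paper's actual formulation) contrasting the classic ε-insensitive loss, whose zero-loss tube has constant width, with a parametric variant in the spirit of GPILF, where the zero-loss zone is bounded by sample-dependent up- and down-bound values. The function names and the particular form of the bounds are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def eps_insensitive_loss(y, f, eps=0.1):
    # Classic SVR loss: zero whenever the residual lies inside
    # the fixed-width tube |y - f| <= eps.
    return np.maximum(np.abs(y - f) - eps, 0.0)

def parametric_insensitive_loss(y, f, up, down):
    # Illustrative parametric variant: the zero-loss zone around the
    # regressor f is bounded above by f + up and below by f - down,
    # where `up` and `down` are nonnegative, sample-dependent bound
    # values rather than a single constant eps.
    above = np.maximum(y - (f + up), 0.0)    # penalty for exceeding the upper bound
    below = np.maximum((f - down) - y, 0.0)  # penalty for falling under the lower bound
    return above + below
```

With constant bounds `up = down = eps`, the parametric loss reduces to the classic ε-insensitive loss; letting the bounds vary with the input is what allows the insensitive zone to adapt its width to the data, as the paper's single-QP formulation does for the regressor and both bound functions jointly.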




Acknowledgments

The authors would like to thank the anonymous reviewers for their critical and constructive comments and suggestions. This work was supported by the Scientific Foundation of Jiangsu Province (BK2010339) and the Natural Science Fund for Colleges and Universities in Jiangsu Province (10KJD580001).

Author information


Corresponding author

Correspondence to Xiaobo Chen.


About this article

Cite this article

Chen, X., Yang, J. & Liang, J. A flexible support vector machine for regression. Neural Comput & Applic 21, 2005–2013 (2012). https://doi.org/10.1007/s00521-011-0623-5

