A method to sparsify the solution of support vector regression

Article

Abstract

Although the solution of the support vector machine (SVM) is relatively sparse, it makes unnecessarily liberal use of basis functions, since the number of support vectors typically grows linearly with the size of the training set. In this paper, we present a simple post-processing method to sparsify the solution of support vector regression (SVR). The main idea is as follows: first, an SVR machine is trained on the full training set; then another SVR machine is trained on only a subset of the training set, with modified target values. This process is repeated iteratively several times. Experiments indicate that the proposed method greatly reduces the number of support vectors while maintaining the good generalization capacity of SVR.
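The abstract does not specify how the subset is chosen or how the targets are modified, but the iterative retraining scheme it outlines can be sketched as follows. This is a minimal illustration using scikit-learn's `SVR`, assuming the subset at each round is the current set of support vectors and the modified targets are the previous model's predictions; the paper's actual choices may differ.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic 1-D regression problem: noisy sinc function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(200)

# Step 1: train an SVR machine on the full training set.
model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
n_sv_initial = len(model.support_)

# Step 2 (iterated): retrain on a subset of the previous training set
# (here: its support vectors) with modified target values (here: the
# previous model's predictions). Each round can only shrink the pool
# of candidate support vectors.
X_cur = X
for _ in range(3):
    X_cur = X_cur[model.support_]       # keep only current support vectors
    y_cur = model.predict(X_cur)        # modified target values
    model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X_cur, y_cur)

print(n_sv_initial, len(model.support_))
```

Because each retraining set is the previous model's support vectors, the support-vector count is non-increasing across rounds; whether accuracy is preserved depends on the subset and target-modification rules, which is exactly what the paper's experiments evaluate.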

Keywords

Support vector regression (SVR) · Sparseness · Relevance vector machine (RVM) · Adaptive sparseness for supervised learning (ASSL) · KLASSO


Copyright information

© Springer-Verlag London Limited 2009

Authors and Affiliations

  1. School of Science, Xi'an University of Technology, Xi'an, China
  2. Institute for Information Science and System Science, Faculty of Science, Xi'an Jiaotong University, Xi'an, China
