Sparse Least Squares Support Vector Machine for Function Estimation

  • Liang-zhi Gan
  • Hai-kuan Liu
  • You-xian Sun
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)

Abstract

A sparse least squares support vector machine (SLS-SVM) for regression is proposed to handle regression on large sample sets. The samples are mapped into a reproducing kernel Hilbert space (RKHS), where they span a subspace. A basis of this subspace is then identified, and every mapped sample can be represented as a linear combination of the basis vectors. The least squares support vector machine is therefore obtained by solving a small system of linear equations. A numerical example illustrates that the approach can fit nonlinear models to large data sets. Compared with the standard least squares support vector machine, this method yields a sparse solution without any pruning or surgery, and it is much faster because the final result is found by solving a small-scale system of equations.
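
The abstract only sketches the method at a high level, so the following is a minimal illustrative sketch in that spirit, not the authors' exact algorithm: a greedy approximate-linear-dependence test picks a small basis of samples whose feature-space images span (approximately) the same subspace as all samples, and a reduced, regularized linear system is then solved over those basis centres. The Gaussian kernel, the threshold `nu`, the regularization parameter `gamma`, and all function names are assumptions chosen for the example.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B (assumed kernel choice)."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def select_basis_ald(X, sigma=1.0, nu=1e-3):
    """Greedily keep samples whose feature-space images are not (nearly) linear
    combinations of the current basis, via an approximate-linear-dependence test."""
    basis = [0]
    for t in range(1, len(X)):
        Kbb = rbf_kernel(X[basis], X[basis], sigma)
        kt = rbf_kernel(X[basis], X[t:t + 1], sigma)            # (m, 1)
        coeff = np.linalg.solve(Kbb + 1e-10 * np.eye(len(basis)), kt)
        residual = rbf_kernel(X[t:t + 1], X[t:t + 1], sigma)[0, 0] - float(kt.T @ coeff)
        if residual > nu:                                        # not representable yet: add to basis
            basis.append(t)
    return np.array(basis)

def fit_sparse_lssvm(X, y, sigma=1.0, gamma=100.0, nu=1e-3):
    """Fit f(x) = sum_j alpha_j k(x, b_j) + b on all samples, with only the basis
    points b_j as kernel centres, so the final solve is a small linear system."""
    basis = select_basis_ald(X, sigma, nu)
    Phi = rbf_kernel(X, X[basis], sigma)                         # (N, m) reduced design matrix
    N, m = Phi.shape
    A = np.zeros((m + 1, m + 1))
    rhs = np.zeros(m + 1)
    # regularised normal equations for [alpha; b]
    A[:m, :m] = Phi.T @ Phi + np.eye(m) / gamma
    A[:m, m] = Phi.sum(axis=0)
    A[m, :m] = Phi.sum(axis=0)
    A[m, m] = N
    rhs[:m] = Phi.T @ y
    rhs[m] = y.sum()
    sol = np.linalg.solve(A, rhs)
    return X[basis], sol[:m], sol[m]

def predict(Xnew, centres, alpha, b, sigma=1.0):
    return rbf_kernel(Xnew, centres, sigma) @ alpha + b

if __name__ == "__main__":
    # toy regression problem: noisy sinc function with many samples but few needed basis points
    rng = np.random.default_rng(0)
    X = np.linspace(-3, 3, 500).reshape(-1, 1)
    y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(len(X))
    centres, alpha, b = fit_sparse_lssvm(X, y, sigma=0.5, gamma=100.0, nu=1e-4)
    print("basis size:", len(centres), "of", len(X), "samples")
    print("train RMSE:", np.sqrt(np.mean((predict(X, centres, alpha, b, 0.5) - y) ** 2)))
```

The key point the sketch illustrates is the claimed source of sparsity and speed: the linear system solved at the end has dimension equal to the basis size plus one, not the full sample count, and no post-hoc pruning of support values is performed.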

Keywords

Support Vector Machine, Feature Space, Function Estimation, Least Squares Support Vector Machine, Quadratic Programming Problem

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Liang-zhi Gan (1, 2)
  • Hai-kuan Liu (2)
  • You-xian Sun (1)
  1. National Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, China
  2. Electrical Engineering Department, Xuzhou Normal University, Xuzhou, China