A Greedy Training Algorithm for Sparse Least-Squares Support Vector Machines

  • Gavin C. Cawley
  • Nicola L. C. Talbot
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2415)

Abstract

Suykens et al. [1] describe a form of kernel ridge regression known as the least-squares support vector machine (LS-SVM). In this paper, we present a simple but efficient greedy algorithm for constructing near-optimal sparse approximations of least-squares support vector machines, in which at each iteration the training pattern that minimises the regularised empirical risk is introduced into the kernel expansion. The proposed method outperforms the pruning technique described by Suykens et al. [1] on the motorcycle and Boston housing datasets.
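
As a rough illustration of the selection procedure described above, the following minimal Python sketch (not the authors' implementation) performs greedy forward selection: at each iteration every unselected training pattern is trialled as an additional basis vector, the regularised least-squares problem is re-solved for the enlarged basis, and the pattern yielding the lowest regularised empirical risk is kept. The RBF kernel, the regularisation parameter mu, the beta' K beta penalty, and the exhaustive sweep over candidates are assumptions made here for concreteness; a practical implementation would use cheaper incremental updates of the linear system.

    import numpy as np

    # Illustrative sketch only; the kernel and hyperparameters mu, gamma are assumed.

    def rbf_kernel(A, B, gamma=1.0):
        # Gaussian RBF kernel matrix between the rows of A and the rows of B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def fit_basis(K_nS, K_SS, y, mu):
        # For a fixed basis S, minimise ||y - K_nS @ beta - b||^2 + mu * beta' K_SS beta
        # and return the fitted coefficients and the regularised empirical risk.
        n, m = K_nS.shape
        Phi = np.hstack([K_nS, np.ones((n, 1))])   # design matrix with a bias column
        R = np.zeros((m + 1, m + 1))
        R[:m, :m] = K_SS                           # penalise beta only, not the bias b
        w = np.linalg.solve(Phi.T @ Phi + mu * R, Phi.T @ y)
        beta, b = w[:-1], w[-1]
        resid = y - Phi @ w
        risk = resid @ resid + mu * beta @ K_SS @ beta
        return beta, b, risk

    def greedy_sparse_lssvm(X, y, n_basis, mu=0.01, gamma=1.0):
        # Forward selection: at each iteration add the training pattern whose
        # inclusion in the kernel expansion minimises the regularised risk.
        K = rbf_kernel(X, X, gamma)
        basis = []
        for _ in range(n_basis):
            best_j, best_risk = None, np.inf
            for j in range(len(y)):
                if j in basis:
                    continue
                S = basis + [j]
                _, _, risk = fit_basis(K[:, S], K[np.ix_(S, S)], y, mu)
                if risk < best_risk:
                    best_j, best_risk = j, risk
            basis.append(best_j)
        beta, b, _ = fit_basis(K[:, basis], K[np.ix_(basis, basis)], y, mu)
        return basis, beta, b

A sparse model with m basis vectors would then be obtained via greedy_sparse_lssvm(X, y, n_basis=m), with predictions at new inputs X_new given by rbf_kernel(X_new, X[basis], gamma) @ beta + b.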

Keywords

Support Vector Machine, Ridge Regression, Training Pattern, Sparse Approximation, Kernel Ridge Regression

References

  [1] J. A. K. Suykens, J. De Brabanter, L. Lukas, and J. Vandewalle. Weighted least squares support vector machines: robustness and sparse approximation. Neurocomputing, 2001.
  [2] A. E. Hoerl and R. W. Kennard. Ridge regression: biased estimation for nonorthogonal problems. Technometrics, 12(1):55–67, 1970.
  [3] A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-Posed Problems. John Wiley, New York, 1977.
  [4] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1–58, 1992.
  [5] C. Saunders, A. Gammerman, and V. Vovk. Ridge regression learning algorithm in dual variables. In Proceedings of the 15th International Conference on Machine Learning, pages 515–521, Madison, WI, July 1998.
  [6] J. Suykens, L. Lukas, and J. Vandewalle. Sparse approximation using least-squares support vector machines. In Proceedings of the IEEE International Symposium on Circuits and Systems, pages II.757–II.760, Geneva, Switzerland, May 2000.
  [7] G. S. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33:82–95, 1971.
  [8] B. W. Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve fitting. Journal of the Royal Statistical Society, Series B, 47(1):1–52, 1985.
  [9] D. Harrison and D. L. Rubinfeld. Hedonic prices and the demand for clean air. Journal of Environmental Economics and Management, 5:81–102, 1978.

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Gavin C. Cawley (1)
  • Nicola L. C. Talbot (1)
  1. School of Information Systems, University of East Anglia, Norwich, UK