A Greedy Training Algorithm for Sparse Least-Squares Support Vector Machines
Suykens et al. describe a form of kernel ridge regression known as the least-squares support vector machine (LS-SVM). In this paper, we present a simple but efficient greedy algorithm for constructing near-optimal sparse approximations of least-squares support vector machines, in which at each iteration the training pattern minimising the regularised empirical risk is introduced into the kernel expansion. The proposed method demonstrates superior performance when compared with the pruning technique described by Suykens et al. on the motorcycle and Boston housing datasets.
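The greedy selection step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' reference implementation: the kernel width, regularisation parameter, and the use of an unbiased reduced-expansion model are assumptions made for the sketch. At each iteration, every remaining training pattern is trialled as a new basis vector, the reduced regularised least-squares problem is re-solved, and the pattern yielding the lowest regularised empirical risk is kept.

```python
import numpy as np

def rbf_kernel(A, B, width=1.0):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def greedy_sparse_lssvm(X, y, n_basis=8, gamma=1e-3, width=1.0):
    """Greedy forward selection of basis vectors for a sparse
    LS-SVM-style kernel model (illustrative sketch only).

    At each iteration, the candidate training pattern whose
    inclusion in the kernel expansion yields the lowest
    regularised empirical risk is added to the basis set S.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, width)        # full n x n kernel matrix
    S, alpha = [], np.zeros(0)
    for _ in range(n_basis):
        best = (np.inf, None, None)
        for j in range(n):
            if j in S:
                continue
            T = S + [j]
            Kt = K[:, T]               # n x |T| design matrix
            Ktt = K[np.ix_(T, T)]      # |T| x |T| regulariser block
            # reduced problem: minimise ||Kt a - y||^2 + gamma a'Ktt a
            a = np.linalg.solve(Kt.T @ Kt + gamma * Ktt, Kt.T @ y)
            r = Kt @ a - y
            risk = r @ r + gamma * a @ Ktt @ a
            if risk < best[0]:
                best = (risk, j, a)
        _, j, alpha = best
        S.append(j)
    return S, alpha

# toy usage: fit a noisy sine curve with an 8-centre sparse model
rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 60)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(60)
S, alpha = greedy_sparse_lssvm(X, y, n_basis=8)
pred = rbf_kernel(X, X[S]) @ alpha
```

The per-iteration cost of this naive sketch is cubic in the basis size for every candidate; practical implementations update the Cholesky factor of the reduced system incrementally rather than re-solving from scratch.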
Keywords: Support Vector Machine; Ridge Regression; Training Pattern; Sparse Approximation; Kernel Ridge Regression
- J. A. K. Suykens, J. De Brabanter, L. Lukas, and J. Vandewalle. Weighted least squares support vector machines: robustness and sparse approximation. Neurocomputing, 2001.
- C. Saunders, A. Gammerman, and V. Vovk. Ridge regression learning algorithm in dual variables. In Proceedings, 15th International Conference on Machine Learning, pages 515–521, Madison, WI, July 24–27 1998.
- J. Suykens, L. Lukas, and J. Vandewalle. Sparse approximation using least-squares support vector machines. In Proceedings, IEEE International Symposium on Circuits and Systems, pages II-757–II-760, Geneva, Switzerland, May 2000.