Abstract
We present a subspace-based variant of LS-SVMs (i.e., regularization networks) that processes the data sequentially and is hence especially suited for online learning tasks. The algorithm works by selecting from the data set a small subset of basis functions that is subsequently used to approximate the full kernel at arbitrary points; this subset is identified online from the data stream. We improve upon existing approaches (especially the kernel recursive least squares algorithm) by proposing a new, supervised criterion for selecting the relevant basis functions that accounts both for the error incurred from approximating the kernel and for the reduction of the cost in the original learning task. We use the large-scale data set 'forest' to compare the performance and efficiency of our algorithm with greedy batch selection of the basis functions via orthogonal least squares. With the same number of basis functions we achieve comparable error rates at much lower cost, in both CPU time and memory.
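The selection mechanism can be made concrete with a short sketch. The following minimal Python example (NumPy only) implements the approximate-linear-dependence test underlying the kernel recursive least squares algorithm of Engel et al. (see the reference list), extended with a `supervised_gain` placeholder for the task-dependent part of the criterion; the class name, the threshold `nu`, and the way the two terms are combined are illustrative assumptions, not the exact criterion of the paper.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel; any positive-definite kernel would do."""
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

class OnlineDictionary:
    """Online selection of basis functions from a data stream.

    Implements the approximate-linear-dependence (ALD) test used by the
    kernel recursive least squares algorithm (Engel et al., 2004); the
    `supervised_gain` hook is a simplified stand-in for the supervised
    selection criterion proposed in the paper.
    """

    def __init__(self, kernel, nu=1e-2):
        self.kernel = kernel
        self.nu = nu          # sparsification threshold (assumed value)
        self.dict_pts = []    # selected basis-function centres
        self.Kinv = None      # inverse kernel matrix of the dictionary

    def ald_error(self, x):
        """Squared distance of phi(x) to the span of the dictionary in
        feature space, plus the projection coefficients."""
        if not self.dict_pts:
            return self.kernel(x, x), np.zeros(0)
        k = np.array([self.kernel(d, x) for d in self.dict_pts])
        a = self.Kinv @ k
        return max(self.kernel(x, x) - k @ a, 0.0), a

    def maybe_add(self, x, supervised_gain=0.0):
        """Admit x as a new basis function if the combined criterion
        (kernel approximation error + task-dependent gain) exceeds nu."""
        delta, _ = self.ald_error(x)
        if delta + supervised_gain <= self.nu:
            return False
        self.dict_pts.append(x)
        m = len(self.dict_pts)
        K = np.array([[self.kernel(p, q) for q in self.dict_pts]
                      for p in self.dict_pts])
        # Recomputing the inverse is O(m^3); KRLS instead maintains it
        # with O(m^2) rank-one updates.
        self.Kinv = np.linalg.inv(K + 1e-10 * np.eye(m))
        return True

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dico = OnlineDictionary(rbf, nu=1e-2)
    added = sum(dico.maybe_add(x) for x in rng.normal(size=(500, 3)))
    print(f"selected {added} basis functions from 500 streamed points")
```

In an actual LS-SVM learner the `supervised_gain` argument would be derived from the current prediction residual, so that points which both enlarge the spanned subspace and reduce the training cost are preferred over points that merely do one or the other.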
References
Bach, F.R., Jordan, M.I.: Predictive low-rank decomposition for kernel methods. In: Proc. of ICML, vol. 22 (2005)
Blake, C.L., Newman, D.J., Hettich, S., Merz, C.J.: UCI repository of machine learning databases (1998)
Csató, L., Opper, M.: Sparse representation for Gaussian process models. In: NIPS, vol. 13 (2001)
Engel, Y., Mannor, S., Meir, R.: The kernel recursive least squares algorithm. IEEE Trans. on Signal Processing 52(8), 2275–2285 (2004)
Fine, S., Scheinberg, K.: Efficient SVM training using low-rank kernel representation. JMLR 2, 243–264 (2001)
Hoegaerts, L., Suykens, J.A.K., Vandewalle, J., De Moor, B.: Subset based least squares subspace regression in RKHS. Neurocomputing 63, 293–323 (2005)
Luo, Z., Wahba, G.: Hybrid adaptive splines. J. Amer. Statist. Assoc. 92, 107–114 (1997)
Platt, J.: A resource-allocating network for function interpolation. Neural Computation 3, 213–225 (1991)
Popovici, V., Bengio, S., Thiran, J.-P.: Kernel matching pursuit for large datasets. Pattern Recognition 38(12), 2385–2390 (2005)
Quiñonero Candela, J., Rasmussen, C.E.: A unifying view of sparse approximate Gaussian process regression. JMLR 6, 1935–1959 (2005)
Smola, A.J., Bartlett, P.L.: Sparse greedy Gaussian process regression. In: NIPS, vol. 13 (2001)
Smola, A.J., Schölkopf, B.: Sparse greedy matrix approximation for machine learning. In: Proc. of ICML, vol. 17 (2000)
Williams, C., Seeger, M.: Using the Nyström method to speed up kernel machines. In: NIPS, vol. 13 (2001)