Adaptive pruning algorithm for least squares support vector machine classifier
As a variant of the support vector machine (SVM), the least squares SVM (LS-SVM) works with equality instead of inequality constraints and with a least squares cost function. A well-known drawback of LS-SVM is that sparseness is lost. In this paper, we develop an adaptive pruning algorithm based on a bottom-to-top strategy that addresses this drawback. The proposed algorithm alternates incremental and decremental learning procedures to adaptively form a small support vector set that covers most of the information in the training set; the final classifier is then constructed from this set. Since the support vector set is in general much smaller than the training set, a sparse solution is obtained. To test the efficiency of the proposed algorithm, we apply it to eight UCI datasets and one benchmark dataset. The experimental results show that the algorithm adaptively obtains sparse solutions with little loss of generalization performance on both noise-free and noisy classification problems, and that its training is much faster than the sequential minimal optimization (SMO) algorithm on large-scale noise-free classification problems.
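The two ingredients of the abstract — the LS-SVM training step (a linear system arising from equality constraints and a squared loss) and a bottom-to-top support-set growth that alternates incremental and decremental steps — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's exact procedure: the RBF kernel, the pruning threshold, the iteration cap, and the `prune_bottom_up` heuristic are all choices made here for the sketch.

```python
# Minimal LS-SVM classifier plus a bottom-to-top pruning sketch.
# Illustrative only: the paper's adaptive algorithm is more elaborate;
# kernel choice, thresholds, and stopping rules here are assumptions.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM linear system (equality constraints, squared loss):
    [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, multipliers alpha

def lssvm_predict(X_sv, y_sv, b, alpha, X_new, sigma=1.0):
    # Decision function uses only the (pruned) support vector set.
    K = rbf_kernel(X_new, X_sv, sigma)
    return np.sign(K @ (alpha * y_sv) + b)

def prune_bottom_up(X, y, m0=10, max_sv=40, gamma=10.0, sigma=1.0):
    """Grow a small support set bottom-to-top: alternately add the
    worst-predicted remaining training point (incremental step) and
    drop a support vector with negligible |alpha| (decremental step).
    A rough analogue of the paper's strategy, not its exact rules."""
    rng = np.random.default_rng(0)
    idx = list(rng.choice(len(y), size=m0, replace=False))
    for _ in range(4 * max_sv):               # hard cap guarantees termination
        b, alpha = lssvm_fit(X[idx], y[idx], gamma, sigma)
        # Decremental step: remove a support vector carrying little weight.
        if len(idx) > m0 and np.min(np.abs(alpha)) < 1e-3:
            idx.pop(int(np.argmin(np.abs(alpha))))
            continue
        if len(idx) >= max_sv:
            break
        # Incremental step: add the training point with the worst margin.
        rest = np.setdiff1d(np.arange(len(y)), idx)
        scores = rbf_kernel(X[rest], X[idx], sigma) @ (alpha * y[idx]) + b
        idx.append(int(rest[np.argmin(y[rest] * scores)]))
    b, alpha = lssvm_fit(X[idx], y[idx], gamma, sigma)
    return np.array(idx), b, alpha
```

In this sketch the final classifier depends only on the retained support vectors, so the lost sparseness of standard LS-SVM is recovered at the cost of a few small linear solves on the growing support set.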
Keywords: Support vector machine; Least squares support vector machine; Pruning; Incremental learning; Decremental learning; Adaptive
The authors would like to thank the anonymous reviewers for their useful comments and suggestions. The work presented in this paper is supported by the Australian Research Council (ARC) under Discovery Grant DP0559213, the National Natural Science Foundation of China (10471045, 60433020), the Natural Science Foundation of Guangdong Province (031360, 04020079), the Key Technology Research and Development Program of Guangdong Province (2005B10101010, 2005B70101118), the Key Technology Research and Development Program of Tianhe District (051G041), the Open Research Fund of the Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education (93K-17-2006-03), and the Natural Science Foundation of South China University of Technology (B13-E5050190).
- Cauwenberghs G, Poggio T (2000) Incremental and decremental support vector machine learning. In: Advances in neural information processing systems, vol 13, pp 409–415
- Hamers B, Suykens J, De Moor B (2001) A comparison of iterative methods for least squares support vector machine classifiers. Internal Rep. 01-110, ESAT-SISTA, K.U. Leuven, Leuven, Belgium
- Hoegaerts L, Suykens J, Vandewalle J, De Moor B (2004) A comparison of pruning algorithms for sparse least squares support vector machines. In: Proceedings of ICONIP 2004. Lecture notes in computer science, vol 3316. Springer, Berlin, pp 1247–1253
- Joachims T (1998) Making large-scale support vector machine learning practical. In: Advances in kernel methods: support vector learning. MIT Press, Cambridge, pp 169–184
- Murphy P, Aha D (1992) UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html
- Osuna E, Freund R, Girosi F (1997) An improved training algorithm for support vector machines. In: Proceedings of the IEEE workshop on neural networks for signal processing, Amelia Island, pp 276–285
- Platt J (1998) Sequential minimal optimization: a fast algorithm for training support vector machines. In: Advances in kernel methods: support vector learning. MIT Press, Cambridge, pp 185–208
- Suykens J, Lukas L, Van Dooren P, De Moor B, Vandewalle J (1999) Least squares support vector machine classifiers: a large scale algorithm. In: Proceedings of the European conference on circuit theory and design (ECCTD'99), Stresa, Italy, pp 839–842
- Suykens J, Lukas L, Vandewalle J (2000) Sparse approximation using least squares support vector machines. In: Proceedings of the IEEE international symposium on circuits and systems, Geneva, Switzerland, pp 757–760