A method of combining forward with backward greedy algorithms for sparse approximation to KMSE
The prefitting and backfitting methods are commonly used to sparsify the full solution of the naive kernel minimum squared error (KMSE) model. As is well known, forward learning methods such as prefitting and backfitting find only suboptimal solutions. To further reduce testing time, this paper incorporates a backward learning algorithm into the forward learning procedure and proposes two improved schemes based on prefitting and backfitting. Compared with the original versions, the two improved algorithms select fewer significant nodes, which translates into faster testing. Because backward learning is added on top of forward learning, the proposed algorithms incur higher training costs. Experiments on benchmark data sets and a robot arm example demonstrate the improved effectiveness.
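The paper's exact prefitting and backfitting procedures are not reproduced here; the following is a minimal sketch of the forward-backward greedy idea the abstract describes, applied to a kernel least-squares model. A forward pass greedily adds the training node whose kernel column most reduces the residual, and a backward pass then prunes nodes whose removal barely increases it, yielding a sparser set of significant nodes. All function and parameter names (`rbf_kernel`, `fit_subset`, `forward_backward_select`, `gamma`, `lam`, `tol`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian kernel matrix between the row samples of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_subset(K, y, S, lam=1e-3):
    # Ridge-regularized least squares restricted to the kernel
    # columns indexed by S; returns coefficients and squared residual.
    Ks = K[:, S]
    a = np.linalg.solve(Ks.T @ Ks + lam * np.eye(len(S)), Ks.T @ y)
    r = y - Ks @ a
    return a, float(r @ r)

def forward_backward_select(K, y, n_nodes=20, tol=1e-3, lam=1e-3):
    # Forward pass: greedily add the node that most reduces the residual.
    S = []
    for _ in range(min(n_nodes, K.shape[1])):
        best, best_err = None, np.inf
        for j in range(K.shape[1]):
            if j in S:
                continue
            _, err = fit_subset(K, y, S + [j], lam)
            if err < best_err:
                best, best_err = j, err
        S.append(best)
    # Backward pass: repeatedly drop the node whose removal costs least,
    # as long as the increase in residual stays below the tolerance.
    _, cur_err = fit_subset(K, y, S, lam)
    while len(S) > 1:
        cand, cand_err = None, np.inf
        for j in S:
            _, err = fit_subset(K, y, [i for i in S if i != j], lam)
            if err < cand_err:
                cand, cand_err = j, err
        if cand_err - cur_err > tol:
            break
        S.remove(cand)
        cur_err = cand_err
    a, _ = fit_subset(K, y, S, lam)
    return S, a
```

For a regression task one would form `K = rbf_kernel(X, X)`, call `S, a = forward_backward_select(K, y)`, and predict on new points with `rbf_kernel(X_new, X[S]) @ a`; the size of `S` after the backward pass corresponds to the number of significant nodes that the paper uses to compare testing speed.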
Keywords: Kernel learning · Kernel minimum squared errors · Prefitting · Backfitting · Forward learning · Backward learning
This research was supported by the National Natural Science Foundation of China under Grant number 51006052. The authors would like to thank the anonymous reviewers and editors for their valuable suggestions that greatly improved the quality of this presentation.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no commercial or associative interest that represents a conflict of interest in connection with the submitted work.