Abstract
A general framework for the low rank least squares support vector machine (LR-LSSVM) is introduced in this paper. The controlled model size of the low-rank kernel machine yields a markedly sparse model and, with it, a substantial gain in computational efficiency. In addition, a two-step optimization algorithm with three gradient-descent regimes is proposed. For demonstration, experiments are carried out with a novel robust radial basis function (RRBF) kernel, whose performance dominates the alternatives in most cases.
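The paper's optimizer and RRBF kernel are detailed in the full text; as a rough, hedged illustration of a low-rank kernel machine with a controlled model size, the Python sketch below fits an LS-SVM-style ridge-regularized classifier over m fixed landmark centres Z. The Nyström-style parameterization, the plain RBF kernel, and all names and hyperparameter values here are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Pairwise RBF kernel between rows of X (n x d) and landmarks Z (m x d).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_lr_lssvm(X, y, Z, gamma=1.0, lam=1e-2):
    """Fit f(x) = sum_j beta_j k(x, z_j) + b by ridge-regularized least
    squares on labels y in {-1, +1} (an assumed stand-in for the paper's
    LR-LSSVM training step)."""
    n, m = X.shape[0], Z.shape[0]
    Phi = np.hstack([rbf_kernel(X, Z, gamma), np.ones((n, 1))])  # n x (m+1)
    A = Phi.T @ Phi + lam * np.eye(m + 1)
    w = np.linalg.solve(A, Phi.T @ y)
    return w[:m], w[m]  # kernel weights beta, bias b

def predict(Xnew, Z, beta, b, gamma=1.0):
    return np.sign(rbf_kernel(Xnew, Z, gamma) @ beta + b)
```

With m ≪ n, the linear solve only involves an (m+1)×(m+1) system rather than the full n×n kernel matrix, which is the kind of efficiency gain the abstract refers to.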
Notes
1. If we are considering a regression problem, there is no need to add \(t_n\) to the model (3).
2. The algorithm is easily adapted to any learnable kernel (see the sketch following these notes).
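As a hedged illustration of note 2, the sketch below takes one gradient step on a scalar kernel parameter (here the RBF width gamma) with the output-layer weights held fixed, in the spirit of an alternating two-step scheme; the function names and the choice of squared loss are assumptions, not the authors' method.

```python
import numpy as np

def grad_gamma(X, y, Z, beta, b, gamma):
    """Gradient of the mean squared loss w.r.t. the RBF width gamma for
    f(x) = sum_j exp(-gamma * ||x - z_j||^2) * beta_j + b."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # n x m squared distances
    K = np.exp(-gamma * d2)
    resid = K @ beta + b - y        # f(x_i) - y_i
    dK = -d2 * K                    # elementwise dK/dgamma
    return 2.0 * np.mean(resid * (dK @ beta))

# Hypothetical alternating step: with (beta, b) fixed by the least squares
# fit, descend on gamma, then re-fit (beta, b) under the new kernel.
# gamma -= lr * grad_gamma(X, y, Z, beta, b, gamma)
```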
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Xu, D., Fang, M., Hong, X., Gao, J. (2019). Sparse Least Squares Low Rank Kernel Machines. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Lecture Notes in Computer Science, vol 11954. Springer, Cham. https://doi.org/10.1007/978-3-030-36711-4_33
DOI: https://doi.org/10.1007/978-3-030-36711-4_33
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-36710-7
Online ISBN: 978-3-030-36711-4
eBook Packages: Computer Science (R0)