Abstract
We propose three techniques for improving the accuracy and speed of margin stochastic gradient descent support vector machines (MSGDSVM). The first technique is sampling with full replacement. The second is a new update rule derived from the squared hinge loss function. The third is limiting the number of candidate values used when tuning the margin hyperparameter M. We also provide a theoretical analysis of a novel optimization problem underlying the proposed update rule. The first two techniques improve the accuracy of MSGDSVM; the third speeds up tuning. Experiments show that the proposed method achieves superior accuracy compared to MSGDSVM for binary and multiclass classification, generalizes comparably to sequential minimal optimization (SMO), and is faster than MSGDSVM.
M. Orchel—Independent Researcher.
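To make the first and third techniques concrete, the following minimal sketch (ours, not the paper's code) shows sampling with full replacement inside a kernel SGD loop and a deliberately small grid of margin values. The names rbf, msgd_svm, and margin_grid, the gamma value, and the perceptron-style update beta[k] += y[k] are illustrative assumptions; the paper's actual update is rule (6), derived in the appendix.

import numpy as np

def rbf(x, z, gamma=1.0):
    # RBF kernel; gamma = 1.0 is illustrative, not the paper's setting
    return np.exp(-gamma * np.linalg.norm(x - z) ** 2)

def msgd_svm(X, y, M=1.0, steps=1000, seed=0):
    # Sketch of margin SGD for a kernel SVM without offset:
    # f(x) = sum_j beta_j K(x_j, x)
    rng = np.random.default_rng(seed)
    n = len(X)
    beta = np.zeros(n)
    for _ in range(steps):
        k = rng.integers(n)  # technique 1: sample uniformly with full replacement
        f_k = sum(beta[j] * rbf(X[j], X[k]) for j in range(n) if beta[j] != 0.0)
        if y[k] * f_k < M:   # margin violation under margin hyperparameter M
            beta[k] += y[k]  # schematic update; the paper instead uses rule (6)
    return beta

# Technique 3: tune M over a small, fixed grid of candidate values.
margin_grid = [0.1, 0.5, 1.0, 2.0]  # illustrative values, not the paper's grid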
References
Bordes, A., Ertekin, S., Weston, J., Bottou, L.: Fast kernel classifiers with online and active learning. J. Mach. Learn. Res. 6, 1579–1619 (2005). http://jmlr.org/papers/v6/bordes05a.html
Conditional (ternary) operator, July 2021. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Conditional_Operator
Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, New York (2000). https://doi.org/10.1017/CBO9780511801389
Delgado, M.F., Cernadas, E., Barro, S., Amorim, D.G.: Do we need hundreds of classifiers to solve real world classification problems? J. Mach. Learn. Res. 15(1), 3133–3181 (2014). http://dl.acm.org/citation.cfm?id=2697065
Goodfellow, I.J., Bengio, Y., Courville, A.C.: Deep Learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge (2016). http://www.deeplearningbook.org/
Hastie, T., Rosset, S., Tibshirani, R., Zhu, J.: The entire regularization path for the support vector machine. J. Mach. Learn. Res. 5, 1391–1415 (2004)
Huang, X., Shi, L., Suykens, J.A.K.: Ramp loss linear programming support vector machine. J. Mach. Learn. Res. 15(1), 2185–2211 (2014). http://dl.acm.org/citation.cfm?id=2670321
Japkowicz, N., Shah, M. (eds.): Evaluating Learning Algorithms: A Classification Perspective. Cambridge University Press, New York (2011)
LIBSVM data sets, June 2011. www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
Melki, G., Kecman, V., Ventura, S., Cano, A.: OLLAWV: online learning algorithm using worst-violators. Appl. Soft Comput. 66, 384–393 (2018). https://doi.org/10.1016/j.asoc.2018.02.040
Orchel, M.: Incorporating prior knowledge into SVM algorithms in analysis of multidimensional data. Ph.D. thesis, AGH University of Science and Technology (2013). https://doi.org/10.13140/RG.2.1.5004.9441
Orchel, M., Suykens, J.A.K.: Fast hyperparameter tuning for support vector machines with stochastic gradient descent. In: Nicosia, G., et al. (eds.) LOD 2020. LNCS, vol. 12566, pp. 481–493. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-64580-9_40
Shalev-Shwartz, S., Singer, Y., Srebro, N., Cotter, A.: Pegasos: primal estimated sub-gradient solver for SVM. Math. Program. 127(1), 3–30 (2011). https://doi.org/10.1007/s10107-010-0420-4
Steinwart, I., Hush, D.R., Scovel, C.: Training SVMs without offset. J. Mach. Learn. Res. 12, 141–202 (2011). http://portal.acm.org/citation.cfm?id=1953054
Acknowledgments
The theoretical analysis of the method is supported by the National Science Centre in Poland, UMO-2015/17/D/ST6/04010, titled “Development of Models and Methods for Incorporating Knowledge to Support Vector Machines”, and the data-driven method is supported by the European Research Council under the European Union’s Seventh Framework Programme. Johan Suykens acknowledges support by ERC Advanced Grant E-DUALITY (787960), KU Leuven C1, and FWO G0A4917N. This paper reflects only the authors’ views; the Union is not liable for any use that may be made of the contained information.
9 Appendix
We provide a derivation of the update rule (6).
Proof
The proof is based on the primal problem OP 4. The proof technique is similar to that presented in [10]. First, we compute the gradient of the objective function in (15) and we get
Because the last two factors share the same condition, we can write
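As a generic illustration of this merging step (the objective (15) itself is not reproduced in this extract): for the squared hinge in a scalar argument \(u\),

\[
\frac{\mathrm{d}}{\mathrm{d}u}\,\max \left( 0, u\right) ^{2} = 2\,\max \left( 0, u\right) \,\mathbb {1}\left[ u > 0\right] = 2\,\max \left( 0, u\right) ,
\]

and since \(\max (0, u)\) already vanishes exactly where the indicator does, the two condition-bearing factors collapse into one.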
After substituting (18), we get
For the stochastic update, we approximate the above expression (which must equal 0 at the optimal solution), so we have
We can generate an update term, as in the ordinary iteration method, by transforming the equation into fixed-point form for \(\alpha _k\): we divide by \(ny_k\varphi \left( \boldsymbol{x_k}\right) \), assuming \(\varphi \left( \boldsymbol{x_k}\right) \ne 0\) for every coefficient. For the RBF kernel, this means that each component of \(\boldsymbol{x}\) must be nonzero. Moving all terms except the first to the right-hand side, we get
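Schematically (the concrete coefficients follow from (15)–(18), which are not reproduced in this extract), if the approximated stationarity condition is linear in \(\alpha _k\),

\[
a\,\alpha _k + b\left( \boldsymbol{\alpha }\right) = 0, \qquad a = ny_k\varphi \left( \boldsymbol{x_k}\right) \ne 0,
\]

then dividing by \(a\) and moving \(b\) to the right-hand side gives the fixed-point form \(\alpha _k = -b\left( \boldsymbol{\alpha }\right) /a\), which the ordinary iteration method applies as \(\alpha _k^{(t+1)} = -b\left( \boldsymbol{\alpha }^{(t)}\right) /a\).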
We can drop the multiplier 2, because it does not affect the final decision boundary. Assuming additionally zero initialization and extreme early stopping (each parameter is updated at most once), we get
By incorporating the additional assumption that the weight is positive, we get
By returning to the original representation with \(\beta _j\) weights, we get (6).
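Since equations (6) and (15)–(18) are not reproduced in this extract, the following is only an illustrative reconstruction under simplifying assumptions: a single-sample squared hinge term \(\left( M - y_k f\left( \boldsymbol{x_k}\right) \right) ^2\) and the offset-free expansion \(f\left( \boldsymbol{x}\right) = \sum _j \beta _j K\left( \boldsymbol{x_j}, \boldsymbol{x}\right) \). Carrying out the chain of steps above in this setting yields an update of the shape

\[
\beta _k \leftarrow y_k \max \left( 0,\; \frac{M - y_k \sum _{j \ne k} \beta _j K\left( \boldsymbol{x_j}, \boldsymbol{x_k}\right) }{K\left( \boldsymbol{x_k}, \boldsymbol{x_k}\right) }\right) ,
\]

where \(\max (0, \cdot )\) encodes the positivity assumption and, under zero initialization with extreme early stopping, the sum runs only over coefficients updated earlier; the paper's actual rule (6) may differ in constants and conditions.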
Cite this paper
Orchel, M., Suykens, J.A.K. (2022). Improved Update Rule and Sampling of Stochastic Gradient Descent with Extreme Early Stopping for Support Vector Machines. In: Nicosia, G., et al. (eds.) Machine Learning, Optimization, and Data Science. LOD 2021. Lecture Notes in Computer Science, vol. 13164. Springer, Cham. https://doi.org/10.1007/978-3-030-95470-3_11