Abstract
Machine learning is a field of Artificial Intelligence (AI) that helps machines solve problems. Support Vector Machines (SVMs) are a classic machine learning method and are widely used across AI. However, training an SVM is very time-consuming on large-scale data sets. Several efforts have been devoted to developing SVMs for distributed-memory clusters, but their bottleneck remains the training phase, in which the cascade structure is fixed. In this paper, we propose Multi-Modes Cascade SVMs (MMCascadeSVMs) to adaptively reshape that structure. MMCascadeSVMs employs the analytic hierarchy process to qualitatively analyse the similarity between adjacent hierarchies. Furthermore, MMCascadeSVMs leverages a two-stage algorithm: the first stage computes the similarity between two adjacent models and uses it as a halting criterion; the second stage predicts new samples from multiple models. MMCascadeSVMs can modify the structure of cascade SVMs in distributed systems and reduce training time. Experiments show that our approach significantly reduces the total computation cost.
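The two-stage algorithm described in the abstract can be sketched as follows. This is a minimal, hypothetical Python sketch, not the authors' implementation: a class-mean linear separator stands in for a real SVM solver, partitions are merged pairwise as in a cascade SVM, cosine similarity between the first models of adjacent hierarchy levels serves as the halting criterion, and prediction is a majority vote over the final level's models. All function names and the similarity threshold are illustrative assumptions.

```python
import numpy as np

def train_linear(X, y):
    # Stand-in for an SVM solver: a class-mean separator (w, b).
    mu_pos, mu_neg = X[y == 1].mean(axis=0), X[y == -1].mean(axis=0)
    w = mu_pos - mu_neg
    b = -0.5 * (mu_pos + mu_neg) @ w
    return w, b

def predict(model, X):
    w, b = model
    return np.where(X @ w + b >= 0, 1, -1)

def cosine(w1, w2):
    return float(w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2) + 1e-12))

def mm_cascade(X, y, n_parts=8, halt_sim=0.999):
    # Stage 1: cascade training; halt early when adjacent hierarchy
    # levels yield nearly identical models (similarity criterion).
    Xs, ys = np.array_split(X, n_parts), np.array_split(y, n_parts)
    levels = []
    while True:
        models = [train_linear(Xp, yp) for Xp, yp in zip(Xs, ys)]
        levels.append(models)
        if len(models) == 1:
            break
        if len(levels) >= 2 and cosine(levels[-1][0][0], levels[-2][0][0]) > halt_sim:
            break  # adjacent levels agree: stop ascending the cascade
        # Merge partitions pairwise for the next hierarchy level
        # (a real cascade SVM would keep only support vectors here).
        Xs = [np.vstack(Xs[i:i + 2]) for i in range(0, len(Xs), 2)]
        ys = [np.concatenate(ys[i:i + 2]) for i in range(0, len(ys), 2)]
    return levels

def predict_multi(levels, X):
    # Stage 2: predict new samples by majority vote over the models
    # of the last trained level ("multi-model" prediction).
    votes = np.stack([predict(m, X) for m in levels[-1]])
    return np.where(votes.sum(axis=0) >= 0, 1, -1)
```

On two well-separated synthetic blobs this cascade converges in a few levels; the early halt trades a small amount of refinement for skipping the upper cascade levels entirely, which is where the training-time saving comes from.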
Acknowledgment
The work was supported by the National Basic Research Program of China (project No. 2014CB340303) and the National Natural Science Foundation of China (project No. 61402514).
Copyright information
© 2017 Springer Nature Singapore Pte Ltd.
Cite this paper
Cui, L., Wang, C., Li, W., Tan, L., Peng, Y. (2017). Multi-Modes Cascade SVMs: Fast Support Vector Machines in Distributed System. In: Kim, K., Joukov, N. (eds) Information Science and Applications 2017. ICISA 2017. Lecture Notes in Electrical Engineering, vol 424. Springer, Singapore. https://doi.org/10.1007/978-981-10-4154-9_51
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-4153-2
Online ISBN: 978-981-10-4154-9
eBook Packages: Engineering (R0)