Distributed SmSVM Ensemble Learning

  • Jeff Hajewski
  • Suely Oliveira
Conference paper
Part of the Proceedings of the International Neural Networks Society book series (INNS, volume 1)


Traditional ensemble methods typically use models that are fast to construct and evaluate, such as random trees and naive Bayes. More complex models often incur a much higher computational cost in both training and inference. In this work, we present a distributed ensemble method using SmoothSVM, a fast support vector machine (SVM) algorithm. We build and evaluate a large ensemble of SVMs in parallel, with little overhead compared to a single SVM. The ensemble of SVMs trains in less time than a single SVM while maintaining the same test accuracy and, in some cases, even exhibits improved test accuracy. Our approach also has the added benefit of scaling trivially to much larger systems.
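The parallel bagged-SVM idea in the abstract can be sketched as follows. This is an illustrative assumption, not the authors' implementation: each worker trains a linear SVM with a smoothed (squared) hinge loss on a bootstrap sample, and the ensemble predicts by majority vote. All names (`train_svm`, `predict`, the learning-rate and regularization values) are hypothetical.

```python
# Hedged sketch of a parallel bagged ensemble of linear SVMs with a
# smoothed (squared) hinge loss; not the paper's actual SmSVM code.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def train_svm(args):
    """Train one linear SVM on a bootstrap sample via gradient descent."""
    X, y, lam, lr, epochs, seed = args
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(y), size=len(y))   # bootstrap resample
    Xb, yb = X[idx], y[idx]
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = yb * (Xb @ w)
        # squared-hinge loss max(0, 1 - m)^2 has gradient -2*max(0, 1-m)*y*x
        g = -2.0 * np.maximum(0.0, 1.0 - margins)[:, None] * yb[:, None] * Xb
        w -= lr * (g.mean(axis=0) + lam * w)     # plus L2 regularization
    return w

def predict(weights, X):
    """Majority vote over the ensemble's individual sign predictions."""
    votes = np.sign(X @ np.array(weights).T)     # one column per SVM
    return np.sign(votes.sum(axis=1))

# Toy linearly separable data; train 8 ensemble members in parallel.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = np.sign(X[:, 0] + X[:, 1])
jobs = [(X, y, 1e-3, 0.5, 100, s) for s in range(8)]
with ThreadPoolExecutor() as ex:
    weights = list(ex.map(train_svm, jobs))
acc = (predict(weights, X) == y).mean()
print(f"ensemble training accuracy: {acc:.2f}")
```

In a distributed setting each bootstrap model would run on a separate node rather than a thread, and because the members never communicate during training, the ensemble finishes in roughly the time of one (smaller) SVM fit.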


Keywords: Support vector machine · Parallel ensemble learning · Distributed SVM · SmoothSVM

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Science, University of Iowa, Iowa City, USA
