
Randomizing SVM Against Adversarial Attacks Under Uncertainty

Part of the Lecture Notes in Computer Science book series (LNAI, volume 10939)


Robust machine learning algorithms have been widely studied in adversarial environments, where an adversary maliciously manipulates data samples to evade security systems. In this paper, we propose randomized SVMs to defend against generalized adversarial attacks under uncertainty: instead of the single classifier learned by traditional robust SVMs, we learn a distribution over classifiers. Randomized SVMs offer better resistance to attacks while preserving high classification accuracy, especially in non-separable cases. Experimental results demonstrate the effectiveness of the proposed models in defending against a variety of attacks, including aggressive attacks with uncertainty.
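The key idea in the abstract, keeping a distribution over classifiers instead of a single fixed hyperplane, can be illustrated with a small, self-contained sketch. This is not the paper's actual model: the subsampling ensemble, the Pegasos-style sub-gradient trainer, and all function names below are assumptions made purely for illustration.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.1, epochs=200, seed=0):
    """Train one linear SVM (hinge loss + L2) by Pegasos-style sub-gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            if y[i] * (X[i] @ w + b) < 1:     # margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                              # only shrink (regularize)
                w = (1 - eta * lam) * w
    return w, b

def train_randomized_svm(X, y, n_models=10, subsample=0.8, seed=0):
    """Learn an empirical distribution over classifiers via random subsampling."""
    rng = np.random.default_rng(seed)
    n = len(y)
    models = []
    for k in range(n_models):
        idx = rng.choice(n, size=int(subsample * n), replace=True)
        models.append(train_linear_svm(X[idx], y[idx], seed=seed + k + 1))
    return models

def predict_randomized(models, X, rng):
    """At test time, sample one classifier from the distribution, so the
    adversary never faces a single fixed decision boundary."""
    w, b = models[rng.integers(len(models))]
    return np.sign(X @ w + b)

# Toy usage: two well-separated Gaussian blobs in 2D
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])
models = train_randomized_svm(X, y)
acc = np.mean(predict_randomized(models, X, rng) == y)
```

The randomization mechanism here (bootstrap subsampling) is only a stand-in for however the paper actually constructs its classifier distribution; the point of the sketch is that each query is answered by a randomly drawn hyperplane, which frustrates attacks that try to reverse-engineer one fixed boundary.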


  • Adversarial learning
  • Robust SVM
  • Randomization

Y. Chen—The work was completed when the first author was visiting KAUST as an intern.

  • DOI: 10.1007/978-3-319-93040-4_44
  • Chapter length: 13 pages
Figs. 1, 2 and 3 (images not included in this preview).





This work was supported by the King Abdullah University of Science and Technology, by the Natural Science Foundation of China under grants U1736114 and 61672092, and in part by the National Key R&D Program of China (2017YFB0802805).


Corresponding author

Correspondence to Xiangliang Zhang.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Chen, Y., Wang, W., Zhang, X. (2018). Randomizing SVM Against Adversarial Attacks Under Uncertainty. In: Phung, D., Tseng, V., Webb, G., Ho, B., Ganji, M., Rashidi, L. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2018. Lecture Notes in Computer Science (LNAI), vol 10939. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-93039-8

  • Online ISBN: 978-3-319-93040-4

  • eBook Packages: Computer Science (R0)