Machine Learning, Volume 81, Issue 2, pp 121–148

The security of machine learning

  • Marco Barreno
  • Blaine Nelson
  • Anthony D. Joseph
  • J. D. Tygar

DOI: 10.1007/s10994-010-5188-5

Cite this article as:
Barreno, M., Nelson, B., Joseph, A.D. et al. Mach Learn (2010) 81: 121. doi:10.1007/s10994-010-5188-5

Abstract

Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.
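To make the SpamBayes illustration concrete, the following is a minimal sketch, not the paper's implementation: it uses a simplified naive-Bayes-style spam score (summed, Laplace-smoothed log-likelihood ratios, whereas SpamBayes combines token scores with a chi-squared test) and a causative availability attack from the taxonomy, in which attacker-sent spam padded with benign words poisons the training data so that legitimate mail is later misclassified. All message contents and function names below are invented for illustration.

    # Minimal illustrative sketch, assuming a simplified naive-Bayes-style
    # scorer rather than SpamBayes's actual chi-squared combining rule.
    import math
    from collections import Counter

    def train(spam_msgs, ham_msgs):
        """Count, per token, how many spam/ham training messages contain it."""
        spam_counts = Counter(tok for msg in spam_msgs for tok in set(msg.split()))
        ham_counts = Counter(tok for msg in ham_msgs for tok in set(msg.split()))
        return spam_counts, ham_counts, len(spam_msgs), len(ham_msgs)

    def spam_score(msg, spam_counts, ham_counts, n_spam, n_ham):
        """Sum of per-token log-likelihood ratios; scores above 0 read as spam."""
        score = 0.0
        for tok in set(msg.split()):
            p_spam = (spam_counts[tok] + 1) / (n_spam + 2)
            p_ham = (ham_counts[tok] + 1) / (n_ham + 2)
            score += math.log(p_spam / p_ham)
        return score

    # Clean training data: benign words like "meeting" appear only in ham.
    spam_train = ["buy cheap pills now", "cheap pills online"]
    ham_train = ["team meeting tomorrow", "notes from the meeting"]
    target = "team meeting friday"   # a legitimate message the victim wants delivered

    model = train(spam_train, ham_train)
    print("clean score:", spam_score(target, *model))       # negative: classified as ham

    # Causative availability attack: the attacker sends spam padded with benign
    # words; the victim labels it as spam, which poisons the learned token counts.
    poison = ["buy cheap pills team meeting friday"] * 5
    poisoned_model = train(spam_train + poison, ham_train)
    print("poisoned score:", spam_score(target, *poisoned_model))  # positive: now flagged as spam

Under this toy model, the poisoned filter assigns the legitimate message a positive score and blocks it, which is the false-positive (availability) effect the taxonomy describes; the real attack analyzed in the paper operates on SpamBayes's own scoring.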

Keywords

Security · Adversarial learning · Adversarial environments

Copyright information

© The Author(s) 2010

Authors and Affiliations

  • Marco Barreno (1)
  • Blaine Nelson (1)
  • Anthony D. Joseph (1)
  • J. D. Tygar (1)

  1. Computer Science Division, University of California, Berkeley, USA
