The security of machine learning

Abstract

Machine learning’s ability to rapidly adapt to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.
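The kind of exploit the abstract alludes to can be sketched in miniature. The toy below is not the paper's method and not SpamBayes itself (SpamBayes combines token scores with Robinson's chi-squared statistic); it is a plain naive-Bayes-style word-likelihood filter, used only to show a "good word" evasion attack: appending ham-indicative tokens to a spam message flips the classifier's decision. All corpora, function names, and thresholds here are illustrative assumptions.

```python
from collections import Counter
import math

def train(spam_msgs, ham_msgs):
    """Per-token log-likelihood-ratio weights with add-one smoothing."""
    spam = Counter(w for m in spam_msgs for w in m.split())
    ham = Counter(w for m in ham_msgs for w in m.split())
    vocab = set(spam) | set(ham)
    n_spam = sum(spam.values()) + len(vocab)
    n_ham = sum(ham.values()) + len(vocab)
    return {w: math.log((spam[w] + 1) / n_spam)
               - math.log((ham[w] + 1) / n_ham)
            for w in vocab}

def spam_score(weights, msg):
    """Sum of token weights; positive favors the spam class."""
    return sum(weights.get(w, 0.0) for w in msg.split())

# Tiny illustrative corpora (assumed, not from the paper).
spam_corpus = ["buy cheap pills now", "cheap pills cheap deal"]
ham_corpus = ["meeting agenda attached", "see agenda for the meeting"]
weights = train(spam_corpus, ham_corpus)

plain = "buy cheap pills now"
# Same payload, padded with ham-indicative "good words".
evasive = plain + " meeting agenda agenda meeting attached"

print(spam_score(weights, plain) > 0)    # spam message is flagged
print(spam_score(weights, evasive) < 0)  # padded copy slips through
```

The attack needs no access to the training data, only a guess at which tokens the filter associates with ham; this is the same asymmetry the paper's taxonomy captures along its exploratory/causative axis.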

Author information

Corresponding author

Correspondence to Marco Barreno.

Additional information

Editors: Pavel Laskov and Richard Lippmann.

Rights and permissions

Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

About this article

Cite this article

Barreno, M., Nelson, B., Joseph, A.D. et al. The security of machine learning. Mach Learn 81, 121–148 (2010). https://doi.org/10.1007/s10994-010-5188-5

Keywords

  • Security
  • Adversarial learning
  • Adversarial environments