Machine learning in adversarial environments

  • Editorial

Abstract

Whenever machine learning is used to prevent illegal or unsanctioned activity and there is an economic incentive, adversaries will attempt to circumvent the protection it provides. Constraints on how adversaries can manipulate the training and test data of classifiers used to detect suspicious behavior make problems in this area tractable and interesting. This special issue highlights papers that span several disciplines, including email spam detection, computer intrusion detection, and detection of web pages deliberately designed to manipulate the rankings returned by modern search engines. The four papers in this special issue provide a standard taxonomy of the types of attacks that can be expected in an adversarial framework, show how to design classifiers that are robust to deleted or corrupted features, demonstrate the ability of modern polymorphic engines to rewrite malware so that it evades detection by current intrusion detection and antivirus systems, and present approaches for detecting web pages designed to manipulate the scores assigned by search engines. We hope that these papers and this special issue encourage the multidisciplinary cooperation required to address the many open problems in this relatively new area, including predicting the course of the arms races created by adversarial learning, developing effective long-term defensive strategies, and creating algorithms that can process the massive amounts of training and test data available for internet-scale problems.
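
To make the feature-robustness theme concrete, the following is a minimal sketch, not taken from any of the papers in this issue, of one common heuristic: deleting features at random during training so that the learned classifier does not depend too heavily on any small set of features an adversary could remove or corrupt at test time. It uses plain NumPy on synthetic data; the toy problem, the deletion probability, and helper names such as train_logreg and accuracy_under_deletion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 examples, 20 features; the label depends on the first 5 features.
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:5] = 1.0
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train_logreg(X, y, delete_prob=0.0, lr=0.1, epochs=300):
    """Logistic regression by gradient descent; with probability `delete_prob`
    each feature value is zeroed out independently in every epoch, mimicking
    features an adversary might delete or corrupt at test time."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        mask = (rng.random(X.shape) >= delete_prob).astype(float)
        Xm = X * mask
        p = sigmoid(Xm @ w + b)
        w -= lr * (Xm.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b


def accuracy_under_deletion(w, b, X, y, k=3):
    """Adversary zeroes out the k features carrying the largest |weight|."""
    deleted = np.argsort(-np.abs(w))[:k]
    Xa = X.copy()
    Xa[:, deleted] = 0.0
    return np.mean((sigmoid(Xa @ w + b) > 0.5) == y)


for p_del in (0.0, 0.3):
    w, b = train_logreg(X, y, delete_prob=p_del)
    clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
    attacked = accuracy_under_deletion(w, b, X, y)
    print(f"train-time deletion p={p_del}: clean={clean:.2f}, "
          f"after 3 features deleted={attacked:.2f}")
```

The printout compares clean accuracy with accuracy after a simple deletion attack for an undefended and a defended model; it illustrates the training-time mechanism only and is not a reproduction of the methods or results in the special-issue papers.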

References

  • Abernethy, J., Chapelle, O., & Castillo, C. (2010). Graph regularization methods for Web spam detection. Machine Learning, 81(2). doi:10.1007/s10994-010-5171-1.

  • Auer, P., & Cesa-Bianchi, N. (1998). On-line learning with malicious noise and the closure algorithm. Annals of Mathematics and Artificial Intelligence, 23(1–2), 83–99.

  • Barreno, M., Nelson, B., Joseph, A., & Tygar, D. (2010). The security of machine learning. Machine Learning, 81(2). doi:10.1007/s10994-010-5188-5.

  • Brückner, M., & Scheffer, T. (2009). Nash equilibria of static prediction games. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, & A. Culotta (Eds.), Advances in neural information processing systems (Vol. 22, pp. 171–179). Red Hook: Curran Associates, Inc.

  • Brumley, D., Newsome, J., Song, D., Wang, H., & Jha, S. (2006). Towards automatic generation of vulnerability-based signatures. In IEEE symposium on security and privacy (pp. 2–16).

  • Bshouty, N. H., Eiron, N., & Kushilevitz, E. (2002). PAC learning with nasty noise. Theoretical Computer Science, 288(3), 255–275.

  • Cretu, G., Stavrou, A., Locasto, M., Stolfo, S., & Keromytis, A. (2008). Casting out demons: sanitizing training data for anomaly sensors. In IEEE symposium on security and privacy (pp. 81–95).

  • Dalvi, N., Domingos, P., Mausam, Sanghai, S., & Verma, D. (2004). Adversarial classification. In Knowledge discovery in databases (pp. 99–108). New York: ACM Press.

  • Dekel, O., Shamir, O., & Xiao, L. (2010). Learning to classify with missing and corrupted features. Machine Learning, 81(2). doi:10.1007/s10994-009-5124-8.

  • Fogla, P., & Lee, W. (2006). Evading network anomaly detection systems: formal reasoning and practical techniques. In ACM conference on computer and communications security (CCS) (pp. 59–68).

  • Fogla, P., Sharif, M., Perdisci, R., Kolesnikov, O., & Lee, W. (2006). Polymorphic blending attacks. In USENIX security symposium (pp. 241–256).

  • Globerson, A., & Roweis, S. (2006). Nightmare at test time: robust learning by feature deletion. In International conference on machine learning (ICML) (pp. 353–360).

  • Graham-Cumming, J. (2004). How to beat an adaptive spam filter. In MIT spam conference.

  • Kearns, M., & Li, M. (1993). Learning in the presence of malicious errors. SIAM Journal on Computing, 22(4), 807–837.

  • Kloft, M., & Laskov, P. (2010). Online anomaly detection under adversarial impact. In 13th international conference on artificial intelligence and statistics (AISTATS) (pp. 405–412).

  • Li, Z., Sanghi, M., Chen, Y., Kao, M.-Y., & Chavez, B. (2006). Hamsa: fast signature generation for zero-day polymorphic worms with provable attack resilience. In IEEE symposium on security and privacy (pp. 32–47).

  • Lippmann, R., & Laskov, P. (2007). NIPS 2007 workshop on machine learning in adversarial environments for computer security. http://mls-nips07.first.fraunhofer.de/.

  • Lowd, D., & Meek, C. (2005a). Adversarial learning. In Conference on email and anti-spam.

  • Lowd, D., & Meek, C. (2005b). Good word attacks on statistical spam filters. In ACM SIGKDD international conference on knowledge discovery and data mining (pp. 641–647).

  • Nelson, B., Rubinstein, B., Huang, L., Joseph, A., Lau, S., Lee, S., Rao, S., Tran, A., & Tygar, D. (2010). Near-optimal evasion of convex-inducing classifiers. In 13th international conference on artificial intelligence and statistics (AISTATS) (pp. 549–556).

  • Newsome, J., Karp, B., & Song, D. (2005). Polygraph: automatically generating signatures for polymorphic worms. In IEEE symposium on security and privacy (pp. 120–132).

  • Perdisci, R., Dagon, D., Lee, W., Fogla, P., & Sharif, M. (2006). Misleading worm signature generators using deliberate noise injection. In IEEE symposium on security and privacy (pp. 17–31).

  • Song, Y., Locasto, M. E., Stavrou, A., Keromytis, A. D., & Stolfo, S. J. (2010). On the infeasibility of modeling polymorphic shellcode. Machine Learning, 81(2). doi:10.1007/s10994-009-5143-5.

  • Teo, C. H., Globerson, A., Roweis, S., & Smola, A. (2008). Convex learning with invariances. In J. Platt, D. Koller, Y. Singer, & S. Roweis (Eds.), Advances in neural information processing systems (Vol. 20, pp. 1489–1496). Cambridge: MIT Press.

  • Wang, K., Parekh, J. J., & Stolfo, S. J. (2006). ANAGRAM: a content anomaly detector resistant to mimicry attack. In Recent advances in intrusion detection (RAID) (pp. 226–248).

  • Wang, K., & Stolfo, S. (2004). Anomalous payload-based network intrusion detection. In Recent advances in intrusion detection (RAID) (pp. 203–222).

  • Wittel, G., & Wu, S. (2004). On attacking statistical spam filters. In Conference on email and anti-spam.

Author information

Correspondence to Pavel Laskov.

Additional information

Editor: Peter Flach.

This work is sponsored by the United States Air Force under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions and recommendations are those of the authors and are not necessarily endorsed by the United States Government.

Cite this article

Laskov, P., Lippmann, R. Machine learning in adversarial environments. Mach Learn 81, 115–119 (2010). https://doi.org/10.1007/s10994-010-5207-6
