Machine learning in adversarial environments
Whenever machine learning is used to prevent illegal or unsanctioned activity and there is an economic incentive, adversaries will attempt to circumvent the protection it provides. Constraints on how adversaries can manipulate the training and test data of classifiers used to detect suspicious behavior make problems in this area both tractable and interesting. This special issue highlights papers spanning many disciplines, including email spam detection, computer intrusion detection, and the detection of web pages deliberately designed to manipulate the rankings returned by modern search engines.

The four papers in this special issue provide a standard taxonomy of the attacks that can be expected in an adversarial framework, demonstrate how to design classifiers that are robust to deleted or corrupted features, demonstrate the ability of modern polymorphic engines to rewrite malware so that it evades detection by current intrusion detection and antivirus systems, and present approaches for detecting web pages designed to manipulate the page scores returned by search engines. We hope that these papers and this special issue encourage the multidisciplinary cooperation required to address the many open problems in this relatively new area, including predicting the course of the arms races created by adversarial learning, developing effective long-term defensive strategies, and creating algorithms that can process the massive amounts of training and test data available for internet-scale problems.
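One of the themes above, classifiers robust to deleted or corrupted features, can be illustrated with a small sketch. The code below is not the minimax formulation of Globerson and Roweis (2006) or Dekel et al. (2010); it approximates the idea by randomly zeroing features during training, then evaluates both a plain and a "robust" logistic-regression model against a simple evasion that deletes each example's highest-weighted active features. The data, hyperparameters, and helper names (`train`, `accuracy`, `drop_k`, `delete_top`) are all illustrative assumptions, not from any of the papers.

```python
# Illustrative sketch only: robustness to feature deletion is approximated
# by random feature dropout during training, a crude stand-in for the
# minimax training of Globerson & Roweis (2006). All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Toy "spam" data: 200 examples, 20 binary features, linearly separable
# through the origin by the hidden weight vector true_w.
X = rng.integers(0, 2, size=(200, 20)).astype(float)
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)  # labels in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train(X, y, drop_k=0, epochs=300, lr=0.1):
    """Logistic regression by gradient descent. If drop_k > 0, the
    adversary's deletion of up to drop_k features is simulated by
    zeroing drop_k random columns of each example at every step."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        Xd = X.copy()
        if drop_k > 0:
            for i in range(n):
                drop = rng.choice(d, size=drop_k, replace=False)
                Xd[i, drop] = 0.0
        p = sigmoid(Xd @ w)
        w -= lr * Xd.T @ (p - y) / n
    return w

def accuracy(w, X, y, delete_top=0):
    """Evaluate under a simple evasion: zero each example's
    delete_top active features with the largest weights."""
    Xe = X.copy()
    if delete_top > 0:
        for i in range(len(X)):
            active = np.where(Xe[i] > 0)[0]
            worst = active[np.argsort(-w[active])][:delete_top]
            Xe[i, worst] = 0.0
    return float(np.mean((sigmoid(Xe @ w) > 0.5) == y))

w_plain = train(X, y, drop_k=0)
w_robust = train(X, y, drop_k=3)
print("plain model, no attack:         ", accuracy(w_plain, X, y))
print("plain model, 3 features deleted:", accuracy(w_plain, X, y, delete_top=3))
print("robust model, 3 features deleted:", accuracy(w_robust, X, y, delete_top=3))
```

The dropout-trained model spreads weight across redundant features, so deleting a few high-weight features typically costs it less accuracy than the plain model; the papers in this issue develop this intuition rigorously and with worst-case guarantees.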
Keywords: Adversarial learning · Adversary · Spam · Intrusion detection · Web spam · Robust classifier · Feature deletion · Arms race · Game theory
- Abernethy, J., Chapelle, O., & Castillo, C. (2010). Graph regularization methods for Web spam detection. Machine Learning, 81(2). doi: 10.1007/s10994-010-5171-1.
- Barreno, M., Nelson, B., Joseph, A., & Tygar, D. (2010). The security of machine learning. Machine Learning, 81(2). doi: 10.1007/s10994-010-5188-5.
- Brückner, M., & Scheffer, T. (2009). Nash equilibria of static prediction games. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, & A. Culotta (Eds.), Advances in neural information processing systems (Vol. 22, pp. 171–179). Red Hook: Curran Associates, Inc.
- Brumley, D., Newsome, J., Song, D., Wang, H., & Jha, S. (2006). Towards automatic generation of vulnerability-based signatures. In IEEE symposium on security and privacy (pp. 2–16).
- Cretu, G., Stavrou, A., Locasto, M., Stolfo, S., & Keromytis, A. (2008). Casting out demons: sanitizing training data for anomaly sensors. In IEEE symposium on security and privacy (pp. 81–95).
- Dalvi, N., Domingos, P., Mausam, Sanghai, S., & Verma, D. (2004). Adversarial classification. In Knowledge discovery in databases (pp. 99–108). New York: ACM Press.
- Dekel, O., Shamir, O., & Xiao, L. (2010). Learning to classify with missing and corrupted features. Machine Learning, 81(2). doi: 10.1007/s10994-009-5124-8.
- Fogla, P., & Lee, W. (2006). Evading network anomaly detection systems: formal reasoning and practical techniques. In ACM conference on computer and communications security (CCS) (pp. 59–68).
- Fogla, P., Sharif, M., Perdisci, R., Kolesnikov, O., & Lee, W. (2006). Polymorphic blending attacks. In USENIX security symposium (pp. 241–256).
- Globerson, A., & Roweis, S. (2006). Nightmare at test time: robust learning by feature deletion. In International conference on machine learning (ICML) (pp. 353–360).
- Graham-Cumming, J. (2004). How to beat an adaptive spam filter. In MIT spam conference.
- Kloft, M., & Laskov, P. (2010). Online anomaly detection under adversarial impact. In 13th international conference on artificial intelligence and statistics (AISTATS) (pp. 405–412).
- Li, Z., Sanghi, M., Chen, Y., Kao, M.-Y., & Chavez, B. (2006). Hamsa: fast signature generation for zero-day polymorphic worms with provable attack resilience. In IEEE symposium on security and privacy (pp. 32–47).
- Lippmann, R., & Laskov, P. (2007). NIPS 2007 workshop on machine learning in adversarial environments for computer security. http://mls-nips07.first.fraunhofer.de/.
- Lowd, D., & Meek, C. (2005a). Adversarial learning. In Conference on email and anti-spam.
- Lowd, D., & Meek, C. (2005b). Good word attacks on statistical spam filters. In ACM SIGKDD international conference on knowledge discovery and data mining (pp. 641–647).
- Nelson, B., Rubinstein, B., Huang, L., Joseph, A., Lau, S., Lee, S., Rao, S., Tran, A., & Tygar, D. (2010). Near-optimal evasion of convex-inducing classifiers. In 13th international conference on artificial intelligence and statistics (AISTATS) (pp. 549–556).
- Newsome, J., Karp, B., & Song, D. (2005). Polygraph: automatically generating signatures for polymorphic worms. In IEEE symposium on security and privacy (pp. 120–132).
- Perdisci, R., Dagon, D., Lee, W., Fogla, P., & Sharif, M. (2006). Misleading worm signature generators using deliberate noise injection. In IEEE symposium on security and privacy (pp. 17–31).
- Song, Y., Locasto, M. E., Stavrou, A., Keromytis, A. D., & Stolfo, S. J. (2010). On the infeasibility of modeling polymorphic shellcode. Machine Learning, 81(2). doi: 10.1007/s10994-009-5143-5.
- Teo, C. H., Globerson, A., Roweis, S., & Smola, A. (2008). Convex learning with invariances. In J. Platt, D. Koller, Y. Singer, & S. Roweis (Eds.), Advances in neural information processing systems (Vol. 20, pp. 1489–1496). Cambridge: MIT Press.
- Wang, K., Parekh, J. J., & Stolfo, S. J. (2006). ANAGRAM: a content anomaly detector resistant to mimicry attack. In Recent advances in intrusion detection (RAID) (pp. 226–248).
- Wang, K., & Stolfo, S. (2004). Anomalous payload-based network intrusion detection. In Recent advances in intrusion detection (RAID) (pp. 203–222).
- Wittel, G., & Wu, S. (2004). On attacking statistical spam filters. In Conference on email and anti-spam.