Adversarial Pattern Classification Using Multiple Classifiers and Randomisation
In many security applications a pattern recognition system faces an adversarial classification problem, in which an intelligent, adaptive adversary modifies patterns to evade the classifier. Several strategies have recently been proposed to make a classifier harder to evade, but they are supported only by qualitative and intuitive arguments. In this work, we consider a strategy that consists of hiding information about the classifier from the adversary by introducing some randomness into the decision function. We focus on an implementation of this strategy in a multiple classifier system, a classification architecture widely used in security applications. We provide formal support for this strategy, based on an analytical framework for adversarial classification problems recently proposed by other authors, and give an experimental evaluation on a spam filtering task to illustrate our findings.
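To make the idea concrete, the following is a minimal sketch (not the authors' actual method) of one way randomness could be introduced into the decision function of a multiple classifier system: at each query, a random subset of the base classifiers is sampled to vote, so an adversary probing the system never observes a fixed decision boundary. The class `RandomisedEnsemble`, its parameters, and the toy spam features are all hypothetical illustrations.

```python
import random

class RandomisedEnsemble:
    """Hypothetical sketch of a randomised multiple classifier system:
    each query is scored by a randomly sampled subset of the base
    classifiers, so repeated probes by an adversary see a
    non-deterministic decision function."""

    def __init__(self, classifiers, subset_size, threshold=0.5, seed=None):
        self.classifiers = classifiers
        self.subset_size = subset_size
        self.threshold = threshold
        self.rng = random.Random(seed)

    def predict(self, x):
        # Randomly choose which base classifiers vote on this query.
        chosen = self.rng.sample(self.classifiers, self.subset_size)
        score = sum(clf(x) for clf in chosen) / self.subset_size
        return 1 if score >= self.threshold else 0  # 1 = spam, 0 = legitimate

# Toy base classifiers: each maps a feature dict to a score in [0, 1].
base = [
    lambda x: 1.0 if x["num_links"] > 3 else 0.0,
    lambda x: 1.0 if "free" in x["words"] else 0.0,
    lambda x: 1.0 if x["caps_ratio"] > 0.5 else 0.0,
]

ens = RandomisedEnsemble(base, subset_size=2, seed=42)
spam = {"num_links": 5, "words": {"free", "offer"}, "caps_ratio": 0.8}
ham = {"num_links": 0, "words": {"meeting"}, "caps_ratio": 0.1}
print(ens.predict(spam))  # every base classifier flags this, so 1
print(ens.predict(ham))   # no base classifier flags this, so 0
```

In this sketch the randomness hides which classifiers are active for any given query; an evading adversary must therefore account for the whole ensemble rather than reverse-engineering a single fixed boundary.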