Adversarial Pattern Classification Using Multiple Classifiers and Randomisation

  • Battista Biggio
  • Giorgio Fumera
  • Fabio Roli
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5342)

Abstract

In many security applications a pattern recognition system faces an adversarial classification problem, in which an intelligent, adaptive adversary modifies patterns to evade the classifier. Several strategies have recently been proposed to make a classifier harder to evade, but they rest only on qualitative and intuitive arguments. In this work, we consider a strategy that hides information about the classifier from the adversary by introducing some randomness into the decision function. We focus on an implementation of this strategy in a multiple classifier system, a classification architecture widely used in security applications. We provide formal support for this strategy, based on an analytical framework for adversarial classification problems recently proposed by other authors, and present an experimental evaluation on a spam filtering task to illustrate our findings.
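A minimal sketch of the kind of randomised multiple-classifier decision the abstract describes, assuming a random-subspace ensemble whose voters are subsampled anew at each query; the class and parameter names (RandomisedEnsemble, n_members, vote_frac, and so on) are illustrative assumptions, not the authors' exact construction.

```python
# Sketch (assumption, not the paper's exact scheme): a multiple classifier system
# whose decision function is randomised at prediction time, so an adversary
# probing the classifier cannot reliably reverse-engineer a single fixed boundary.
import numpy as np

class RandomisedEnsemble:
    def __init__(self, n_members=15, subspace_frac=0.5, vote_frac=0.6, seed=0):
        self.n_members = n_members          # number of base classifiers
        self.subspace_frac = subspace_frac  # fraction of features per member (random subspace)
        self.vote_frac = vote_frac          # fraction of members sampled for each decision
        self.rng = np.random.default_rng(seed)
        self.members = []                   # list of (feature_indices, weights, bias)

    def fit(self, X, y):
        """Train simple centroid-difference linear classifiers on random feature subsets."""
        n_feat = X.shape[1]
        k = max(1, int(self.subspace_frac * n_feat))
        for _ in range(self.n_members):
            idx = self.rng.choice(n_feat, size=k, replace=False)
            mu_pos = X[y == 1][:, idx].mean(axis=0)
            mu_neg = X[y == 0][:, idx].mean(axis=0)
            w = mu_pos - mu_neg                   # direction separating the class means
            b = -0.5 * (mu_pos + mu_neg) @ w      # threshold halfway between the centroids
            self.members.append((idx, w, b))
        return self

    def predict(self, x):
        """Randomised decision: only a random subset of members votes on each query."""
        m = max(1, int(self.vote_frac * self.n_members))
        chosen = self.rng.choice(self.n_members, size=m, replace=False)
        votes = [(x[idx] @ w + b > 0) for idx, w, b in (self.members[i] for i in chosen)]
        return int(np.mean(votes) > 0.5)

# Toy usage on synthetic "spam vs. ham" features: repeated queries with the same
# pattern exercise different member subsets, hiding the exact decision function.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 20)), rng.normal(1, 1, (200, 20))])
y = np.array([0] * 200 + [1] * 200)
clf = RandomisedEnsemble().fit(X, y)
print([clf.predict(X[0]) for _ in range(5)])  # decisions from differently sampled voter subsets
```

The design choice being illustrated is only the randomisation itself: keeping the trained ensemble fixed but sampling which members vote on each query, so the effective decision function seen by a probing adversary varies from one query to the next.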


Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Battista Biggio ¹
  • Giorgio Fumera ¹
  • Fabio Roli ¹
  1. Dept. of Electrical and Electronic Eng., University of Cagliari, Cagliari, Italy