Bagging Classifiers for Fighting Poisoning Attacks in Adversarial Classification Tasks

  • Battista Biggio
  • Igino Corona
  • Giorgio Fumera
  • Giorgio Giacinto
  • Fabio Roli
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6713)


Pattern recognition systems have been widely used in adversarial classification tasks like spam filtering and intrusion detection in computer networks. In these applications a malicious adversary may successfully mislead a classifier by “poisoning” its training data with carefully designed attacks. Bagging is a well-known ensemble construction method, where each classifier in the ensemble is trained on a different bootstrap replicate of the training set. Recent work has shown that bagging can reduce the influence of outliers in training data, especially if the most outlying observations are resampled with a lower probability. In this work we argue that poisoning attacks can be viewed as a particular category of outliers, and, thus, bagging ensembles may be effectively exploited against them. We experimentally assess the effectiveness of bagging on a real, widely used spam filter, and on a web-based intrusion detection system. Our preliminary results suggest that bagging ensembles can be a very promising defence strategy against poisoning attacks, and give us valuable insights for future research work.
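The defence the abstract describes — training each ensemble member on a bootstrap replicate, with the most outlying training points resampled with lower probability — can be illustrated with a minimal numpy sketch. The toy 1-D task, the nearest-centroid base learner, and the `exp(-d)` weighting below are all illustrative assumptions, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D task: class 0 centred at -1, class 1 at +1, plus five
# "poisoning" points injected at x = 4 with the (wrong) label 0.
X = np.concatenate([rng.normal(-1.0, 0.3, 50),
                    rng.normal(+1.0, 0.3, 50),
                    np.full(5, 4.0)])
y = np.concatenate([np.zeros(50), np.ones(50), np.zeros(5)]).astype(int)

def nearest_centroid(X_tr, y_tr):
    """Fit a nearest-centroid rule and return its predict function."""
    c0 = X_tr[y_tr == 0].mean()
    c1 = X_tr[y_tr == 1].mean()
    return lambda x: (np.abs(x - c1) < np.abs(x - c0)).astype(int)

def bagged_predict(X, y, x_test, n_estimators=25):
    """Weighted bagging: outlying points get a lower resampling weight."""
    # Outlyingness = distance from the median of the point's own class;
    # exp(-d) strongly down-weights the injected points at x = 4.
    med = np.where(y == 0, np.median(X[y == 0]), np.median(X[y == 1]))
    w = np.exp(-np.abs(X - med))
    p = w / w.sum()
    votes = np.zeros((n_estimators, len(x_test)))
    for i in range(n_estimators):
        idx = rng.choice(len(X), size=len(X), p=p)  # weighted bootstrap
        votes[i] = nearest_centroid(X[idx], y[idx])(x_test)
    return (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote

x_test = np.array([-1.0, 0.5, 1.0])
preds = bagged_predict(X, y, x_test)
```

Because the poisoned points lie far from their class median, their resampling probability is tiny, so most bootstrap replicates exclude them and the majority vote recovers a decision boundary close to the clean one.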


Keywords: Training Data · Intrusion Detection · Intrusion Detection System · Ensemble Size · Kernel Density Estimator





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Battista Biggio, Igino Corona, Giorgio Fumera, Giorgio Giacinto, Fabio Roli
  1. Dept. of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy
