
Evolving Systems, Volume 1, Issue 3, pp 181–197

Adversarial learning: the impact of statistical sample selection techniques on neural ensembles

  • Shir Li Wang
  • Kamran Shafi
  • Chris Lokan
  • Hussein A. Abbass
Original Paper

Abstract

Adversarial learning is a recently introduced term that refers to machine learning in the presence of an adversary whose main goal is to degrade the performance of the learning machine. The key problem in adversarial learning is to determine when and how an adversary will launch its attacks. It is important to equip a deployed machine learning system with an appropriate defence strategy so that it can still perform adequately in an adversarial environment. In this paper we investigate artificial neural networks as the learning algorithm for such an environment, owing to their ability to learn complex, nonlinear functions even with little prior knowledge of the underlying true function. Two types of adversarial attack are investigated: targeted attacks, which are aimed at a specific group of instances, and random attacks, which are aimed at arbitrary instances. We hypothesise that a neural ensemble performs better than a single neural network under adversarial learning. We test this hypothesis using simulated adversarial attacks on artificial, UCI and spam data sets. The results demonstrate that an ensemble of neural networks trained on attacked data is more robust against both types of attack than a single network. While many papers have demonstrated that an ensemble of neural networks is more robust to noise than a single network, the significance of the current work lies in the fact that targeted attacks are not white noise.
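The experimental setup described above can be sketched in code. This is a minimal illustration only, assuming binary label-flipping attacks and a bagged majority-vote ensemble; the function names, the bootstrap resampling scheme, and the `flip_rate` parameter are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def random_attack(y, flip_rate, rng):
    """Random attack: flip the labels of an arbitrary subset of instances."""
    y = y.copy()
    n_flip = int(flip_rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y[idx] = 1 - y[idx]  # binary labels in {0, 1}
    return y

def targeted_attack(y, target_mask):
    """Targeted attack: flip labels only within a specific group of instances."""
    y = y.copy()
    y[target_mask] = 1 - y[target_mask]
    return y

def train_ensemble(X, y, n_members, train_fn, rng):
    """Train each ensemble member on a bootstrap resample of the (attacked) data."""
    members = []
    for _ in range(n_members):
        idx = rng.choice(len(y), size=len(y), replace=True)
        members.append(train_fn(X[idx], y[idx]))
    return members

def ensemble_predict(members, X):
    """Combine the members' 0/1 predictions by majority vote."""
    votes = np.stack([m(X) for m in members])
    return (votes.mean(axis=0) > 0.5).astype(int)
```

Here `train_fn` stands in for whatever base learner is used (in the paper, a neural network); any callable that maps training data to a 0/1 predictor fits this sketch.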

Keywords

Adversarial learning · Ensemble · Sample selection · Representativeness


Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  • Shir Li Wang (1, Email author)
  • Kamran Shafi (2)
  • Chris Lokan (1)
  • Hussein A. Abbass (2)
  1. School of Engineering and Information Technology, University of New South Wales, University College, Canberra, Australia
  2. Defence and Security Applications Research Centre (DSARC), University of New South Wales, University College, Canberra, Australia
