Evolutionary Intelligence, Volume 4, Issue 4, pp 243–266

Evolutionary computation as an artificial attacker: generating evasion attacks for detector vulnerability testing

  • Hilmi Güneş Kayacık
  • A. Nur Zincir-Heywood
  • Malcolm I. Heywood
Research Paper


Intrusion detection systems protect our infrastructures by monitoring for signs of intrusions. However, intrusion detection systems are themselves susceptible to vulnerabilities, which attackers exploit to evade detection. In particular, we focus on evasion attacks, in which the attacker aims to generate a stealthy attack that eliminates or minimizes the likelihood of detection. Attackers achieve stealth by mimicking normal behaviour while pursuing the attack goals, hence bypassing the detector. Previous work focused on generating evasion attacks using internal knowledge of the detectors, hence assuming ‘white-box’ access to the detector. In contrast, we adopt a ‘black-box’ approach and propose an evolutionary attacker based on Genetic Programming, whose access is limited to the feedback of the detector, such as anomaly rates and delays. We compare our ‘black-box’ approach with various ‘white-box’ approaches to investigate its effectiveness. In doing so, we also discuss the impact of anomalies from the break-in stage of the attacks and of delays based on locality frame counts. This is particularly important if performance comparisons are to reflect the real capabilities of detectors.
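The black-box loop described above, evolving attack variants using only the detector's feedback, can be sketched as a toy example. Everything here (the pair-based stand-in detector, the system-call names, the fitness weighting) is an illustrative assumption, not the paper's actual GP setup, which evolves attack programs against real anomaly detectors.

```python
import random

# Toy "detector" standing in for the black-box feedback channel: it
# returns only an anomaly rate, never its internal model. It flags any
# call pair not seen in "normal" traces (a crude stand-in for a
# sequence-based anomaly detector).
NORMAL = ["open", "read", "write", "close", "mmap", "stat"]
NORMAL_PAIRS = {(a, b) for a in NORMAL for b in NORMAL}
ATTACK_CORE = ["open", "write", "exec"]  # calls the attack must retain

def anomaly_rate(seq):
    pairs = list(zip(seq, seq[1:]))
    flagged = sum(1 for p in pairs if p not in NORMAL_PAIRS)
    return flagged / max(len(pairs), 1)

def fitness(seq):
    # Lower is better: minimize detector feedback while keeping the
    # attack's core calls as an in-order subsequence.
    it = iter(seq)
    goal_met = all(c in it for c in ATTACK_CORE)
    return anomaly_rate(seq) + (0.0 if goal_met else 1.0)

def mutate(seq):
    # Insert, delete, or replace one call at random.
    seq = list(seq)
    i = random.randrange(len(seq))
    op = random.choice(["insert", "delete", "swap"])
    if op == "insert":
        seq.insert(i, random.choice(NORMAL + ["exec"]))
    elif op == "delete" and len(seq) > 3:
        del seq[i]
    else:
        seq[i] = random.choice(NORMAL + ["exec"])
    return seq

def evolve(pop_size=30, generations=200, seed=0):
    random.seed(seed)
    pop = [mutate(ATTACK_CORE) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)          # rank by detector feedback only
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=fitness)
```

Under this toy fitness, evolution pads the attack with normal calls so the flagged "exec" pair is diluted, which mirrors the mimicry idea: the attack goal is preserved while the observable anomaly rate drops.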


Computer security · Intrusion detection · Anomaly detection · Evasion attacks · Evolutionary computation · Artificial arms race



The authors gratefully acknowledge the support of SwissCom Innovations SA., Telecom and Research Alliance (TARA) Inc., Killam, Precarn, MITACS and Natural Sciences and Engineering Research Council (NSERC) funding programs.



Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • Hilmi Güneş Kayacık (1)
  • A. Nur Zincir-Heywood (2)
  • Malcolm I. Heywood (2)

  1. School of Computer Science, Carleton University, Ottawa, Canada
  2. Faculty of Computer Science, Dalhousie University, Halifax, Canada
