
Investigating the practicality of adversarial evasion attacks on network intrusion detection

Published in: Annals of Telecommunications

Abstract

As machine learning models are increasingly integrated into critical cybersecurity tools, their security becomes a priority. This concern has grown with the rise of adversarial examples: inputs to which a small, carefully computed perturbation is added in order to influence the model's prediction. Applied to cybersecurity tools such as network intrusion detection systems, adversarial examples could allow attackers to evade detection mechanisms that rely on machine learning. However, if the perturbation does not respect the constraints of network traffic, the resulting adversarial examples may be inconsistent, rendering the attack invalid. These inconsistencies are a major obstacle to implementing end-to-end network attacks. In this article, we study the practicality of adversarial attacks for the purpose of evading network intrusion detection models. We evaluate the impact of state-of-the-art attacks on three different datasets. Through a fine-grained analysis of the generated adversarial examples, we introduce and discuss four key criteria that network traffic must satisfy to be valid: value ranges, binary values, multiple category membership, and semantic relations.
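The four validity criteria can be illustrated with a minimal sketch of a post-hoc check on a perturbed traffic record. The feature layout, names, and bounds below are hypothetical illustrations, not taken from the paper:

```python
# Sketch of a validity check for an adversarial network-traffic feature
# vector, one test per criterion. Hypothetical layout:
# x[0] = duration, x[1] = "land" flag, x[2:5] = one-hot protocol
# (tcp, udp, icmp), x[5] = source bytes, x[6] = total bytes.

DURATION_MAX = 58329.0  # assumed upper bound observed in training data

def is_valid_traffic(x):
    # 1. Value ranges: numeric features must stay within legitimate bounds.
    if not 0.0 <= x[0] <= DURATION_MAX:
        return False
    # 2. Binary values: flag features must be exactly 0 or 1.
    if x[1] not in (0.0, 1.0):
        return False
    # 3. Multiple category membership: a one-hot group must keep binary
    # entries and activate exactly one category.
    protocol = x[2:5]
    if any(v not in (0.0, 1.0) for v in protocol) or sum(protocol) != 1.0:
        return False
    # 4. Semantic relations: cross-feature consistency, e.g. total bytes
    # can never be smaller than source bytes.
    if x[6] < x[5]:
        return False
    return True
```

A gradient-based attack that perturbs all features freely typically breaks at least one of these tests, e.g. `is_valid_traffic([1.2, 0.3, 1.0, 0.0, 0.0, 100.0, 150.0])` fails criterion 2 because the flag feature was pushed to 0.3.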


Notes

  1. https://github.com/mamerzouk/adversarial_analysis

  2. https://github.com/mamerzouk/adversarial_analysis


Author information

Correspondence to Mohamed Amine Merzouk.

Appendix: Detailed value tables

Table 1 Detection rate and distance metrics

Table 2 Proportion of adversarial examples breaking each invalidation criterion


Cite this article

Merzouk, M.A., Cuppens, F., Boulahia-Cuppens, N. et al. Investigating the practicality of adversarial evasion attacks on network intrusion detection. Ann. Telecommun. 77, 763–775 (2022). https://doi.org/10.1007/s12243-022-00910-1
