
Amplification methods to promote the attacks against machine learning-based intrusion detection systems

Published in Applied Intelligence

Abstract

The security of machine learning is attracting increasing attention in both academia and industry because of its vulnerability to adversarial examples. However, research on adversarial examples in intrusion detection is still in its infancy. In this paper, two novel adversarial attack amplification methods, built on a unified framework, are proposed to boost the attack performance of classic white-box attack methods. The proposed methods shield the underlying implementation details of the target attack methods and can strengthen different target attack methods through a unified interface. They extract the original adversarial perturbations from the adversarial examples produced by the target attack methods and amplify these perturbations to generate amplified adversarial examples. Preliminary experimental results show that the proposed methods effectively improve the attack performance of classic white-box attack methods. In addition, the amplified adversarial examples crafted by the proposed methods exhibit excellent transferability across different machine learning classifiers, so the application of the proposed methods is not limited to the white-box setting. Consequently, the proposed methods can be used to better assess the robustness of machine learning-based intrusion detection systems against adversarial examples in various contexts.
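
The abstract outlines the core pipeline: run an existing white-box attack behind a unified interface, subtract the original sample from the resulting adversarial example to recover the perturbation, then scale that perturbation and re-apply it (clipped to the valid feature range) to obtain the amplified adversarial example. The sketch below is only an illustration of that idea under stated assumptions; the wrapper class, the amplification factor, and the use of FGSM as the target attack are illustrative choices, not the paper's actual implementation.

```python
# Hedged sketch of the perturbation-amplification idea described in the abstract.
# The names, the amplification factor, and FGSM as the target attack are assumptions.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=0.05):
    """Classic FGSM, used here only as an example target attack method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


def amplify(x, x_adv, factor=2.0, lower=0.0, upper=1.0):
    """Extract the original perturbation and re-apply a scaled copy of it.

    The clipping bounds assume features normalised to [0, 1]; adjust them to the
    valid range of the intrusion-detection feature space being attacked.
    """
    perturbation = x_adv - x                  # original adversarial perturbation
    x_amplified = x + factor * perturbation   # amplified adversarial example
    return torch.clamp(x_amplified, lower, upper)


class AmplifiedAttack:
    """Unified interface: wraps any target attack without exposing its internals."""

    def __init__(self, target_attack, factor=2.0):
        self.target_attack = target_attack
        self.factor = factor

    def __call__(self, model, x, y, **attack_kwargs):
        x_adv = self.target_attack(model, x, y, **attack_kwargs)
        return amplify(x, x_adv, self.factor)


# Usage (model, x, y would be a trained classifier and a labelled batch of flow features):
# attack = AmplifiedAttack(fgsm_attack, factor=2.0)
# x_amplified = attack(model, x, y, epsilon=0.05)
```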


Availability of data and materials

The data used to support the findings of this study is available from the corresponding author upon request.

Code Availability

The code used to support the findings of this study is available from the corresponding author upon request.


Acknowledgements

This work was supported in part by the Central Government Guides Local Science and Technology Development Special Funds of China under Grant No. [2018]4008, in part by the Science and Technology Planned Project of Guizhou Province, China, under Grant No. [2020]2Y013, and in part by the Science and Technology Planned Project of Guizhou Province, China, under Grant No. [2023]YB449. We would like to give our most sincere thanks to Prof. Xiaoyao Xie, Prof. Yang Xu, and Xinyu Zhang for their generous assistance and valuable suggestions.

Funding

This work was supported in part by the Central Government Guides Local Science and Technology Development Special Funds of China under Grant No. [2018]4008, in part by the Science and Technology Planned Project of Guizhou Province, China, under Grant No. [2020]2Y013, and in part by the Science and Technology Planned Project of Guizhou Province, China, under Grant No. [2023]YB449.

Author information


Corresponding author

Correspondence to Yang Xu.

Ethics declarations

Competing interests

All the authors declare that they have no competing financial interests or personal relationships that could influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, S., Xu, Y., Zhang, X. et al. Amplification methods to promote the attacks against machine learning-based intrusion detection systems. Appl Intell 54, 2941–2961 (2024). https://doi.org/10.1007/s10489-024-05311-6

