
A Pruning Method Combined with Resilient Training to Improve the Adversarial Robustness of Automatic Modulation Classification Models

  • Research
  • Published:
Mobile Networks and Applications

Abstract

In the rapidly evolving landscape of wireless communication systems, the vulnerability of automatic modulation classification (AMC) models to adversarial attacks presents a significant security challenge. This study introduces a pruning and training methodology tailored to the nuances of signal processing within these systems. Leveraging a pruning method based on channel activation contributions, our approach improves the model’s potential for adversarial training, enhancing its capacity to gain robustness against attacks. The approach additionally constructs a resilient training method based on a composite strategy that integrates balanced adversarial training, soft target regularization, and gradient masking. This combination broadens the model’s uncertainty space and obfuscates gradients, thereby strengthening its defenses against a wide spectrum of adversarial tactics. The training regimen is carefully adjusted to retain sensitivity to adversarial inputs while maintaining accuracy on clean data. Comprehensive evaluations on the RML2016.10A dataset demonstrate the effectiveness of our method in defending against both gradient-based and optimization-based attacks in wireless communication settings. This research offers practical approaches to improving the security and performance of AMC models against the complex and evolving threats present in modern wireless communication environments.
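
To make the pipeline described in the abstract concrete, the sketch below illustrates its two ingredients in PyTorch: ranking convolutional channels by their mean activation contribution and zeroing the weakest ones, then fine-tuning with a balanced mix of clean and adversarial examples under soft (label-smoothed) targets. This is a minimal illustration under assumed simplifications, not the authors' implementation: the single-step FGSM perturbation, the mask-style weight zeroing, and the use of PyTorch's `label_smoothing` option as the soft-target regularizer are stand-ins for the paper's exact formulations, and the gradient-masking component is omitted.

```python
# Illustrative sketch only (not the authors' released code): structured channel
# pruning driven by average activation contribution, followed by a balanced
# clean/adversarial fine-tuning step with soft (label-smoothed) targets.
import torch
import torch.nn.functional as F


def channel_activation_scores(model, layer, loader, device="cpu"):
    """Mean absolute activation per output channel of `layer` over a data loader."""
    per_batch = []

    def hook(_module, _inputs, out):
        # out has shape (batch, channels, ...); average |activation| over every
        # dimension except the channel dimension to get one score per channel.
        dims = [d for d in range(out.dim()) if d != 1]
        per_batch.append(out.abs().mean(dim=dims))

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for x, _ in loader:
            model(x.to(device))
    handle.remove()
    return torch.stack(per_batch).mean(dim=0)


def prune_channels(layer, scores, prune_ratio=0.3):
    """Zero the filters of the lowest-scoring output channels (mask-style pruning)."""
    k = int(prune_ratio * scores.numel())
    idx = scores.argsort()[:k]
    with torch.no_grad():
        layer.weight[idx] = 0.0
        if layer.bias is not None:
            layer.bias[idx] = 0.0


def fgsm(model, x, y, eps=0.01):
    """Single-step gradient-sign perturbation, used here as a simple attack stand-in."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()


def resilient_step(model, opt, x, y, eps=0.01, clean_weight=0.5, smoothing=0.1):
    """One balanced step: weighted clean + adversarial losses, both label-smoothed."""
    x_adv = fgsm(model, x, y, eps)   # craft perturbed inputs first
    model.train()
    opt.zero_grad()                  # clear gradients accumulated inside fgsm()
    # label_smoothing requires PyTorch >= 1.10
    loss = (clean_weight * F.cross_entropy(model(x), y, label_smoothing=smoothing)
            + (1.0 - clean_weight) * F.cross_entropy(model(x_adv), y,
                                                     label_smoothing=smoothing))
    loss.backward()
    opt.step()
    return loss.item()
```

A typical loop would score and prune each convolutional layer of a trained AMC model, then iterate `resilient_step` over the training set while tracking accuracy on both clean and perturbed I/Q samples; the perturbation budget, pruning ratio, and clean/adversarial weighting are the knobs the paper's balanced strategy tunes, and their exact values are not reproduced here.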

Data Availability

No datasets were generated or analysed during the current study.

Author information

Contributions

C. H. wrote the main manuscript text; L. W. prepared the experimental data; D. L. performed the data visualization; W. C. provided guidance on ideas; and B. Y. polished the text. All authors reviewed the manuscript.

Corresponding author

Correspondence to Bin Yan.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Han, C., Wang, L., Li, D. et al. A Pruning Method Combined with Resilient Training to Improve the Adversarial Robustness of Automatic Modulation Classification Models. Mobile Netw Appl (2024). https://doi.org/10.1007/s11036-024-02333-9

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s11036-024-02333-9
