Vulnerable point detection and repair against adversarial attacks for convolutional neural networks

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

Recently, convolutional neural networks have been shown to be sensitive to artificially designed perturbations that are imperceptible to the naked eye. Whether the task is image classification, semantic segmentation, or object detection, all of these models face this problem. The existence of adversarial examples raises questions about the security of smart applications. Several works have addressed this problem and proposed defensive strategies to resist adversarial attacks. However, none of them has explored which areas of a model are vulnerable under multiple attacks. In this work, we fill this gap by exploring the areas of the model that are vulnerable to adversarial attacks. Specifically, under various attack methods of different strengths, we conduct extensive experiments on two datasets with three different networks and report several notable phenomena. In addition, by exploiting a Siamese Network, we propose a novel approach that more intuitively reveals the deficiencies of the model. Moreover, we provide a novel adaptive vulnerable point repair method to improve the adversarial robustness of the model. Extensive experimental results show that the proposed method effectively improves the adversarial robustness of the model.
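The abstract refers to imperceptible adversarial perturbations and to a Siamese-Network-based comparison of model behaviour. The sketch below is purely illustrative and is not the authors' method: it crafts a standard FGSM adversarial example for a generic PyTorch classifier and measures how far a shared encoder's embeddings of the clean and perturbed inputs drift apart. The names model, encoder, and the 8/255 perturbation budget are assumptions introduced only for this illustration.

# Illustrative only: a standard FGSM attack plus a Siamese-style embedding
# comparison for a generic PyTorch model. `model` and `encoder` are assumed
# user-supplied modules; nothing here reproduces the paper's exact method.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft an L-infinity-bounded adversarial example with FGSM."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step, then clip back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()

def embedding_drift(encoder, x_clean, x_adv):
    """Distance between clean and adversarial embeddings from a shared encoder."""
    with torch.no_grad():
        return F.pairwise_distance(encoder(x_clean), encoder(x_adv))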


Data availability

The MNIST and CIFAR10 datasets used in this article can be obtained from the following links: MNIST (http://yann.lecun.com/exdb/mnist/) and CIFAR10 (http://www.cs.toronto.edu/~kriz/cifar.html).
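For convenience, the snippet below shows one common way to obtain these datasets programmatically; using the torchvision mirrors is an assumption on our part, since the article only lists the original download pages.

# Assumption: a PyTorch/torchvision environment; the article only gives the raw URLs.
from torchvision import datasets, transforms

# Download the training splits of MNIST and CIFAR10 into ./data.
to_tensor = transforms.ToTensor()
mnist_train = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
cifar_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)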


Funding

This work is partly supported by the International Science and Technology Cooperation Research Project of Shenzhen (No. GJHZ20200731095204013), the Key Research and Development Program of Shaanxi (Program Nos. 2021ZDLGY15-01, 2021ZDLGY09-04, and 2021GY-004), and the Natural Science Foundation of Chongqing.

Author information


Corresponding author

Correspondence to Xiaoyi Feng.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gao, J., Xia, Z., Dai, J. et al. Vulnerable point detection and repair against adversarial attacks for convolutional neural networks. Int. J. Mach. Learn. & Cyber. 14, 4163–4192 (2023). https://doi.org/10.1007/s13042-023-01888-5


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s13042-023-01888-5
