Abstract
Convolutional neural networks have been widely applied to image recognition and have achieved well-known results. However, the discovery of adversarial examples in recent years has drawn many researchers' attention to the safety and reliability of convolutional neural networks. Adversarial examples are clean images with small perturbations added, which can cause a trained model to make wrong predictions even though people can still recognize the images correctly. In this paper, we propose a new convolutional neural network to defend against adversarial examples. After an image is input, the network splits into two branches. One branch works like a normal image recognition model, while the other first distills the low-frequency information of the image and then trains on the low-frequency version. The outputs of the two branches are combined to produce the model's prediction. The proposed model exploits an image's low-frequency information, which is what people usually rely on to recognize an object. Experiments show that, compared with several other methods, the proposed model suppresses the attack success rate on adversarial examples and improves the recognition success rate.
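The abstract does not specify how the low-frequency branch is implemented or how the two branches are fused, so the following is only a minimal PyTorch sketch of the idea: it assumes an FFT-based low-pass filter for the low-frequency branch and a simple average of the two branches' logits as the fusion rule. The names `low_pass_filter` and `TwoBranchNet` and the `radius` parameter are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


def low_pass_filter(images: torch.Tensor, radius: int = 8) -> torch.Tensor:
    """Keep only the low-frequency content of a batch of images (N, C, H, W).

    Zeroes out FFT coefficients farther than `radius` from the centre of the
    shifted spectrum, then transforms back to the spatial domain.
    """
    _, _, h, w = images.shape
    freq = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    yy, xx = torch.meshgrid(
        torch.arange(h) - h // 2, torch.arange(w) - w // 2, indexing="ij"
    )
    mask = (yy ** 2 + xx ** 2) <= radius ** 2
    mask = mask.to(device=images.device, dtype=images.dtype)
    freq = freq * mask  # discard high-frequency coefficients
    return torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1))).real


class TwoBranchNet(nn.Module):
    """Two-branch classifier: one branch sees the raw image, the other sees
    only its low-frequency version; their logits are averaged (an assumed
    fusion rule; the paper's actual combination may differ)."""

    def __init__(self, backbone_a: nn.Module, backbone_b: nn.Module, radius: int = 8):
        super().__init__()
        self.backbone_a = backbone_a  # normal recognition branch
        self.backbone_b = backbone_b  # low-frequency branch
        self.radius = radius

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits_a = self.backbone_a(x)
        logits_b = self.backbone_b(low_pass_filter(x, self.radius))
        return (logits_a + logits_b) / 2
```

Here `backbone_a` and `backbone_b` can be any standard image classifiers with the same number of output classes (e.g., two ResNets); the full paper describes the architecture and fusion actually used.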
Acknowledgement
This work was financially supported by the National Natural Science Foundation (61502127, 61562023), Hainan Natural Science Foundation (620RC604), the Open Funds from Guilin University of Electronic Technology, Guangxi Key Laboratory of Image and Graphic Intelligent Processing (GIIP2012), Scientific Research Projects in Universities of Hainan Province (Hnky2017-17), and Key R&D Projects in Hainan Province (ZDYF2019010).
About this paper
Cite this paper
Song, Z., Deng, Z. (2021). An Adversarial Examples Defense Method Based on Image Low-Frequency Information. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds) Advances in Artificial Intelligence and Security. ICAIS 2021. Communications in Computer and Information Science, vol 1424. Springer, Cham. https://doi.org/10.1007/978-3-030-78621-2_16
DOI: https://doi.org/10.1007/978-3-030-78621-2_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-78620-5
Online ISBN: 978-3-030-78621-2