Abstract
This paper focuses on adversarial attacks against machine learning and deep learning models and on the security of such models. We apply several perturbation methods to images from ImageNet, record the rate at which examples are successfully attacked and misrecognized, and plot the relationship between attack intensity and recognition accuracy. We then investigate why certain examples are hard to attack: is their robustness determined by the models, or by the examples themselves? Finally, we analyze the images that are most resistant to attack and identify visual characteristics that account for their robustness.
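As a rough illustration of the measurement described in the abstract, the sketch below applies FGSM (one representative perturbation method) at increasing intensities to ImageNet images and records recognition accuracy at each intensity. The model choice (resnet50), the epsilon grid, and the data loader are assumptions for illustration only; the paper does not specify its exact setup here.

```python
# Minimal sketch: accuracy vs. attack intensity under FGSM.
# Assumptions (not from the paper): pretrained resnet50, inputs in [0, 1],
# and a DataLoader over ImageNet validation images.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(pretrained=True).eval()

def fgsm(images, labels, eps):
    """One-step FGSM: perturb inputs along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixels in valid range

def accuracy_vs_intensity(loader, epsilons=(0.0, 0.01, 0.03, 0.1)):
    """Record recognition accuracy for each attack intensity eps."""
    results = {}
    for eps in epsilons:
        correct, total = 0, 0
        for images, labels in loader:
            adv = fgsm(images, labels, eps)
            with torch.no_grad():
                correct += (model(adv).argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
        results[eps] = correct / total  # accuracy typically falls as eps grows
    return results
```

Plotting `results` (accuracy against eps) yields the intensity-vs-accuracy curve the abstract refers to; examples that remain correctly classified at large eps are candidates for the "hard to attack" analysis.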
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Yang, J. (2021). Hard Examples Mining for Adversarial Attacks. In: Liang, Q., Wang, W., Liu, X., Na, Z., Li, X., Zhang, B. (eds) Communications, Signal Processing, and Systems. CSPS 2020. Lecture Notes in Electrical Engineering, vol 654. Springer, Singapore. https://doi.org/10.1007/978-981-15-8411-4_112
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-8410-7
Online ISBN: 978-981-15-8411-4