Abstract
Deep neural networks are sensitive to adversarial samples crafted by adding imperceptible perturbations to original images, and many methods for generating adversarial samples have emerged. Although existing methods based on the gradient direction achieve good attack performance, ill-conditioned problems can occasionally degrade it. In this paper, we propose a novel attack method based on a three-term conjugate gradient direction, named the fast conjugate gradient sign method (FCGSM), which effectively mitigates this limitation. FCGSM can escape local maxima while searching for the maximum of the loss function, and thus generates more adversarial samples than the state-of-the-art methods APGD and ACG. Experiments on two benchmark datasets show that FCGSM attacks deep neural network-based classification models effectively.
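The idea of combining a conjugate-gradient-style search direction with a sign-based update can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy quadratic loss stands in for a classifier's loss, and a simple Polak-Ribiere-style two-term direction recurrence is used where the paper employs a three-term formula; `cg_sign_attack`, its parameters, and the update details are assumptions for illustration.

```python
import numpy as np

def toy_loss_grad(x, target):
    # Gradient of a stand-in "loss" 0.5*||x - target||^2.
    # A real attack would use the classifier's cross-entropy gradient.
    return x - target

def cg_sign_attack(x0, target, eps=0.3, alpha=0.05, steps=20):
    """Iterative sign attack along a conjugate-gradient-style direction.

    Gradient ascent on the loss, with the raw gradient replaced by a
    conjugate direction d = g + beta * d_prev, and the perturbation
    projected back into the L-infinity ball of radius eps around x0.
    """
    x = x0.copy()
    g_prev = toy_loss_grad(x, target)
    d = g_prev.copy()                      # initial direction = gradient
    for _ in range(steps):
        x = np.clip(x + alpha * np.sign(d), x0 - eps, x0 + eps)
        g = toy_loss_grad(x, target)
        # Polak-Ribiere-style coefficient, clamped at zero for stability.
        beta = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev + 1e-12))
        d = g + beta * d                   # conjugate direction update
        g_prev = g
    return x

x0 = np.zeros(4)
target = np.ones(4)
x_adv = cg_sign_attack(x0, target)
print(np.max(np.abs(x_adv - x0)))  # perturbation capped at eps = 0.3
```

The memory of the previous direction carried by `beta * d` is what distinguishes this family from plain iterative FGSM, and is what helps the iterate keep moving past flat or ill-conditioned regions of the loss surface.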
References
Carlini, N., Wagner, D.A.: Towards evaluating the robustness of neural networks. In: IEEE Symposium on Security and Privacy, pp. 39–57 (2017)
Croce, F., Hein, M.: Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In: International Conference on Machine Learning, pp. 2206–2216 (2020)
Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.: Boosting adversarial attacks with momentum. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185–9193 (2018)
Goodfellow, I. J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (2015)
Gowal, S., Qin, C., Uesato, J., Mann, T., Kohli, P.: Uncovering the limits of adversarial training against norm-bounded adversarial examples (2021). arXiv:2010.03593v3
Hager, W.W., Zhang, H.: Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32, 113–137 (2006)
Ibrahim, A., Shareef, S.: Modified conjugate gradient method for training neural networks based on logistic mapping. J. Univ. Duhok 22(1), 45–51 (2019)
Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60(6), 84–90 (2017)
Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial machine learning at scale. In: International Conference on Learning Representations (2017)
Liu, Z., Liu, Q., Liu, T., Xu, N., Lin, X., Wang, Y., Wen, W.: Feature distillation: DNN-oriented JPEG compression against adversarial examples. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 860–868 (2019)
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: International Conference on Learning Representations (2018)
Ma, G., Lin, H., Jin, W., Han, D.: Two modified conjugate gradient methods for unconstrained optimization with applications in image restoration problems. J. Appl. Math. Comput. 68, 4733–4758 (2022)
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: ACM Asia Conference on Computer and Communications Security, pp. 506–519 (2017)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (2015)
Song, C., He, K., Lin, J., Wang, L., Hopcroft, J. E.: Robust local features for improving the generalization of adversarial training. In: International Conference on Learning Representations (2020)
Sun, J., Zhang, J.: Global convergence of conjugate gradient methods without line search. Ann. Oper. Res. 103, 161–173 (2001)
Szegedy, C., et al.: Going deeper with convolutions. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
Szegedy, C., et al.: Intriguing properties of neural networks. In: International Conference on Learning Representations (2014)
Xie, C., Zhang, Z., Zhou, Y., Bai, S., Yuille, A. L.: Improving transferability of adversarial examples with input diversity. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2730–2739 (2019)
Yamamura, K., et al.: Diversified adversarial attacks based on conjugate gradient method. In: International Conference on Machine Learning, pp. 24872–24894 (2022)
Zhang, L., Zhou, W., Li, D.-H.: A descent modified Polak-Ribiere-Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 26(4), 629–640 (2006)
Acknowledgements
This work was supported in part by the Anhui Provincial Natural Science Foundation (Grant No. 2208085MF168), the Program for Synergy Innovation in the Anhui Higher Education Institutions of China (Grant No. GXXT-2022-052), and the College Students’ Innovation and Entrepreneurship Training Programs (Grant Nos. 202210360079, 202110360079, and S202110360291).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Xia, X., Xue, W., Wan, P., Zhang, H., Wang, X., Zhang, Z. (2023). FCGSM: Fast Conjugate Gradient Sign Method for Adversarial Attack on Image Classification. In: Hung, J.C., Chang, JW., Pei, Y. (eds) Innovative Computing Vol 2 - Emerging Topics in Future Internet. IC 2023. Lecture Notes in Electrical Engineering, vol 1045. Springer, Singapore. https://doi.org/10.1007/978-981-99-2287-1_98
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-2286-4
Online ISBN: 978-981-99-2287-1