Adaptive Image Transformations for Transfer-Based Adversarial Attack

  • Conference paper
  • In: Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

Adversarial attacks provide a good way to study the robustness of deep learning models. One category of transfer-based black-box attacks utilizes several image transformation operations to improve the transferability of adversarial examples; this is effective but fails to take the specific characteristics of the input image into consideration. In this work, we propose a novel architecture, called Adaptive Image Transformation Learner (AITL), which incorporates different image transformation operations into a unified framework to further improve the transferability of adversarial examples. Unlike the fixed combinations of transformations used in existing works, our elaborately designed transformation learner adaptively selects the most effective combination of image transformations specific to the input image. Extensive experiments on ImageNet demonstrate that our method significantly improves the attack success rates on both normally trained models and defense models under various settings.
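The abstract describes the idea only at a high level: insert image transformations into an iterative transfer attack and let a learned module decide, per input image, which transformations to apply before computing gradients on the surrogate model. The sketch below is a minimal PyTorch illustration of that pipeline, not the authors' implementation; the `transform_selector` module, the `candidate_transforms` list, and all hyperparameters are assumptions for illustration, and the attack loop itself follows a standard MI-FGSM-style update applied to transformed inputs.

```python
# Minimal sketch (assumptions, not the paper's code): an MI-FGSM-style transfer
# attack where a hypothetical selector picks per-image input transformations.
import torch
import torch.nn.functional as F


def aitl_style_attack(model, transform_selector, candidate_transforms,
                      x, y, eps=16 / 255, steps=10, mu=1.0):
    """Craft adversarial examples with adaptively selected input transformations.

    model                -- white-box surrogate classifier returning logits
    transform_selector   -- hypothetical learned module mapping an image batch to
                            indices into candidate_transforms (stands in for the
                            paper's transformation learner)
    candidate_transforms -- list of differentiable image transformation callables
    """
    alpha = eps / steps                       # per-step step size
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                   # accumulated momentum

    for _ in range(steps):
        x_adv.requires_grad_(True)

        # Adaptively choose a combination of transformations for this input,
        # then apply them before querying the surrogate model.
        chosen = transform_selector(x_adv)
        x_t = x_adv
        for idx in chosen:
            x_t = candidate_transforms[idx](x_t)

        loss = F.cross_entropy(model(x_t), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # MI-FGSM-style momentum accumulation, sign step, and L_inf projection.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)

    return x_adv.detach()
```

With a fixed `transform_selector` that always returns the same indices, this loop reduces to the fixed-combination baselines the abstract contrasts against (e.g. input-diversity or scale-invariance style transformations); the adaptive variant instead lets the selected combination depend on the input image.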


Notes

  1. https://github.com/cleverhans-lab/cleverhans/tree/master/cleverhans_v3.1.0/examples/nips17_adversarial_competition/dataset.

  2. https://drive.google.com/drive/folders/1CfobY6i8BfqfWPHL31FKFDipNjqWwAhS.


Acknowledgments

This work is partially supported by the National Key R&D Program of China (No. 2017YFA0700800) and the National Natural Science Foundation of China (Nos. 62176251 and 61976219).

Author information

Corresponding author: Jie Zhang.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1192 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Yuan, Z., Zhang, J., Shan, S. (2022). Adaptive Image Transformations for Transfer-Based Adversarial Attack. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13665. Springer, Cham. https://doi.org/10.1007/978-3-031-20065-6_1

  • DOI: https://doi.org/10.1007/978-3-031-20065-6_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20064-9

  • Online ISBN: 978-3-031-20065-6

  • eBook Packages: Computer Science (R0)
