
Generating adversarial examples with collaborative generative models

  • Regular Contribution
  • Published in: International Journal of Information Security

Abstract

Deep learning has made remarkable progress, and deep learning models have been successfully deployed in many practical applications. However, recent studies indicate that deep learning models are vulnerable to adversarial examples generated by adding an imperceptible perturbation. The study of adversarial attacks and defenses has attracted substantial interest from researchers due to its high application value. In this paper, a method named AdvAE-GAN is proposed for generating adversarial examples. The proposed method combines (1) explicit perturbation generated by an adversarial autoencoder and (2) implicit perturbation generated by a generative adversarial network. A more suitable similarity measurement criterion is incorporated into the model to ensure that the generated examples are imperceptible. The proposed model is suitable not only for white-box attacks but can also be adapted to black-box attacks. Extensive experiments and comparisons with six state-of-the-art methods (FGSM, SDM-FGSM, PGD, MIM, AdvGAN, and AdvGAN++) demonstrate that the adversarial examples generated by AdvAE-GAN achieve high attack success rates with good transferability and are more realistic-looking and natural. Our code is available at https://github.com/xzforeverlove/Generating-Adversarial-Examples.
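As context for the gradient-based baselines the abstract compares against, the sketch below illustrates FGSM [9], the simplest of them: a single step of size eps in the sign direction of the input gradient, which bounds the explicit perturbation in the L-infinity norm. This is a minimal illustration against a toy binary logistic "model" with made-up weights and inputs, not the authors' AdvAE-GAN.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """Fast Gradient Sign Method for a binary logistic model.

    Takes one eps-sized step in the direction that increases the
    cross-entropy loss, so the perturbation is explicitly bounded:
    ||x_adv - x||_inf <= eps.
    """
    p = sigmoid(w @ x)               # model's predicted probability
    grad = (p - y) * w               # d(loss)/dx for cross-entropy loss
    x_adv = x + eps * np.sign(grad)  # signed gradient step
    return np.clip(x_adv, 0.0, 1.0)  # stay in the valid pixel range

# Toy, made-up weights and input (illustrative only).
w = np.array([2.0, -3.0, 1.0])   # fixed classifier weights
x = np.array([0.6, 0.2, 0.5])    # clean input, classified as 1
y = 1.0                          # true label

x_adv = fgsm(x, y, w, eps=0.3)
print(w @ x > 0, w @ x_adv > 0)  # → True False: the attack flips the prediction
```

AdvAE-GAN's premise is that such explicit, gradient-derived perturbations can be combined with implicit perturbations produced by a trained generator, trading the per-example optimization above for a single generator forward pass.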


Data availability

Datasets are not available.

References

  1. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (NIPS2012), pp. 1097–1105 (2012)

  2. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: The 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9 (2015)

  3. He, K.M., Zhang, X.Y., Ren, S.Q., et al.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR2016), Las Vegas, NV, United States, June 27–30, pp. 770–778 (2016)

  4. Szegedy, C., Zaremba, W., Sutskever, I., et al.: Intriguing properties of neural networks. In: The 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14–16 (2014)

  5. Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: attacks and defenses for deep learning. IEEE Trans. Neural Netw. Learn. Syst. 30(9), 2805–2824 (2019)

  6. Zhang, J., Li, C.: Adversarial examples: opportunities and challenges. IEEE Trans. Neural Netw. Learn. Syst. 31(7), 2578–2593 (2020)

  7. Xiao, C., Li, B., Zhu, J.Y., et al.: Generating adversarial examples with adversarial networks. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), pp. 3905–3911 (2018)

  8. Jandial, S., Mangla, P., Varshney, S., et al.: AdvGAN++: harnessing latent layers for adversary generation. In: IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) 2019, pp. 2045–2048 (2019)

  9. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: The 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9 (2015)

  10. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. In: The 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24–26 (2017)

  11. Madry, A., Makelov, A., Schmidt, L., et al.: Towards deep learning models resistant to adversarial attacks. In: The 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30–May 3 (2018)

  12. Dong, Y., Liao, F., Pang, T., et al.: Boosting adversarial attacks with momentum. In: The 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9185–9193 (2018)

  13. Papernot, N., McDaniel, P., Jha, S., et al.: The limitations of deep learning in adversarial settings. In: The 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387 (2016)

  14. Moosavi-Dezfooli, S., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. In: The 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574–2582 (2016)

  15. Gao, L., Huang, Z., Song, J., et al.: Push & pull: transferable adversarial examples with attentive attack. IEEE Trans. Multimed. 24, 2329–2338 (2022)

  16. Chaturvedi, A., Garain, U.: Mimic and fool: a task-agnostic adversarial attack. IEEE Trans. Neural Netw. Learn. Syst. 32(4), 1801–1808 (2021)

  17. Zhong, Y., Deng, W.: Towards transferable adversarial attack against deep face recognition. IEEE Trans. Inf. Forensics Secur. 16, 1452–1466 (2021)

  18. Vidnerová, P., Neruda, R.: Vulnerability of classifiers to evolutionary generated adversarial examples. Neural Netw. 127, 168–181 (2020)

  19. Chen, H., Lu, K., Wang, X., et al.: Generating transferable adversarial examples based on perceptually-aligned perturbation. Int. J. Mach. Learn. Cybern. 12, 3295–3307 (2021)

  20. Ren, Y., Zhu, H., Sui, X., et al.: Crafting transferable adversarial examples via contaminating the salient feature variance. Inf. Sci. 644, 119273 (2023)

  21. Wang, Z., Guo, H., Zhang, Z., et al.: Feature importance-aware transferable adversarial attacks. In: The 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, pp. 7619–7628 (2021)

  22. Xiang, T., Liu, H., Guo, S., et al.: EGM: an efficient generative model for unrestricted adversarial examples. ACM Trans. Sens. Netw. 18(4), 1–25 (2022). Article No.: 51

  23. Byun, J., Kwon, M.J., Cho, S., et al.: Introducing competition to boost the transferability of targeted adversarial examples through clean feature mixup. In: The 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, pp. 24648–24657 (2023)

  24. Xiao, Y., Zhou, J., Chen, K., et al.: Revisiting the transferability of adversarial examples via source-agnostic adversarial feature inducing method. Pattern Recognit. 144, 109828 (2023)

  25. Dong, Y., Tang, L., Tian, C., et al.: Improving transferability of adversarial examples by saliency distribution and data augmentation. Comput. Secur. 120, 102811 (2022)

  26. Goodfellow, I., Pouget-Abadie, J., Mirza, M., et al.: Generative adversarial nets. Adv. Neural. Inf. Process. Syst. 1, 2672–2680 (2014)

  27. Zhu, J., Park, T., Isola, P., et al.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE International Conference on Computer Vision (ICCV) 2017, pp. 2242–2251 (2017)

  28. Hosseini-Asl, E., Zhou, H., Xiong, C., et al.: Augmented cyclic adversarial learning for low resource domain adaptation. In: 2019 International Conference on Learning Representations, pp. 1–14 (2019)

  29. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019, pp. 4396–4405 (2019)

  30. Karras, T., Laine, S., Aittala, M., et al.: Analyzing and improving the image quality of StyleGAN. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020, pp. 8107–8116 (2020)

  31. Zhai, M., Chen, L., Tung, F., et al.: Lifelong GAN: continual learning for conditional image generation. In: IEEE/CVF International Conference on Computer Vision (ICCV) 2019, pp. 2759–2768 (2019). https://doi.org/10.1109/ICCV.2019.00285

  32. Zhai, M., Chen, L., He, J., et al.: Piggyback GAN: efficient lifelong learning for image conditioned generation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J. (eds.) Computer Vision-ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol. 12366. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58589-1_24

  33. Zhai, M.Y., Chen, L., Mori, G.: Hyper-LifelongGAN: scalable lifelong learning for image conditioned generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2021), June, pp. 2246–2255 (2021)

  34. Liu, X., Hsieh, C.: Rob-GAN: generator, discriminator, and adversarial attacker. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019, pp. 11226–11235 (2019)

  35. Chen, J., Zheng, H., Xiong, H., et al.: MAG-GAN: massive attack generator via GAN. Inf. Sci. 536, 67–90 (2020)

  36. Zhao, Z., Dua, D., Singh, S.: Generating natural adversarial examples. In: International Conference on Learning Representations, pp. 1–15 (2018)

  37. Yu, P., Song, K., Lu, J.: Generating adversarial examples with conditional generative adversarial net. In: 2018 24th International Conference on Pattern Recognition (ICPR), pp. 676–681 (2018)

  38. Zhang, W.: Generating adversarial examples in one shot with image-to-image translation GAN. IEEE Access 7, 151103–151119 (2019)

  39. Peng, W., Liu, R., Wang, R., et al.: EnsembleFool: a method to generate adversarial examples based on model fusion strategy. Comput. Secur. 107, 102317 (2021)

  40. Song, Y., Shu, R., Kushman, N., et al.: Constructing unrestricted adversarial examples with generative models. In: 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada, pp. 1–12 (2018)

  41. Dai, X., Liang, K., Xiao, B.: AdvDiff: generating unrestricted adversarial examples using diffusion models. arXiv:2307.12499 (2023)

  42. Xue, H., Araujo, A., Hu, B., et al.: Diffusion-based adversarial sample generation for improved stealthiness and controllability. arXiv:2305.16494 (2023)

  43. Deng, L.: The MNIST database of handwritten digit images for machine learning research. IEEE Signal Process. Mag. 29(6), 141–142 (2012)

  44. Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv:1708.07747 (2017)

  45. Torralba, A., Fergus, R., Freeman, W.T.: 80 million tiny images: a large data set for nonparametric object and scene recognition. IEEE Trans. Pattern Anal. Mach. Intell. 30(11), 1958–1970 (2008)

  46. Lecun, Y., Bottou, L., Bengio, Y., et al.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  47. Zagoruyko, S., Komodakis, N.: Wide residual networks. In: Proceedings of the British Machine Vision Conference (BMVC), September 19–22, York, UK, vol. 87, pp. 1–12 (2016)

  48. Liu, Y., Chen, X., Liu, C., et al.: Delving into transferable adversarial examples and black-box attacks. In: The 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24–26 (2017)

  49. Demšar, J.: Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 7, 1–30 (2006)


Acknowledgements

This study was supported by the Key R&D Program of the Science and Technology Foundation of Hebei Province (19210310D) and by the Natural Science Foundation of Hebei Province (F2021201020).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Junhai Zhai.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xu, L., Zhai, J. Generating adversarial examples with collaborative generative models. Int. J. Inf. Secur. 23, 1077–1091 (2024). https://doi.org/10.1007/s10207-023-00780-1

