
Attribute-guided face adversarial example generation

  • Original article
  • Published in: The Visual Computer

Abstract

Deep neural networks (DNNs) are susceptible to adversarial examples, which are generally generated by adding imperceptible perturbations to clean images and degrade the performance of DNN models. To generate adversarial examples, most methods use the \(L_p\) norm to bound the perturbations and thereby enforce imperceptibility. However, the \(L_p\) norm cannot fully guarantee the semantic authenticity of adversarial examples, and defenses may exploit this defect to weaken their attack capability. Moreover, existing \(L_p\)-restricted methods generalize poorly in white-box attacks and are less aggressive in black-box attacks. To address these problems, we propose a multiple feature interpolation method for generating face adversarial examples. The proposed method performs multiple feature interpolation during original-image reconstruction and conditional attribute-guided image generation based on StarGAN, producing face adversarial examples with new semantics. Experimental results demonstrate that adversarial examples generated by our method possess new attribute-guided semantics and achieve satisfactory attack success rates under both white-box and black-box settings.
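The core operation described in the abstract, blending features from the original-image reconstruction path with features from the attribute-guided generation path, can be sketched as a per-layer linear interpolation. The paper's exact interpolation scheme, layer choices, and weights are not given in the abstract, so the functions below are a hypothetical illustration under that assumption, not the authors' implementation.

```python
def interpolate_features(f_rec, f_attr, alpha):
    """Blend a reconstruction feature vector f_rec with an
    attribute-guided feature vector f_attr using weight alpha in [0, 1]."""
    assert len(f_rec) == len(f_attr)
    return [(1 - alpha) * r + alpha * a for r, a in zip(f_rec, f_attr)]

def multi_feature_interpolation(rec_layers, attr_layers, alphas):
    """Apply the blend independently at several generator layers,
    with one interpolation weight per layer ("multiple" interpolation)."""
    return [
        interpolate_features(r, a, w)
        for r, a, w in zip(rec_layers, attr_layers, alphas)
    ]

# Toy example: two layers, two-dimensional features per layer.
rec = [[0.0, 2.0], [1.0, 1.0]]
attr = [[1.0, 4.0], [3.0, 5.0]]
blended = multi_feature_interpolation(rec, attr, [0.5, 0.25])
print(blended)  # [[0.5, 3.0], [1.5, 2.0]]
```

Larger weights pull the blended features toward the target attribute's semantics, while smaller weights keep the result close to the reconstructed original; in a real StarGAN-based pipeline these vectors would be intermediate feature maps rather than flat lists.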


Data availability

The datasets involved in our work are publicly available. The test datasets chosen are the CelebA dataset [38] and the CelebA-HQ dataset [23].

References

  1. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR, pp. 770–778 (2016)

  2. Wang, L., Sun, Y., Wang, Z.: CCS-GAN: a semi-supervised generative adversarial network for image classification. Vis. Comput. 38(6), 2009–2021 (2022)

  3. Li, T., Zhang, Z., Pei, L., Gan, Y.: HashFormer: vision transformer based deep hashing for image retrieval. IEEE Signal Process. Lett. 29, 827–831 (2022)

  4. Karthik, K., Kamath, S.S.: A deep neural network model for content-based medical image retrieval with multi-view classification. Vis. Comput. 37(7), 1837–1850 (2021)

  5. Cheng, B., Misra, I., Schwing, A.G., Kirillov, A., Girdhar, R.: Masked-attention mask transformer for universal image segmentation. In: CVPR, pp. 1290–1299 (2022)

  6. Zhou, F., Hu, Y., Shen, X.: MSANet: multimodal self-augmentation and adversarial network for RGB-D object recognition. Vis. Comput. 35(11), 1583–1594 (2019)

  7. Li, B., Yao, Y., Tan, J., Zhang, G., Yu, F., Lu, J., Luo, Y.: Equalized focal loss for dense long-tailed object detection. In: CVPR, pp. 6990–6999 (2022)

  8. Han, G., Huang, S., Ma, J., He, Y., Chang, S.-F.: Meta faster R-CNN: towards accurate few-shot object detection with attentive feature alignment. In: AAAI, vol. 36, pp. 780–789 (2022)

  9. Ranjan, R., Bansal, A., Zheng, J., Xu, H., Gleason, J., Lu, B., Nanduri, A., Chen, J.-C., Castillo, C.D., Chellappa, R.: A fast and accurate system for face detection, identification, and verification. IEEE Trans. Biometr. Behav. Identity Sci. 1(2), 82–96 (2019)

  10. Sun, Y., Chen, Y., Wang, X., Tang, X.: Deep learning face representation by joint identification-verification. In: NeurIPS, vol. 27 (2014)

  11. Gan, Y., Ye, M., Liu, D., Liu, Y.: Training generative adversarial networks by auxiliary adversarial example regulator. Appl. Soft Comput. 136, 110086 (2023)

  12. Liu, H., Zhou, M., Song, M., Ouyang, D., Li, Y., Jing, L., Ng, M.K.: Learning hierarchical preferences for recommendation with mixture intention neural stochastic processes. IEEE Trans. Knowl. Data Eng., pp. 1–15 (2023). https://doi.org/10.1109/TKDE.2023.3348493

  13. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples (2014). arXiv preprint arXiv:1412.6572

  14. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world (2016). arXiv preprint arXiv:1607.02533

  15. Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.: Boosting adversarial attacks with momentum. In: CVPR, pp. 9185–9193 (2018)

  16. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks (2017). arXiv preprint arXiv:1706.06083

  17. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: S&P, pp. 39–57 (2017)

  18. Zhu, M., Chen, T., Wang, Z.: Sparse and imperceptible adversarial attack via a Homotopy algorithm. In: ICML, pp. 12868–12877 (2021)

  19. Choi, Y., Choi, M., Kim, M., Ha, J.-W., Kim, S., Choo, J.: StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In: CVPR, pp. 8789–8797 (2018)

  20. Megvii Technology Ltd.: FACE++ Compare-API. https://console.faceplusplus.com.cn/documents/4887586

  21. Alibaba Cloud Computing Ltd.: AliYun Compare-API. https://help.aliyun.com/document_detail/151891

  22. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Commun. ACM 63(11), 139–144 (2020)

  23. Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive growing of GANs for improved quality, stability, and variation (2017). arXiv preprint arXiv:1710.10196

  24. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: CVPR, pp. 4401–4410 (2019)

  25. Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of StyleGAN. In: CVPR, pp. 8110–8119 (2020)

  26. Gan, Y., Xiang, T., Liu, H., Ye, M.: Learning-aware feature denoising discriminator. Inf. Fusion 89, 143–154 (2023)

  27. Zhang, G., Kan, M., Shan, S., Chen, X.: Generative adversarial network with spatial attention for face attribute editing. In: ECCV, pp. 417–432 (2018)

  28. Liu, Y., Chen, Y., Bao, L., Sebe, N., Lepri, B., De Nadai, M.: ISF-GAN: an implicit style function for high-resolution image-to-image translation (2021). arXiv preprint arXiv:2109.12492

  29. He, Z., Zuo, W., Kan, M., Shan, S., Chen, X.: AttGAN: facial attribute editing by only changing what you want. IEEE Trans. Image Process. 28(11), 5464–5478 (2019)

  30. Wang, Y., Wang, S., Qi, G., Tang, J., Li, B.: Weakly supervised facial attribute manipulation via deep adversarial network. In: WACV, pp. 112–121 (2018)

  31. Wang, C.-C., Liu, H.-H., Pei, S.-C., Liu, K.-H., Liu, T.-J.: Face aging on realistic photos by generative adversarial networks. In: ISCAS, pp. 1–5 (2019)

  32. Hsu, S.-Y., Yang, C.-Y., Huang, C.-C., Hsu, J.Y.-j.: SemiStarGAN: semi-supervised generative adversarial networks for multi-domain image-to-image translation. In: ACCV, pp. 338–353 (2018)

  33. Du, J., Zhang, H., Zhou, J.T., Yang, Y., Feng, J.: Query-efficient meta attack to deep neural networks (2019). arXiv preprint arXiv:1906.02398

  34. Bhattad, A., Chong, M.J., Liang, K., Li, B., Forsyth, D.A.: Unrestricted adversarial examples via semantic manipulation (2019). arXiv preprint arXiv:1904.06347

  35. Zhao, Z., Liu, Z., Larson, M.: Towards large yet imperceptible adversarial image perturbations with perceptual color distance. In: CVPR, pp. 1039–1048 (2020)

  36. Joshi, A., Mukherjee, A., Sarkar, S., Hegde, C.: Semantic adversarial attacks: parametric transformations that fool deep classifiers. In: CVPR, pp. 4773–4783 (2019)

  37. Qiu, H., Xiao, C., Yang, L., Yan, X., Lee, H., Li, B.: SemanticAdv: generating adversarial examples via attribute-conditioned image editing. In: ECCV, pp. 19–37. Springer (2020)

  38. Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: ICCV, pp. 3730–3738 (2015)

  39. Xie, C., Zhang, Z., Zhou, Y., Bai, S., Wang, J., Ren, Z., Yuille, A.L.: Improving transferability of adversarial examples with input diversity. In: CVPR, pp. 2730–2739 (2019)

  40. Deng, J., Guo, J., Xue, N., Zafeiriou, S.: Arcface: additive angular margin loss for deep face recognition. In: CVPR, pp. 4690–4699 (2019)

  41. Guo, Y., Zhang, L., Hu, Y., He, X., Gao, J.: MS-Celeb-1M: a dataset and benchmark for large-scale face recognition. In: ECCV, pp. 87–102 (2016)

  42. Sun, Y., Wang, X., Tang, X.: Deep learning face representation from predicting 10,000 classes. In: CVPR, pp. 1891–1898 (2014)

  43. Zhang, X., Yang, L., Yan, J., Lin, D.: Accelerated training for massive classification via dynamic class selection. In: AAAI (2018)

Acknowledgements

This work was supported by the National Key R&D Program of China (2022YFB3103500), National Natural Science Foundation of China (62106026, U20A20176, and 62072062), Postdoctoral Fellowship Program of CPSF (GZC20233323), Natural Science Foundation of Chongqing (cstc2021jcyj-msxmX0273 and cstc2022ycjhbgzxm0031), Sichuan Science and Technology Program (2021YFQ0056), and Fundamental Research Funds for the Central Universities (2023CDJXY-039). We would like to express our deepest gratitude to Deqiang Ouyang from Chongqing University for his linguistic help.

Author information

Corresponding author

Correspondence to Tao Xiang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Gan, Y., Xiao, X. & Xiang, T. Attribute-guided face adversarial example generation. Vis Comput (2024). https://doi.org/10.1007/s00371-024-03265-x
