HLR: Generating Adversarial Examples by High-Level Representations

  • Conference paper

In: Artificial Neural Networks and Machine Learning – ICANN 2019: Image Processing (ICANN 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11729)


Abstract

Neural networks can be fooled by adversarial examples. Many methods have recently been proposed to generate adversarial examples, but they concentrate mainly on pixel-wise information, which limits the transferability of the resulting examples. In contrast, we introduce a perceptual module that extracts high-level representations and changes the manifold of the adversarial examples. We also propose a novel network structure to replace the generative adversarial network (GAN); the improved structure keeps adversarial examples highly similar to their source images and stabilizes the training process. Extensive experiments demonstrate that our method significantly improves transferability. Furthermore, the adversarial training defence is ineffective against our attack.
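The paper's generator and perceptual module are not reproduced on this page, but the core idea the abstract describes, attacking in feature space rather than pixel space, can be illustrated. Below is a minimal, hypothetical PyTorch sketch (not the authors' HLR implementation): a PGD-style perturbation maximizes the distance between clean and adversarial high-level features from a pretrained VGG16 standing in for the perceptual module, while an L-infinity bound preserves pixel-level similarity. The truncation point, hyperparameters, and function names are assumptions made for the sketch.

```python
# Hypothetical illustration, not the authors' HLR implementation: a PGD-style
# feature-space attack that pushes an image's high-level representation away
# from its clean representation, under an L-infinity pixel budget.
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# A pretrained VGG16 truncated at an intermediate layer stands in for the
# "perceptual module" that extracts high-level representations (assumption).
perceptual = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16]
perceptual = perceptual.to(device).eval()
for p in perceptual.parameters():
    p.requires_grad_(False)

def feature_space_attack(x, epsilon=8 / 255, alpha=2 / 255, steps=40):
    """Return x + delta with ||delta||_inf <= epsilon that maximizes the
    squared distance between clean and adversarial high-level features."""
    x = x.to(device)
    with torch.no_grad():
        clean_feat = perceptual(x)           # representation to move away from
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        adv_feat = perceptual((x + delta).clamp(0.0, 1.0))
        loss = (adv_feat - clean_feat).pow(2).mean()   # feature-space distance
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient-ascent step
            delta.clamp_(-epsilon, epsilon)      # enforce the L-infinity budget
            delta.grad.zero_()
    return (x + delta).clamp(0.0, 1.0).detach()

# Example: attack a random batch of 224x224 images in [0, 1].
adv = feature_space_attack(torch.rand(4, 3, 224, 224))
```

Because the objective lives in feature space, the perturbation targets semantically meaningful structure rather than individual pixels, which is consistent with the abstract's claim that high-level attacks transfer better than pixel-wise ones.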



Acknowledgements

This work was supported by the Guangdong Province Key Area R&D Program under Grant No. 2018B010113001, the R&D Program of Shenzhen under Grant No. JCYJ20170307153157440, the Shenzhen Key Lab of Software Defined Networking under Grant No. ZDSYS20140509172959989, and the National Natural Science Foundation of China under Grant No. 61802220.

Author information


Correspondence to Yong Jiang.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Hao, Y., Li, T., Li, L., Jiang, Y., Cheng, X. (2019). HLR: Generating Adversarial Examples by High-Level Representations. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds) Artificial Neural Networks and Machine Learning – ICANN 2019: Image Processing. ICANN 2019. Lecture Notes in Computer Science, vol 11729. Springer, Cham. https://doi.org/10.1007/978-3-030-30508-6_57

  • DOI: https://doi.org/10.1007/978-3-030-30508-6_57

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30507-9

  • Online ISBN: 978-3-030-30508-6

  • eBook Packages: Computer Science (R0)
