Adversarial Vision Challenge

  • Wieland Brendel
  • Jonas Rauber
  • Alexey Kurakin
  • Nicolas Papernot
  • Behar Veliqi
  • Sharada P. Mohanty
  • Florian Laurent
  • Marcel Salathé
  • Matthias Bethge
  • Yaodong Yu
  • Hongyang Zhang
  • Susu Xu
  • Hongbao Zhang
  • Pengtao Xie
  • Eric P. Xing
  • Thomas Brunner
  • Frederik Diehl
  • Jérôme Rony
  • Luiz Gustavo Hafemann
  • Shuyu Cheng
  • Yinpeng Dong
  • Xuefei Ning
  • Wenshuo Li
  • Yu Wang
Conference paper
Part of The Springer Series on Challenges in Machine Learning book series (SSCML)

Abstract

This competition was meant to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. It encouraged researchers to develop query-efficient adversarial attacks that succeed against a wide range of defenses while observing only the final model decision. Conversely, it encouraged the development of new defenses that resist a wide range of strong decision-based attacks. In this chapter we describe the organisation and structure of the challenge as well as the solutions developed by the top-ranking teams.
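
To make the decision-based (label-only) threat model concrete, the following minimal Python sketch runs an attack loop in the spirit of the Boundary Attack of Brendel et al. (2018). The toy linear classifier and its predict_label oracle are illustrative assumptions, not the actual challenge infrastructure, and the step sizes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 32 * 32 * 3))  # toy 10-class linear "model" (assumption)

    def predict_label(x):
        # Black-box oracle: the attacker sees only the final class decision.
        return int(np.argmax(W @ x.ravel()))

    def decision_based_attack(x_orig, n_steps=500, noise=0.1, pull=0.01):
        label = predict_label(x_orig)
        # Start from a heavily perturbed point; resample until it is misclassified.
        x_adv = np.clip(x_orig + rng.normal(scale=10.0, size=x_orig.shape), 0, 1)
        while predict_label(x_adv) == label:
            x_adv = np.clip(x_orig + rng.normal(scale=10.0, size=x_orig.shape), 0, 1)
        for _ in range(n_steps):
            # Random proposal plus a small pull towards the original image.
            cand = x_adv + noise * rng.normal(size=x_adv.shape)
            cand = np.clip(cand + pull * (x_orig - cand), 0, 1)
            # Accept only proposals that stay adversarial and reduce the distance.
            if (predict_label(cand) != label and
                    np.linalg.norm(cand - x_orig) < np.linalg.norm(x_adv - x_orig)):
                x_adv = cand
        return x_adv

    x = rng.uniform(size=(32, 32, 3))
    x_adv = decision_based_attack(x)
    print("final L2 distance:", np.linalg.norm(x_adv - x))

The key point is that the attacker never touches gradients or confidence scores: every query returns a single class label, and progress comes from accepting random proposals that remain adversarial while shrinking the L2 distance to the original image. Query efficiency, the quantity the challenge rewarded, corresponds to reaching a small distance with few calls to the oracle.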

Acknowledgements

This work has been funded, in part, by the German Federal Ministry of Education and Research (BMBF) through the Verbundprojekt TUEAI: Tübingen AI Center (FKZ: 01IS18039A) as well as the German Research Foundation (DFG CRC 1233 on “Robust Vision”). The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Rauber, who also acknowledges support by the Bosch Forschungsstiftung (Stifterverband, T113/30057/17). Brendel and Bethge were supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Wieland Brendel (1)
  • Jonas Rauber (1)
  • Alexey Kurakin (2)
  • Nicolas Papernot (2)
  • Behar Veliqi (1)
  • Sharada P. Mohanty (3)
  • Florian Laurent (3)
  • Marcel Salathé (3)
  • Matthias Bethge (1)
  • Yaodong Yu (4)
  • Hongyang Zhang (5)
  • Susu Xu (5)
  • Hongbao Zhang (4)
  • Pengtao Xie (4)
  • Eric P. Xing (4)
  • Thomas Brunner (6)
  • Frederik Diehl (6)
  • Jérôme Rony (7)
  • Luiz Gustavo Hafemann (7)
  • Shuyu Cheng (8)
  • Yinpeng Dong (8)
  • Xuefei Ning (8)
  • Wenshuo Li (8)
  • Yu Wang (8)

  1. University of Tübingen, Tübingen, Germany
  2. Google Brain, Mountain View, USA
  3. École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  4. Petuum Inc., Pittsburgh, USA
  5. Carnegie Mellon University, Pittsburgh, USA
  6. fortiss GmbH, Munich, Germany
  7. Laboratoire d’imagerie de vision et d’intelligence artificielle (LIVIA), ÉTS Montreal, Montreal, Canada
  8. Tsinghua University, Beijing, China
