Adversarial T-Shirt! Evading Person Detectors in a Physical World

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12350)

Abstract

It is known that deep neural networks (DNNs) are vulnerable to adversarial attacks. So-called physical adversarial examples deceive DNN-based decision makers by attaching adversarial patches to real objects. However, most existing work on physical adversarial attacks focuses on static objects such as eyeglass frames, stop signs, and images attached to cardboard. In this work, we propose the Adversarial T-shirt, a robust physical adversarial example for evading person detectors even when it undergoes non-rigid deformation caused by a moving person's pose changes. To the best of our knowledge, this is the first work that models the effect of deformation when designing physical adversarial examples for non-rigid objects such as T-shirts. We show that the proposed method achieves 74% and 57% attack success rates in the digital and physical worlds, respectively, against YOLOv2, whereas the state-of-the-art physical attack method for fooling a person detector achieves only an 18% attack success rate. Furthermore, by leveraging min-max optimization, we extend our method to the ensemble attack setting against two object detectors, YOLOv2 and Faster R-CNN, simultaneously.
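
As a concrete illustration of the min-max ensemble attack mentioned above, the sketch below minimizes, over a perturbation delta, the worst-case convex combination of per-detector attack losses, i.e., roughly min_delta max_{w in simplex} sum_i w_i * loss_i(delta). Everything in the sketch is an assumption for illustration, not the paper's implementation: the quadratic "detector losses" stand in for attack losses that would, in the paper's setting, be computed from YOLOv2 and Faster R-CNN on images of the rendered T-shirt, and the step sizes and variable names are arbitrary.

```python
import torch

def project_simplex(v: torch.Tensor) -> torch.Tensor:
    """Euclidean projection of a vector onto the probability simplex
    (standard sort-based algorithm)."""
    u, _ = torch.sort(v, descending=True)
    css = torch.cumsum(u, dim=0)
    ks = torch.arange(1, v.numel() + 1, dtype=v.dtype)
    cond = u - (css - 1.0) / ks > 0          # index 0 always satisfies this
    rho = int(torch.nonzero(cond).max())     # last index satisfying the condition
    theta = (css[rho] - 1.0) / (rho + 1)
    return torch.clamp(v - theta, min=0.0)

# Hypothetical stand-ins for the two detectors' attack losses; in the paper
# these would be built from YOLOv2 and Faster R-CNN person-detection scores.
detector_losses = [
    lambda d: ((d - 1.0) ** 2).mean(),  # stand-in for "loss against detector 1"
    lambda d: ((d + 0.5) ** 2).mean(),  # stand-in for "loss against detector 2"
]

delta = torch.zeros(16, requires_grad=True)  # adversarial pattern variable
w = torch.full((2,), 0.5)                    # detector weights on the simplex

optimizer = torch.optim.SGD([delta], lr=0.1)
for _ in range(200):
    per_detector = torch.stack([f(delta) for f in detector_losses])
    objective = (w * per_detector).sum()     # weighted ensemble attack loss
    optimizer.zero_grad()
    objective.backward()
    optimizer.step()                         # outer minimization over delta
    # Inner maximization over w: projected gradient ascent on the simplex,
    # which shifts weight toward the detector with the largest current loss.
    with torch.no_grad():
        w = project_simplex(w + 0.05 * per_detector.detach())
```

Compared with a fixed equal-weight average of the two losses, the projected-ascent step adaptively emphasizes whichever detector currently resists the attack most, which is the intuition behind preferring a min-max formulation over a plain average in the ensemble setting.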

Keywords

Physical adversarial attack · Object detection · Deep learning

Notes

Acknowledgement

This work is partly supported by National Science Foundation grant CNS-1932351. We would also like to extend our gratitude to the MIT-IBM Watson AI Lab.

Supplementary material

Supplementary material 1 (PDF, 8.1 MB)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Northeastern University, Boston, USA
  2. MIT-IBM Watson AI Lab, IBM Research, Cambridge, USA
  3. Massachusetts Institute of Technology, Cambridge, USA
