Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12349)

Abstract

We present a systematic study of the transferability of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores produced by a range of commonly used detectors, as well as by ensembles of detectors. Through extensive experiments, we benchmark the effectiveness of adversarially trained patches in both white-box and black-box settings, and quantify the transferability of attacks between datasets, object classes, and detector models. Finally, we present a detailed study of physical-world attacks using printed posters and wearable clothing, and rigorously quantify the performance of such attacks with different metrics.
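The core idea behind such patches is to optimize a pattern, by gradient descent, so that a detector's objectness score on it is driven toward zero. The toy sketch below illustrates that optimization loop in miniature; it is not the paper's pipeline, and the linear "detector" and all names are hypothetical stand-ins for a real frozen detector.

```python
import math
import random

random.seed(0)
PATCH_SIZE = 16  # flattened toy patch

# Fixed random weights standing in for a frozen detector's response to the patch.
weights = [random.uniform(-1, 1) for _ in range(PATCH_SIZE)]

def objectness(patch):
    """Sigmoid of a linear response: a stand-in for a detector's objectness score."""
    z = sum(w * p for w, p in zip(weights, patch))
    return 1.0 / (1.0 + math.exp(-z))

def attack(patch, steps=200, lr=0.5):
    """Suppress objectness by gradient descent on the patch pixels."""
    for _ in range(steps):
        s = objectness(patch)
        # d(sigmoid)/dz = s * (1 - s); chain rule gives the per-pixel gradient.
        grad = [s * (1.0 - s) * w for w in weights]
        # Descend, then clip pixels to the valid [0, 1] range (a crude
        # analogue of keeping the pattern printable).
        patch = [min(1.0, max(0.0, p - lr * g)) for p, g in zip(patch, grad)]
    return patch

patch0 = [0.5] * PATCH_SIZE
patch1 = attack(patch0)
print(objectness(patch0), objectness(patch1))
```

A real attack replaces the linear score with a full detector applied to scenes containing the rendered patch, and averages the loss over images, transformations, and (for ensembles) multiple detectors, but the descend-and-clip structure is the same.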

Notes

Acknowledgement

Thanks to Ross Girshick for helping us improve our experiments. This work is partially supported by Facebook AI.

Supplementary material

Supplementary material 1: 504439_1_En_1_MOESM1_ESM.pdf (PDF, 2.3 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. University of Maryland, College Park, USA
  2. Facebook AI, New York, USA
