
Design and Interpretation of Universal Adversarial Patches in Face Detection

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12362)

Abstract

We consider universal adversarial patches for faces: small visual elements whose addition to a face image reliably degrades the performance of face detectors. Unlike previous work, which mostly focused on the algorithmic design of adversarial examples to improve the attacker's success rate, we offer an interpretation of the patches that prevent state-of-the-art face detectors from detecting real faces. We investigate a striking phenomenon: patches optimized to suppress real face detections themselves appear face-like. This phenomenon holds across different patch initializations, locations, and scales, as well as across detector backbones and face detection frameworks. We propose new optimization-based approaches to the automatic design of universal adversarial patches for varying attack goals, including scenarios in which true positives are suppressed without introducing false positives. The proposed algorithms perform well on real-world datasets, deceiving state-of-the-art face detectors in terms of multiple precision/recall metrics and transferability.
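As a rough illustration of the optimization-based design described above (not the paper's exact formulation), the following PyTorch sketch minimizes a detector's face-confidence scores with respect to a single shared patch. The `detector` interface (returning per-anchor face logits), the fixed patch placement, and the loss are all simplifying assumptions made for the sketch.

```python
# Minimal sketch of universal adversarial patch optimization for
# detection suppression. Assumptions: `loader` yields batches of face
# images in [0, 1]; `detector` is a hypothetical stand-in mapping a
# batch of images to per-anchor face-confidence logits.
import itertools
import torch

def apply_patch(images, patch, y0, x0):
    # Paste the same patch at a fixed location on every image in the batch.
    out = images.clone()
    p = patch.size(-1)
    out[:, :, y0:y0 + p, x0:x0 + p] = patch
    return out

def optimize_patch(detector, loader, patch_size=64, steps=500, lr=1e-2):
    # One shared ("universal") patch, optimized over many face images.
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for images in itertools.islice(itertools.cycle(loader), steps):
        adv = apply_patch(images, patch, 0, 0)  # fixed top-left placement
        scores = detector(adv)                  # per-anchor face logits
        # Suppress true positives by pushing all face confidences down.
        loss = scores.sigmoid().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)              # keep the patch a valid image
    return patch.detach()
```

An attack that also avoids introducing false positives, as in the abstract's harder scenario, would need an additional penalty on spurious high-confidence detections away from real faces; the sketch omits this.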

Acknowledgement

We thank Gregory Shakhnarovich for helping to improve the writing of this paper and for valuable suggestions on the experimental design. X. Yang and J. Zhu were supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, U19B2034, U181146), Beijing Academy of Artificial Intelligence, Tsinghua-Huawei Joint Research Program, Tiangong Institute for Intelligent Computing, and the NVIDIA NVAIL Program with GPU/DGX Acceleration. H. Zhang was supported in part by the Defense Advanced Research Projects Agency under cooperative agreement HR00112020003.

Supplementary material

Supplementary material 1: 504472_1_En_11_MOESM1_ESM.pdf (pdf, 497 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Science and Technology, BNRist Center, Institute for AI, Tsinghua University, Beijing, China
  2. Microsoft Research Asia, Beijing, China
  3. TTIC, Chicago, USA
