
An Adversarial Attack on Salient Regions of Traffic Sign

Published in: Automotive Innovation

Abstract

State-of-the-art deep neural networks are vulnerable to adversarial examples with small-magnitude perturbations. In deep-learning-based automated driving, such adversarial threats expose the weakness of AI models, and this limitation can lead to severe issues regarding the safety of the intended functionality (SOTIF). From a causal perspective, adversarial attacks can be regarded as confounding effects, with spurious correlations established by non-causal features. However, little previous work has been devoted to relating adversarial examples, causality, and SOTIF. This paper proposes a robust physical adversarial perturbation generation method that targets the salient image regions of the attack class under the guidance of class activation mapping (CAM). With CAM, the confounding effects can be maximized through the intermediate variable of the front-door criterion between images and targeted attack labels. In the simulation experiment, the proposed method achieves a 94.6% targeted attack success rate (ASR) on the released dataset when speed-limit-60 km/h (speed-limit-60) signs are attacked as speed-limit-80 km/h (speed-limit-80) signs. In the real physical experiment, the targeted ASR is 75% and the untargeted ASR is 100%. Beyond these state-of-the-art attack results, detailed experiments evaluate the performance of the proposed method under low resolutions, diverse optimizers, and various defense methods. The code and data are released at the repository: https://github.com/yebin999/rp2-with-cam.
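
As a minimal illustration of the idea sketched above, and not a reproduction of the released code, the following Python/PyTorch snippet confines an iterative, RP2-style targeted perturbation to the regions that CAM marks as salient for the target class. The attribute names (model.features, model.fc), the mask threshold, and all hyperparameters are assumptions made for this sketch.

# Minimal sketch (assumptions, not the authors' released code): a CAM-guided,
# RP2-style targeted perturbation restricted to the salient region of the
# target class. Attribute names model.features and model.fc are hypothetical.
import torch
import torch.nn.functional as F

def cam_mask(model, x, target, keep_ratio=0.3):
    # CAM of Zhou et al.: weight the final conv feature maps by the
    # classifier weights of the target class, then keep the top pixels.
    with torch.no_grad():
        feats = model.features(x)                    # (B, C, h, w)
        w = model.fc.weight[target]                  # (C,)
        cam = torch.einsum("c,bchw->bhw", w, feats)  # (B, h, w)
        cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                            mode="bilinear", align_corners=False).squeeze(1)
        thresh = torch.quantile(cam.flatten(1), 1.0 - keep_ratio, dim=1)
        return (cam >= thresh.view(-1, 1, 1)).float().unsqueeze(1)  # (B, 1, H, W)

def targeted_attack(model, x, target, eps=8 / 255, alpha=1 / 255, steps=200):
    # Iterative targeted attack: the perturbation only lives inside the CAM mask.
    mask = cam_mask(model, x, target)
    y = torch.full((x.shape[0],), target, dtype=torch.long, device=x.device)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta * mask), y)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend toward the target label
            delta.clamp_(-eps, eps)              # keep the perturbation small
            delta.grad.zero_()
    return (x + delta.detach() * mask).clamp(0, 1)

In the paper's causal framing, the CAM-selected region plays the role of the front-door intermediate variable between the image and the targeted attack label, so concentrating the perturbation there maximizes the confounding effect; the full RP2-style pipeline additionally optimizes over a distribution of physical transformations (distance, angle, lighting), which this sketch omits.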




Abbreviations

ASR: Attack success rate
CAM: Class activation mapping
RP2: Robust physical perturbations
SOTIF: Safety of the intended functionality


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 62133011. The authors would like to thank TÜV SÜD for their kind and generous support. We are grateful for the efforts of our colleagues at the Sino-German Center of Intelligent Systems, Tongji University. We appreciate the suggestions on our manuscript from Dr. Qi Deng and Mr. Chenglong Zhao (PhD candidate at Shanghai Jiaotong University).

Author information

Corresponding author

Correspondence to Huilin Yin.

Additional information

Academic Editor: Shuo Feng

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yan, J., Yin, H., Ye, B. et al. An Adversarial Attack on Salient Regions of Traffic Sign. Automot. Innov. 6, 190–203 (2023). https://doi.org/10.1007/s42154-023-00220-9



  • DOI: https://doi.org/10.1007/s42154-023-00220-9
