A Strategy to Evaluate Test Time Evasion Attack Feasibility

Abstract

New attacks against computer vision systems and other perceptive machine learning approaches are currently being published at high frequency. Often the assumptions or limitations of these works are so strict that the attacks seem to have no practical relevance. On the other hand, recent reports show the effectiveness of attacks against cyber-physical systems (CPS). In particular, attacks on automotive systems demonstrate their safety impact in real-world scenarios. We discuss the practical relevance of security threats to machine learning approaches in automotive use cases and propose a strategy to evaluate the feasibility of such threats. This includes a method that can uncover existing vulnerabilities and rate their exploitability in the given use case.
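To make the threat class concrete, the following is a minimal sketch of a classic white-box test-time evasion attack, the fast gradient sign method (FGSM), against an image classifier. The pretrained ResNet-18, the random input tensor, and the perturbation budget are illustrative placeholders, not the setup evaluated in this article.

    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    # Placeholder victim model; a real feasibility assessment would target the
    # actual perception component (assumption: ImageNet weights are available).
    model = resnet18(weights="IMAGENET1K_V1").eval()

    # Stand-in for a single camera frame, with pixel values in [0, 1].
    x = torch.rand(1, 3, 224, 224, requires_grad=True)
    y_benign = model(x).argmax(dim=1)          # prediction on the clean input

    # One gradient step in the direction that increases the loss (FGSM).
    loss = F.cross_entropy(model(x), y_benign)
    loss.backward()
    eps = 8 / 255                              # attacker's L-infinity budget
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

    y_adv = model(x_adv).argmax(dim=1)
    print("benign class:", y_benign.item(), "adversarial class:", y_adv.item())

Whether such a perturbation can actually be realized on the road, for example as a printed patch or modified traffic sign, is exactly the feasibility question the proposed strategy addresses.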

Author information

Corresponding author

Correspondence to Stephan Kleber.

About this article

Cite this article

Kleber, S., Wachter, P. A Strategy to Evaluate Test Time Evasion Attack Feasibility. Datenschutz Datensich 47, 478–482 (2023). https://doi.org/10.1007/s11623-023-1802-0
