Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks

Conference paper
Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops (SAFECOMP 2020)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 12235)

Abstract

Deep learning methods are widely regarded as indispensable when it comes to designing perception pipelines for autonomous agents such as robots, drones, or automated vehicles. The main reason, however, why deep learning is not yet used for autonomous agents at large scale is safety concerns. Deep learning approaches typically exhibit black-box behavior, which makes it hard to evaluate them with respect to safety-critical aspects. While there has been some work on safety in deep learning, most papers focus on high-level safety concerns. In this work, we examine the safety concerns of deep learning methods at a deeply technical level. Additionally, we present extensive discussions of possible mitigation methods and give an outlook on which mitigation methods are still missing in order to facilitate a safety argumentation for a deep learning method.

Parts of the research leading to the presented results are funded by the German Federal Ministry for Economic Affairs and Energy within the project “KI Absicherung – Safe AI for automated driving”. We would like to thank the consortium for the successful cooperation, in particular Matthias Woehrle, Peter Schlicht, and Christian Hellert, for reviewing our work and for their thoughtful comments.

Notes

  1. Please note that while we focus on DNNs, many of the safety concerns discussed in this paper may also be valid for other types of ML-based methods.

  2. For a concise overview of common threat models see, e.g., [35].

  3. Defending against adversarial examples is currently a heavily researched topic, and other effective methods may exist; a minimal illustration of how such an example is crafted follows these notes.

  4. Even though synthetic data may look “realistic” to a human, the data-level distribution may differ significantly from that of real data, leading to non-meaningful test results.

  5. For reasons described in SC-7, the test set used for the ultimate performance evaluation needs to remain unseen until final testing.
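
The following is a minimal sketch of how an adversarial example in the sense of note 3 can be crafted, using the fast gradient sign method of Goodfellow et al. [14]. It assumes a PyTorch image classifier with inputs scaled to [0, 1]; the function name, the step size eps, and the placeholder model, inputs, and labels are illustrative assumptions, not artifacts of this paper.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, eps=0.03):
        """Craft an FGSM adversarial example with a single gradient-sign step.

        `model`, `x` (input batch scaled to [0, 1]) and `y` (ground-truth labels)
        stand in for whatever perception model and data are under test.
        """
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)  # loss the attacker tries to increase
        loss.backward()
        # Perturb each pixel by eps in the direction that increases the loss,
        # then clamp back to the valid input range.
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
        return x_adv.detach()

A defense such as adversarial training (Madry et al. [27]) would, roughly speaking, include such perturbed inputs in the training loop; whether that suffices for a safety argument is exactly the kind of question the paper discusses.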

References

  1. Adler, R., et al.: Hardening of artificial neural networks for use in safety-critical applications - a mapping study. arXiv (2019)
  2. Alcorn, M.A., et al.: Strike (with) a pose: neural networks are easily fooled by strange poses of familiar objects. arXiv (2018)
  3. Blundell, C., Cornebise, J., Kavukcuoglu, K., Wierstra, D.: Weight uncertainty in neural networks. In: ICML (2015)
  4. Bousquet, O., Boucheron, S., Lugosi, G.: Introduction to statistical learning theory. In: Bousquet, O., von Luxburg, U., Rätsch, G. (eds.) ML 2003. LNCS (LNAI), vol. 3176, pp. 169–207. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28650-9_8
  5. Brown, T.B., Mané, D., Roy, A., Abadi, M., Gilmer, J.: Adversarial patch. arXiv (2017)
  6. Burton, S., Gauerhof, L., Heinzemann, C.: Making the case for safety of machine learning in highly automated driving. In: Tonetta, S., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2017. LNCS, vol. 10489, pp. 5–16. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66284-8_1
  7. Burton, S., Gauerhof, L., Sethy, B.B., Habli, I., Hawkins, R.: Confidence arguments for evidence of performance in machine learning for highly automated driving functions. In: Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2019. LNCS, vol. 11699, pp. 365–377. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26250-1_30
  8. Carlini, N., Wagner, D.A.: Towards evaluating the robustness of neural networks. In: IEEE Symposium on Security and Privacy (2017)
  9. Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., Madry, A.: Exploring the landscape of spatial robustness. In: ICML (2019)
  10. Eykholt, K., et al.: Physical adversarial examples for object detectors. arXiv (2018)
  11. Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In: ICML (2016)
  12. Gauerhof, L., Munk, P., Burton, S.: Structuring validation targets of a machine learning function applied to automated driving. In: Gallina, B., Skavhaug, A., Bitsch, F. (eds.) SAFECOMP 2018. LNCS, vol. 11093, pp. 45–58. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99130-6_4
  13. Gharib, M., Lollini, P., Botta, M., Amparore, E., Donatelli, S., Bondavalli, A.: On the safety of automotive systems incorporating machine learning based components: a position paper. In: DSN (2018)
  14. Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: ICLR (2015)
  15. Guo, C., Pleiss, G., Sun, Y., Weinberger, K.Q.: On calibration of modern neural networks. arXiv (2017)
  16. Haase-Schütz, C., Hertlein, H., Wiesbeck, W.: Estimating labeling quality with deep object detectors. In: IEEE IV (2019)
  17. Hein, M., Andriushchenko, M., Bitterwolf, J.: Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In: CVPR (2019)
  18. Hendrycks, D., Dietterich, T.: Benchmarking neural network robustness to common corruptions and perturbations. In: ICLR (2019)
  19. ISO: Road vehicles - functional safety (ISO 26262) (2018)
  20. ISO: Road vehicles - safety of the intended functionality (ISO/PAS 21448) (2019)
  21. Kletz, T.A.: HAZOP & HAZAN: Notes on the Identification and Assessment of Hazards. Hazard Workshop Modules, Institution of Chemical Engineers (1986)
  22. Koopman, P., Fratrik, F.: How many operational design domains, objects, and events? In: Workshop on AI Safety (2019)
  23. Kurd, Z., Kelly, T.: Establishing safety criteria for artificial neural networks. In: Knowledge-Based Intelligent Information and Engineering Systems (2003)
  24. Lampert, C.H., Nickisch, H., Harmeling, S.: Attribute-based classification for zero-shot visual object categorization. In: TPAMI (2014)
  25. Lee, M., Kolter, J.Z.: On physical adversarial patches for object detection. arXiv (2019)
  26. Li, J., Schmidt, F.R., Kolter, J.Z.: Adversarial camera stickers: a physical camera-based attack on deep learning systems. arXiv (2019)
  27. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: ICLR (2018)
  28. Morgulis, N., Kreines, A., Mendelowitz, S., Weisglass, Y.: Fooling a real car with adversarial traffic signs. arXiv (2019)
  29. Pakdaman Naeini, M., Cooper, G., Hauskrecht, M.: Obtaining well calibrated probabilities using Bayesian binning. In: AAAI (2015)
  30. Schumann, J., Gupta, P., Liu, Y.: Application of neural networks in high assurance systems: a survey. In: Schumann, J., Liu, Y. (eds.) Applications of Neural Networks in High Assurance Systems. Studies in Computational Intelligence, vol. 268, pp. 1–19. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-10690-3_1
  31. Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: ICCV (2017)
  32. Szegedy, C., et al.: Intriguing properties of neural networks. In: ICLR (2014)
  33. Varshney, K.R.: Engineering safety in machine learning. In: Information Theory and Applications Workshop (2016)
  34. Wong, E., Kolter, J.Z.: Provable defenses against adversarial examples via the convex outer adversarial polytope. In: ICML (2018)
  35. Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: attacks and defenses for deep learning. In: TNNLS (2019)
  36. Zendel, O., Murschitz, M., Humenberger, M., Herzner, W.: CV-HAZOP: introducing test data validation for computer vision. In: ICCV (2015)
  37. Zendel, O., Honauer, K., Murschitz, M., Steininger, D., Domínguez, G.F.: WildDash - creating hazard-aware benchmarks. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11210, pp. 407–421. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01231-1_25
  38. Zhang, J.M., Harman, M., Ma, L., Liu, Y.: Machine learning testing: survey, landscapes and horizons. arXiv (2019)

Author information

Corresponding author

Correspondence to Shervin Raafatnia.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Willers, O., Sudholt, S., Raafatnia, S., Abrecht, S. (2020). Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks. In: Casimiro, A., Ortmeier, F., Schoitsch, E., Bitsch, F., Ferreira, P. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops. SAFECOMP 2020. Lecture Notes in Computer Science, vol. 12235. Springer, Cham. https://doi.org/10.1007/978-3-030-55583-2_25

  • DOI: https://doi.org/10.1007/978-3-030-55583-2_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-55582-5

  • Online ISBN: 978-3-030-55583-2
