
Gaussian-Based Runtime Detection of Out-of-distribution Inputs for Neural Networks

  • Conference paper
Runtime Verification (RV 2021)

Abstract

In this short paper, we introduce a simple approach for runtime monitoring of deep neural networks and show how to use it for out-of-distribution detection. The approach is based on inferring Gaussian models of some of the neurons and layers. Despite its simplicity, it performs better than recently introduced approaches based on interval abstractions, which are traditionally used in verification.
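To make the approach concrete, the sketch below fits an independent Gaussian to each neuron of a monitored layer on in-distribution activations and scores runtime inputs by their average log-likelihood under these Gaussians. It is a minimal illustration of the general idea, assuming per-neuron Gaussians and a likelihood-based score; the class name, the scoring rule, and the synthetic data are assumptions made for this example, not the authors' implementation.

```python
import numpy as np

class GaussianMonitor:
    """Per-neuron Gaussian monitor for one network layer (illustrative
    sketch, not the paper's reference implementation)."""

    def __init__(self, epsilon: float = 1e-6):
        self.mean = None
        self.var = None
        self.epsilon = epsilon  # guards against zero variance for dead neurons

    def fit(self, activations: np.ndarray) -> None:
        # activations: (n_samples, n_neurons), recorded on in-distribution data
        self.mean = activations.mean(axis=0)
        self.var = activations.var(axis=0) + self.epsilon

    def score(self, activation: np.ndarray) -> float:
        # Average Gaussian log-likelihood of a single activation vector; low
        # scores mean the activations are unlike those seen during fitting.
        log_pdf = -0.5 * (np.log(2 * np.pi * self.var)
                          + (activation - self.mean) ** 2 / self.var)
        return float(log_pdf.mean())

# Usage with synthetic stand-ins for layer activations: calibrate a rejection
# threshold on held-out in-distribution scores, then flag lower-scoring inputs.
rng = np.random.default_rng(0)
monitor = GaussianMonitor()
monitor.fit(rng.normal(0.0, 1.0, size=(1000, 64)))
print(monitor.score(rng.normal(0.0, 1.0, size=64)))  # high: in-distribution
print(monitor.score(rng.normal(5.0, 1.0, size=64)))  # low: likely OOD
```

Modeling each neuron independently keeps fitting and scoring linear in the layer width; a full multivariate Gaussian per layer would capture correlations between neurons at the price of estimating and inverting a covariance matrix.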

This research was funded in part by the DFG research training group CONVEY (GRK 2428), the DFG project 383882557 - Statistical Unbounded Verification (KR 4890/2-1), the project Audi Verifiable AI, and the BMWi-funded KARLI project (grant 19A21031C).



Author information

Corresponding author

Correspondence to Stefanie Mohr.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Hashemi, V., Křetínský, J., Mohr, S., Seferis, E. (2021). Gaussian-Based Runtime Detection of Out-of-distribution Inputs for Neural Networks. In: Feng, L., Fisman, D. (eds) Runtime Verification. RV 2021. Lecture Notes in Computer Science, vol. 12974. Springer, Cham. https://doi.org/10.1007/978-3-030-88494-9_14

  • DOI: https://doi.org/10.1007/978-3-030-88494-9_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88493-2

  • Online ISBN: 978-3-030-88494-9

  • eBook Packages: Computer Science, Computer Science (R0)
