
Gaussian-Based Runtime Detection of Out-of-distribution Inputs for Neural Networks

  • Conference paper
  • Runtime Verification (RV 2021)


In this short paper, we introduce a simple approach for runtime monitoring of deep neural networks and show how to use it for out-of-distribution detection. The approach is based on inferring Gaussian models of some of the neurons and layers. Despite its simplicity, it performs better than recently introduced approaches based on interval abstractions, which are traditionally used in verification.
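The idea in the abstract can be illustrated with a minimal sketch: fit an independent Gaussian to each monitored neuron's activation on in-distribution data, then flag runtime inputs whose activations are unlikely under those Gaussians. This is a simplified illustration under our own assumptions (per-neuron independence, a quantile-based threshold), not the paper's exact construction; the class name and parameters are hypothetical.

```python
import numpy as np

class GaussianMonitor:
    """Illustrative Gaussian runtime monitor (a sketch, not the paper's exact method).

    Fits an independent Gaussian per monitored neuron on in-distribution
    activations; at runtime, inputs whose activations have low log-likelihood
    under these Gaussians are flagged as out-of-distribution.
    """

    def __init__(self, threshold_quantile=0.01):
        # fraction of in-distribution samples we tolerate flagging (assumed knob)
        self.threshold_quantile = threshold_quantile

    def fit(self, activations):
        # activations: (n_samples, n_neurons) from a chosen layer, in-distribution data
        self.mu = activations.mean(axis=0)
        self.sigma = activations.std(axis=0) + 1e-8  # guard against zero variance
        scores = self._log_likelihood(activations)
        # accept scores above a low quantile of the in-distribution scores
        self.threshold = np.quantile(scores, self.threshold_quantile)
        return self

    def _log_likelihood(self, activations):
        # sum of per-neuron Gaussian log-densities, one score per sample
        z = (activations - self.mu) / self.sigma
        return (-0.5 * z**2 - np.log(self.sigma)
                - 0.5 * np.log(2 * np.pi)).sum(axis=1)

    def is_ood(self, activations):
        # True where the activation vector is unlikely under the fitted model
        return self._log_likelihood(activations) < self.threshold
```

In practice the monitor would observe the activations of a hidden layer of the trained network; here any `(n_samples, n_neurons)` array stands in for them.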

This research was funded in part by the DFG research training group CONVEY (GRK 2428), the DFG project 383882557 - Statistical Unbounded Verification (KR 4890/2-1), the project Audi Verifiable AI, and the BMWi-funded KARLI project (grant 19A21031C).





Corresponding author

Correspondence to Stefanie Mohr.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Hashemi, V., Křetínský, J., Mohr, S., Seferis, E. (2021). Gaussian-Based Runtime Detection of Out-of-distribution Inputs for Neural Networks. In: Feng, L., Fisman, D. (eds.) Runtime Verification. RV 2021. Lecture Notes in Computer Science, vol. 12974. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88493-2

  • Online ISBN: 978-3-030-88494-9

  • eBook Packages: Computer Science (R0)
