Abstract
In this short paper, we introduce a simple approach for runtime monitoring of deep neural networks and show how to use it for out-of-distribution detection. The approach is based on inferring Gaussian models of the activation values of selected neurons and layers. Despite its simplicity, it performs better than recently introduced approaches based on interval abstractions, which are traditionally used in verification.
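The idea sketched in the abstract can be illustrated with a minimal monitor: fit an independent Gaussian to each monitored neuron's activation over in-distribution data, then flag inputs whose activations are unlikely under those Gaussians. The sketch below is illustrative only; the class and function names are our own (not from the paper), and the per-neuron independence assumption and the negative-log-likelihood score are one plausible instantiation of such a Gaussian-based monitor.

```python
import numpy as np

class GaussianMonitor:
    """Illustrative per-neuron Gaussian monitor for OOD detection.

    Fits an independent Gaussian to each monitored neuron's activation
    over in-distribution data; at runtime, inputs whose average negative
    log-likelihood exceeds a calibrated threshold are flagged as OOD.
    """

    def fit(self, activations):
        # activations: (n_samples, n_neurons) in-distribution activations
        self.mean = activations.mean(axis=0)
        self.std = activations.std(axis=0) + 1e-8  # avoid division by zero
        return self

    def score(self, activations):
        # Average negative log-likelihood per sample (higher = more OOD-like)
        z = (activations - self.mean) / self.std
        nll = 0.5 * z**2 + np.log(self.std) + 0.5 * np.log(2 * np.pi)
        return nll.mean(axis=1)

    def is_ood(self, activations, threshold):
        # Boolean flag per sample
        return self.score(activations) > threshold


# Usage with synthetic activations (stand-ins for a real network's layer):
rng = np.random.default_rng(0)
in_dist = rng.normal(0.0, 1.0, size=(1000, 32))
monitor = GaussianMonitor().fit(in_dist)

# Calibrate the threshold, e.g. as the 99th percentile of in-distribution scores
threshold = np.percentile(monitor.score(in_dist), 99)

shifted = rng.normal(5.0, 1.0, size=(100, 32))  # simulated OOD activations
print(monitor.is_ood(shifted, threshold).mean())
```

In practice the activations would come from one or more hidden layers of the monitored network, and the threshold would be calibrated on held-out in-distribution data to fix an acceptable false-alarm rate.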
This research was funded in part by the DFG research training group CONVEY (GRK 2428), the DFG project 383882557 - Statistical Unbounded Verification (KR 4890/2-1), the project Audi Verifiable AI, and the BMWi funded KARLI project (grant 19A21031C).
© 2021 Springer Nature Switzerland AG
Hashemi, V., Křetínský, J., Mohr, S., Seferis, E. (2021). Gaussian-Based Runtime Detection of Out-of-distribution Inputs for Neural Networks. In: Feng, L., Fisman, D. (eds.) Runtime Verification. RV 2021. Lecture Notes in Computer Science, vol. 12974. Springer, Cham. https://doi.org/10.1007/978-3-030-88494-9_14
Print ISBN: 978-3-030-88493-2
Online ISBN: 978-3-030-88494-9