Task-Agnostic Out-of-Distribution Detection Using Kernel Density Estimation

Part of the Lecture Notes in Computer Science book series (LNIP, volume 12959)

Abstract

In recent years, researchers have proposed a number of successful methods for out-of-distribution (OOD) detection in deep neural networks (DNNs). So far, the scope of the highly accurate methods has been limited to image-level classification tasks. However, attempts at generally applicable methods beyond classification have not attained similar performance. In this paper, we address this limitation by proposing a simple yet effective task-agnostic OOD detection method. We estimate the probability density functions (pdfs) of intermediate features of a pre-trained DNN by performing kernel density estimation (KDE) on the training dataset. As the direct application of KDE to feature maps is hindered by their high dimensionality, we use a set of lower-dimensional marginalized KDE models instead of a single high-dimensional one. At test time, we evaluate the pdfs on a test sample and produce a confidence score that indicates whether the sample is OOD. The use of KDE eliminates the need for making simplifying assumptions about the underlying feature pdfs and makes the proposed method task-agnostic. We perform experiments on a classification task using computer vision benchmark datasets, and on a medical image segmentation task using brain MRI datasets. The results demonstrate that the proposed method consistently achieves high OOD detection performance in both classification and segmentation tasks and improves on the state of the art in almost all cases. Our code is available at https://github.com/eerdil/task_agnostic_ood. A longer version of the paper with supplementary materials is available as a preprint [8].
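
As a rough illustration of the recipe above, the following Python sketch fits one univariate Gaussian KDE per channel of a spatially averaged intermediate feature map of a pre-trained network and scores test samples by their mean log-density, so that lower scores suggest OOD inputs. This is a minimal sketch, not the authors' released implementation (see the linked repository for that): the choice of layer, the spatial averaging, SciPy's default bandwidth rule, and the mean-over-channels aggregation are illustrative assumptions.

    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.stats import gaussian_kde

    def extract_features(model, layer, x):
        """Spatially averaged activations of `layer` for a batch x, as a (B, C) array."""
        feats = {}
        handle = layer.register_forward_hook(
            lambda m, inp, out: feats.update(z=out.detach())
        )
        with torch.no_grad():
            model(x)
        handle.remove()
        return feats["z"].mean(dim=(2, 3)).cpu().numpy()

    def fit_marginal_kdes(train_feats):
        """One 1-D Gaussian KDE per channel (the lower-dimensional marginalized models)."""
        return [gaussian_kde(train_feats[:, c]) for c in range(train_feats.shape[1])]

    def confidence_score(kdes, test_feats):
        """Mean log-density over channels; higher means more in-distribution."""
        logp = np.stack([kde.logpdf(test_feats[:, c]) for c, kde in enumerate(kdes)])
        return logp.mean(axis=0)

    if __name__ == "__main__":
        # Toy example with a random CNN and random data, only to show the pipeline.
        model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 16, 3, padding=1), nn.ReLU()).eval()
        layer = model[2]                       # hypothetical choice of intermediate layer
        x_train = torch.randn(256, 3, 32, 32)  # stands in for the training set
        x_test = torch.randn(16, 3, 32, 32)    # stands in for test samples
        kdes = fit_marginal_kdes(extract_features(model, layer, x_train))
        print(confidence_score(kdes, extract_features(model, layer, x_test)))

In practice, the resulting score would be thresholded, for example at a low percentile of the scores computed on held-out in-distribution data, and scores from several layers can be combined.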

Keywords

  • Out-of-distribution detection
  • Kernel density estimation

Notes

  1. Pretrained models: https://github.com/pokaxpoka/deep_Mahalanobis_detector.

  2. TIN, LSUN, and iSUN are available at https://github.com/facebookresearch/odin.

References

  1. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., Mané, D.: Concrete problems in AI safety. arXiv preprint arXiv:1606.06565 (2016)

  2. Bishop, C.M.: Novelty detection and neural network validation. IEE Proc.-Vis. Image Sig. Process. 141(4), 217–222 (1994)

  3. Chaitanya, K., Erdil, E., Karani, N., Konukoglu, E.: Contrastive learning of global and local features for medical image segmentation with limited annotations. In: Advances in Neural Information Processing Systems, vol. 33 (2020)

  4. Cremers, D., Osher, S.J., Soatto, S.: Kernel density estimation and intrinsic alignment for shape priors in level set segmentation. Int. J. Comput. Vision 69(3), 335–351 (2006). https://doi.org/10.1007/s11263-006-7533-5

  5. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE (2009)

  6. DeVries, T., Taylor, G.W.: Learning confidence for out-of-distribution detection in neural networks. arXiv preprint arXiv:1802.04865 (2018)

  7. Di Martino, A., et al.: The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Mol. Psychiatry 19(6), 659–667 (2014)

  8. Erdil, E., Chaitanya, K., Karani, N., Konukoglu, E.: Task-agnostic out-of-distribution detection using kernel density estimation. arXiv preprint arXiv:2006.10712 (2020). https://arxiv.org/pdf/2006.10712.pdf

  9. Erdil, E., Yildirim, S., Tasdizen, T., Cetin, M.: Pseudo-marginal MCMC sampling for image segmentation using nonparametric shape priors. IEEE Trans. Image Process. 28(11), 5702–5715 (2019)

  10. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)

  11. Guo, C., Pleiss, G., Sun, Y., Weinberger, K.Q.: On calibration of modern neural networks. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 1321–1330. JMLR. org (2017)

  12. Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. In: Proceedings of International Conference on Learning Representations (2017)

  13. Hendrycks, D., Mazeika, M., Dietterich, T.: Deep anomaly detection with outlier exposure. In: International Conference on Learning Representations (2018)

  14. Hendrycks, D., Mazeika, M., Kadavath, S., Song, D.: Using self-supervised learning can improve model robustness and uncertainty. In: Advances in Neural Information Processing Systems, pp. 15637–15648 (2019)

  15. Hsu, Y.C., Shen, Y., Jin, H., Kira, Z.: Generalized ODIN: detecting out-of-distribution image without learning from out-of-distribution data. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10951–10960 (2020)

  16. Karani, N., Erdil, E., Chaitanya, K., Konukoglu, E.: Test-time adaptable neural networks for robust medical image segmentation. Med. Image Anal. 68, 101907 (2021)

  17. Kim, J., Çetin, M., Willsky, A.S.: Nonparametric shape priors for active contour-based image segmentation. Signal Process. 87(12), 3021–3044 (2007)

  18. Kim, K.H., Shim, S., Lim, Y., Jeon, J., Choi, J., Kim, B., Yoon, A.S.: RaPP: novelty detection with reconstruction along projection pathway. In: International Conference on Learning Representations (2020)

  19. Kohl, S.A., et al.: A probabilistic U-Net for segmentation of ambiguous images. arXiv preprint arXiv:1806.05034 (2018)

  20. Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images. Technical report, Citeseer (2009)

  21. Lee, K., Lee, H., Lee, K., Shin, J.: Training confidence-calibrated classifiers for detecting out-of-distribution samples. In: ICLR 2018 (2018)

  22. Lee, K., Lee, K., Lee, H., Shin, J.: A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In: Advances in Neural Information Processing Systems, pp. 7167–7177 (2018)

  23. Liang, S., Li, Y., Srikant, R.: Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv preprint arXiv:1706.02690 (2017)

  24. Liu, W., Wang, X., Owens, J., Li, Y.: Energy-based out-of-distribution detection. In: Advances in Neural Information Processing Systems, vol. 33 (2020)

  25. Nalisnick, E., Matsukawa, A., Teh, Y.W., Gorur, D., Lakshminarayanan, B.: Do deep generative models know what they don’t know? In: International Conference on Learning Representations (2018)

  26. Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., Ng, A.Y.: Reading digits in natural images with unsupervised feature learning (2011)

  27. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28

  28. Scott, D.W.: Multivariate Density Estimation: Theory, Practice, and Visualization. Wiley, Hoboken (2015)

  29. Silverman, B.W.: Density Estimation for Statistics and Data Analysis, vol. 26. CRC Press, Boca Raton (1986)

  30. Van Essen, D.C., et al.: The WU-Minn human connectome project: an overview. Neuroimage 80, 62–79 (2013)

  31. Venkatakrishnan, A.R., Kim, S.T., Eisawy, R., Pfister, F., Navab, N.: Self-supervised out-of-distribution detection in brain CT scans. arXiv preprint arXiv:2011.05428 (2020)

  32. Vyas, A., Jammalamadaka, N., Zhu, X., Das, D., Kaul, B., Willke, T.L.: Out-of-distribution detection using an ensemble of self supervised leave-out classifiers. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11212, pp. 560–574. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01237-3_34

  33. Wang, D., Shelhamer, E., Liu, S., Olshausen, B., Darrell, T.: Fully test-time adaptation by entropy minimization. arXiv preprint arXiv:2006.10726 (2020)

  34. Xu, P., Ehinger, K.A., Zhang, Y., Finkelstein, A., Kulkarni, S.R., Xiao, J.: TurkerGaze: crowdsourcing saliency with webcam based eye tracking. arXiv preprint arXiv:1504.06755 (2015)

  35. Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., Xiao, J.: LSUN: construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365 (2015)

  36. Yu, Q., Aizawa, K.: Unsupervised out-of-distribution detection by maximum classifier discrepancy. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 9518–9526 (2019)

Acknowledgement

The presented work was partly funded by: 1. Personalized Health and Related Technologies (PHRT), project number 222, ETH domain; 2. Clinical Research Priority Program Grant on Artificial Intelligence in Oncological Imaging Network, University of Zurich; 3. Swiss Data Science Center (DeepMicroIA); 4. Swiss Platform for Advanced Scientific Computing (PASC), coordinated by the Swiss National Supercomputing Centre (CSCS). We also thank Nvidia for their GPU donation.

Author information

Corresponding author

Correspondence to Ertunc Erdil.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Erdil, E., Chaitanya, K., Karani, N., Konukoglu, E. (2021). Task-Agnostic Out-of-Distribution Detection Using Kernel Density Estimation. In: Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image Analysis. UNSURE PIPPI 2021. Lecture Notes in Computer Science, vol. 12959. Springer, Cham. https://doi.org/10.1007/978-3-030-87735-4_9

  • DOI: https://doi.org/10.1007/978-3-030-87735-4_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87734-7

  • Online ISBN: 978-3-030-87735-4
