
Uncertainty Estimates as Data Selection Criteria to Boost Omni-Supervised Learning

Part of the Lecture Notes in Computer Science book series (LNIP, volume 12261)

Abstract

For many medical applications, large quantities of imaging data are routinely obtained but it can be difficult and time-consuming to obtain high-quality labels for that data. We propose a novel uncertainty-based method to improve the performance of segmentation networks when limited manual labels are available in a large dataset. We estimate segmentation uncertainty on unlabeled data using test-time augmentation and test-time dropout. We then use uncertainty metrics to select unlabeled samples for further training in a semi-supervised learning framework. Compared to random data selection, our method gives a significant boost in Dice coefficient for semi-supervised volume segmentation on the EADC-ADNI/HARP MRI dataset and the large-scale INTERGROWTH-21st ultrasound dataset. Our results show a greater performance boost on the ultrasound dataset, suggesting that our method is most useful with data of lower or more variable quality.
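
The pipeline in the abstract can be sketched in a few lines — a minimal NumPy illustration under stated assumptions, not the authors' implementation. Here `predict_fn` is a hypothetical stand-in for one stochastic forward pass of a segmentation network (dropout left active at test time, or the input randomly augmented); the per-voxel variance over repeated passes serves as the uncertainty map, and unlabeled volumes are then ranked by their mean uncertainty. Whether high- or low-uncertainty samples are preferred for further training is a design choice not fixed by this sketch.

```python
import numpy as np

def uncertainty_map(predict_fn, volume, n_samples=10):
    """Per-voxel variance of the predicted foreground probability
    over n_samples stochastic forward passes (test-time dropout
    and/or test-time augmentation)."""
    preds = np.stack([predict_fn(volume) for _ in range(n_samples)])
    return preds.var(axis=0)

def rank_by_uncertainty(predict_fn, volumes, n_samples=10):
    """Indices of the unlabeled volumes, sorted from most to least
    uncertain by the mean of their per-voxel uncertainty maps."""
    scores = [uncertainty_map(predict_fn, v, n_samples).mean()
              for v in volumes]
    return list(np.argsort(scores)[::-1])
```

For test-time augmentation, `predict_fn` would apply a random spatial transform, run the network, and invert the transform before returning, so that the repeated predictions are voxel-wise comparable.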

Keywords

  • Uncertainty
  • Omni-supervised learning
  • Boosting


Notes

  1. As most voxels distant from anatomical boundaries are consistently segmented, the uncertainty (or segmentation variance) of most voxels is 0.
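
This footnote can be checked on toy data (illustrative only, not from the paper): three simulated segmentation passes over a five-voxel strip agree everywhere except at a single boundary voxel, so the per-voxel variance vanishes everywhere else.

```python
import numpy as np

# Three stochastic segmentation passes over a 5-voxel strip:
# voxels 0-1 are always foreground, voxels 3-4 always background,
# and the passes disagree only at the boundary voxel (index 2).
masks = np.array([[1., 1., 1., 0., 0.],
                  [1., 1., 0., 0., 0.],
                  [1., 1., 0., 0., 0.]])

per_voxel_variance = masks.var(axis=0)  # nonzero only at index 2
```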


Acknowledgment

We would like to thank Nicola Dinsdale for her help with data preparation and analysis of the MRI dataset. This work is supported by funding from the Engineering and Physical Sciences Research Council (EPSRC) and Medical Research Council (MRC) [grant number EP/L016052/1]. A. T. Papageorghiou is supported by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre. A. Namburete is grateful for support from the UK Royal Academy of Engineering under the Engineering for Development Research Fellowships scheme. J. A. Noble acknowledges the National Institutes of Health (NIH) through the National Institute on Alcohol Abuse and Alcoholism (NIAAA) (U01 AA014809-14). We thank the INTERGROWTH-21st Consortium for permission to use 3D ultrasound volumes of the fetal brain.

Author information

Corresponding author

Correspondence to Lorenzo Venturini.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Venturini, L., Papageorghiou, A.T., Noble, J.A., Namburete, A.I.L. (2020). Uncertainty Estimates as Data Selection Criteria to Boost Omni-Supervised Learning. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol 12261. Springer, Cham. https://doi.org/10.1007/978-3-030-59710-8_67

  • DOI: https://doi.org/10.1007/978-3-030-59710-8_67

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59709-2

  • Online ISBN: 978-3-030-59710-8

  • eBook Packages: Computer Science, Computer Science (R0)