
Exploring Structure-Wise Uncertainty for 3D Medical Image Segmentation

Conference paper
Medical Imaging and Computer-Aided Diagnosis (MICAD 2022)

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 810)

Abstract

When applying a deep learning model to medical images, it is crucial to estimate the model uncertainty. Voxel-wise uncertainty is a useful visual marker for human experts and can be used to improve the model's voxel-wise output, such as segmentation. Moreover, uncertainty provides a solid foundation for out-of-distribution (OOD) detection, improving the model performance at the image-wise level. However, one of the frequent tasks in medical imaging is the segmentation of distinct, local structures such as tumors or lesions. Here, structure-wise uncertainty allows more precise operations than image-wise uncertainty and is more semantically aware than voxel-wise uncertainty. Yet, methods to produce uncertainty estimates for individual structures remain poorly explored. We propose a framework to measure structure-wise uncertainty and evaluate the impact of OOD data on the model performance. Thus, we identify the best uncertainty estimation (UE) method to improve the segmentation quality. The proposed framework is tested on three datasets with the tumor segmentation task: LIDC-IDRI, LiTS, and a private one with multiple brain metastases cases.


Notes

  1. https://github.com/BorisShirokikh/u-froc.

References

  1. Lee, J.G., Jun, S., Cho, Y.W., Lee, H., Kim, G.B., Seo, J.B., Kim, N.: Deep learning in medical imaging: general overview. Korean Journal of Radiology 18(4), 570–584 (2017)

  2. Kompa, B., Snoek, J., Beam, A.L.: Second opinion needed: communicating uncertainty in medical machine learning. NPJ Digital Medicine 4(1), 1–6 (2021)

  3. Iwamoto, S., Raytchev, B., Tamaki, T., Kaneda, K.: Improving the reliability of semantic segmentation of medical images by uncertainty modeling with Bayesian deep networks and curriculum learning. In: Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image Analysis, pp. 34–43. Springer (2021)

  4. Linmans, J., van der Laak, J., Litjens, G.: Efficient out-of-distribution detection in digital pathology using multi-head convolutional neural networks. In: MIDL, pp. 465–478 (2020)

  5. Sahiner, B., Pezeshk, A., Hadjiiski, L.M., Wang, X., Drukker, K., Cha, K.H., Summers, R.M., Giger, M.L.: Deep learning in medical imaging and radiation therapy. Medical Physics 46(1), e1–e36 (2019)

  6. Leibig, C., Allken, V., Ayhan, M.S., Berens, P., Wahl, S.: Leveraging uncertainty information from deep neural networks for disease detection. Scientific Reports 7(1), 1–14 (2017)

  7. Roy, A.G., Conjeti, S., Navab, N., Wachinger, C., Alzheimer's Disease Neuroimaging Initiative, et al.: Bayesian QuickNAT: model uncertainty in deep whole-brain segmentation for structure-wise quality control. NeuroImage 195, 11–22 (2019)

  8. Nair, T., Precup, D., Arnold, D.L., Arbel, T.: Exploring uncertainty measures in deep networks for multiple sclerosis lesion detection and segmentation. Medical Image Analysis 59, 101557 (2020). https://www.sciencedirect.com/science/article/pii/S1361841519300994

  9. Ozdemir, O., Woodward, B., Berlin, A.A.: Propagating uncertainty in multi-stage Bayesian convolutional neural networks with application to pulmonary nodule detection. CoRR abs/1712.00497 (2017). http://arxiv.org/abs/1712.00497

  10. Bhat, I., Kuijf, H.J., Cheplygina, V., Pluim, J.P.: Using uncertainty estimation to reduce false positives in liver lesion detection. In: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pp. 663–667 (2021)

  11. Mehrtash, A., Wells, W., Tempany, C., Abolmaesumi, P., Kapur, T.: Confidence calibration and predictive uncertainty estimation for deep medical image segmentation. IEEE Transactions on Medical Imaging (2020)

  12. Hoebel, K., Andrearczyk, V., Beers, A., Patel, J., Chang, K., Depeursinge, A., Müller, H., Kalpathy-Cramer, J.: An exploration of uncertainty information for segmentation quality assessment. In: Išgum, I., Landman, B.A. (eds.) Medical Imaging 2020: Image Processing, vol. 11313, pp. 381–390. International Society for Optics and Photonics, SPIE (2020). https://doi.org/10.1117/12.2548722

  13. DeVries, T., Taylor, G.W.: Leveraging uncertainty estimates for predicting segmentation quality. arXiv preprint arXiv:1807.00502 (2018)

  14. Seeböck, P., Orlando, J., Schlegl, T., Waldstein, S., Bogunović, H., Riedl, S., Langs, G., Schmidt-Erfurth, U.: Exploiting epistemic uncertainty of anatomy segmentation for anomaly detection in retinal OCT. IEEE Transactions on Medical Imaging (2019)

  15. Hiasa, Y., Otake, Y., Takao, M., Ogawa, T., Sugano, N., Sato, Y.: Automated muscle segmentation from clinical CT using Bayesian U-Net for personalized musculoskeletal modeling. IEEE Transactions on Medical Imaging 39(4), 1030–1040 (2020)

  16. Lakshminarayanan, B., Pritzel, A., Blundell, C.: Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in Neural Information Processing Systems 30 (2017)

  17. Jungo, A., Reyes, M.: Assessing reliability and challenges of uncertainty estimations for medical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 48–56. Springer (2019)

  18. Houlsby, N., Huszár, F., Ghahramani, Z., Lengyel, M.: Bayesian active learning for classification and preference learning. arXiv preprint arXiv:1112.5745 (2011)

  19. Smith, L., Gal, Y.: Understanding measures of uncertainty for adversarial example detection. arXiv preprint arXiv:1803.08533 (2018)

  20. Lu, S.L., Liao, H.C., Hsu, F.M., Liao, C.C., Lai, F., Xiao, F.: The intracranial tumor segmentation challenge: contour tumors on brain MRI for radiosurgery. NeuroImage 244, 118585 (2021)

  21. van der Voort, S.R., Incekara, F., Wijnenga, M.M., Kapsas, G., Gahrmann, R., Schouten, J.W., Dubbink, H.J., Vincent, A.J., van den Bent, M.J., French, P.J., et al.: The Erasmus Glioma Database (EGD): structural MRI scans, WHO 2016 subtypes, and segmentations of 774 patients with glioma. Data in Brief 37, 107191 (2021)

  22. Armato III, S.G., McLennan, G., Bidaut, L., McNitt-Gray, M.F., Meyer, C.R., Reeves, A.P., Zhao, B., Aberle, D.R., Henschke, C.I., Hoffman, E.A., et al.: The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans. Medical Physics 38(2), 915–931 (2011)

  23. Tsai, E.B., Simpson, S., Lungren, M.P., Hershman, M., Roshkovan, L., Colak, E., Erickson, B.J., Shih, G., Stein, A., Kalpathy-Cramer, J., et al.: The RSNA International COVID-19 Open Radiology Database (RICORD). Radiology 299(1), E204–E213 (2021)

  24. Bilic, P., Christ, P.F., Vorontsov, E., Chlebus, G., Chen, H., Dou, Q., Fu, C.W., Han, X., Heng, P.A., Hesser, J., et al.: The Liver Tumor Segmentation Benchmark (LiTS). arXiv preprint arXiv:1901.04056 (2019)

  25. Pimkin, A., Samoylenko, A., Antipina, N., Ovechkina, A., Golanov, A., Dalechina, A., Belyaev, M.: Multidomain CT metal artifacts reduction using partial convolution based inpainting. In: 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1–6. IEEE (2020)

  26. Saparov, T., Kurmukov, A., Shirokikh, B., Belyaev, M.: Zero-shot domain adaptation in CT segmentation by filtered back projection augmentation. In: Deep Generative Models, and Data Augmentation, Labelling, and Imperfections, pp. 243–250. Springer (2021)

  27. Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18(2), 203–211 (2021)

  28. Milletari, F., Navab, N., Ahmadi, S.A.: V-Net: fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571. IEEE (2016)


Acknowledgements

The authors acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study. This research was funded by Russian Science Foundation grant number 20-71-10134.

Author information

Correspondence to Boris Shirokikh.


Experimental Setup

1.1 Preprocessing

Here, we describe the data preparation steps, including dataset splits, normalization, and interpolation.

Mets data is randomly split into train (1140 images) and test (414 images) sets. We interpolate the images to 1 mm × 1 mm × 1 mm spacing.

LIDC data is randomly split into train (812 images) and test (204 images) sets. We clip image intensities between −1350 and 350 Hounsfield units (HU), the standard lung window. We interpolate images to 1 mm × 1 mm × 1.5 mm spacing.

LiTS is provided as two subsets, so we use the first as the test set (28 images) and the second, excluding cases with empty tumor masks, as the training set (90 images). The images are cropped to the provided liver masks. The intensities are clipped to the \([-150, 250]\) HU interval, the standard liver window. Finally, we interpolate images to 0.77 mm × 0.77 mm × 1 mm spacing.

LiTS-mod is obtained by randomly changing the reconstruction kernel to be extremely soft (\(a=-0.7, b=0.5\)) or sharp (\(a=30, b=3\)), using the implementation and notation of [26], and by adding "metal" artifacts (a ball of radius 5 and 3000 HU) via substitution of the corresponding parts of the sinogram projection, as in [25].
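
For illustration only, a rough sketch of the sinogram-substitution idea is given below. It is not the implementation of [25] or [26]: it assumes a single square axial slice stored as a NumPy array in HU, uses scikit-image's radon/iradon as a stand-in projection pipeline, and the helper name add_metal_artifact and the substitution threshold are illustrative choices.

    # Illustrative sketch only (not the implementation of [25, 26]): inject a
    # "metal"-like ball into a CT slice by substituting parts of its sinogram.
    import numpy as np
    from skimage.transform import radon, iradon

    def add_metal_artifact(slice_hu, center, radius=5, metal_hu=3000.0):
        """slice_hu: square 2D axial slice in HU; center: (row, col) of the ball."""
        n = slice_hu.shape[0]
        theta = np.linspace(0.0, 180.0, n, endpoint=False)

        # Sinograms of the original slice and of a synthetic high-intensity ball.
        sino = radon(slice_hu, theta=theta, circle=False)
        yy, xx = np.ogrid[:n, :n]
        ball = np.where((yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2,
                        metal_hu, 0.0)
        sino_ball = radon(ball, theta=theta, circle=False)

        # Substitute the projections passing through the ball (the threshold is an
        # arbitrary illustrative choice).
        mask = sino_ball > 1.0
        sino[mask] = sino_ball[mask]

        # Reconstruct the corrupted slice with filtered back projection.
        return iradon(sino, theta=theta, circle=False,
                      filter_name='ramp', output_size=n)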

Before passing images through the network, we scale their intensities to [0, 1].
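
As a minimal sketch of these steps (assuming images are NumPy arrays with known voxel spacing; the function name preprocess_ct and the window-based scaling convention are illustrative, not taken from the released code):

    # Preprocessing sketch: HU window clipping, resampling to a target voxel
    # spacing, and scaling to [0, 1]. Window-based scaling is an assumption.
    import numpy as np
    from scipy.ndimage import zoom

    def preprocess_ct(image, spacing, target_spacing, hu_window=(-1350.0, 350.0)):
        """image: 3D array in HU; spacing and target_spacing: (x, y, z) in mm."""
        lo, hi = hu_window
        # Clip intensities to the chosen window, e.g. the lung window for LIDC
        # or the (-150, 250) HU liver window for LiTS.
        image = np.clip(image.astype(np.float32), lo, hi)

        # Trilinear resampling to the target spacing, e.g. 1 x 1 x 1.5 mm for LIDC.
        factors = [s / t for s, t in zip(spacing, target_spacing)]
        image = zoom(image, factors, order=1)

        # Scale the clipped intensities to [0, 1] before feeding the network.
        return (image - lo) / (hi - lo)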

1.2 Training Setup

Although cross-entropy loss has a theoretical justification of encouraging better-calibrated predictions [16], models trained with this loss function fail in our segmentation tasks. For that reason, we use Dice loss [28] and its modifications in our experiments. Thus, uncertainty estimates might be shifted in such tasks, and experimental evaluation, as in our study, becomes even more relevant. All models are trained in a patch-based manner: patches are sampled randomly so that they contain target structures. We use the SGD optimizer with Nesterov momentum of 0.9 and an initial learning rate of \(10^{-3}\), which is decreased to \(10^{-4}\) after \(80\%\) of the epochs. For LiTS and Mets segmentation, the model is trained for 100 epochs (100 iterations per epoch, batch size 20), while for LIDC segmentation we train for 30 epochs (1000 iterations per epoch, batch size 2).
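
A hedged PyTorch sketch of this setup is shown below: a soft Dice loss in the spirit of [28], SGD with Nesterov momentum 0.9, and a step learning-rate drop from \(10^{-3}\) to \(10^{-4}\) after \(80\%\) of the epochs. The Dice smoothing term, the placeholder network, and the random patch are illustrative assumptions, not the paper's implementation.

    # Training-setup sketch (assumptions noted in comments), not the authors' code.
    import torch

    def dice_loss(logits, target, eps=1.0):
        """Soft Dice loss for binary segmentation; eps is an assumed smoothing term."""
        probs = torch.sigmoid(logits)
        dims = tuple(range(1, probs.ndim))          # reduce over all but the batch dim
        inter = (probs * target).sum(dims)
        union = probs.sum(dims) + target.sum(dims)
        return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

    num_epochs = 100                                # 30 for LIDC
    model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)  # placeholder for the 3D network
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                                momentum=0.9, nesterov=True)
    # Decrease the learning rate from 1e-3 to 1e-4 after 80% of the epochs.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[int(0.8 * num_epochs)], gamma=0.1)

    # One illustrative optimization step on a random patch; in the real loop,
    # patches are sampled so that they contain target structures.
    patches = torch.randn(2, 1, 32, 32, 32)
    masks = (torch.rand(2, 1, 32, 32, 32) > 0.5).float()
    optimizer.zero_grad()
    dice_loss(model(patches), masks).backward()
    optimizer.step()
    scheduler.step()                                # called once per epoch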


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.


Cite this paper

Vasiliuk, A., Frolova, D., Belyaev, M., Shirokikh, B. (2023). Exploring Structure-Wise Uncertainty for 3D Medical Image Segmentation. In: Su, R., Zhang, Y., Liu, H., Frangi, A.F. (eds) Medical Imaging and Computer-Aided Diagnosis. MICAD 2022. Lecture Notes in Electrical Engineering, vol 810. Springer, Singapore. https://doi.org/10.1007/978-981-16-6775-6_2


  • DOI: https://doi.org/10.1007/978-981-16-6775-6_2


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-6774-9

  • Online ISBN: 978-981-16-6775-6

