Reducing Textural Bias Improves Robustness of Deep Segmentation Models

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12722)

Abstract

Despite advances in deep learning, robustness under domain shift remains a major bottleneck in medical imaging. Findings on natural images suggest that deep neural networks can exhibit a strong textural bias in image classification tasks. In this empirical study, we draw inspiration from these findings and investigate how addressing textural bias can improve the robustness of deep segmentation models applied to three-dimensional (3D) medical data. Using publicly available MRI scans from the Developing Human Connectome Project, we study how simulating textural noise can help train robust models for a complex semantic segmentation task. We contribute an extensive empirical investigation comprising 176 experiments and show that applying specific types of simulated textural noise prior to training can yield texture-invariant models, with improved robustness when segmenting scans corrupted by previously unseen noise types and levels.
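The augmentation strategy described above can be illustrated with a minimal sketch. This is not the authors' exact pipeline; it is a hypothetical example, assuming normalised 3D volumes stored as NumPy arrays, of two common textural corruptions (additive Gaussian noise and salt-and-pepper noise) applied to a scan before it is passed to training:

```python
import numpy as np

def add_gaussian_noise(volume, std=0.1, rng=None):
    """Add zero-mean Gaussian noise to a normalised 3D volume."""
    rng = rng or np.random.default_rng()
    return volume + rng.normal(0.0, std, size=volume.shape)

def add_salt_and_pepper(volume, amount=0.05, rng=None):
    """Set a random fraction of voxels to the minimum ('pepper')
    or maximum ('salt') intensity of the volume."""
    rng = rng or np.random.default_rng()
    noisy = volume.copy()
    mask = rng.random(volume.shape)
    noisy[mask < amount / 2] = volume.min()
    noisy[mask > 1 - amount / 2] = volume.max()
    return noisy

# Corrupt a synthetic 3D scan as a stand-in for an MRI volume.
rng = np.random.default_rng(0)
scan = rng.random((64, 64, 64)).astype(np.float32)
augmented = add_salt_and_pepper(
    add_gaussian_noise(scan, std=0.05, rng=rng), amount=0.02, rng=rng
)
```

Varying the noise type and level across training samples, as the study does over its 176 experiment configurations, is what encourages the model to become invariant to texture rather than relying on it.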

Keywords

Textural bias · Domain shift · Robustness · Segmentation

Notes

Acknowledgments

The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement no. 319456. We are grateful to the families who generously supported this trial.


Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Imperial College London, London, UK
