Improving U-Net Segmentation with Active Contour Based Label Correction

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1248)

Abstract

Deterministic deep learning methods for image segmentation require very precise ground-truth labels. However, obtaining perfect segmentations for medical image analysis is highly time-consuming and usually not feasible. In ultrasound imaging this problem is especially pronounced, as ultrasound scans suffer from low contrast, speckle, and shadow artifacts, all of which contribute to imperfect manual labelling. To overcome this problem, we propose a label correction step that refines the imperfect ground-truth labels in the training set by applying active contours. This forces the ground-truth segmentations towards regions that coincide with edges in the original volume (and thus with object boundaries). We demonstrated the proposed active contour correction with a standard U-Net on the boundary segmentation of the cavum septum pellucidum in 3D fetal brain ultrasound and on the segmentation of the left ventricle in 2D ultrasound scans. The active contour label correction yielded more precise boundary predictions, suggesting that this simple correction step can improve boundary segmentation with imperfect labels.
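The abstract describes the correction only as "applying active contours" to pull each training label towards image edges before the U-Net is trained. As a rough, hypothetical illustration (not the authors' exact pipeline, which drives the contour with edge information), the sketch below refines a noisy binary label with a simplified region-based morphological active contour in pure NumPy; all function and parameter names here are our own:

```python
import numpy as np

def _neigh(u, pad_val):
    """Stack each pixel with its 4 neighbours (constant padding at the borders)."""
    p = np.pad(u, 1, constant_values=pad_val)
    return np.stack([p[1:-1, 1:-1],               # centre
                     p[:-2, 1:-1], p[2:, 1:-1],   # up, down
                     p[1:-1, :-2], p[1:-1, 2:]])  # left, right

def dilate(u):
    return _neigh(u, 0).max(axis=0)

def erode(u):
    return _neigh(u, 1).min(axis=0)

def correct_label(image, mask, n_iter=30):
    """Evolve a noisy binary label towards the region boundary implied by the
    image, using a simplified morphological active contour (region-based)."""
    u = (mask > 0).astype(np.uint8)
    for _ in range(n_iter):
        c_in = image[u == 1].mean()    # mean intensity inside the label
        c_out = image[u == 0].mean()   # mean intensity outside the label
        # Pixels on the current contour: dilation and erosion disagree there.
        boundary = dilate(u) != erode(u)
        # Region force: reassign contour pixels to the closer region mean.
        closer_in = (image - c_in) ** 2 < (image - c_out) ** 2
        u = np.where(boundary, closer_in, u).astype(np.uint8)
        # Curvature regularisation via morphological opening then closing.
        u = erode(dilate(dilate(erode(u))))
    return u
```

On real ultrasound data one would typically drive the evolution with an edge map (e.g. an inverse gradient image) rather than raw intensities, and run this correction over every training label once, before fitting the segmentation network.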

Keywords

Segmentation · Deep learning · Active contours · Ultrasound

Notes

Acknowledgements

L.S. Hesse acknowledges the support of the UK Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Award. The authors are grateful for support from the Royal Academy of Engineering under the Engineering for Development Research Fellowship scheme.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK