3D Deeply-Supervised U-Net Based Whole Heart Segmentation

  • Qianqian Tong
  • Munan Ning
  • Weixin Si
  • Xiangyun Liao
  • Jing Qin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10663)

Abstract

Accurate whole-heart segmentation from multi-modality medical images (MRI, CT) plays an important role in many clinical applications, such as precise surgical planning and improved diagnosis and treatment. This paper presents a deeply-supervised 3D U-Net for fully automatic whole-heart segmentation that jointly uses multi-modal MRI and CT images. First, a 3D U-Net is employed to coarsely localize the whole heart and extract its region of interest, which alleviates the influence of surrounding tissues. Then, we artificially enlarge the training set by extracting different regions of interest, so that a deep network can be trained effectively. Voxel-wise whole-heart segmentation is performed with the deeply-supervised 3D U-Net trained end to end. Since the two modalities provide complementary information about the heart, we fuse MRI and CT images to extract multi-modality features that characterize the overall heart structure and yield the final result. We evaluate our method on cardiac images from the Multi-Modality Whole Heart Segmentation (MM-WHS) 2017 challenge.
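The paper itself does not include code, but the deep-supervision idea summarized above can be illustrated with a minimal PyTorch sketch: auxiliary segmentation heads are attached to intermediate decoder stages of a 3D U-Net, upsampled to full resolution, and added to the voxel-wise loss with small weights. This is an assumption-laden sketch, not the authors' implementation; the network depth, channel widths (`base=16`), auxiliary-head placement, loss weights, and the 8-class label set (7 heart substructures plus background) are illustrative choices.

```python
# Minimal sketch of a deeply-supervised 3D U-Net (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with batch norm and ReLU, as in a standard 3D U-Net stage.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )


class DeeplySupervised3DUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=8, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.bottleneck = conv_block(base * 4, base * 8)
        self.up3 = nn.ConvTranspose3d(base * 8, base * 4, 2, stride=2)
        self.dec3 = conv_block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        # Main head plus two auxiliary heads on intermediate decoder features (deep supervision).
        self.out = nn.Conv3d(base, n_classes, 1)
        self.aux3 = nn.Conv3d(base * 4, n_classes, 1)
        self.aux2 = nn.Conv3d(base * 2, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool3d(e1, 2))
        e3 = self.enc3(F.max_pool3d(e2, 2))
        b = self.bottleneck(F.max_pool3d(e3, 2))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        full = x.shape[2:]
        # Auxiliary predictions are upsampled to full resolution so they share the voxel-wise loss.
        return (self.out(d1),
                F.interpolate(self.aux3(d3), size=full, mode="trilinear", align_corners=False),
                F.interpolate(self.aux2(d2), size=full, mode="trilinear", align_corners=False))


def deeply_supervised_loss(outputs, target, aux_weights=(0.4, 0.2)):
    # Cross-entropy on the main output plus down-weighted terms for the auxiliary outputs.
    main, *aux = outputs
    loss = F.cross_entropy(main, target)
    for w, a in zip(aux_weights, aux):
        loss = loss + w * F.cross_entropy(a, target)
    return loss


if __name__ == "__main__":
    net = DeeplySupervised3DUNet()
    roi = torch.randn(1, 1, 64, 64, 64)            # cropped cardiac region of interest
    labels = torch.randint(0, 8, (1, 64, 64, 64))  # voxel-wise ground-truth labels
    print(deeply_supervised_loss(net(roi), labels).item())
```

In the coarse-to-fine pipeline described in the abstract, such a network would first be applied to the whole volume to localize the heart, and then retrained on the cropped regions of interest for the final voxel-wise segmentation.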

Keywords

Whole heart segmentation · 3D deeply-supervised U-Net · Multi-modal cardiac images

Notes

Acknowledgements

This work was supported by grants from the Shenzhen Science and Technology Program (No. JCYJ20160429190300857), the China Postdoctoral Science Foundation (No. 2017M622831), and the SIAT Innovation Program for Excellent Young Researchers (No. 2017059).

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. School of Computer, Wuhan University, Wuhan, China
  2. School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
  3. Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Beijing, China
