3D Convolutional Networks for Fully Automatic Fine-Grained Whole Heart Partition

  • Xin Yang
  • Cheng Bian
  • Lequan Yu
  • Dong Ni
  • Pheng-Ann Heng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10663)


Segmenting cardiovascular volumes plays a crucial role in clinical applications, especially parsing the whole heart into fine-grained structures. However, coping with fuzzy boundaries and differentiating branchy structures in cardiovascular volumes remains challenging. In this paper, we propose a general and fully automatic solution for fine-grained whole heart partition. The proposed framework builds on the 3D Fully Convolutional Network and is reinforced in the following aspects: (1) by inheriting knowledge from a pre-trained C3D network, our network starts from a good initialization and is better able to cope with overfitting; (2) we attach several auxiliary loss functions to shallow layers to promote gradient flow and thus alleviate the training difficulties of deep neural networks; (3) considering the pronounced volume imbalance among different substructures, we introduce a multi-class Dice Similarity Coefficient based metric to balance the training across all classes. We evaluated our method on the MM-WHS Challenge 2017 datasets. Extensive experimental results demonstrate the promising performance of our method. Our framework achieves promising results across different modalities and is general enough to be applied to other volumetric segmentation tasks.
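The multi-class Dice based metric and the auxiliary losses on shallow layers are the two training-side ingredients highlighted above. The paper does not publish code, so the following PyTorch-style sketch is only an illustration under assumed names, class counts, and auxiliary weights, not the authors' implementation:

```python
# Illustrative sketch (assumptions, not the paper's code): a multi-class Dice
# loss that treats small and large substructures equally, and a total loss that
# adds down-weighted auxiliary (deep-supervision) losses from shallow layers.
import torch
import torch.nn.functional as F


def multiclass_dice_loss(logits, target, num_classes, eps=1e-5):
    """logits: (N, C, D, H, W) raw scores; target: (N, D, H, W) integer labels."""
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                              # sum over batch and spatial dims
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    # Averaging per-class Dice gives every substructure equal influence,
    # regardless of its volume.
    return 1.0 - dice_per_class.mean()


def total_loss(main_logits, aux_logits_list, target, num_classes,
               aux_weights=(0.3, 0.2)):
    """Main Dice loss plus down-weighted auxiliary losses from shallow layers."""
    loss = multiclass_dice_loss(main_logits, target, num_classes)
    for w, aux in zip(aux_weights, aux_logits_list):
        # Auxiliary predictions come from coarser feature maps and are
        # upsampled to the full label resolution before scoring.
        aux = F.interpolate(aux, size=target.shape[1:], mode='trilinear',
                            align_corners=False)
        loss = loss + w * multiclass_dice_loss(aux, target, num_classes)
    return loss
```

Averaging the per-class Dice terms equally is one simple way to counter volume imbalance; the exact weighting and auxiliary-branch placement used in the paper may differ.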



The work in this paper was supported by a grant from the National Natural Science Foundation of China (Grant No. 61571304), a grant from the Hong Kong Research Grants Council (Project No. CUHK 412513), and grants from the National Natural Science Foundation of China (Project Nos. 61233012 and 81601576).



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
  2. National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
  3. Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China