Hybrid Loss Guided Convolutional Networks for Whole Heart Parsing

  • Xin Yang
  • Cheng Bian
  • Lequan Yu
  • Dong Ni
  • Pheng-Ann Heng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10663)


CT and MR are the dominant imaging modalities in cardiovascular inspection. Segmenting the whole heart from CT and MR volumes and parsing it into distinctive substructures are highly desired in clinical practice. However, traditional methods tend to be degraded by the large variability of the heart and of the images, as well as by the demand to distinguish several substructures simultaneously. In this paper, we start with the well-founded Fully Convolutional Network (FCN) and closely couple it with 3D operators, transfer learning and a deep supervision mechanism to distill 3D contextual information and to counter the difficulties of training deep neural networks. We then focus on a main concern in our enhanced FCN: as the number of substructures to be distinguished increases, an imbalance among the classes emerges and biases training towards the major classes, and therefore must be tackled seriously. A class-balanced loss function helps address this problem, but at the risk of sacrificing segmentation details. For a better trade-off, we propose a hybrid loss that exploits different kinds of loss functions to guide training to treat all classes equally while preserving boundary details, such as the branchy structure of the great vessels. We verified our method on the MM-WHS Challenge 2017 datasets, which contain both CT and MR volumes. Our hybrid loss guided model presents superior results in concurrently labeling 7 substructures of the heart (ranked second in the CT segmentation challenge). Our framework is robust and efficient on both modalities and can be extended to other volumetric segmentation tasks.
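The abstract describes combining a class-balanced term with a detail-preserving term in one hybrid loss. As a hedged illustration only (the paper's exact formulation, weights and terms are not given here), the sketch below pairs an inverse-frequency weighted cross-entropy with a multi-class soft Dice term; the blending weight `alpha` and the NumPy formulation are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def hybrid_loss(probs, onehot, alpha=0.5, eps=1e-7):
    """Illustrative hybrid loss: class-balanced cross-entropy plus
    multi-class soft Dice. `probs` holds softmax outputs of shape
    (N, C) over N voxels and C classes; `onehot` holds the one-hot
    ground truth of the same shape. `alpha` trades off the two terms
    (a free choice here, not taken from the paper)."""
    # Class-balancing weights: inverse class frequency, normalized,
    # so small substructures are not swamped by the major classes.
    freq = onehot.mean(axis=0) + eps
    w = (1.0 / freq) / (1.0 / freq).sum()
    # Weighted cross-entropy: treats all classes more equally.
    ce = -(w * onehot * np.log(probs + eps)).sum(axis=1).mean()
    # Soft Dice averaged over classes: overlap-based, so it rewards
    # recovering boundary detail rather than per-voxel majorities.
    inter = (probs * onehot).sum(axis=0)
    denom = probs.sum(axis=0) + onehot.sum(axis=0)
    dice = ((2.0 * inter + eps) / (denom + eps)).mean()
    return alpha * ce + (1.0 - alpha) * (1.0 - dice)
```

In a real 3D FCN these terms would be computed per mini-batch on the network logits; the point of the sketch is only that the cross-entropy term carries the class balancing while the Dice term guards the overlap, and a near-perfect prediction scores a much lower loss than a wrong one.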



The work in this paper was supported by a grant from the National Natural Science Foundation of China (Grant 61571304), a grant from the Hong Kong Research Grants Council (Project no. CUHK 412513), a grant from the Shenzhen Science and Technology Program (No. JCYJ20160429190300857) and a grant from the Guangdong Province Science and Technology Plan Project (No. 2016A020220013).



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science and Engineering, The Chinese University of Hong Kong, Sha Tin, Hong Kong
  2. National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
  3. Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
