Tubular Structure Segmentation Using Spatial Fully Connected Network with Radial Distance Loss for 3D Medical Images

  • Chenglong Wang
  • Yuichiro Hayashi
  • Masahiro Oda
  • Hayato Itoh
  • Takayuki Kitasaka
  • Alejandro F. Frangi
  • Kensaku Mori
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)


This paper presents a new spatial fully connected tubular network for 3D tubular-structure segmentation. Automatic and complete segmentation of intricate tubular structures remains an unsolved challenge in medical image analysis. Airways and vasculature place high demands on medical image analysis: they are elongated fine structures whose calibers range from several tens of voxels down to voxel-level resolution, they branch in a deeply multi-scale fashion, and they exhibit complex topological and spatial relationships. Most machine/deep learning approaches rely on intensity features and ignore the spatial consistency along the structure that is distinctive of tubular anatomy. In this work, we introduce 3D slice-by-slice convolutional layers into a U-Net architecture to capture the spatial information of elongated structures. Furthermore, we present a novel loss function, coined radial distance loss, specifically designed for tubular structures. The commonly used cross-entropy loss and generalized Dice loss are sensitive to volumetric variation; however, in the segmentation of tiny tubular structures, topological errors are as important as volumetric errors. The proposed radial distance loss places the highest weight on the centerline, with the weight decreasing along the radial direction. Radial distance loss can thus help networks pay more attention to tiny structures than to thicker tubular structures. We perform experiments on bronchus segmentation in 3D CT images. The experimental results show that, compared to the baseline U-Net, our proposed network achieved improvements of about 24% and 30% in the Dice index and the centerline over ratio, respectively.
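As a rough illustration of the idea described above (a sketch, not the authors' exact formulation), a centerline-peaked weight map can be built from the Euclidean distance transform of the ground-truth mask: inside a tube, the distance to the background is largest on the centerline and falls to zero at the wall, so normalizing it yields a weight that decreases along the radial direction. The function names `radial_weight_map` and `radial_distance_loss` below are hypothetical, and the weight is combined here with a plain voxel-wise binary cross-entropy for simplicity.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def radial_weight_map(mask, eps=1e-6):
    """Hypothetical radial weight map for a binary tubular mask.

    Inside the structure, the Euclidean distance to the background is
    maximal on the centerline and zero at the wall; dividing by the
    global maximum gives weights in (0, 1] that peak on the centerline
    and decay radially (background voxels get weight 0).
    """
    d = distance_transform_edt(mask)      # distance to nearest background voxel
    return d / (d.max() + eps)

def radial_distance_loss(pred, mask):
    """Centerline-weighted voxel-wise binary cross-entropy (illustrative)."""
    w = radial_weight_map(mask)
    pred = np.clip(pred, 1e-7, 1 - 1e-7)  # avoid log(0)
    bce = -(mask * np.log(pred) + (1 - mask) * np.log(1 - pred))
    return float((w * bce).sum() / (w.sum() + 1e-6))
```

In an actual training pipeline the weight map would be precomputed per case and the weighted term implemented in the deep learning framework's loss; this NumPy version only shows the weighting scheme.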


Keywords: Tubular structure segmentation · Spatial FCN · Radial distance loss · Blood vessel · Bronchus



Parts of this research were supported by MEXT/JSPS KAKENHI (26108006, 26560255, 17H00867, 17K20099), the JSPS Bilateral Collaboration Grant, and AMED (191k1010036h0001).



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Chenglong Wang (1) (corresponding author)
  • Yuichiro Hayashi (2)
  • Masahiro Oda (2)
  • Hayato Itoh (2)
  • Takayuki Kitasaka (3)
  • Alejandro F. Frangi (4)
  • Kensaku Mori (2, 5, 6)
  1. Graduate School of Information Science, Nagoya University, Nagoya, Japan
  2. Graduate School of Informatics, Nagoya University, Nagoya, Japan
  3. Aichi Institute of Technology, Toyota, Japan
  4. School of Computing and School of Medicine, University of Leeds, Leeds, UK
  5. Information Technology Center, Nagoya University, Nagoya, Japan
  6. Research Center for Medical Bigdata, National Institute of Informatics, Tokyo, Japan
