Tubular Structure Segmentation Using Spatial Fully Connected Network with Radial Distance Loss for 3D Medical Images
This paper presents a new spatial fully connected tubular network for 3D tubular-structure segmentation. Automatic and complete segmentation of intricate tubular structures remains an unsolved challenge in medical image analysis. Airways and vasculature place high demands on medical image analysis: they are elongated, fine structures whose calibers range from several tens of voxels down to voxel-level resolution, branching in a deeply multi-scale fashion with complex topological and spatial relationships. Most machine/deep learning approaches rely on intensity features and ignore the spatial consistency that is distinctive of tubular structures. In this work, we introduce 3D slice-by-slice convolutional layers into a U-Net architecture to capture the spatial information of elongated structures. Furthermore, we present a novel loss function, coined radial distance loss, designed specifically for tubular structures. The commonly used cross-entropy loss and generalized Dice loss are sensitive to volumetric variation; however, in tiny tubular-structure segmentation, topological errors are as important as volumetric errors. The proposed radial distance loss places the highest weight on the centerline and decreases this weight along the radial direction, which helps networks pay more attention to tiny structures than to thicker tubular structures. We perform experiments on bronchus segmentation in 3D CT images. The experimental results show that, compared to the baseline U-Net, our proposed network achieved improvements of about 24% and 30% in the Dice index and the centerline over ratio, respectively.
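To make the weighting idea concrete, the following is a minimal sketch of a radial distance weighting scheme in the spirit described above. The abstract only states that the weight is highest on the centerline and decays radially; the exponential decay form, the `alpha` decay rate, and the assumption of a precomputed binary centerline mask are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def radial_weight_map(centerline, alpha=0.5):
    """Weight map that is 1 on the centerline and decays radially.

    `centerline` is a binary 3D array marking centerline voxels.
    The exponential decay with rate `alpha` is an assumed form.
    """
    # Euclidean distance (in voxels) from every voxel to the
    # nearest centerline voxel (0 on the centerline itself).
    d = distance_transform_edt(~centerline.astype(bool))
    return np.exp(-alpha * d)

def radial_distance_loss(prob, label, centerline, alpha=0.5, eps=1e-7):
    """Voxel-wise binary cross-entropy weighted by radial distance."""
    w = radial_weight_map(centerline, alpha)
    ce = -(label * np.log(prob + eps) + (1.0 - label) * np.log(1.0 - prob + eps))
    return float(np.sum(w * ce) / np.sum(w))
```

Because the weight concentrates near centerlines regardless of tube caliber, thin branches contribute to the loss on a more equal footing with thick ones, which matches the stated motivation of emphasizing tiny structures.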
Keywords: Tubular structure segmentation · Spatial FCN · Radial distance loss · Blood vessel · Bronchus
Parts of this research were supported by MEXT/JSPS KAKENHI (26108006, 26560255, 17H00867, 17K20099), the JSPS Bilateral Collaboration Grant, and AMED (191k1010036h0001).