
Multi-structure Segmentation from Partially Labeled Datasets. Application to Body Composition Measurements on CT Scans

  • Germán González
  • George R. Washko
  • Raúl San José Estépar
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11040)

Abstract

Labeled data is the current bottleneck of medical image research. Substantial efforts are made to generate segmentation masks that characterize a given organ, and the community ends up with multiple label maps of individual structures in different cases that are not suitable for current multi-organ segmentation frameworks. Our objective is to leverage segmentations of multiple organs in different cases to train a robust multi-organ deep learning segmentation network. We propose a modified cost function that takes into account only the voxels labeled in each image, ignoring unlabeled structures. We evaluate the proposed methodology in the context of pectoralis muscle and subcutaneous fat segmentation on chest CT scans. Six different structures are segmented from an axial slice centered on the transversal aorta. We compare a network trained on 3,000 images in which only one structure has been annotated per image (PUNet) against six UNets (one per structure) and a multi-class UNet trained on 500 fully annotated images, showing equivalence between the three methods (Dice coefficients of 0.909, 0.906 and 0.909, respectively). We further propose a modification of the architecture that adds convolutions to the skip connections (CUNet). When trained with partially labeled images, it statistically significantly outperforms the other three methods (Dice 0.916, \(p\!<\!0.0001\)). We therefore show that (a) when the number of organ annotations is kept constant, training with partially labeled images is equivalent to training with fully labeled data, and (b) adding convolutions to the skip connections improves performance.
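The partial-label cost function described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the PyTorch framework, the sentinel value IGNORE_LABEL and the function names are hypothetical choices, and the paper's exact formulation may differ. The idea illustrated is simply that voxels belonging to structures that were not annotated in a given image are masked out of the loss, so they contribute no gradient.

```python
# Minimal sketch (assumed PyTorch, not the paper's code) of a cost function
# that only scores voxels whose structure is annotated in a given image.
import torch
import torch.nn.functional as F

IGNORE_LABEL = 255  # hypothetical sentinel: "structure not annotated in this image"

def partial_cross_entropy(logits, target):
    # logits: (N, C, H, W) raw scores; target: (N, H, W) integer labels,
    # with IGNORE_LABEL marking voxels of unlabeled structures.
    return F.cross_entropy(logits, target, ignore_index=IGNORE_LABEL)

def partial_dice_loss(logits, target, num_classes, eps=1e-6):
    # The same masking idea applied to a soft Dice loss: unlabeled voxels are
    # excluded from both the intersection and the denominator.
    mask = (target != IGNORE_LABEL)                    # labeled voxels only
    probs = torch.softmax(logits, dim=1)
    safe = target.clamp(min=0, max=num_classes - 1)    # placeholder class for ignored voxels
    onehot = F.one_hot(safe, num_classes).permute(0, 3, 1, 2).float()
    m = mask.unsqueeze(1).float()
    inter = (probs * onehot * m).sum(dim=(0, 2, 3))
    denom = ((probs + onehot) * m).sum(dim=(0, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()
```

Either loss can drop in wherever a fully supervised multi-class loss would be used; training images annotated for a single structure then behave exactly like fully annotated ones, except that the gradient flows only from the labeled region.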

Keywords

Deep learning · Multi-organ segmentation · UNet · Pectoralis


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Sierra Research S.L., Alicante, Spain
  2. Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
  3. Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
