Multi-organ Segmentation of Chest CT Images in Radiation Oncology: Comparison of Standard and Dilated UNet

  • Umair Javaid
  • Damien Dasnoy
  • John A. Lee
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11182)

Abstract

Automatic delineation of organs at risk (OAR) in computed tomography (CT) images is a crucial step in treatment planning for radiation oncology. Manual delineation of organs is challenging and time-consuming, and is subject to inter-observer variability. Automatic organ delineation has traditionally relied on non-rigid registration and atlases; lately, however, deep learning has emerged as a strong competitor, with architectures dedicated to image segmentation such as UNet. In this paper, we first assessed the standard UNet for delineating multiple organs in CT images. Second, we studied the effect of dilated convolutional layers in UNet, which better capture the global context of CT images and learn the anatomy more effectively, improving the localization of organ delineations. We evaluated the performance of a standard UNet and a dilated UNet (with dilated convolutional layers) on four chest organs (esophagus, left lung, right lung, and spinal cord) from 29 lung image acquisitions, and observed that the dilated UNet delineates soft tissues, notably the esophagus and spinal cord, with higher accuracy than the standard UNet. We quantified the segmentation accuracy of both models with spatial overlap measures: Dice similarity coefficient, recall and precision, and Hausdorff distance. Compared to the standard UNet, the dilated UNet yields the best Dice scores for soft organs, whereas no significant difference in Dice score was observed for the lungs: \(0.84\pm 0.07\) vs \(0.71\pm 0.10\) for the esophagus, \(0.99\pm 0.01\) vs \(0.99\pm 0.01\) for the left lung, \(0.99\pm 0.01\) vs \(0.99\pm 0.01\) for the right lung, and \(0.91\pm 0.05\) vs \(0.88\pm 0.04\) for the spinal cord.
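
To make the dilation mechanism concrete, here is a minimal sketch contrasting a standard 3x3 convolution block with a dilated one, together with the Dice similarity coefficient used to quantify spatial overlap. It assumes a TensorFlow/Keras setup; the input size, filter count, and dilation rate are illustrative choices, not the authors' exact configuration.

    # Illustrative sketch, not the authors' exact architecture: a standard
    # 3x3 convolution block versus a dilated one. With dilation_rate=2, each
    # 3x3 kernel spans a 5x5 receptive field at the same parameter count,
    # which is how dilated layers capture more global context per layer.
    import numpy as np
    from tensorflow.keras import Input, layers

    def conv_block(x, filters, dilation=1):
        """Two 3x3 convolutions; dilation > 1 enlarges the receptive field."""
        x = layers.Conv2D(filters, 3, padding="same",
                          dilation_rate=dilation, activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same",
                          dilation_rate=dilation, activation="relu")(x)
        return x

    inputs = Input(shape=(256, 256, 1))            # one CT slice (size is illustrative)
    standard = conv_block(inputs, 64, dilation=1)  # standard UNet-style block
    dilated = conv_block(inputs, 64, dilation=2)   # dilated variant

    def dice(y_true, y_pred, eps=1e-7):
        """Dice similarity coefficient 2|A n B| / (|A| + |B|) on binary masks."""
        y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
        intersection = np.logical_and(y_true, y_pred).sum()
        return (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)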

Keywords

Multi-organ segmentation · Computed tomography

Acknowledgments

Umair Javaid is funded by the Fonds de la Recherche Scientifique - FNRS, Télévie grant no. 7.4625.16. Damien Dasnoy is a Research Fellow of the FNRS. John A. Lee is a Senior Research Associate with the Belgian FNRS. We thank the UCLouvain university hospital Saint-Luc for providing the data. We also thank NVIDIA Corporation for providing Titan X (Pascal) GPUs.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. ICTEAM, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
  2. IREC/MIRO, Université Catholique de Louvain, Brussels, Belgium
