
Abstract

Accurate segmentation of organs-at-risk (OARs) from Computed Tomography (CT) images is a key step in efficient radiation therapy planning for nasopharyngeal carcinoma (NPC) treatment. Convolutional Neural Networks (CNNs) have recently become the state-of-the-art method for automated OAR segmentation. However, due to the low contrast of head and neck organ tissues in CT, fully automatic segmentation may still need to be refined to become accurate and robust enough for clinical use. We propose a deep learning-based multi-organ interactive segmentation method to improve the results obtained by an automatic CNN and to reduce the user interactions needed during refinement for higher accuracy. We use one CNN to obtain an initial automatic segmentation, on which user interactions are added to indicate mis-segmentations. Another CNN takes the user interactions and the initial segmentation as input and gives a refined result. We propose a dimension-separate lightweight network that gives faster and better dense predictions. In addition, we propose a mis-segmentation-based weighting strategy combined with the loss function to achieve more accurate segmentation. We validated the proposed framework in the context of 3D head and neck organ segmentation from CT images. Experimental results show that our method achieves a large improvement over automatic CNNs, and obtains higher accuracy with fewer user interventions and less time compared with traditional interactive segmentation methods.
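As a rough illustration of the mis-segmentation-based weighting idea described above, the sketch below weights the per-voxel cross-entropy of the refinement CNN more heavily wherever the initial automatic CNN disagreed with the reference labels. It is a minimal sketch under stated assumptions, not the paper's exact formulation: the function name, the boost parameter, and the use of plain cross-entropy are illustrative choices.

    # Minimal sketch (assumed names and plain cross-entropy, not the authors' code)
    import torch
    import torch.nn.functional as F

    def mis_seg_weighted_loss(logits, target, initial_pred, boost=2.0):
        """Cross-entropy in which voxels mis-labelled by the initial automatic
        CNN receive a larger weight, so the refinement CNN focuses on them.

        logits:       (N, C, D, H, W) raw scores from the refinement network
        target:       (N, D, H, W)    reference (or user-corrected) labels
        initial_pred: (N, D, H, W)    argmax labels from the automatic CNN
        boost:        extra weight given to initially mis-segmented voxels
        """
        per_voxel_ce = F.cross_entropy(logits, target, reduction="none")
        # Voxels where the automatic segmentation disagrees with the reference
        # are the mis-segmented ones; weight them more heavily.
        weights = torch.where(initial_pred != target,
                              torch.full_like(per_voxel_ce, boost),
                              torch.ones_like(per_voxel_ce))
        return (weights * per_voxel_ce).mean()

The same weighting map could be combined with other segmentation losses; the specific loss functions and weighting scheme used by the authors are given in the paper itself.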


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Wenhui Lei (1)
  • Huan Wang (1)
  • Ran Gu (1)
  • Shichuan Zhang (2)
  • Shaoting Zhang (1)
  • Guotai Wang (1, corresponding author)
  1. School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
  2. Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, China
