DeepIGeoS-V2: Deep Interactive Segmentation of Multiple Organs from Head and Neck Images with Lightweight CNNs
Accurate segmentation of organs-at-risk (OARs) from Computed Tomography (CT) images is a key step in efficient radiation therapy planning for nasopharyngeal carcinoma (NPC) treatment. Convolutional Neural Networks (CNNs) have recently become the state of the art for automated OAR segmentation. However, due to the low contrast of head and neck organ tissues in CT, fully automatic segmentations may still need refinement to become accurate and robust enough for clinical use. We propose a deep learning-based multi-organ interactive segmentation method that improves the results obtained by an automatic CNN and reduces the user interactions needed during refinement for higher accuracy. One CNN produces an initial automatic segmentation, on which user interactions are added to indicate mis-segmentations. A second CNN takes the user interactions and the initial segmentation as input and produces a refined result. We propose a dimension-separated lightweight network that yields faster and more accurate dense predictions. In addition, we propose a mis-segmentation-based weighting strategy, combined with the loss function, to achieve more accurate segmentation. We validated the proposed framework in the context of 3D head and neck organ segmentation from CT images. Experimental results show that our method achieves a large improvement over automatic CNNs, and obtains higher accuracy with fewer user interventions and less time compared with traditional interactive segmentation methods.
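The abstract does not give the exact form of the mis-segmentation-based weighting, but the general idea of combining a weighting strategy with a loss function can be sketched as follows: voxels that the user has marked as mis-segmented receive a larger weight in a voxel-wise cross-entropy loss, so the refinement CNN is penalized more for errors there. The function name, the binary setting, and the weight values `base_w` and `boost_w` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def missegmentation_weighted_ce(pred, target, interaction_mask,
                                base_w=1.0, boost_w=5.0, eps=1e-7):
    """Binary cross-entropy with extra weight on user-indicated voxels.

    pred             : predicted foreground probabilities, in (0, 1)
    target           : ground-truth labels, 0 or 1
    interaction_mask : 1 where the user marked a mis-segmentation, else 0
    base_w, boost_w  : illustrative weights (assumed, not from the paper)
    """
    # Up-weight voxels the user flagged as mis-segmented.
    w = np.where(interaction_mask > 0, boost_w, base_w)
    p = np.clip(pred, eps, 1 - eps)
    ce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    # Normalize by the total weight so the loss scale stays comparable.
    return float(np.sum(w * ce) / np.sum(w))
```

With this weighting, a prediction error at a user-clicked voxel raises the loss more than the same error elsewhere, which is the qualitative behavior the weighting strategy aims for.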