Semi-supervised Segmentation of Liver Using Adversarial Learning with Deep Atlas Prior
Medical image segmentation is one of the most important steps in computer-aided intervention and diagnosis. Although deep learning-based segmentation methods have achieved great success in the computer vision domain, several challenges remain in the medical image domain. In comparison with natural images, medical image datasets are usually small because annotation is extremely time-consuming and requires expert knowledge; effective use of unannotated data is therefore essential for medical image segmentation. On the other hand, medical images carry many anatomical priors that natural images lack, such as the shape and position of organs, and incorporating this prior knowledge into deep learning is a vital issue for accurate medical image segmentation. To address these two problems, in this paper we propose a semi-supervised adversarial learning model with a Deep Atlas Prior (DAP) to improve the accuracy of liver segmentation in CT images. We trained the semi-supervised adversarial learning model using both annotated and unannotated images. The DAP, which is based on a probabilistic atlas of the organ (liver) and encodes prior information such as its shape and position, is combined with the conventional focal loss to aid segmentation. We refer to the combined loss as a Bayesian loss, and to the conventional focal loss, which utilizes the predicted probabilities of the training data from the previous learning epoch, as a likelihood loss. Experiments on the ISBI LiTS 2017 challenge dataset showed that the performance of the semi-supervised network was significantly improved by incorporating the DAP.
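The abstract describes combining a focal (likelihood) loss with an atlas-derived prior into a single Bayesian loss. The sketch below illustrates one plausible form of that combination: a per-pixel focal loss modulated by a weight computed from a probabilistic atlas. The specific weighting function (`prior_weight`, emphasizing atlas-uncertain regions where the prior gives the least guidance) is an assumption for illustration only, not the paper's exact formulation.

```python
import numpy as np

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Per-pixel binary focal loss: -(1 - p_t)^gamma * log(p_t).

    pred:   predicted foreground probabilities in [0, 1]
    target: binary ground-truth labels (0 or 1)
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    p_t = np.where(target == 1, pred, 1.0 - pred)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

def bayesian_loss(pred, target, atlas, gamma=2.0):
    """Hypothetical combined (Bayesian) loss: the focal likelihood term
    weighted per pixel by a term derived from the probabilistic atlas.

    Assumption: weight is 1 where the atlas is maximally uncertain
    (probability 0.5) and 0 where it is certain (0 or 1), so training
    focuses on regions the anatomical prior cannot decide on its own.
    """
    prior_weight = 1.0 - 2.0 * np.abs(atlas - 0.5)
    likelihood = focal_loss(pred, target, gamma=gamma)
    return float(np.mean(prior_weight * likelihood))
```

In a full training loop, `pred` would come from the segmentation network and `atlas` from registering a population atlas to the input CT volume; the adversarial branch for unannotated images is orthogonal to this loss and omitted here.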
Keywords: Liver segmentation · Semi-supervised · Deep Atlas Prior · Adversarial learning
This work was supported in part by the Major Scientific Research Project of Zhejiang Lab under Grant No. 2018DG0ZX01, in part by the Key Science and Technology Innovation Support Program of Hangzhou under Grant No. 20172011A038, and in part by Grants-in-Aid for Scientific Research from the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) under Grants No. 18H03267 and No. 17H00754.