SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation
Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: the critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only gradients passed along by the critic, with the aim of minimizing the same multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and that it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013, SegAN gives performance comparable to the state of the art for whole tumor and tumor core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor core segmentation; on BRATS 2015, SegAN achieves better performance than the state of the art in both Dice score and precision.
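The multi-scale L1 loss described above compares features that the critic extracts, at several layers, from the image masked by the predicted label map versus the image masked by the ground-truth map; the critic maximizes this quantity while the segmentor minimizes it. A minimal NumPy sketch of the loss itself is given below, assuming the critic exposes its per-scale feature maps as a list of arrays (the function name and list representation are illustrative, not the authors' implementation):

```python
import numpy as np

def multiscale_l1_loss(pred_feats, gt_feats):
    """Multi-scale L1 loss: mean absolute difference between critic
    features of the prediction-masked input and the ground-truth-masked
    input, averaged over all scales.

    pred_feats, gt_feats: lists of same-shaped feature arrays, one per
    critic layer (coarse and fine scales alike).
    """
    assert len(pred_feats) == len(gt_feats), "one feature map per scale"
    per_scale = [np.mean(np.abs(p - g)) for p, g in zip(pred_feats, gt_feats)]
    return float(np.mean(per_scale))
```

In the alternating scheme, a critic update step would ascend the gradient of this loss with respect to the critic's parameters, and a segmentor update step would descend it with respect to the segmentor's parameters; averaging over scales is what couples global (coarse-layer) and local (fine-layer) feature agreement in a single scalar objective.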
This research was supported in part by the Intramural Research Program of the National Institutes of Health (NIH), National Library of Medicine (NLM), and Lister Hill National Center for Biomedical Communications (LHNCBC), under Contract HHSN276201500692P.
- Arjovsky, M., Chintala, S., Bottou, L. (2017). Wasserstein GAN. arXiv:1701.07875.
- Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6), 679–698.
- Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K. (2015). Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR. arXiv:1412.7062.
- Cobzas, D., Birkbeck, N., Schmidt, M., Jagersand, M., Murtha, A. (2007). 3D variational brain tumor segmentation using a high dimensional feature set. In IEEE 11th international conference on computer vision (ICCV 2007) (pp. 1–8). IEEE.
- Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672–2680).
- Isola, P., Zhu, J. Y., Zhou, T., Efros, A. A. (2016). Image-to-image translation with conditional adversarial networks. arXiv:1611.07004.
- Lee, C. H., Wang, S., Murtha, A., Brown, M., Greiner, R. (2008). Segmenting brain tumors using pseudo-conditional random fields. In Medical image computing and computer-assisted intervention–MICCAI 2008 (pp. 359–366).
- Lefohn, A., Cates, J., Whitaker, R. (2003). Interactive, GPU-based level sets for 3D segmentation. In Medical image computing and computer-assisted intervention–MICCAI 2003 (pp. 564–572).
- Lin, G., Shen, C., van den Hengel, A., Reid, I. (2016). Efficient piecewise training of deep structured models for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3194–3203).
- Long, J., Shelhamer, E., Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3431–3440).
- Luc, P., Couprie, C., Chintala, S., Verbeek, J. (2016). Semantic segmentation using adversarial networks. arXiv:1611.08408.
- Noh, H., Hong, S., Han, B. (2015). Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE international conference on computer vision (pp. 1520–1528).
- Radford, A., Metz, L., Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434.
- Ronneberger, O., Fischer, P., Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In International conference on medical image computing and computer-assisted intervention (pp. 234–241). Springer.
- Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X. (2016). Improved techniques for training GANs. In Advances in neural information processing systems (pp. 2226–2234).
- Wels, M., Carneiro, G., Aplas, A., Huber, M., Hornegger, J., Comaniciu, D. (2008). A discriminative model-constrained graph cuts approach to fully automated pediatric brain tumor segmentation in 3-D MRI. In Medical image computing and computer-assisted intervention–MICCAI 2008 (pp. 67–75).
- Zhang, H., Xu, T., Li, H., Zhang, S., Huang, X., Wang, X., Metaxas, D. (2017). StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In IEEE international conference on computer vision (ICCV) (pp. 5907–5915).