Deep Adversarial Networks for Biomedical Image Segmentation Utilizing Unannotated Images

  • Yizhe Zhang
  • Lin Yang
  • Jianxu Chen
  • Maridel Fredericksen
  • David P. Hughes
  • Danny Z. Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10435)


Semantic segmentation is a fundamental problem in biomedical image analysis. In biomedical practice, it is often the case that only limited annotated data are available for model training. Unannotated images, on the other hand, are easier to acquire. How to utilize unannotated images for training effective segmentation models is an important issue. In this paper, we propose a new deep adversarial network (DAN) model for biomedical image segmentation, aiming to attain consistently good segmentation results on both annotated and unannotated images. Our model consists of two networks: (1) a segmentation network (SN) to conduct segmentation; (2) an evaluation network (EN) to assess segmentation quality. During training, EN is encouraged to distinguish between segmentation results of unannotated images and annotated ones (by giving them different scores), while SN is encouraged to produce segmentation results of unannotated images such that EN cannot distinguish these from the annotated ones. Through an iterative adversarial training process, because EN is constantly “criticizing” the segmentation results of unannotated images, SN can be trained to produce more and more accurate segmentation for unannotated and unseen samples. Experiments show that our proposed DAN model is effective in utilizing unannotated image data to obtain considerably better segmentation.
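The alternating objectives described above can be sketched numerically. This is a minimal, hypothetical illustration (not the authors' implementation): `en_loss` trains the evaluation network to score segmentations of annotated images high and those of unannotated images low, while `sn_loss` adds an adversarial term that pushes the segmentation network to raise EN's score on unannotated results; the weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def en_loss(score_annotated, score_unannotated):
    """EN objective (sketch): binary cross-entropy that rewards giving
    high scores to segmentations of annotated images and low scores to
    segmentations of unannotated images."""
    eps = 1e-12  # numerical guard for log(0)
    return -(np.log(score_annotated + eps)
             + np.log(1.0 - score_unannotated + eps))

def sn_loss(supervised_loss, score_unannotated, lam=0.1):
    """SN objective (sketch): the usual supervised segmentation loss on
    annotated data, plus an adversarial term that is small when EN can
    no longer tell unannotated results apart from annotated ones."""
    eps = 1e-12
    return supervised_loss - lam * np.log(score_unannotated + eps)
```

Training alternates between the two: EN takes a step minimizing `en_loss`, then SN takes a step minimizing `sn_loss`, so EN's "criticism" steadily improves SN's output on unannotated images.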



This project was supported in part by the National Science Foundation under grant CCF-1640081, and the Nanoelectronics Research Corporation (NERC), a wholly-owned subsidiary of the Semiconductor Research Corporation (SRC), through Extremely Energy Efficient Collective Electronics (EXCEL), an SRC-NRI Nanoelectronics Research Initiative under Research Task ID 2698.005. The research was supported in part by NSF grants CCF-1217906, CNS-1629914, CCF-1617735, and IOS-1558062, and NIH grant R01 GM116927-02.

Supplementary material

455908_1_En_47_MOESM1_ESM.pdf — Supplementary material 1 (PDF, 11.8 MB)



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Yizhe Zhang (1)
  • Lin Yang (1)
  • Jianxu Chen (1)
  • Maridel Fredericksen (2)
  • David P. Hughes (2)
  • Danny Z. Chen (1)

  1. Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, USA
  2. Department of Entomology and Department of Biology, Center for Infectious Disease Dynamics, Pennsylvania State University, University Park, USA
