
TarGAN: Target-Aware Generative Adversarial Networks for Multi-modality Medical Image Translation

  • Conference paper
Medical Image Computing and Computer Assisted Intervention – MICCAI 2021

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12906)

Abstract

Paired multi-modality medical images can provide complementary information that helps physicians make more informed decisions than single-modality images alone. In practice, however, such pairs are difficult to obtain due to multiple factors (e.g., time, cost, radiation dose). Multi-modality medical image translation has therefore attracted increasing research interest. Existing works, though, mainly focus on the translation quality of the whole image rather than of a critical target area or region of interest (ROI), such as an organ. This leads to poor-quality translation of the localized target area, which becomes blurry, deformed, or marred by spurious textures. In this paper, we propose a novel target-aware generative adversarial network called TarGAN, a generic multi-modality medical image translation model capable of (1) learning multi-modality medical image translation without relying on paired data, and (2) enhancing the quality of the generated target area with the help of target-area labels. The generator of TarGAN jointly learns mappings at two levels simultaneously: whole-image translation and target-area translation. These two mappings are interrelated through a proposed crossing loss. Experiments with both quantitative measures and qualitative evaluations demonstrate that TarGAN outperforms state-of-the-art methods in all cases. A subsequent segmentation task demonstrates the effectiveness of images synthesized by TarGAN in a real-world application. Our code is available at https://github.com/cs-xiao/TarGAN.
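The page gives only a high-level description of the method, but the two-level mapping and the crossing loss can be illustrated with a short sketch. Below is a minimal PyTorch-style toy example, assuming a shared encoder with two decoding heads (one for the whole image, one for the target area) and an L1-style agreement term inside the ROI mask standing in for the crossing loss. The actual TarGAN architecture and loss formulation are in the paper and repository; every name, shape, and loss form here is illustrative only.

```python
# Minimal sketch of the dual-mapping idea described in the abstract: one
# generator produces both a whole-image translation and a target-area (ROI)
# translation, and a "crossing" term ties the two mappings together.
# ASSUMPTION: the exact crossing loss is not given on this page; an L1
# agreement inside the ROI mask is used here purely for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualMappingGenerator(nn.Module):
    """Toy stand-in for the TarGAN generator: shared encoder, two heads."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        # One head translates the whole image, the other the target area.
        self.whole_head = nn.Conv2d(width, channels, 3, padding=1)
        self.target_head = nn.Conv2d(width, channels, 3, padding=1)

    def forward(self, x):
        h = self.encoder(x)
        return self.whole_head(h), self.target_head(h)

def crossing_loss(whole_out, target_out, roi_mask):
    """Assumed form: inside the ROI, the whole-image translation and the
    target-area translation should agree."""
    return F.l1_loss(whole_out * roi_mask, target_out * roi_mask)

# Usage sketch: the generator-side consistency term for one training step.
g = DualMappingGenerator()
x = torch.randn(4, 1, 64, 64)                     # source-modality slices
mask = (torch.rand(4, 1, 64, 64) > 0.5).float()   # target-area labels
whole, target = g(x)
loss = crossing_loss(whole, target, mask)
loss.backward()
```

This sketch shows only the consistency side of training; the adversarial terms and the unpaired-translation machinery (e.g., cycle-style constraints) that the full model would need are omitted.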



Acknowledgments

This work is supported in part by the Natural Science Foundation of Guangdong Province (2017A030313358, 2017A030313355, 2020A1515010717), the Guangzhou Science and Technology Planning Project (201704030051), the Fundamental Research Funds for the Central Universities (2019MS073), NSF-1850492 (to R.L.) and NSF-2045804 (to R.L.).

Author information

Correspondence to Jia Wei.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1364 KB)

Rights and permissions

Reprints and permissions

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, J., Wei, J., Li, R. (2021). TarGAN: Target-Aware Generative Adversarial Networks for Multi-modality Medical Image Translation. In: de Bruijne, M., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Lecture Notes in Computer Science, vol. 12906. Springer, Cham. https://doi.org/10.1007/978-3-030-87231-1_3

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-87231-1_3


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87230-4

  • Online ISBN: 978-3-030-87231-1

  • eBook Packages: Computer Science (R0)
