FundusGAN: A One-Stage Single Input GAN for Fundus Synthesis

  • Conference paper
  • First Online:
Pattern Recognition and Computer Vision (PRCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13535)


Abstract

Annotating medical images, especially fundus images with their complex structures, requires both expertise and time. To this end, fundus image synthesis methods have been proposed that obtain samples of specific categories by combining vessel components with basic fundus images, a process that typically requires well-segmented vessels extracted from real fundus images. Unlike these methods, we present a one-stage fundus image generation network that produces healthy fundus images from scratch. First, we propose a basic attention Generator that captures both global and local features. Second, we guide the Generator to focus on multi-scale fundus texture and structure features for better synthesis. Third, we design a self-motivated strategy to construct a vessel-assisting module for vessel refinement. By integrating these three sub-modules, our fundus synthesis network, termed FundusGAN, provides one-stage fundus image generation without extra references. As a result, the synthetic fundus images are anatomically consistent with real images and exhibit both diversity and reasonable visual quality.
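The abstract does not spell out how the attention Generator combines global and local features; as an illustrative sketch only (not the authors' implementation), spatial self-attention of the kind used in SAGAN-style generators lets every position of a feature map attend to every other position, mixing global context into local features. All function names, weight matrices, and shapes below are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_map(features, w_q, w_k, w_v):
    """Spatial self-attention over a flattened feature map.

    features: (H*W, C) feature map flattened over spatial positions
    w_q, w_k: (C, C_k) query/key projections; w_v: (C, C) value projection
    Returns a (H*W, C) map where each position aggregates information
    from all others, weighted by query-key similarity (global context).
    """
    q = features @ w_q                              # queries, (H*W, C_k)
    k = features @ w_k                              # keys,    (H*W, C_k)
    v = features @ w_v                              # values,  (H*W, C)
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))   # (H*W, H*W), rows sum to 1
    return attn @ v

# Toy example: a 4x4 spatial map with 8 channels.
rng = np.random.default_rng(0)
n, c, ck = 16, 8, 4
f = rng.standard_normal((n, c))
out = self_attention_map(f,
                         rng.standard_normal((c, ck)),
                         rng.standard_normal((c, ck)),
                         rng.standard_normal((c, c)))
```

In generator architectures such a block is typically added residually (`x + gamma * attention(x)`, with `gamma` learned from zero), so local convolutional features are preserved while global context is blended in.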


Notes

  1. The dataset is available at https://odir2019.grand-challenge.org/dataset.

  2. The code is available at https://github.com/juntang-zhuang/LadderNet.


Author information

Correspondence to Xue Xia.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Cai, C., Xia, X., Fang, Y. (2022). FundusGAN: A One-Stage Single Input GAN for Fundus Synthesis. In: Yu, S., et al. Pattern Recognition and Computer Vision. PRCV 2022. Lecture Notes in Computer Science, vol 13535. Springer, Cham. https://doi.org/10.1007/978-3-031-18910-4_3

  • DOI: https://doi.org/10.1007/978-3-031-18910-4_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-18909-8

  • Online ISBN: 978-3-031-18910-4

  • eBook Packages: Computer Science; Computer Science (R0)
