Abstract
Fluorescein angiography (FA) is a diagnostic method for observing vascular circulation in the eye, but it poses risks to patients. Generative adversarial networks (GANs) have therefore been used to translate retinal fundus structure images into FA images. Existing high-resolution image generation methods rely on complex deep network models that are difficult to optimize, leading to blurred lesion boundaries and poor capture of microleakage and microvessels. In this study, we propose a multiple-ResNet GAN to improve model training and thereby enhance the generation of high-resolution FA images. First, the multiple-ResNet generator is designed to strengthen the generation of fine detail in high-resolution images. Second, the Gaussian error linear unit (GELU) activation function is used to help the model converge rapidly. The effectiveness of the multiple-ResNet GAN is verified on the publicly available Isfahan MISP dataset. Experimental results show that our method outperforms other methods, achieving a mean structural similarity (SSIM) of 0.641, peak signal-to-noise ratio (PSNR) of 18.25, and learned perceptual image patch similarity (LPIPS) of 0.272. Compared with state-of-the-art methods, these results indicate that the multiple-ResNet framework and the GELU activation function improve the generation of detailed regions in high-resolution FA images.
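As background on the activation choice, GELU weights its input by the standard Gaussian cumulative distribution function Φ. The exact form and its common tanh approximation, reproduced below for convenience, follow the cited Hendrycks and Gimpel reference rather than any equation given in this article:

\[
\mathrm{GELU}(x) = x\,\Phi(x) = \frac{x}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right]
\approx \frac{x}{2}\left[1 + \tanh\!\left(\sqrt{\tfrac{2}{\pi}}\left(x + 0.044715\,x^{3}\right)\right)\right]
\]

Unlike ReLU, this activation is smooth and non-monotonic near zero, which is consistent with the rapid convergence behavior reported in the abstract.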
Graphical abstract
References
Abràmoff MD, Garvin MK, Sonka M (2010) Retinal imaging and image analysis. IEEE Rev Biomed Eng 3:169–208
Rabb MF, Burton TC, Schatz H, Yannuzzi LA (1978) Fluorescein angiography of the fundus: a schematic approach to interpretation. Surv Ophthalmol 22:387–403
Li D, Ma L, Li J, Qi S, Yao Y, Teng Y (2022) A comprehensive survey on deep learning techniques in CT image quality improvement. Med Biol Eng Comput 60:2757–2770
Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, Bharath AA (2018) Generative adversarial networks: an overview. IEEE Signal Process Mag 35:53–65
Chen Y, Yang XH, Wei Z, Heidari AA, Zheng N, Li Z, Chen H, Hu H, Zhou Q, Guan Q (2022) Generative adversarial networks in medical image augmentation: a review. Comput Biol Med 144:105382–105404
Schiffers F, Yu Z, Arguin S, Maier A, Ren Q (2018) Synthetic fundus fluorescein angiography using deep neural networks. Bildverarb Med 3:234–238
Li K, Yu L, Wang S, Heng PA (2019) Unsupervised retina image synthesis via disentangled representation learning. Simul Synth Med Imaging 11827:32–41
Hervella ÁS, Rouco J, Novo J, Ortega M (2019) Deep multimodal reconstruction of retinal images using paired or unpaired data. Int Jt Conf Neural Netw, pp 1–8
Tavakkoli A, Kamran SA, Hossain KF, Zuckerbrod SL (2020) A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs. Sci Rep 10:21580–21595
Kamran SA, Hossain KF, Tavakkoli A, Zuckerbrod SL (2021) Attention2AngioGAN: synthesizing fluorescein angiography from retinal fundus images using generative adversarial networks. Int Conf Pattern Recognit, pp 9122–9129
Li P, He Y, Wang P, Wang J, Shi G, Chen Y (2023) Synthesizing multi-frame high-resolution fluorescein angiography images from retinal fundus images using generative adversarial networks. Biomed Eng Online 22:1–15
Huang K, Li M, Yu J, Miao J, Hu Z, Yuan S, Chen Q (2023) Lesion-aware generative adversarial networks for color fundus image to fundus fluorescein angiography translation. Comput Methods Programs Biomed 229:107306–107315
Yiwei C, Yi H, Hong Y, Lina X, Xin Z, Guohua S (2024) Unified deep learning model for predicting fundus fluorescein angiography image from fundus structure image. J Innov Opt Health Sci 17:2450003–2450012
Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. Adv Neural Inf Process Syst, pp 2672–2680
Zhu JY, Park T, Isola P, Efros AA (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. Proc IEEE Int Conf Comput Vis, pp 2223–2232
Kukker A, Sharma R (2020) Genetic algorithm-optimized fuzzy Lyapunov reinforcement learning for nonlinear systems. Arab J Sci Eng 45:1629–1638
Kukker A, Sharma R, Mishra O, Parashar D (2024) Epileptic seizure classification using fuzzy lattices and neural reinforcement learning. Comput Methods Biomech Biomed Eng Imaging Vis 11:2290361–2290372
Wang TC, Liu MY, Zhu JY, Tao A, Kautz J, Catanzaro B (2018) High-resolution image synthesis and semantic manipulation with conditional GANs. Proc IEEE Conf Comput Vis Pattern Recognit, pp 8798–8807
Johnson J, Alahi A, Li FF (2016) Perceptual losses for real-time style transfer and super-resolution. Comput Vis ECCV, pp 694–711
He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. Proc IEEE Conf Comput Vis Pattern Recognit, pp 770–778
Ulyanov D, Vedaldi A, Lempitsky V (2016) Instance normalization: the missing ingredient for fast stylization. arXiv:1607.08022
Hendrycks D, Gimpel K (2016) Gaussian error linear units (GELUs). arXiv:1606.08415
Ramachandran P, Zoph B, Le QV (2017) Searching for activation functions. arXiv:1710.05941
Li C, Wand M (2016) Precomputed real-time texture synthesis with Markovian generative adversarial networks. Comput Vis ECCV, pp 702–716
Mao X, Li Q, Xie H, Lau RY, Wang Z, Smolley SP (2017) Least squares generative adversarial networks. Proc IEEE Int Conf Comput Vis, pp 2794–2802
Hajeb Mohammad Alipour S, Rabbani H, Akhlaghi MR (2012) Diabetic retinopathy grading by digital curvelet transform. Comput Math Methods Med 2012:761901–761911
Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13:600–612
Zhang R, Isola P, Efros AA, Shechtman E, Wang O (2018) The unreasonable effectiveness of deep features as a perceptual metric. Proc IEEE Conf Comput Vis Pattern Recognit, pp 586–595
Hore A, Ziou D (2010) Image quality metrics: PSNR vs. SSIM. Int Conf Pattern Recognit, pp 2366–2369
Acknowledgements
The authors would like to thank the editors and anonymous reviewers for their invaluable suggestions.
Funding
This work was supported by the National Natural Science Foundation of China (61703268, 62402308).
Author information
Contributions
Jiahui Yuan: Conceptualization, Methodology, Writing—original draft & editing. Weiwei Gao: Methodology, Supervision, Writing—review. Yu Fang: Supervision. Haifeng Zhang: Supervision. Nan Song: Resources, Supervision.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethics approval
This study uses only a publicly available dataset; therefore, no ethics statement is required.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yuan, J., Gao, W., Fang, Y. et al. Multiple-ResNet GAN: An enhanced high-resolution image generation method for translation from fundus structure image to fluorescein angiography. Med Biol Eng Comput (2024). https://doi.org/10.1007/s11517-024-03191-z