Abstract
The generative diffusion model has been highlighted as a state-of-the-art artificial intelligence technique for image synthesis. Here, we show that a denoising diffusion probabilistic model (DDPM) can be used for a domain-specific task: generating fundus photographs (FPs) from a limited training dataset in an unconditional manner. We trained the DDPM on a U-Net backbone architecture, the most popular form of the generative diffusion model. After training, a series of denoising U-Net passes generates FPs from random noise seeds. One thousand healthy retinal images were used to train the diffusion model, with the input image size set to a pixel resolution of 128 × 128. The trained DDPM successfully generated synthetic fundus photographs at this resolution from our small dataset. We failed to train the DDPM on 256 × 256-pixel images because of the limited computational capacity of a personal cloud platform. In a comparative analysis, the progressive growing generative adversarial network (PGGAN) synthesized sharper retinal vessels and optic discs than the DDPM, and the PGGAN (Frechet inception distance [FID] score: 41.761) achieved a better FID score than the DDPM (FID score: 65.605). In summary, we used a domain-specific generative diffusion model to synthesize fundus photographs from a relatively small dataset. Because the DDPM has disadvantages on small datasets, including difficulty in training and lower image quality compared with generative adversarial networks such as the PGGAN, further studies are needed to improve diffusion models for domain-specific medical tasks with small numbers of samples.
Article Highlights
- A diffusion model can be used in an unconditional manner to generate images by learning the distribution of the fundus image dataset.
- The fundus image generation performance of the diffusion model was not sufficient on this small dataset.
- Generating images using the diffusion method is a promising technology, but further research is needed for its application in the field of fundus photography.
1 Introduction
Deep learning for ocular imaging analysis has become an essential academic field in ophthalmology [1, 2]. Recently, deep-learning-based image generation has received considerable attention [3]. In the medical field, image synthesis can address the problems of personal information protection and the limited amount of pathological data [4]. Generated images can combine significant features of the training dataset, yet each can differ from the original images. With the advent of generative adversarial networks (GANs), various advanced image processing methods, including segmentation, data augmentation, denoising, and domain transfer, have been applied to ophthalmology image domains [5]. Although many fundus photography (FP) datasets have recently been released to the public, data on some retinal diseases are still lacking [6]. Meanwhile, new image generation methods are being introduced to overcome the shortcomings of GANs, namely the difficulty of tuning parameters for stable training and of avoiding mode collapse [7].
The generative diffusion model has been highlighted as a state-of-the-art deep learning technique since the introduction of DALL·E 2 by OpenAI in April 2022 [8]. Recent well-known generative diffusion services built on large architectures, such as Midjourney and Copilot, have successfully synthesized high-quality images [9]. However, these popular models cannot generate realistic retinal images because they are not trained on fundus images (Fig. 1). Even when the user directly attempts to create an image with prompts such as "retina," "fundus photo," and "optic nerve," these models cannot reproduce the blood vessels, optic nerve, or macula of a retinal image. Therefore, to generate realistic FP images with a generative diffusion model, individual domain-specific models must be developed.
The basic diffusion architecture is the denoising diffusion probabilistic model (DDPM), which is trained via diffusion steps that add random noise to images and then learns to reverse this noising process to synthesize the desired images [10]. The DDPM is one of the state-of-the-art architectures designed for unconditional image generation. It iteratively learns a Markov chain that gradually transforms a simple distribution, such as Gaussian noise, into the desired data distribution. Several previous studies have suggested that diffusion models offer advantages including stable training and flexibility of outcomes [11]. After image generation using the DDPM became popular, novel diffusion techniques were continually developed in the machine learning community and medical fields [12]. One previous study proposed Medfusion, a DDPM-based self-supervised auto-encoder model that reproduces its input image, and suggested that the DDPM outperformed other generative models, including GANs [13]. Medfusion learned to generate FP images from a large dataset of more than a hundred thousand images.
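The forward (noising) half of the Markov chain described above has a well-known closed form: any noisy state x_t can be sampled directly from the clean image x_0. The following is a minimal, generic sketch with a linear beta schedule, not the authors' exact implementation:

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and cumulative alpha products (alpha_bar)."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    return betas, alpha_bar

def q_sample(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps

rng = np.random.default_rng(0)
x0 = rng.uniform(-1, 1, size=(128, 128, 3))  # a normalized 128x128 RGB image
_, alpha_bar = make_schedule()
x_noisy, eps = q_sample(x0, t=999, alpha_bar=alpha_bar, rng=rng)
# At t = T - 1, alpha_bar is tiny, so x_t is almost pure Gaussian noise.
```

The reverse chain that the U-Net learns simply inverts these steps, predicting the noise `eps` that was injected at each t.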
In general, a very large amount of data may guarantee the successful training of generative models. However, how the performance of the DDPM changes with a limited amount of FP data has not been studied. In this study, we investigated whether a generative diffusion model can synthesize high-quality FPs via domain-specific training on a small dataset in an unconditional manner (Fig. 2). Most previous studies of diffusion models in medical imaging have developed specific transformation functions between input and output, in the form of conditional GANs or single U-Net models. In this study, we intended to show that the DDPM can be used in an unconditional manner to generate images by learning the distribution of the FP dataset itself.
To assess the feasibility of the DDPM for generating FP images, we performed a comparative experiment between the DDPM and a GAN model. Section 2 describes the DDPM training and experimental methods. Section 3 presents the evaluation of the generated FP images and a comparison of the two unconditional image generation methods. Finally, Sect. 4 discusses the pros and cons of the two methods observed in our experiments.
2 Methods
2.1 Dataset
This study was based on a publicly accessible and deidentified FP image database, which was released by previous studies [14,15,16]. We collected healthy retinal images from each dataset and excluded pathological images. To reduce noise from differing borders, we selected FPs in which all boundaries of the circular mask were intact. Finally, 1000 healthy FPs were used to train the diffusion model, simulating a limited-data situation. The data collection protocol was approved by the Institutional Review Board of the Korea National Institute for Bioethics Policy (KoNIBP). The analysis using the deidentified FP image database was performed according to the guidelines of the KoNIBP. The Institutional Review Board of the KoNIBP waived the requirement for informed consent because the data were fully deidentified to protect patient confidentiality. All procedures were performed in accordance with the ethical standards of the 1964 Declaration of Helsinki and its later amendments. The organized dataset used in this study is available in the data repository (https://data.mendeley.com/datasets/fm4m8kr6cz).
2.2 Diffusion model development
Diffusion models match the data distribution of a training dataset by learning to reverse a gradual noising process. Diffusion models are only beginning to be used in medical imaging, for example in medical image reconstruction [17]. Currently, the DDPM is the most popular and basic diffusion model, inspired by considerations from nonequilibrium thermodynamics [10]. As shown in Fig. 2, we trained the DDPM on a U-Net backbone architecture, the most general form of the generative diffusion model [10]. We used the basic architecture with 64 hidden dimensions for each U-Net unit. In our DDPM model, the diffusion process was fixed to a Markov chain that gradually infuses Gaussian pixel noise into the image. After training, a series of denoising U-Net passes generates FPs from random noise seeds. The input image size was set to a pixel resolution of 128 × 128, the maximum resolution at which training could be completed with our computational resources. We set the number of time steps to the default value of 1000, because smaller values produced severely noisy or blurred synthetic images. We set the sampling time steps to 250 and the loss function to the L1 (mean absolute error) loss. In the training process, we set the batch size to 16, the learning rate to 2 × 10⁻⁵, and the number of training steps to 10⁶. All codes for the implementation of the DDPM are available on the webpage (https://github.com/lucidrains/denoising-diffusion-pytorch). To increase the stability of training, we adopted general linear transformation techniques, including left and right flips, width and height translation from −10% to +10%, random rotation from −10° to +10°, and zooming from −10% to +10%. The deep learning models were developed on an NVIDIA RTX 2080Ti GPU with 4352 CUDA cores and 11 GB RAM.
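The training configuration above can be sketched with the cited library. Note that this is an illustrative configuration fragment only: the constructor parameter names follow the repository's README and may differ between library versions, and `./fundus_128/` is a placeholder path, not the actual dataset location.

```python
# Sketch of DDPM training with the cited library
# (https://github.com/lucidrains/denoising-diffusion-pytorch).
# Parameter names follow the repository README and may vary by version;
# './fundus_128/' is a hypothetical placeholder path.
from denoising_diffusion_pytorch import Unet, GaussianDiffusion, Trainer

model = Unet(dim=64, dim_mults=(1, 2, 4, 8))  # 64 hidden dims per U-Net unit

diffusion = GaussianDiffusion(
    model,
    image_size=128,            # 128 x 128 input resolution
    timesteps=1000,            # forward diffusion time steps
    sampling_timesteps=250,    # accelerated sampling steps
)

trainer = Trainer(
    diffusion,
    './fundus_128/',           # folder of training fundus photographs
    train_batch_size=16,
    train_lr=2e-5,
    train_num_steps=1_000_000, # about one million iterations
)
trainer.train()

# After training, synthetic fundus photographs are sampled from noise, e.g.:
# images = diffusion.sample(batch_size=4)
```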
For reproducibility, our modified code, which can be run in Google Colaboratory, has been released in the code repository (https://github.com/TaeKeunToo/Diffusion).
For a comparative investigation, we generated FP images using the DDPM and the progressive growing GAN (PGGAN) [4] under the same training conditions (https://github.com/tkarras/progressive_growing_of_gans). The PGGAN has been the most popular unconditional image synthesis technique, with robust and reliable outcomes [7]. All hyperparameters were set to their default values, and the same linear transformation techniques as in our DDPM training were applied to the PGGAN. The quality of 20 images generated by each of the PGGAN and the DDPM was evaluated by two ophthalmologists (HKK and IHR). To avoid bias, we did not provide the ophthalmologists with any prior information regarding the tested images. The ophthalmologists reviewed the synthetic images via e-mail with no time limit, and we asked them whether each image was a properly generated, good-quality FP.
We also generated FP images using the Medfusion model (a publicly accessible DDPM-based auto-encoder) trained on a large dataset (101,442 samples) to compare the quality of the synthetic images (Medfusion is publicly accessible at https://huggingface.co/spaces/mueller-franzes/medfusion-app) [13]. We used the Frechet inception distance (FID) score to quantitatively evaluate the generated images [3, 18]. The quality of the synthetic images from the Medfusion model (20 images) was also evaluated by the two ophthalmologists, using the same protocol as the evaluation described above, and we compared the FP images generated by our DDPM model with those from Medfusion.
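The FID score measures the Frechet distance between Gaussian fits of real and generated image features. In practice, the means and covariances are computed over Inception-v3 activations; the sketch below illustrates only the closed-form Gaussian distance on toy statistics, using an eigendecomposition-based PSD matrix square root, and is not the exact implementation used in this study:

```python
import numpy as np

def sqrtm_psd(a):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid_gaussian(mu1, cov1, mu2, cov2):
    """Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2})."""
    s1 = sqrtm_psd(cov1)
    # Tr((cov1 cov2)^{1/2}) computed via the symmetric form s1 @ cov2 @ s1.
    covmean = sqrtm_psd(s1 @ cov2 @ s1)
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(cov1 + cov2 - 2.0 * covmean))

# Toy example: identical Gaussians give distance 0; shifting one mean
# by a unit vector (equal covariances) gives distance 1.
mu = np.zeros(4)
cov = np.eye(4)
d0 = fid_gaussian(mu, cov, mu, cov)
d1 = fid_gaussian(mu, cov, mu + np.array([1.0, 0.0, 0.0, 0.0]), cov)
```

Lower scores indicate that the generated feature distribution is closer to the real one, which is why the PGGAN's 41.761 is preferable to the DDPM's 65.605.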
3 Results
The DDPM successfully generated FP images with a resolution of 128 × 128 pixels from the one thousand training samples. Figure 3 shows that the DDPM required about one million training iterations, taking 250 h, to generate good synthetic images at this resolution; this duration depended on the batch size and computational capacity. Training sometimes failed because of incorrectly adjusted learning rates and local minima. We failed to train the DDPM on 256 × 256-pixel images because of limited computational capacity. Figure 4 shows that the trained DDPM successfully generated synthetic FP images: the random seed images were transformed into intermediate noisy pictures and, finally, into newly generated FPs.
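The seed-to-image transformation shown in Fig. 4 follows the standard DDPM reverse update. The sketch below runs one full reverse chain with a linear beta schedule and a zero-valued placeholder standing in for the trained denoising U-Net; it illustrates the mechanics only and is not the authors' implementation:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)  # linear beta schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def predict_noise(x_t, t):
    """Placeholder for the trained denoising U-Net eps_theta(x_t, t)."""
    return np.zeros_like(x_t)

def p_sample_step(x_t, t, rng):
    """One DDPM reverse step x_t -> x_{t-1}:
    mean = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t),
    plus sqrt(beta_t) * z noise for all but the final step."""
    eps = predict_noise(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the last denoising step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 128, 3))  # random noise seed
for t in range(T - 1, -1, -1):          # iterate t = T-1, ..., 0
    x = p_sample_step(x, t, rng)
```

With a real trained predictor, the intermediate states `x` trace exactly the noisy-to-clean progression visible in Fig. 4.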
Figure 5 shows examples of the generated retinal images. The generated images looked similar to real FPs and showed diverse retinal structures. The optic nerve, blood vessels, and black masks of the FPs were well formed in the synthetic images, and there were no mode collapses or grid artifacts during image generation with the DDPM.
Figure 6 shows example images generated by the fully trained PGGAN and DDPM. Better image quality was observed with the PGGAN, which synthesized sharper retinal vessels and optic discs than the DDPM (Fig. 6A). The two experts' assessments of the synthetic images were similar: the rates of good image quality graded by the two ophthalmologists were 75.0% (expert A) and 85.0% (expert B) for the PGGAN, and 35.0% (expert A) and 20.0% (expert B) for the DDPM (Fig. 6B). Both ophthalmologists commented that the major cause of the lower scores for the DDPM was the blurred anatomical features in the generated images. The quantitative evaluation results are shown in Table 1: the PGGAN (FID score: 41.761) achieved a better FID score than the DDPM (FID score: 65.605).
We also generated FP images using the Medfusion model, which was trained on a large FP dataset. The resulting FP images from the Medfusion model are shown in Fig. 7. The rates of good image quality were 85.0% (expert A) and 90.0% (expert B) for Medfusion. These scores were better than those of our DDPM that was trained using the small dataset (75.0% from expert A and 85.0% from expert B) (Table 2).
4 Discussion
In this study, we confirmed that the DDPM can generate FP images unconditionally by learning only the data distribution, without modifying the DDPM structure or using a conditional design that requires supervised learning. Despite the advances of deep learning, a generative artificial intelligence model that can freely generate high-quality FP images has not yet been developed. Generative AI can be a good solution for developing models that predict rare retinal diseases [19]. Given the current trend of technological development, a generative model capable of producing FP images of various diseases will eventually be released, and its main technology will likely be based on diffusion. This study can therefore be considered a first step toward generating high-quality FP images.
We investigated the feasibility of the DDPM for generating synthetic FP images from a limited FP dataset. Deep learning has solved various image analysis problems, such as classification, segmentation, denoising, and data augmentation, in ophthalmology image domains [20,21,22]. Recently, the generative diffusion model has gained wide interest as a new class of AI technique for medical image synthesis [17]. Synthetic images can be used for educational purposes and data augmentation in clinical imaging. For example, a generative diffusion model successfully generated realistic brain MRI images [23], the technique was used for an image domain transfer task in histopathologic images [11], and a large pretrained generative diffusion model generated synthetic chest X-ray images after fine-tuning [24]. Diffusion has also been introduced as a new image-translation algorithm that can provide high-quality denoised images [12]. We developed a domain-specific generative diffusion model based on the DDPM to generate FP images from a small dataset; however, our findings show that this approach is still in the early stages of development. To our knowledge, this is the first study to use a domain-specific generative diffusion model to synthesize FPs from a limited dataset.
The FP is a standardized imaging domain with a black frame and consistent locations of the optic nerve and macula, which may allow new images to be generated from a small amount of data. Medical images from other, less standardized domains may lack even a thousand training samples. When we compared image quality between our DDPM (trained on a small dataset) and the Medfusion model (trained on a large dataset), we observed that the DDPM generally needs a larger and more diverse dataset to generate high-quality images. Although our DDPM model was trained on more standardized FP images (healthy eyes), the larger training dataset (Medfusion) yielded better image synthesis performance. According to a previous study using a large dataset, the DDPM outperformed a GAN (StyleGAN) in generating FP images [13]. However, our study showed that the performance of the DDPM fell below that of the GAN model (PGGAN) on this small dataset.
The DDPM successfully synthesized relatively small FP images with a resolution of 128 × 128 pixels; however, it had several limitations compared with the PGGAN. Although the GAN requires some hyperparameter tuning steps for stable training, the DDPM's limitations, summarized in Table 2, are more critical. The training process of the DDPM requires more computational resources than that of the PGGAN. During our experiments, we found that general data augmentation techniques were effective in improving the image quality of both the DDPM and the PGGAN. The images generated by the DDPM showed more blurred features than those generated by the PGGAN, although the DDPM was trained over a more extended period; in the PGGAN, however, noticeable checkerboard artifacts caused by deconvolution often appeared. During the trial-and-error steps, we failed to train the DDPM with a small amount of data and a small batch size owing to unstable gradients, and both the DDPM and the PGGAN required large amounts of data for stable training. The DDPM has been shown to be excellent at removing noise from images [25], but its image generation ability still needs to be verified.
Our study has limitations. Owing to limited computational capacity, we could not generate fundus images under more diverse experimental conditions. The small training dataset may have reduced the quality of the image synthesis and was itself a limitation of this study; the training failure at a larger resolution (256 × 256 pixels) may also have been caused by the small amount of data. Leveraging distributed computing resources could enable the training of DDPM models at larger resolutions. In addition, it was difficult to objectively evaluate the quality and diversity of the synthetic images. The findings of this study are preliminary, and further research is needed; nevertheless, the study demonstrates the possibility of image generation with a diffusion model using a small FP dataset.
According to our observations, for the generation of synthetic FPs from a limited dataset, it seems difficult for diffusion models to surpass the performance of GAN-based models within the computational capabilities currently available to general researchers. Although diffusion models can overcome the shortcomings of GANs, such as mode collapse and vanishing gradients [26], the DDPM was difficult to train, and the quality of its resulting images was lower than that obtained with the GAN. A previous study reported that a diffusion model outperformed GANs in an image synthesis task [27], but our study did not reproduce this finding; another study showed that a diffusion model was better than a GAN for an image translation problem in medical pathology [11]. Further studies are required to improve the training speed of diffusion models and the quality of the synthesized images so that these models can be used for domain-specific medical imaging research. We believe that a domain-specific, very large diffusion model may be another route to high-quality synthetic medical images. GANs also performed poorly at first, but with successive methodological improvements they now achieve good image generation performance even on relatively small datasets.
5 Conclusion
It is possible to use a domain-specific, unconditional DDPM to synthesize FPs from a limited dataset, but further study is needed to improve image quality. We found that the DDPM could generate synthetic FPs without mode collapses or grid artifacts. Unlike experimental results on large datasets, our study showed that the FP image generation performance of the DDPM was lower than that of the unconditional GAN model (PGGAN) on this small dataset. We hope that our early experience will help medical researchers conduct further studies using diffusion models. In the future, research should aim to generate a variety of high-quality FP images based on more data and a larger diffusion model.
Data availability
The organized dataset used in this study is available in the data repository (https://data.mendeley.com/datasets/fm4m8kr6cz). Source code and data for hands-on implementation of this study are provided at https://github.com/TaeKeunToo/Diffusion (GitHub repository).
Abbreviations
- DDPM: Denoising diffusion probabilistic model
- FID: Frechet inception distance
- FP: Fundus photography
- GAN: Generative adversarial network
- KoNIBP: Korea National Institute for Bioethics Policy
- PGGAN: Progressive growing generative adversarial network
References
Jin K, Ye J. Artificial intelligence and deep learning in ophthalmology: Current status and future perspectives. Adv Ophthalmol Pract Res. 2022;2:100078. https://doi.org/10.1016/j.aopr.2022.100078.
Yoo TK, Choi JY. Outcomes of adversarial attacks on deep learning models for ophthalmology imaging domains. JAMA Ophthalmol. 2020;138:1213–5. https://doi.org/10.1001/jamaophthalmol.2020.3442.
Tavakkoli A, Kamran SA, Hossain KF, Zuckerbrod SL. A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs. Sci Rep. 2020;10:21580. https://doi.org/10.1038/s41598-020-78696-2.
Burlina PM, Joshi N, Pacheco KD, Liu TYA, Bressler NM. Assessment of deep generative models for high-resolution synthetic retinal image generation of age-related macular degeneration. JAMA Ophthalmol. 2019;137:258–64. https://doi.org/10.1001/jamaophthalmol.2018.6156.
Yu X, Li M, Ge C, Shum PP, Chen J, Liu L. A generative adversarial network with multi-scale convolution and dilated convolution res-network for OCT retinal image despeckling. Biomed Signal Process Control. 2023;80:104231. https://doi.org/10.1016/j.bspc.2022.104231.
Young Choi E, Han SH, Hee Ryu I, Kuk Kim J, Sik Lee I, Han E, et al. Automated detection of crystalline retinopathy via fundus photography using multistage generative adversarial networks. Biocybern Biomed Eng. 2023;43:725–35. https://doi.org/10.1016/j.bbe.2023.10.005.
You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. Eye Vision. 2022;9:6. https://doi.org/10.1186/s40662-022-00277-3.
Kather JN, Ghaffari Laleh N, Foersch S, Truhn D. Medical domain knowledge in domain-agnostic generative AI. Npj Digit Med. 2022;5:1–5. https://doi.org/10.1038/s41746-022-00634-5.
Chen J, Shao Z, Hu B. Generating interior design from text: a new diffusion model-based method for efficient creative design. Buildings. 2023;13:1861. https://doi.org/10.3390/buildings13071861.
Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook, NY, USA; 2020. pp. 6840–51.
Jeong J, Kim KD, Nam Y, Cho E, Go H, Kim N. Stain normalization using score-based diffusion model through stain separation and overlapped moving window patch strategies. Comput Biol Med. 2022. https://doi.org/10.1016/j.compbiomed.2022.106335.
Nouri H, Nasri R, Abtahi S-H. Addressing inter-device variations in optical coherence tomography angiography: will image-to-image translation systems help? Int J Retin Vitr. 2023;9:51. https://doi.org/10.1186/s40942-023-00491-8.
Müller-Franzes G, Niehues JM, Khader F, Arasteh ST, Haarburger C, Kuhl C, et al. A multimodal comparison of latent denoising diffusion probabilistic models and generative adversarial networks for medical image synthesis. Sci Rep. 2023;13:12098. https://doi.org/10.1038/s41598-023-39278-0.
Kim J, Ryu IH, Kim JK, Lee IS, Kim HK, Han E, et al. Machine learning predicting myopic regression after corneal refractive surgery using preoperative data and fundus photography. Graefes Arch Clin Exp Ophthalmol. 2022. https://doi.org/10.1007/s00417-022-05738-y.
Cen L-P, Ji J, Lin J-W, Ju S-T, Lin H-J, Li T-P, et al. Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks. Nat Commun. 2021;12:4828. https://doi.org/10.1038/s41467-021-25138-w.
Pachade S, Porwal P, Thulkar D, Kokare M, Deshmukh G, Sahasrabuddhe V, et al. Retinal fundus multi-disease image dataset (RFMiD): a dataset for multi-disease detection research. Data. 2021;6:14. https://doi.org/10.3390/data6020014.
Chung H, Ye JC. Score-based diffusion models for accelerated MRI. Med Image Anal. 2022;80:102479. https://doi.org/10.1016/j.media.2022.102479.
Chen Y, Long J, Guo J. RF-GANs: a method to synthesize retinal fundus images based on generative adversarial network. Comput Intell Neurosci. 2021;2021: e3812865. https://doi.org/10.1155/2021/3812865.
Yoo TK, Choi JY, Kim HK. Feasibility study to improve deep learning in OCT diagnosis of rare retinal diseases with few-shot classification. Med Biol Eng Comput. 2021;59:401–15. https://doi.org/10.1007/s11517-021-02321-1.
Wang X, Cai X, He S, Zhang X, Wu Q. Subfoveal choroidal thickness changes after intravitreal ranibizumab injections in different patterns of diabetic macular edema using a deep learning-based auto-segmentation. Int Ophthalmol. 2021. https://doi.org/10.1007/s10792-021-01806-0.
Burlina P, Paul W, Mathew P, Joshi N, Pacheco KD, Bressler NM. Low-shot deep learning of diabetic retinopathy with potential applications to address artificial intelligence bias in retinal diagnostics and rare ophthalmic diseases. JAMA Ophthalmol. 2020. https://doi.org/10.1001/jamaophthalmol.2020.3269.
Abu-Qamar O, Lewis W, Mendonca LSM, De Sisternes L, Chin A, Alibhai AY, et al. Pseudoaveraging for denoising of OCT angiography: a deep learning approach for image quality enhancement in healthy and diabetic eyes. Int J Ret Vitr. 2023;9:62. https://doi.org/10.1186/s40942-023-00486-5.
Pinaya WHL, Tudosiu P-D, Dafflon J, Da Costa PF, Fernandez V, Nachev P, et al. Brain imaging generation with latent diffusion models. In: Mukhopadhyay A, Oksuz I, Engelhardt S, Zhu D, Yuan Y, editors., et al., Deep generative models. Cham: Springer Nature Switzerland; 2022. p. 117–26. https://doi.org/10.1007/978-3-031-18576-2_12.
Ali H, Murad S, Shah Z. Spot the fake lungs: generating synthetic medical images using neural diffusion models. Berlin: Springer; 2022. https://doi.org/10.48550/arXiv.2211.00902.
Mao S, He Y, Chen H, Zheng H, Liu J, Yuan Y, et al. High-quality and high-diversity conditionally generative ghost imaging based on denoising diffusion probabilistic model. Opt Express, OE. 2023;31:25104–16. https://doi.org/10.1364/OE.496706.
Croitoru F-A, Hondru V, Ionescu RT, Shah M. Diffusion models in vision: a survey. IEEE Trans Pattern Anal Mach Intell. 2023;45:10850–69. https://doi.org/10.1109/TPAMI.2023.3261988.
Dhariwal P, Nichol A. Diffusion models beat GANs on image synthesis. Adv Neural Inf Process Syst. 2021;34:8780–94.
Acknowledgements
None.
Funding
No funding was received for this research from any agency in the public, commercial, or not-for-profit sectors.
Author information
Authors and Affiliations
Contributions
TKY and HKK designed the study. TKY and IHR collected data. TKY built the machine learning model. JYC and TKY wrote the manuscript. All coauthors edited the manuscript.
Corresponding authors
Ethics declarations
Ethics approval and consent to participate
The data collection protocol was approved by the Institutional Review Board of Korea National Institute for Bioethics Policy (KoNIBP). All the procedures were performed in accordance with the ethical standards of the 1964 Declaration of Helsinki and its later amendments. Informed consent for this study was exempt from the Ethical Review Board of KoNIBP. This study was based on a publicly accessible and deidentified FP image database.
Consent for publication
Informed consent for this study was exempt from the Ethical Review Board of the Korea National Institute for Bioethics Policy (KoNIBP). This study was based on a publicly accessible and deidentified FP image database.
Competing interests
TKY is an employee of VISUWORKS. He received a salary or stock as part of the standard compensation package. The remaining authors declare no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Supplementary file2 (MP4 1053 KB)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kim, H.K., Ryu, I.H., Choi, J.Y. et al. A feasibility study on the adoption of a generative denoising diffusion model for the synthesis of fundus photographs using a small dataset. Discov Appl Sci 6, 188 (2024). https://doi.org/10.1007/s42452-024-05871-9