
CycleGAN-based deep learning technique for artifact reduction in fundus photography

Retinal Disorders | Graefe's Archive for Clinical and Experimental Ophthalmology

Abstract

Purpose

A low-quality fundus photograph with artifacts may lead to a false diagnosis. Recently, the cycle-consistent generative adversarial network (CycleGAN) was introduced as a tool for image-to-image translation that does not require matched image pairs. Herein, we present a deep learning technique based on a CycleGAN model that automatically removes artifacts from fundus photographs.
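
For readers less familiar with the model, the standard CycleGAN objective (a sketch of the published formulation, not restated in this abstract) combines two adversarial losses with a cycle-consistency term. Here G: X → Y maps artifact images to artifact-free images, F: Y → X is the reverse mapping, D_X and D_Y are the domain discriminators, and λ weights cycle consistency:

$$\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X) + \lambda \, \mathcal{L}_{\mathrm{cyc}}(G, F)$$

$$\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\big[\lVert G(F(y)) - y \rVert_1\big]$$

The cycle-consistency term is what removes the need for matched artifact/clean image pairs during training.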

Methods

This study included a total of 2206 anonymized retinal images: 1146 with artifacts and 1060 without artifacts. We applied the CycleGAN model to color fundus photographs resized to 256 × 256 pixels with three color channels. To evaluate the CycleGAN on an independent dataset, we randomly divided the data into training (90%) and test (10%) sets. Additionally, we adopted automated quality evaluation (AQE) to assess retinal image quality.
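
As a minimal sketch of this preparation step, the following Python/TensorFlow snippet resizes the images to 256 × 256 × 3 and performs the random 90/10 split. The directory path, file pattern, and random seed are hypothetical assumptions; the authors' actual script is provided in their Supplementary Material.

```python
# Sketch of the preprocessing described above: resize color fundus
# photographs to 256 x 256 x 3 and split them randomly into 90% training
# and 10% test sets.
import glob
import random

import tensorflow as tf

IMG_SIZE = 256  # pixel resolution used in the study: 256 x 256 x 3


def load_fundus_image(path: str) -> tf.Tensor:
    """Read a color fundus photograph and resize it to 256 x 256 x 3."""
    image = tf.io.read_file(path)
    image = tf.io.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])  # returns float32
    return image / 127.5 - 1.0  # scale to [-1, 1], common for GAN generators


paths = sorted(glob.glob("retinal_images/*.jpg"))  # hypothetical directory
random.seed(0)       # fix the split for reproducibility
random.shuffle(paths)

n_train = int(0.9 * len(paths))  # 90% training, 10% test
train_paths, test_paths = paths[:n_train], paths[n_train:]
```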

Results

Artifacts such as overall haze, edge haze, lashes, arcs, and uneven illumination were successfully reduced by the CycleGAN in the generated images, while the principal retinal information was essentially retained. Furthermore, most of the generated images exhibited improved AQE grades compared with the original images with artifacts.

Conclusion

The CycleGAN technique can effectively reduce artifacts and improve the quality of fundus photographs, and it may help clinicians analyze low-quality fundus photographs. Future studies should improve the quality and resolution of the generated images to provide more detailed fundus photographs.


Data availability

All the datasets are available at Mendeley Data repositories (https://data.mendeley.com/datasets/dh2x8v6nf8).


Author information


Contributions

Tae Keun Yoo and Joon Yul Choi conceived and designed this study. Tae Keun Yoo and Joon Yul Choi analyzed and described the data. Joon Yul Choi and Hong Kyu Kim collected the data. All authors contributed to the writing and approval of the final manuscript.

Corresponding author

Correspondence to Tae Keun Yoo.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This article does not contain any studies with human participants performed by any of the authors.

Code availability

We developed and validated the CycleGAN model based on the TensorFlow CycleGAN tutorial, run in Google Colaboratory; the tutorial code is available at https://www.tensorflow.org/tutorials/generative/cyclegan. We modified only the input pipeline to import our dataset; this modified code is presented in the Supplementary Material.
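
As an illustration of that modification, the sketch below builds the two unpaired tf.data pipelines (artifact and artifact-free domains) that would replace the tutorial's horse/zebra datasets. The directory layout is a hypothetical assumption; the generators, discriminators, and training loop follow the linked tutorial unchanged.

```python
# Hedged sketch of the single modification described above: swapping the
# tutorial's datasets for the two fundus-photograph domains.
import tensorflow as tf

BUFFER_SIZE = 1000
BATCH_SIZE = 1  # the tutorial trains CycleGAN with batch size 1
IMG_SIZE = 256


def preprocess(path: tf.Tensor) -> tf.Tensor:
    """Decode a fundus photograph, resize it, and normalize to [-1, 1]."""
    image = tf.io.read_file(path)
    image = tf.io.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
    return image / 127.5 - 1.0


def make_dataset(pattern: str) -> tf.data.Dataset:
    """Build an unpaired image dataset from a glob pattern."""
    return (
        tf.data.Dataset.list_files(pattern, shuffle=True)
        .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
        .shuffle(BUFFER_SIZE)
        .batch(BATCH_SIZE)
    )


# Domain A: photographs with artifacts; domain B: photographs without.
# Directory names are hypothetical.
train_artifact = make_dataset("train/with_artifacts/*.jpg")
train_clean = make_dataset("train/without_artifacts/*.jpg")
```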


Electronic supplementary material

ESM 1 (PDF 310 kb)


About this article


Cite this article

Yoo, T.K., Choi, J.Y. & Kim, H.K. CycleGAN-based deep learning technique for artifact reduction in fundus photography. Graefes Arch Clin Exp Ophthalmol 258, 1631–1637 (2020). https://doi.org/10.1007/s00417-020-04709-5
