Retinal Image Synthesis for Glaucoma Assessment Using DCGAN and VAE Models

  • Andres Diaz-Pinto (email author)
  • Adrián Colomer
  • Valery Naranjo
  • Sandra Morales
  • Yanwu Xu
  • Alejandro F. Frangi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11314)


The performance of a glaucoma assessment system is strongly affected by the number of labelled images available during training. However, labelled images are often scarce or costly to obtain. In this paper, we address the problem of synthesising retinal fundus images by training a Variational Autoencoder and an adversarial model on 2357 retinal images. The novelty of this approach lies in synthesising retinal images without requiring a prior vessel segmentation from a separate method, which makes the system completely self-contained. The resulting models are image synthesizers capable of generating any number of cropped retinal images from samples of a simple normal distribution. Furthermore, more images were used for training than in any other work in the literature. Synthetic images were qualitatively evaluated by 10 clinical experts, and their consistency was estimated by measuring the proportion of pixels corresponding to the anatomical structures around the optic disc. Moreover, we calculated the mean-squared error between the average 2D histograms of synthetic and real images, obtaining a small difference of \(3\times 10^{-4}\). Further analysis of the latent space and of the cup size of the images was performed by measuring the Cup/Disc ratio of synthetic images using a state-of-the-art method. The results of this analysis, together with the qualitative and quantitative evaluations, demonstrate that the synthesised images are anatomically consistent and that the system is a promising step towards a model capable of generating labelled images.
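The histogram comparison described above can be sketched in a few lines. The abstract does not specify which two image dimensions form the 2D histogram, so this illustration assumes two colour channels of images scaled to [0, 1]; `avg_hist2d` and `hist_mse` are hypothetical names, not the authors' code.

```python
import numpy as np

def avg_hist2d(images, bins=64):
    """Average normalised 2D histogram over a batch of images.

    `images` is assumed to have shape (N, H, W, 2), with the two
    trailing channels (e.g. two colour channels) forming the
    histogram axes and pixel values scaled to [0, 1].
    """
    hists = []
    for img in images:
        h, _, _ = np.histogram2d(img[..., 0].ravel(), img[..., 1].ravel(),
                                 bins=bins, range=[[0, 1], [0, 1]])
        hists.append(h / h.sum())  # normalise each histogram to sum to 1
    return np.mean(hists, axis=0)

def hist_mse(real_images, synthetic_images, bins=64):
    """Mean-squared error between the two average 2D histograms."""
    diff = avg_hist2d(real_images, bins) - avg_hist2d(synthetic_images, bins)
    return float(np.mean(diff ** 2))
```

A small MSE here indicates that the synthetic batch reproduces the joint intensity statistics of the real images, which is the quantity the abstract reports as \(3\times 10^{-4}\).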


Keywords: Medical imaging · Retinal image synthesis · Fundus images · DCGAN · VAE
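The latent-space analysis mentioned in the abstract rests on sampling latent codes from a normal prior and walking between them. A minimal sketch, assuming a 100-dimensional latent space and using spherical linear interpolation (slerp), a common choice for traversing between Gaussian latent codes; the function names and dimensionality are illustrative assumptions, not details from the paper.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors."""
    z0, z1 = np.asarray(z0, float), np.asarray(z1, float)
    cos_omega = np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # vectors (nearly) parallel: fall back to lerp
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Draw two latent codes from the standard normal prior and interpolate;
# each intermediate code would be decoded into a synthetic retinal image.
rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(100), rng.standard_normal(100)
path = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
```

Slerp is preferred over straight-line interpolation here because linear blends of high-dimensional Gaussian samples fall into low-probability regions of the prior, which tends to produce unrealistic decoded images.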



We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. This work was supported by the Project GALAHAD [H2020-ICT-2016-2017, 732613].



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Andres Diaz-Pinto¹ (email author)
  • Adrián Colomer¹
  • Valery Naranjo¹
  • Sandra Morales¹
  • Yanwu Xu²
  • Alejandro F. Frangi³

  1. Instituto de Investigación e Innovación en Bioingeniería (I3B), Universitat Politècnica de València, Valencia, Spain
  2. Guangzhou Shiyuan Electronics Co., Ltd. (CVTE), Guangzhou, China
  3. CISTIB, University of Sheffield, Sheffield, UK
