Graded Image Generation Using Stratified CycleGAN

  • Conference paper
  • First Online:
  • Part of the proceedings: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (MICCAI 2020)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12262)

Abstract

In medical imaging, CycleGAN has been used for various image generation tasks, including image synthesis, image denoising, and data augmentation. However, when pushing the technical limits of medical imaging, there can be a substantial variation in image quality. Here, we demonstrate that images generated by CycleGAN can be improved through explicit grading of image quality, which we call stratified CycleGAN. In this image generation task, CycleGAN is used to upgrade the image quality and content of near-infrared fluorescent (NIRF) retinal images. After manual assignment of grading scores to a small subset of the data, semi-supervised learning is applied to propagate grades across the remainder of the data and set up the training data. These scores are embedded into the CycleGAN by adding the grading score as a conditional input to the generator and by integrating an image quality classifier into the discriminator. We validate the efficacy of the proposed stratified CycleGAN by considering pairs of NIRF images at the same retinal regions (imaged with and without correction of optical aberrations achieved using adaptive optics), with the goal being to restore image quality in aberrated images such that cellular-level detail can be obtained. Overall, stratified CycleGAN generated higher quality synthetic images than traditional CycleGAN. Evaluation of cell detection accuracy confirmed that synthetic images were faithful to ground truth images of the same cells. Across this challenging dataset, F1-score improved from 76.9 ± 5.7% when using traditional CycleGAN to 85.0 ± 3.4% when using stratified CycleGAN. These findings demonstrate the potential of stratified CycleGAN to improve the synthesis of medical images that exhibit a graded variation in image quality.
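To make the two architectural changes described in the abstract more concrete, the sketch below shows one way a discrete quality grade could be injected into a CycleGAN-style generator as a conditional input and checked by an auxiliary grade classifier in the discriminator. This is an illustrative sketch in PyTorch, not the authors' implementation: the four-level grading scale, the layer sizes, and all names (NUM_GRADES, GradedGenerator, GradedDiscriminator) are assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's released code): a grade-conditioned
# generator and a discriminator with an auxiliary image-quality classifier.
import torch
import torch.nn as nn

NUM_GRADES = 4  # assumed number of image-quality grades


class GradedGenerator(nn.Module):
    """Generator conditioned on a target image-quality grade."""

    def __init__(self, in_ch=1, base=32):
        super().__init__()
        # The grade index is embedded and broadcast as an extra input channel.
        self.grade_embed = nn.Embedding(NUM_GRADES, 16)
        self.grade_proj = nn.Linear(16, 1)
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + 1, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, in_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, grade):
        g = self.grade_proj(self.grade_embed(grade))                # (B, 1)
        g_map = g.view(-1, 1, 1, 1).expand(-1, 1, *x.shape[2:])     # grade plane
        return self.net(torch.cat([x, g_map], dim=1))


class GradedDiscriminator(nn.Module):
    """PatchGAN-style discriminator with an auxiliary grade classifier head."""

    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        )
        self.adv_head = nn.Conv2d(base * 2, 1, 3, padding=1)        # real/fake map
        self.grade_head = nn.Sequential(                            # quality grade
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base * 2, NUM_GRADES)
        )

    def forward(self, x):
        h = self.features(x)
        return self.adv_head(h), self.grade_head(h)


if __name__ == "__main__":
    G, D = GradedGenerator(), GradedDiscriminator()
    x = torch.randn(2, 1, 64, 64)            # toy stand-ins for aberrated NIRF patches
    target_grade = torch.tensor([3, 3])      # request the highest quality grade
    fake = G(x, target_grade)
    adv_out, grade_logits = D(fake)
    # The grade classifier contributes a cross-entropy term that pushes the
    # generated image toward the requested quality grade.
    aux_loss = nn.functional.cross_entropy(grade_logits, target_grade)
    print(fake.shape, adv_out.shape, aux_loss.item())
```

In training, this auxiliary cross-entropy term would be added to the usual adversarial and cycle-consistency losses, and the grade labels themselves would come from the small manually graded subset plus the grades propagated to the remaining images by semi-supervised learning, as described in the abstract.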

Acknowledgments

This work was supported by the Intramural Research Program of the National Institutes of Health, National Eye Institute.

Author information

Corresponding author

Correspondence to Johnny Tam.

Copyright information

© 2020 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply

About this paper

Cite this paper

Liu, J., Li, J., Liu, T., Tam, J. (2020). Graded Image Generation Using Stratified CycleGAN. In: Martel, A.L., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. MICCAI 2020. Lecture Notes in Computer Science, vol 12262. Springer, Cham. https://doi.org/10.1007/978-3-030-59713-9_73

  • DOI: https://doi.org/10.1007/978-3-030-59713-9_73

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59712-2

  • Online ISBN: 978-3-030-59713-9

  • eBook Packages: Computer Science, Computer Science (R0)
