Generative Adversarial Networks for Brain MRI Synthesis: Impact of Training Set Size on Clinical Application

Journal of Imaging Informatics in Medicine (2024)

Abstract

We evaluated the impact of training set size on generative adversarial networks (GANs) trained to synthesize brain MRI sequences. We compared three sets of GANs trained to generate pre-contrast T1 (gT1) from post-contrast T1 and FLAIR (gFLAIR) from T2. The baseline models were trained on 135 cases; for this study, we used the same model architecture but a larger cohort of 1251 cases and two stopping rules: an early checkpoint (early models) and one after 50 epochs (late models). We tested all models on an independent dataset of 485 newly diagnosed gliomas. We compared the generated MRIs with the original ones using the structural similarity index (SSI) and mean squared error (MSE). We simulated scenarios in which the original T1, FLAIR, or both were missing and used their synthesized versions, together with the original post-contrast T1 and T2, as inputs for a segmentation model. We compared the resulting segmentations using the Dice similarity coefficient (DSC) for the contrast-enhancing area, the non-enhancing area, and the whole lesion. On the test set, median SSI for the gT1 was .957, .918, and .947 for the baseline, early, and late models, respectively; median MSE was .006, .014, and .008. For the gFLAIR, median SSI was .924, .908, and .915; median MSE was .016, .016, and .019. DSC ranged from .625 to .955, .420 to .952, and .610 to .954, respectively. Overall, GANs trained on a relatively small cohort performed similarly to those trained on a cohort ten times larger, making them a viable option for rare diseases or institutions with limited resources.
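
For reference, the sketch below shows one way the fidelity metrics (SSI, MSE) and the segmentation overlap metric (DSC) described above can be computed with NumPy, NiBabel, and scikit-image. The function names, file paths, and normalization choices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (illustrative, not the authors' code) of the evaluation
# metrics used in the study: structural similarity (SSI), mean squared
# error (MSE), and Dice similarity coefficient (DSC).
import numpy as np
import nibabel as nib  # used in the commented example below to load NIfTI volumes
from skimage.metrics import structural_similarity, mean_squared_error


def image_fidelity(original: np.ndarray, generated: np.ndarray):
    """Return (SSI, MSE) between an original and a synthesized MRI volume."""
    data_range = float(original.max() - original.min())
    ssi = structural_similarity(original, generated, data_range=data_range)
    mse = mean_squared_error(original, generated)
    return ssi, mse


def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    mask_a = mask_a.astype(bool)
    mask_b = mask_b.astype(bool)
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0


# Hypothetical usage with NIfTI files (file names are placeholders):
# t1_true = nib.load("sub-001_T1.nii.gz").get_fdata()
# t1_gen  = nib.load("sub-001_gT1.nii.gz").get_fdata()
# ssi, mse = image_fidelity(t1_true, t1_gen)
# dsc = dice_coefficient(seg_true == 4, seg_pred == 4)  # e.g., enhancing-tumor label
```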

Data Availability

The data used to train the models is publicly available. The BraTS 2017 dataset is available at https://www.med.upenn.edu/sbia/brats2017/data.html. The BraTS 2021 dataset is available at http://braintumorsegmentation.org/.

Abbreviations

GAN: Generative adversarial network
cGAN: Conditional generative adversarial network
T1: Pre-contrast T1-weighted MRI scan
T1Gd: Post-contrast T1-weighted MRI scan
T2: T2-weighted MRI scan
FLAIR: Fluid-attenuated inversion recovery MRI scan

Funding

Center for Individualized Medicine, Mayo Clinic

Author information

Contributions

Study concepts/study design or data acquisition or data analysis/interpretation, all authors; manuscript drafting or manuscript revision for important intellectual content, all authors; approval of final version of submitted manuscript, all authors; agreement to ensure any questions related to the work are appropriately resolved, all authors; literature research, M.M.Z., G.M.C.; statistical analysis, M.M.Z., G.M.C.; and manuscript editing, all authors.

Corresponding author

Correspondence to GM Conte.

Ethics declarations

Ethics Approval and Consent to Participate

This project was granted an exemption from the requirement for IRB approval (45 CFR 46.104d, Category 4).

Competing Interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 372 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zoghby, M., Erickson, B. & Conte, G. Generative Adversarial Networks for Brain MRI Synthesis: Impact of Training Set Size on Clinical Application. J Imaging Inform Med (2024). https://doi.org/10.1007/s10278-024-00976-4

  • DOI: https://doi.org/10.1007/s10278-024-00976-4
