
MRI image synthesis for fluid-attenuated inversion recovery and diffusion-weighted images with deep learning

  • Scientific Paper
  • Published in Physical and Engineering Sciences in Medicine

Abstract

This study aims to synthesize fluid-attenuated inversion recovery (FLAIR) and diffusion-weighted images (DWI) from T1- and T2-weighted magnetic resonance imaging (MRI) images with a deep conditional adversarial network. A total of 1980 images from 102 patients were split into two datasets: 1470 images (68 patients) for training and 510 images (34 patients) for testing. The prediction framework was based on a convolutional neural network with a generator and a discriminator. T1-weighted, T2-weighted, and composite images were used as inputs. The digital imaging and communications in medicine (DICOM) images were converted to 8-bit red–green–blue (RGB) images. The red and blue channels of the composite images were assigned the 8-bit grayscale pixel values of the T1-weighted images, and the green channel was assigned those of the T2-weighted images. The predicted FLAIR and DWI images depicted the same subjects as the inputs. The prediction model with composite MRI input images showed the smallest relative mean absolute error (rMAE) and largest mutual information (MI) for the DWI image, and the largest relative mean-square error (rMSE), relative root-mean-square error (rRMSE), and peak signal-to-noise ratio (PSNR) for the FLAIR image. For the FLAIR image, the prediction model with T2-weighted MRI input images generated more accurate synthesis results than that with T1-weighted inputs. The proposed image synthesis framework can improve the versatility and quality of multi-contrast MRI without extra scans. The composite input MRI image contributes to synthesizing multi-contrast MRI images efficiently.
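As an illustration of the composite-input construction described above, the following is a minimal sketch (not the authors' code) that converts DICOM slices to 8-bit grayscale and stacks the T1-weighted values into the red and blue channels and the T2-weighted values into the green channel, with a standard PSNR helper for evaluation. The file names, the min–max normalisation, and the helper functions are assumptions for illustration only; the abstract does not specify the windowing used for the 8-bit conversion.

import numpy as np
import pydicom

def dicom_to_uint8(path):
    # Load one DICOM slice and rescale its pixel values to 8-bit grayscale
    ds = pydicom.dcmread(path)
    img = ds.pixel_array.astype(np.float32)
    img = (img - img.min()) / max(float(img.max() - img.min()), 1e-8)
    return (img * 255.0).astype(np.uint8)

def make_composite(t1_path, t2_path):
    # Composite RGB image as described in the abstract: R = T1, G = T2, B = T1
    t1 = dicom_to_uint8(t1_path)
    t2 = dicom_to_uint8(t2_path)
    return np.stack([t1, t2, t1], axis=-1)

def psnr(pred, ref, max_val=255.0):
    # Peak signal-to-noise ratio for 8-bit images (standard definition)
    mse = np.mean((pred.astype(np.float32) - ref.astype(np.float32)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / max(float(mse), 1e-12))

# Example usage with hypothetical file names
composite = make_composite("patient001_T1.dcm", "patient001_T2.dcm")
print(composite.shape, composite.dtype)  # e.g. (256, 256, 3) uint8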





Funding

Not applicable.

Author information


Corresponding author

Correspondence to Daisuke Kawahara.

Ethics declarations

Conflicts of interest

Author Daisuke Kawahara declares that he has no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Advances in knowledge

We created a new image prediction model for FLAIR and DWI images from T1-weighted, T2-weighted, and composite images.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Research involving human and animal rights

This article does not contain any studies with human participants or animals performed by any of the authors.

Consent to participate

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kawahara, D., Yoshimura, H., Matsuura, T. et al. MRI image synthesis for fluid-attenuated inversion recovery and diffusion-weighted images with deep learning. Phys Eng Sci Med 46, 313–323 (2023). https://doi.org/10.1007/s13246-023-01220-z


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s13246-023-01220-z
