
Deep learning for whole-body medical image generation

Original Article
European Journal of Nuclear Medicine and Molecular Imaging

Abstract

Background

Artificial intelligence (AI) algorithms based on deep convolutional networks have demonstrated remarkable success in image transformation tasks. State-of-the-art results have been achieved by generative adversarial networks (GANs) and by training approaches that do not require paired data. Recently, these techniques have been applied in the medical field for cross-domain image translation.
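For illustration only, the sketch below shows the core of a CycleGAN-style objective for unpaired training of the kind referenced above; the generator and discriminator modules (G_mr2ct, G_ct2mr, D_ct, D_mr) and the weight lambda_cyc are hypothetical placeholders, not the implementation used in this study.

    # Illustrative CycleGAN-style generator loss for unpaired MR <-> CT translation.
    # G_mr2ct, G_ct2mr, D_ct, D_mr are assumed user-defined torch.nn.Modules;
    # mr and ct are unpaired image batches (tensors).
    import torch
    import torch.nn.functional as F

    def cyclegan_generator_loss(G_mr2ct, G_ct2mr, D_ct, D_mr, mr, ct, lambda_cyc=10.0):
        fake_ct = G_mr2ct(mr)   # MR -> synthetic CT
        fake_mr = G_ct2mr(ct)   # CT -> synthetic MR

        # Adversarial terms: generators try to make the discriminators predict "real" (1)
        adv = (F.mse_loss(D_ct(fake_ct), torch.ones_like(D_ct(fake_ct))) +
               F.mse_loss(D_mr(fake_mr), torch.ones_like(D_mr(fake_mr))))

        # Cycle-consistency terms: translating back should recover the original input
        cyc = (F.l1_loss(G_ct2mr(fake_ct), mr) +
               F.l1_loss(G_mr2ct(fake_mr), ct))

        return adv + lambda_cyc * cyc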

Purpose

This study investigated deep learning-based image transformation in medical imaging, motivated by the need for generalizable methods that simultaneously satisfy the requirements of image quality and anatomical accuracy across the entire human body. Specifically, whole-body MR patient data acquired on a PET/MR system were used to generate synthetic CT image volumes. The suitability of these synthetic CT data for PET attenuation correction (AC) was evaluated and compared with current MR-based attenuation correction (MR-AC) methods, which typically segment various tissue types from multiphase Dixon sequences.
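As background on how a (synthetic) CT volume is turned into a 511 keV attenuation map, a common approach is a bilinear scaling of Hounsfield units; the sketch below is a generic illustration with assumed breakpoint and bone-slope values, not the vendor-specific transform applied in this study.

    # Generic bilinear HU -> mu (511 keV) conversion, for illustration only.
    # mu_water, break_hu, and bone_slope are assumed example values, not the
    # parameters used by the PET/CT or PET/MR systems in this study.
    import numpy as np

    def hu_to_mu_511kev(hu, mu_water=0.096, break_hu=0.0, bone_slope=5.0e-5):
        hu = np.asarray(hu, dtype=float)
        soft = mu_water * (1.0 + hu / 1000.0)   # air (-1000 HU) -> 0, water (0 HU) -> mu_water
        bone = mu_water * (1.0 + break_hu / 1000.0) + bone_slope * (hu - break_hu)
        return np.clip(np.where(hu <= break_hu, soft, bone), 0.0, None)  # units: 1/cm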

Materials and methods

This work investigated the technical performance of a GAN system for general MR-to-CT volumetric transformation and evaluated the generated images for PET AC. A dataset comprising matched, same-day PET/MR and PET/CT patient scans was used for validation.

Results

A combination of training techniques was used to produce synthetic images that were both high in quality and anatomically accurate. Mu maps derived from the synthetic CT images correlated more strongly with those calculated directly from CT data than did mu maps from the default segmented Dixon approach. Over the entire body, the total reconstructed PET activity was similar for the two MR-AC methods, but the synthetic CT method quantified tracer uptake in specific regions more accurately.
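As an indication of the type of comparison described above (not the study's exact evaluation protocol), voxelwise agreement between two mu maps can be summarized with a Pearson correlation over body voxels; the body-mask threshold below is an assumption.

    # Pearson correlation between two mu maps over body voxels (illustrative only).
    import numpy as np

    def mu_map_correlation(mu_ref, mu_test, body_threshold=0.005):
        mu_ref, mu_test = np.asarray(mu_ref), np.asarray(mu_test)
        mask = mu_ref > body_threshold   # crude body mask from the reference map
        return np.corrcoef(mu_ref[mask].ravel(), mu_test[mask].ravel())[0, 1]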

Conclusion

The findings reported here demonstrate the feasibility of this technique and its potential to improve certain aspects of attenuation correction for PET/MR systems. Moreover, this work may have larger implications for establishing generalized methods for inter-modality, whole-body transformation in medical imaging. Unsupervised deep learning techniques can produce high-quality synthetic images, but additional constraints may be needed to maintain medical integrity in the generated data.



Author information


Corresponding author

Correspondence to Joshua Schaefferkoetter.

Ethics declarations

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Conflict of interest

The first author JS is a full-time employee of Siemens Medical Solutions, USA. The other authors declare no competing interests.

Additional information

Key points

Question

Can deep convolutional networks perform accurate transformations of whole-body data between different imaging modalities?

Pertinent findings

High-quality MR-to-CT transformations were achieved through the combination of supervised and unsupervised network training techniques. These generated data were used for PET attenuation correction in a cohort of PET/MR patients, and areas of potential improvement over current approaches were observed.
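As a rough sketch of what combining supervised and unsupervised training can look like in practice (the study's actual loss formulation and weighting are not reproduced here), a paired voxelwise L1 term can be added to an adversarial objective whenever a matched CT is available:

    # Hypothetical hybrid generator loss: adversarial (unsupervised) term plus an
    # optional paired L1 (supervised) term. G and D are assumed torch.nn.Modules.
    import torch
    import torch.nn.functional as F

    def hybrid_generator_loss(G, D, mr, ct_paired=None, lambda_l1=100.0):
        fake_ct = G(mr)
        adv = F.mse_loss(D(fake_ct), torch.ones_like(D(fake_ct)))  # fool the discriminator
        if ct_paired is not None:                                  # supervised term when a matched CT exists
            return adv + lambda_l1 * F.l1_loss(fake_ct, ct_paired)
        return adv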

Implications for patient care

This work may contribute to methods which improve quantification for PET/MR images. The methods presented here may also be relevant to other AI-based clinical applications in whole-body imaging.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Advanced Image Analyses (Radiomics and Artificial Intelligence).


Cite this article

Schaefferkoetter, J., Yan, J., Moon, S. et al. Deep learning for whole-body medical image generation. Eur J Nucl Med Mol Imaging 48, 3817–3826 (2021). https://doi.org/10.1007/s00259-021-05413-0
