PET image denoising using unsupervised deep learning

  • Original Article
  • European Journal of Nuclear Medicine and Molecular Imaging

Abstract

Purpose

Image quality of positron emission tomography (PET) is limited by various physical degradation factors. Our study aims to perform PET image denoising by utilizing prior information from the same patient. The proposed method is based on unsupervised deep learning, where no training pairs are needed.

Methods

In this method, a prior high-quality image from the same patient was employed as the network input, and the noisy PET image itself was treated as the training label. Constrained by the network structure and the prior image input, the network was trained to learn the intrinsic structure information from the noisy image and to output a restored PET image. To validate the performance of the proposed method, a computer simulation study based on the BrainWeb phantom was first performed. A 68Ga-PRGD2 PET/CT dataset containing 10 patients and an 18F-FDG PET/MR dataset containing 30 patients were then used for clinical evaluation. Gaussian filtering, non-local mean (NLM) filtering using the CT/MR image as a prior, BM4D, and Deep Decoder were included as reference methods. Contrast-to-noise ratio (CNR) improvements were used to rank the different methods based on the Wilcoxon signed-rank test.
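To make the training scheme concrete, the following is a minimal, hypothetical PyTorch sketch of the idea described above, not the authors' implementation: a small 3D convolutional network stands in for the modified 3D U-Net of Suppl. figure 2, the anatomical prior volume is the network input, and the noisy PET volume is the training label. TinyDenoiser and denoise are placeholder names, and L-BFGS is used only because it appears in the supplementary optimizer comparison.

# Minimal sketch of the unsupervised training loop described above (not the
# authors' code). Assumptions: a small 3D conv net stands in for the paper's
# modified 3D U-Net, volumes are pre-normalized, and L-BFGS is the optimizer.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for the modified 3D U-Net: maps the prior volume to a PET volume."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def denoise(prior_vol, noisy_pet, n_epochs=100):
    """prior_vol, noisy_pet: torch tensors of shape (1, 1, D, H, W).
    n_epochs is a hypothetical choice; in practice it is tuned (cf. Suppl. figure 4)."""
    model = TinyDenoiser()
    loss_fn = nn.MSELoss()
    opt = torch.optim.LBFGS(model.parameters(), max_iter=20)

    for _ in range(n_epochs):
        def closure():
            opt.zero_grad()
            # The anatomical prior is the network input; the noisy PET is the label.
            loss = loss_fn(model(prior_vol), noisy_pet)
            loss.backward()
            return loss
        opt.step(closure)

    with torch.no_grad():
        return model(prior_vol)   # restored (denoised) PET volume

Because no clean target exists, the network architecture, the prior image input, and the number of training epochs act as the implicit regularizer; the epoch count is one of the parameters swept in Suppl. figure 4.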

Results

For the simulation study, contrast recovery coefficient (CRC) vs. standard deviation (STD) curves showed that the proposed method achieved the best bias-variance tradeoff. For the clinical PET/CT dataset, the proposed method achieved the highest CNR improvement ratio (53.35% ± 21.78%), compared with the Gaussian (12.64% ± 6.15%, P = 0.002), NLM guided by CT (24.35% ± 16.30%, P = 0.002), BM4D (38.31% ± 20.26%, P = 0.002), and Deep Decoder (41.67% ± 22.28%, P = 0.002) methods. For the clinical PET/MR dataset, the CNR improvement ratio of the proposed method reached 46.80% ± 25.23%, higher than the Gaussian (18.16% ± 10.02%, P < 0.0001), NLM guided by MR (25.36% ± 19.48%, P < 0.0001), BM4D (37.02% ± 21.38%, P < 0.0001), and Deep Decoder (30.03% ± 20.64%, P < 0.0001) methods. Restored images for all datasets demonstrate that the proposed method effectively smooths out noise while recovering image details.
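For readers who want to compute comparable figures of merit, the sketch below shows one common way to obtain a CNR improvement ratio and a paired Wilcoxon signed-rank test; the CNR formula and all numeric values are illustrative assumptions, not the paper's exact definitions or data.

# Illustrative computation of the CNR improvement ratio and the Wilcoxon
# signed-rank test; the CNR definition here is a common convention and may
# differ from the paper's exact formula.
import numpy as np
from scipy.stats import wilcoxon

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio between a lesion ROI and a background ROI."""
    return (lesion_roi.mean() - background_roi.mean()) / background_roi.std()

def cnr_improvement(noisy_lesion, noisy_bkg, denoised_lesion, denoised_bkg):
    """Relative CNR improvement of a denoised image over the noisy image."""
    cnr_noisy = cnr(noisy_lesion, noisy_bkg)
    return (cnr(denoised_lesion, denoised_bkg) - cnr_noisy) / cnr_noisy

# Paired comparison across patients: CNR improvements of the proposed method
# vs. a reference method, tested with the Wilcoxon signed-rank test.
proposed_impr = np.array([0.55, 0.48, 0.61, 0.40, 0.52])   # hypothetical values
reference_impr = np.array([0.30, 0.25, 0.41, 0.22, 0.35])  # hypothetical values
stat, p_value = wilcoxon(proposed_impr, reference_impr)
print(f"Wilcoxon signed-rank p-value: {p_value:.4f}")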

Conclusion

The proposed unsupervised deep learning framework provides excellent image restoration, outperforming the Gaussian, NLM, BM4D, and Deep Decoder methods.



Funding

This work was supported by the National Institutes of Health under grants 1RF1AG052653-01A1, 1P41EB022544-01A1, and NIH C06 CA059267, by the National Natural Science Foundation of China (No: U1809204, 61525106, 61427807, 61701436), by the National Key Technology Research and Development Program of China (No: 2017YFE0104000, 2016YFC1300302), and by Shenzhen Innovation Funding (No: JCYJ20170818164343304, JCYJ20170816172431715). Jianan Cui is a PhD student at Zhejiang University and was supported by the China Scholarship Council for a 2-year study at Massachusetts General Hospital.

Author information


Corresponding authors

Correspondence to Huafeng Liu or Quanzheng Li.

Ethics declarations

Conflict of interest

Author Quanzheng Li has received research support from Siemens Medical Solutions. The other authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Advanced Image Analyses (Radiomics and Artificial Intelligence)

Electronic supplementary material

Suppl. figure 1

Comparison between using a noise image as the input and using the anatomical prior image as the input (proposed), based on the simulated BrainWeb phantom.


Suppl. figure 2

Network structure of the modified 3D U-Net employed in the proposed method.


Suppl. figure 3

Comparison of the normalized cost values for the Adam, Nesterov’s accelerated gradient (NAG), and L-BFGS algorithms based on one PET/CT dataset. The normalized cost value is defined as \( L_n = \left(\Phi_{\mathrm{Adam}}^{\mathrm{ref}} - \Phi^{n}\right) / \left(\Phi_{\mathrm{Adam}}^{\mathrm{ref}} - \Phi_{\mathrm{Adam}}^{1}\right) \), where \( \Phi_{\mathrm{Adam}}^{\mathrm{ref}} \) and \( \Phi_{\mathrm{Adam}}^{1} \) are the cost values after running the Adam algorithm for 700 epochs and for 1 epoch, respectively.
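A small sketch of this normalization is given below for clarity; the cost arrays and epoch counts are hypothetical placeholders, not the paper's data.

# Sketch of the normalization used in Suppl. figure 3: cost histories are
# rescaled relative to the Adam run (hypothetical arrays shown here).
import numpy as np

adam_cost = np.array([10.0, 6.0, 3.0, 1.5, 1.0])   # hypothetical cost per epoch
nag_cost = np.array([10.0, 7.0, 4.0, 2.5, 1.8])    # hypothetical cost per epoch

ref = adam_cost[-1]    # stands in for the Adam cost after 700 epochs
first = adam_cost[0]   # Adam cost after 1 epoch
normalized_nag = (ref - nag_cost) / (ref - first)  # L_n for the NAG run
print(normalized_nag)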


Suppl. figure 4

Lesion contrast vs. standard deviation in the reference ROIs obtained by varying the FWHM (gray) of the Gaussian filter, the window size (blue) of the NLM method, the noise standard deviation (light blue) of BM4D, and the number of training epochs of the Deep Decoder (green) and the proposed method (orange). The left plot is based on one patient scan from the PET/CT dataset; the right plot is based on one patient scan from the PET/MR dataset.


Suppl. figure 5

Tumor size, SUVmax, SUVmean, and TLG versus CNR improvement ratio. Left: PET/CT dataset; right: PET/MR dataset.


Suppl. figure 6

A mismatch example from the PET/CT dataset. The zoomed regions shown in the top row indicate tumor structure mismatches between the CT prior and the noisy PET image.




About this article


Cite this article

Cui, J., Gong, K., Guo, N. et al. PET image denoising using unsupervised deep learning. Eur J Nucl Med Mol Imaging 46, 2780–2789 (2019). https://doi.org/10.1007/s00259-019-04468-4

