Mapping in Cycles: Dual-Domain PET-CT Synthesis Framework with Cycle-Consistent Constraints

  • Conference paper
  • Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (MICCAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13436)

Abstract

Positron emission tomography (PET) is an important medical imaging technique, especially for the diagnosis of brain diseases and cancer. Modern PET scanners are usually combined with computed tomography (CT), where the CT image is used for anatomical localization, PET attenuation correction, and radiotherapy treatment planning. Considering the radiation dose of CT imaging and the increasing spatial resolution of PET images, there is a growing demand to synthesize the CT image from the PET image (without a CT scan) to reduce the risk of radiation exposure. However, most existing works perform learning-based image synthesis to construct the cross-modality mapping only in the image domain, without considering the projection domain, leading to potential physical inconsistency. To address this problem, we propose a novel PET-CT synthesis framework that exploits dual-domain information (i.e., the image domain and the projection domain). Specifically, we design both an image-domain network and a projection-domain network to jointly learn the high-dimensional mapping from PET to CT. The two domains are connected by a forward projection (FP) and a filtered back projection (FBP). To further help the PET-to-CT synthesis task, we also design a secondary CT-to-PET synthesis task with the same network structure, and combine the two tasks into a bidirectional mapping framework with several closed cycles. More importantly, these cycles serve as cycle-consistent losses that further guide network training toward better synthesis performance. Extensive validation on clinical PET-CT data demonstrates that the proposed PET-CT synthesis framework significantly outperforms state-of-the-art (SOTA) medical image synthesis methods.
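The abstract describes how the image domain and the projection domain are linked by FP/FBP operators, and how the primary PET-to-CT task and the secondary CT-to-PET task close into cycles that act as cycle-consistent losses. The following is a minimal sketch of such a training objective, written under stated assumptions: the tiny generators, the placeholder forward_project/fbp operators, and the unweighted L1 terms are illustrative stand-ins, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of a dual-domain, cycle-consistent training objective.
# All networks, operators, and loss weights here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGenerator(nn.Module):
    """Stand-in for an image-domain (or projection-domain) synthesis network."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def forward_project(img):
    # Placeholder for a differentiable forward projection (Radon transform)
    # mapping an image to the projection (sinogram) domain.
    return img.mean(dim=-1, keepdim=True).expand_as(img)


def fbp(sino):
    # Placeholder for filtered back projection, mapping projections back to
    # the image domain; FP and FBP are what connect the two domains.
    return sino


pet_to_ct = TinyGenerator()   # primary task: PET -> CT
ct_to_pet = TinyGenerator()   # secondary task: CT -> PET, closing the cycle


def training_losses(pet, ct):
    ct_syn = pet_to_ct(pet)                      # image-domain PET-to-CT synthesis
    ct_syn_img = fbp(forward_project(ct_syn))    # route through the projection domain
    pet_cyc = ct_to_pet(ct_syn)                  # closed cycle PET -> CT -> PET

    supervised = F.l1_loss(ct_syn, ct)           # image-domain supervision
    dual_domain = F.l1_loss(ct_syn_img, ct_syn)  # consistency across the two domains
    cycle = F.l1_loss(pet_cyc, pet)              # cycle-consistent constraint
    return supervised + dual_domain + cycle


# Toy usage with random 2D slices (batch of 2, single channel, 64x64).
pet = torch.rand(2, 1, 64, 64)
ct = torch.rand(2, 1, 64, 64)
print(training_losses(pet, ct).item())
```

In the full framework, a dedicated projection-domain network would also operate on the sinograms produced by FP, and adversarial terms could accompany the L1 losses; the sketch only shows how the domain links and closed cycles can combine into a single objective.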

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (grant number 62131015), the Science and Technology Commission of Shanghai Municipality (STCSM) (grant number 21010502600), and the Key R&D Program of Guangdong Province, China (grant number 2021B0101420006).

Author information

Corresponding author

Correspondence to Dinggang Shen.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, J., Cui, Z., Jiang, C., Zhang, J., Gao, F., Shen, D. (2022). Mapping in Cycles: Dual-Domain PET-CT Synthesis Framework with Cycle-Consistent Constraints. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13436. Springer, Cham. https://doi.org/10.1007/978-3-031-16446-0_72

  • DOI: https://doi.org/10.1007/978-3-031-16446-0_72

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16445-3

  • Online ISBN: 978-3-031-16446-0

  • eBook Packages: Computer Science, Computer Science (R0)
