Locality Adaptive Multi-modality GANs for High-Quality PET Image Synthesis

  • Yan Wang
  • Luping Zhou
  • Lei Wang
  • Biting Yu
  • Chen Zu
  • David S. Lalush
  • Weili Lin
  • Xi Wu
  • Jiliu Zhou
  • Dinggang Shen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11070)

Abstract

Positron emission tomography (PET) has been widely used in recent years. To minimize the potential health risks caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize a high-quality full-dose PET image from a low-dose one, reducing radiation exposure while maintaining image quality. In this paper, we propose a locality-adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize the full-dose PET image from both the low-dose one and the accompanying T1-weighted MRI, which incorporates anatomical information for better PET image synthesis. This paper makes the following contributions. First, we propose a new mechanism to fuse multi-modality information in deep neural networks. Different from traditional methods that treat each image modality as an input channel and apply the same kernel to convolve the whole image, we argue that the contributions of different modalities can vary at different image locations, so a single kernel for the whole image is not appropriate. To address this issue, we propose a locality-adaptive method for multi-modality fusion. Second, to learn this locality-adaptive fusion, we utilize a 1 × 1 × 1 kernel, so the number of additional parameters incurred by our method is kept to a minimum. This also naturally produces a fused image that acts as a pseudo input for the subsequent learning stages. Third, the proposed locality-adaptive fusion mechanism is learned jointly with PET image synthesis in an end-to-end trained 3D conditional GANs model. Our 3D GANs model generates high-quality PET images by employing large-sized image patches and hierarchical features. Experimental results show that our method outperforms both the traditional multi-modality fusion methods used in deep networks and state-of-the-art PET estimation approaches.
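To make the fusion mechanism concrete, below is a minimal PyTorch sketch of the locality-adaptive fusion idea described in the abstract: per-voxel fusion weights for the two modalities are produced by a 1 × 1 × 1 convolution, and the weighted combination yields a single fused pseudo image that would feed the generator. The module name `LocalityAdaptiveFusion` and the softmax normalization of the weights are illustrative assumptions, not the authors' released implementation; the paper's exact formulation may differ.

```python
# Minimal sketch (assumptions noted above), not the authors' official code.
import torch
import torch.nn as nn

class LocalityAdaptiveFusion(nn.Module):
    """Fuses a low-dose PET volume and a T1 MRI volume with voxel-wise weights."""
    def __init__(self):
        super().__init__()
        # 1x1x1 kernels keep the extra parameter count minimal:
        # two input channels (one per modality), two output weight maps.
        self.weight_net = nn.Conv3d(in_channels=2, out_channels=2, kernel_size=1)

    def forward(self, low_dose_pet, t1_mri):
        # low_dose_pet, t1_mri: (batch, 1, D, H, W)
        x = torch.cat([low_dose_pet, t1_mri], dim=1)  # (batch, 2, D, H, W)
        # Softmax over the channel axis gives location-dependent fusion
        # weights that sum to 1 at every voxel (an assumed normalization).
        w = torch.softmax(self.weight_net(x), dim=1)
        fused = w[:, 0:1] * low_dose_pet + w[:, 1:2] * t1_mri
        return fused  # pseudo input for the subsequent synthesis network

# Usage: the fused volume would feed a 3D conditional GAN generator.
fusion = LocalityAdaptiveFusion()
pet = torch.randn(1, 1, 64, 64, 64)
mri = torch.randn(1, 1, 64, 64, 64)
pseudo_input = fusion(pet, mri)
print(pseudo_input.shape)  # torch.Size([1, 1, 64, 64, 64])
```

Because the weight network uses only 1 × 1 × 1 kernels, it adds just a handful of parameters (here, 2 × 2 weights plus 2 biases), which is consistent with the abstract's claim of minimal parameter overhead.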


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Yan Wang (1)
  • Luping Zhou (2)
  • Lei Wang (3)
  • Biting Yu (3)
  • Chen Zu (3)
  • David S. Lalush (4)
  • Weili Lin (5)
  • Xi Wu (6)
  • Jiliu Zhou (1, 6)
  • Dinggang Shen (5)

  1. School of Computer Science, Sichuan University, Chengdu, China
  2. School of Electrical and Information Engineering, University of Sydney, Sydney, Australia
  3. School of Computing and Information Technology, University of Wollongong, Wollongong, Australia
  4. Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Raleigh, USA
  5. Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
  6. School of Computer Science, Chengdu University of Information Technology, Chengdu, China