Medical Image Synthesis with Context-Aware Generative Adversarial Networks

  • Dong Nie
  • Roger Trullo
  • Jun Lian
  • Caroline Petitjean
  • Su Ruan
  • Qian Wang
  • Dinggang Shen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10435)


Computed tomography (CT) is critical for various clinical applications, e.g., radiation treatment planning and PET attenuation correction in hybrid MRI/PET scanners. However, CT acquisition exposes patients to ionizing radiation, which may cause side effects. Compared to CT, magnetic resonance imaging (MRI) is much safer and involves no radiation. Researchers are therefore strongly motivated to estimate a CT image from the corresponding MR image of the same subject for radiation treatment planning. In this paper, we propose a data-driven approach to this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a CT image given the MR image. To better model the nonlinear mapping from MRI to CT and to produce more realistic images, we train the FCN with an adversarial strategy. Moreover, we propose an image-gradient-difference loss function to alleviate the blurriness of the generated CT. We further apply the Auto-Context Model (ACM) to obtain a context-aware generative adversarial network. Experimental results show that our method is accurate and robust in predicting CT images from MR images, and that it outperforms three state-of-the-art methods under comparison.
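The image-gradient-difference loss mentioned above penalizes mismatches between the gradient magnitudes of the real and synthesized CT, which counteracts the over-smoothing that a plain voxel-wise loss produces. The NumPy sketch below is an illustrative reconstruction of that idea, following the gradient-difference formulation from the video-prediction literature; the exponent `alpha` and the sum reduction are assumptions, not the authors' exact setting.

```python
import numpy as np

def gradient_difference_loss(y_true, y_pred, alpha=2.0):
    """Illustrative image-gradient-difference loss (2-D case).

    Compares finite-difference gradient magnitudes of the ground-truth
    and synthesized images along rows and columns; a sharp prediction
    that matches the true edges yields a low value.
    """
    # Absolute finite differences along rows (axis 0) and columns (axis 1).
    dt_rows = np.abs(np.diff(y_true, axis=0))
    dp_rows = np.abs(np.diff(y_pred, axis=0))
    dt_cols = np.abs(np.diff(y_true, axis=1))
    dp_cols = np.abs(np.diff(y_pred, axis=1))

    # Penalize the discrepancy between gradient magnitudes.
    return (np.abs(dt_rows - dp_rows) ** alpha).sum() \
         + (np.abs(dt_cols - dp_cols) ** alpha).sum()
```

A perfect prediction gives zero loss, while a constant (fully blurred) prediction of an edge-rich target is penalized in proportion to the edges it fails to reproduce; in practice this term is added to the voxel-wise and adversarial losses with a weighting factor.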


Keywords: Generative models · GAN · Image synthesis · Deep learning · Auto-context



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Dong Nie (1, 2)
  • Roger Trullo (1, 3)
  • Jun Lian (4)
  • Caroline Petitjean (3)
  • Su Ruan (3)
  • Qian Wang (5)
  • Dinggang Shen (1)
  1. Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
  2. Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, USA
  3. Normandie Univ, INSA Rouen, LITIS, Rouen, France
  4. Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, USA
  5. School of Biomedical Engineering, Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
