Task-GAN: Improving Generative Adversarial Network for Image Reconstruction

  • Jiahong Ouyang
  • Guanhua Wang
  • Enhao Gong
  • Kevin Chen
  • John Pauly
  • Greg Zaharchuk
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11905)


Generative Adversarial Networks (GANs) have demonstrated great potential in computer vision tasks such as image restoration. However, image restoration for specific scenarios, such as medical image enhancement, still faces a challenge: how can we ensure visually plausible results that do not contain hallucinated features that might jeopardize downstream tasks such as pathology identification? Here, we propose Task-GAN, a generalized model for the medical image reconstruction problem. A task-specific network that captures diagnostic/pathology features is added to couple with the GAN-based image reconstruction framework. Validated on multiple medical datasets, we demonstrate that the proposed method improves deep-learning-based image reconstruction while preserving detailed structure and diagnostic features.
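The coupling described in the abstract can be sketched as a combined generator objective: a reconstruction term, an adversarial term, and a task-network term that penalizes reconstructions whose predicted pathology label drifts from the ground truth. The loss weights, helper names, and binary task label below are illustrative assumptions for the sketch, not the authors' exact formulation.

```python
import numpy as np

def l1_loss(recon, target):
    """Pixel-wise L1 reconstruction fidelity term."""
    return float(np.mean(np.abs(recon - target)))

def bce_loss(pred, label, eps=1e-7):
    """Binary cross-entropy, reused here for both the adversarial
    and the task-network terms (an assumed choice)."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(label * np.log(pred) + (1 - label) * np.log(1 - pred)))

def generator_loss(recon, target, disc_pred, task_pred, task_label,
                   lambda_adv=0.1, lambda_task=0.1):
    """Hypothetical Task-GAN generator objective: reconstruction fidelity
    plus fooling the discriminator plus preserving the task network's
    diagnostic prediction on the reconstructed image."""
    loss_rec = l1_loss(recon, target)
    # Generator wants the discriminator to output 1 ("real") on its reconstruction.
    loss_adv = bce_loss(disc_pred, np.ones_like(disc_pred))
    # Task network's label prediction on the reconstruction should match ground truth.
    loss_task = bce_loss(task_pred, task_label)
    return loss_rec + lambda_adv * loss_adv + lambda_task * loss_task
```

In this sketch the task term acts as a regularizer: a reconstruction that looks plausible but flips the pathology prediction is penalized, which is the paper's mechanism for suppressing hallucinated features.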



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Jiahong Ouyang (1)
  • Guanhua Wang (1, 2)
  • Enhao Gong (1, 3)
  • Kevin Chen (1)
  • John Pauly (1)
  • Greg Zaharchuk (1, 3)

  1. Stanford University, Stanford, USA
  2. Tsinghua University, Beijing, China
  3. Subtle Medical, Menlo Park, USA
