Unpaired MR Image Homogenisation by Disentangled Representations and Its Uncertainty

Part of the Lecture Notes in Computer Science book series (LNIP, volume 12959)

Abstract

Inter-scanner and inter-protocol differences in MRI datasets are known to induce significant quantification variability. Data homogenisation is therefore crucial for reliably combining data or observations from different sources. Existing homogenisation methods rely on pairs of images to learn a mapping from a source domain to a reference domain. In real-world settings, however, we only have access to unpaired data from the source and reference domains. In this paper, we address this scenario by proposing an unsupervised image-to-image translation framework which models the complex mapping by disentangling the image space into a common content space and a scanner-specific one. We perform image quality enhancement between two MR scanners, enriching structural information and reducing the noise level. We evaluate our method on both healthy controls and multiple sclerosis (MS) cohorts and observe both visual and quantitative improvement over state-of-the-art GAN-based methods, while retaining regions of diagnostic importance such as lesions. In addition, for the first time, we quantify the uncertainty in the unsupervised homogenisation pipeline to enhance interpretability. Code is available at: https://github.com/hongweilibran/Multi-modal-medical-image-synthesis.
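The core idea of the abstract, disentangling each image into a scanner-agnostic content code and a scanner-specific code, then recombining a source image's content with the reference scanner's style, can be illustrated with a minimal toy sketch. This is not the paper's architecture: the real method uses convolutional encoders/decoders trained adversarially, whereas the dimensions, random linear maps, and function names below (`encode_content`, `encode_style`, `translate`, `translate_with_uncertainty`) are hypothetical placeholders chosen only to show the data flow, including a crude stand-in for the uncertainty estimate via repeated stochastic decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, illustrative dimensions (not the paper's).
IMG_DIM, CONTENT_DIM, STYLE_DIM = 64, 16, 4

# Random linear "encoders"/"decoder" standing in for trained CNNs.
W_content = rng.standard_normal((CONTENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)
W_style = {
    "A": rng.standard_normal((STYLE_DIM, IMG_DIM)) / np.sqrt(IMG_DIM),
    "B": rng.standard_normal((STYLE_DIM, IMG_DIM)) / np.sqrt(IMG_DIM),
}
W_dec = rng.standard_normal((IMG_DIM, CONTENT_DIM + STYLE_DIM)) / np.sqrt(
    CONTENT_DIM + STYLE_DIM
)

def encode_content(x):
    """Project an image into the shared, scanner-agnostic content space."""
    return W_content @ x

def encode_style(x, scanner):
    """Project an image into the scanner-specific style space."""
    return W_style[scanner] @ x

def decode(content, style):
    """Reassemble an image from a (content, style) pair."""
    return W_dec @ np.concatenate([content, style])

def translate(x_src, x_ref, ref_scanner):
    """Homogenise: keep the source's anatomy (content), adopt the
    reference scanner's appearance (style)."""
    return decode(encode_content(x_src), encode_style(x_ref, ref_scanner))

def translate_with_uncertainty(x_src, x_ref, ref_scanner, n=20, noise=0.1):
    """Crude uncertainty proxy: perturb the style code n times and
    report the per-voxel mean and standard deviation of the outputs."""
    c = encode_content(x_src)
    s = encode_style(x_ref, ref_scanner)
    samples = np.stack(
        [decode(c, s + noise * rng.standard_normal(STYLE_DIM)) for _ in range(n)]
    )
    return samples.mean(axis=0), samples.std(axis=0)

x_a = rng.standard_normal(IMG_DIM)  # image from scanner A
x_b = rng.standard_normal(IMG_DIM)  # image from scanner B
x_a_to_b = translate(x_a, x_b, "B")
mean_img, std_img = translate_with_uncertainty(x_a, x_b, "B")
```

Because the content encoder is shared across scanners while the style encoders are domain-specific, swapping only the style input moves an image between domains without touching its anatomy; the per-voxel spread across stochastic decodings plays the role of the uncertainty map.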

B. Wiestler and B. Menze—Equal contributions to this work.



Acknowledgement

This work was supported by the Helmut Horten Foundation. B. W. and B. M. were supported through the DFG, SFB-824, sub-project B12.

Author information

Correspondence to Jianguo Zhang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Li, H., et al. (2021). Unpaired MR Image Homogenisation by Disentangled Representations and Its Uncertainty. In: Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image Analysis. UNSURE PIPPI 2021. Lecture Notes in Computer Science, vol 12959. Springer, Cham. https://doi.org/10.1007/978-3-030-87735-4_5

  • DOI: https://doi.org/10.1007/978-3-030-87735-4_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87734-7

  • Online ISBN: 978-3-030-87735-4

  • eBook Packages: Computer Science, Computer Science (R0)