
Empirical Bayesian Mixture Models for Medical Image Translation

  • Conference paper
  • In: Simulation and Synthesis in Medical Imaging (SASHIMI 2019)
  • Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11827)

Abstract

Automatically generating one medical imaging modality from another is known as medical image translation, and it has numerous applications. This paper presents an interpretable generative modelling approach to medical image translation. By extending a common model for group-wise normalisation and segmentation of brain scans to handle missing data, the approach can predict entirely missing modalities from one, or a few, MR contrasts. Furthermore, the model can be trained on a fairly small number of subjects. The proposed model is validated on three clinically relevant scenarios. The results appear promising and show that a principled, probabilistic model of the relationship between multi-channel signal intensities can be used to infer missing modalities: both MR contrasts and CT images.
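
To make the modelling idea concrete, the following is a minimal sketch in Python, not the authors' implementation (which additionally performs group-wise normalisation and segmentation): a Gaussian mixture model over multi-channel voxel intensities is fitted by EM in which partially observed voxels contribute through their conditional moments, and a missing channel is then predicted as its responsibility-weighted conditional mean given the observed channels. The function names fit_gmm_missing and impute, and all defaults, are invented for this sketch.

```python
# Minimal sketch, not the authors' implementation: a Gaussian mixture model
# over multi-channel voxel intensities, fitted by EM where partially observed
# voxels (NaN = missing channel) contribute via their conditional moments.
import numpy as np
from scipy.stats import multivariate_normal

def fit_gmm_missing(X, K, n_iter=100, seed=0):
    """EM for a K-component GMM on X (N voxels x D channels) with NaNs
    marking missing channels; assumes each voxel has >= 1 observed channel."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Initialise from channel-wise statistics of the observed entries.
    mu = np.nanmean(X, 0) + rng.normal(scale=np.nanstd(X, 0), size=(K, D))
    Sigma = np.stack([np.diag(np.nanvar(X, 0)) for _ in range(K)])
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        R = np.zeros((N, K))        # responsibilities
        Ex = np.zeros((N, K, D))    # E[x | x_obs, component k]
        C = np.zeros((N, K, D, D))  # Cov[x | x_obs, component k]
        for n in range(N):          # E-step
            o = ~np.isnan(X[n]); m = ~o
            for k in range(K):
                Soo = Sigma[k][np.ix_(o, o)]
                R[n, k] = pi[k] * multivariate_normal.pdf(
                    X[n, o], mu[k, o], Soo, allow_singular=True)
                xhat = mu[k].copy()
                xhat[o] = X[n, o]
                if m.any():  # conditional Gaussian moments of the missing block
                    G = Sigma[k][np.ix_(m, o)] @ np.linalg.inv(Soo)
                    xhat[m] = mu[k, m] + G @ (X[n, o] - mu[k, o])
                    C[n, k][np.ix_(m, m)] = (Sigma[k][np.ix_(m, m)]
                                             - G @ Sigma[k][np.ix_(o, m)])
                Ex[n, k] = xhat
        R /= R.sum(1, keepdims=True)
        Nk = R.sum(0)               # M-step: expected sufficient statistics
        pi = Nk / N
        for k in range(K):
            mu[k] = R[:, k] @ Ex[:, k] / Nk[k]
            d = Ex[:, k] - mu[k]
            Sigma[k] = (R[:, k, None, None]
                        * (d[:, :, None] * d[:, None, :] + C[:, k])).sum(0) / Nk[k]
    return pi, mu, Sigma

def impute(x, pi, mu, Sigma):
    """Predict the missing channels of one voxel x (NaNs) as the
    responsibility-weighted conditional mean over mixture components."""
    o = ~np.isnan(x); m = ~o
    w = np.array([pi[k] * multivariate_normal.pdf(
        x[o], mu[k, o], Sigma[k][np.ix_(o, o)], allow_singular=True)
        for k in range(len(pi))])
    w /= w.sum()
    out = x.copy()
    out[m] = sum(w[k] * (mu[k, m] + Sigma[k][np.ix_(m, o)]
                 @ np.linalg.solve(Sigma[k][np.ix_(o, o)], x[o] - mu[k, o]))
                 for k in range(len(pi)))
    return out
```

In the paper itself this mixture sits inside a larger empirical Bayesian model with group-wise normalisation and segmentation; the sketch isolates only the missing-data machinery.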


Notes

  1. For example, a multi-channel MRI might have three contrasts: T1w, T2w and PDw. If, in one voxel, only the T1w intensity is observed, then the T2w and PDw intensities are assumed missing in that voxel. Note that different voxels can have different combinations of contrasts/modalities missing (see the demo after this list).

  2. http://brain-development.org/ixi-dataset/.

  3. This scenario is more realistic in a clinical context. The results would improve if data from only one scanner were used.

  4. The model is trained on IXI subjects IXI[064–118], and tested on IXI[002–063].

  5. https://www.insight-journal.org/rire/.

  6. The model is trained on RIRE patients patient[102–109], and tested on patient[001–007,101].
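
As a usage example of the per-voxel missingness convention in Note 1, here is a small hypothetical demo that reuses fit_gmm_missing and impute from the sketch above; the intensities, cluster centres and 30% drop rate are all invented for illustration.

```python
# Hypothetical demo of Note 1: three-channel "voxels" [T1w, T2w, PDw], where
# each voxel may have a different subset of channels observed (NaN = missing).
import numpy as np

rng = np.random.default_rng(1)
N = 500
centres = np.array([[900.0, 400.0, 700.0],   # invented tissue-like cluster
                    [400.0, 900.0, 800.0]])  # invented tissue-like cluster
X = centres[rng.integers(0, 2, N)] + rng.normal(0, 50.0, (N, 3))
drop = rng.random((N, 3)) < 0.3              # knock out ~30% of channel values
drop[drop.all(1), 0] = False                 # keep >= 1 observed channel/voxel
X[drop] = np.nan

pi, mu, Sigma = fit_gmm_missing(X, K=2)      # defined in the sketch above
voxel = np.array([880.0, np.nan, np.nan])    # only T1w observed (as in Note 1)
print(impute(voxel, pi, mu, Sigma))          # conditional estimates of T2w, PDw
```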


Acknowledgements

MB was funded by the EPSRC-funded UCL Centre for Doctoral Training in Medical Imaging (EP/L016478/1) and the Department of Health’s NIHR-funded Biomedical Research Centre at University College London Hospitals. MB and JA were funded by the EU Human Brain Project’s Grant Agreement No. 785907 (SGA2). YB was funded by the MRC and Spinal Research Charity through the ERA-NET Neuron joint call (MR/R000050/1).

Author information

Correspondence to Mikael Brudfors.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Brudfors, M., Ashburner, J., Nachev, P., Balbastre, Y. (2019). Empirical Bayesian Mixture Models for Medical Image Translation. In: Burgos, N., Gooya, A., Svoboda, D. (eds.) Simulation and Synthesis in Medical Imaging. SASHIMI 2019. Lecture Notes in Computer Science, vol. 11827. Springer, Cham. https://doi.org/10.1007/978-3-030-32778-1_1


  • DOI: https://doi.org/10.1007/978-3-030-32778-1_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32777-4

  • Online ISBN: 978-3-030-32778-1

  • eBook Packages: Computer Science, Computer Science (R0)
