
Deep Boosted Regression for MR to CT Synthesis

  • Kerstin Kläser
  • Pawel Markiewicz
  • Marta Ranzini
  • Wenqi Li
  • Marc Modat
  • Brian F. Hutton
  • David Atkinson
  • Kris Thielemans
  • M. Jorge Cardoso
  • Sébastien Ourselin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11037)

Abstract

Attenuation correction is an essential requirement of positron emission tomography (PET) image reconstruction to allow for accurate quantification. However, attenuation correction is particularly challenging for PET-MRI, as neither PET nor magnetic resonance imaging (MRI) can directly image tissue attenuation properties. MRI-based computed tomography (CT) synthesis has been proposed as an alternative to physics-based and segmentation-based approaches, which assign population-based tissue density values to generate an attenuation map. We propose a novel deep fully convolutional neural network that generates synthetic CTs in a recursive manner by gradually reducing the residuals of the previous network, increasing the overall accuracy and generalisability while keeping the number of trainable parameters within reasonable limits. The model is trained on a database of 20 pre-acquired MRI/CT pairs, and a four-fold random bootstrapped validation with an 80:20 split is performed. Quantitative results show that the proposed framework outperforms a state-of-the-art atlas-based approach, decreasing the Mean Absolute Error (MAE) from 131 HU to 68 HU for the synthetic CTs and reducing the PET reconstruction error from 14.3% to 7.2%.
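The core idea is a boosting scheme: each stage of the network regresses the residual error left by the sum of the preceding stages, so the synthetic CT is refined step by step while each individual stage can stay small. The sketch below illustrates that scheme in PyTorch; it is a minimal illustration under stated assumptions, not the paper's implementation. SmallCNN, the stage count, the loss, and the optimiser settings are placeholders standing in for the deep fully convolutional architecture described above.

    # Minimal sketch of boosted residual regression for MR-to-CT synthesis.
    # Illustrative only: SmallCNN and all training settings are placeholder
    # assumptions, not the architecture or hyperparameters of the paper.
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        # Placeholder 3D regressor; the paper uses a much deeper
        # fully convolutional network.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 1, 3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    def train_boosted(mr, ct, n_stages=3, epochs=50, lr=1e-3):
        # mr, ct: tensors of shape [batch, 1, depth, height, width].
        # Each stage is trained to predict the residual that the sum of
        # all earlier stages failed to explain.
        stages = []
        synth = torch.zeros_like(ct)            # running synthetic CT estimate
        for _ in range(n_stages):
            net = SmallCNN()
            opt = torch.optim.Adam(net.parameters(), lr=lr)
            residual = ct - synth               # target for this stage
            for _ in range(epochs):
                opt.zero_grad()
                loss = nn.functional.l1_loss(net(mr), residual)  # MAE-style loss
                loss.backward()
                opt.step()
            with torch.no_grad():
                synth = synth + net(mr)         # boost: add this stage's correction
            stages.append(net)
        return stages, synth

Because every stage only has to model what the previous stages missed, the per-stage networks can remain compact, which is consistent with the abstract's point about keeping the number of trainable parameters within reasonable limits.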

Acknowledgements

This work was supported by an IMPACT studentship funded jointly by Siemens and the EPSRC UCL Centre for Doctoral Training in Medical Imaging (EP/L016478/1). The research was also supported through the UK NIHR UCLH Biomedical Research Centre.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Kerstin Kläser (1)
  • Pawel Markiewicz (1)
  • Marta Ranzini (1)
  • Wenqi Li (2)
  • Marc Modat (1, 2)
  • Brian F. Hutton (3)
  • David Atkinson (4)
  • Kris Thielemans (3)
  • M. Jorge Cardoso (1, 2)
  • Sébastien Ourselin (2)

  1. Centre for Medical Image Computing, University College London, London, UK
  2. School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
  3. Institute of Nuclear Medicine, University College London, London, UK
  4. Centre for Medical Imaging, University College London, London, UK