
Evaluation of Strategies for PET Motion Correction - Manifold Learning vs. Deep Learning

  • James R. Clough
  • Daniel R. Balfour
  • Claudia Prieto
  • Andrew J. Reader
  • Paul K. Marsden
  • Andrew P. King
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11038)

Abstract

Image quality in abdominal PET is degraded by respiratory motion. In this paper we compare existing data-driven gating methods for motion correction, which are based on manifold learning, with a proposed method in which a convolutional neural network estimates motion fields in an end-to-end manner and then uses those estimated motion fields to motion-correct the PET frames. We find that this proposed network approach is unable to outperform the manifold learning methods in the literature in terms of the image quality of the motion-corrected volumes. We investigate possible explanations for this negative result and discuss the benefits of these unsupervised approaches, which remain the state of the art.
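The manifold learning baselines discussed here derive a respiratory surrogate signal directly from the PET data (e.g. PCA-based device-less gating, as in Thielemans et al.) and then bin frames into respiratory gates by amplitude. As an illustrative sketch only, and not the authors' implementation, the core idea can be written as: project short time frames onto their first principal component to obtain a surrogate signal, then split that signal into equal-count amplitude bins. The function names and the choice of four gates below are assumptions made for illustration.

```python
import numpy as np

def pca_gating_signal(frames):
    """Estimate a surrogate respiratory signal from dynamic PET data.

    frames: array of shape (T, V) -- T short time frames, each row one
    flattened frame (voxels or sinogram bins).
    Returns the projection of each centred frame onto the first
    principal component, which tracks the dominant (respiratory)
    mode of variation.
    """
    X = frames - frames.mean(axis=0)            # centre the data
    # SVD of the centred data matrix; rows of Vt are principal axes
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[0]                            # 1-D surrogate signal

def assign_gates(signal, n_gates=4):
    """Amplitude-based gating: split the signal into equal-count bins."""
    edges = np.quantile(signal, np.linspace(0, 1, n_gates + 1))
    gates = np.searchsorted(edges, signal, side="right") - 1
    return np.clip(gates, 0, n_gates - 1)       # gate index per frame
```

Each gate then collects approximately motion-free data that can be reconstructed and registered to a reference gate; the CNN approach evaluated in the paper instead learns the motion fields end-to-end.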

Keywords

Motion estimation · Positron emission tomography · Convolutional neural network · Principal component analysis

Notes

Acknowledgments

We would like to thank NVIDIA for kindly donating the Quadro P6000 GPU used in this research.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • James R. Clough (1)
  • Daniel R. Balfour (1)
  • Claudia Prieto (1)
  • Andrew J. Reader (1)
  • Paul K. Marsden (1)
  • Andrew P. King (1)

  1. School of Bioengineering and Imaging Science, King's College London, London, UK
