Detail-Preserving PET Reconstruction with Sparse Image Representation and Anatomical Priors

  • Jieqing Jiao
  • Pawel Markiewicz
  • Ninon Burgos
  • David Atkinson
  • Brian Hutton
  • Simon Arridge
  • Sebastien Ourselin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9123)

Abstract

Positron emission tomography (PET) reconstruction is an ill-posed inverse problem that typically involves fitting a high-dimensional forward model of the imaging process to noisy, and sometimes undersampled, photon emission data. To improve image quality, prior information derived from anatomical images of the same subject has previously been used in penalised maximum likelihood (PML) methods to regularise the model complexity and selectively smooth the image on a voxel basis. In this work, we propose a novel way of incorporating the prior information by exploiting the sparsity of natural images. Instead of a regular voxel grid, reconstruction uses a sparse image representation jointly determined by the prior image and the PET data to balance image detail against smoothness; this representation is integrated into the PET forward model and admits a closed-form expectation maximisation (EM) solution. Simulations show that the proposed approach achieves an improved bias-variance trade-off and higher contrast recovery than current state-of-the-art methods, and better preserves image details. Application to clinical PET data shows promising results.
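
The sketch below is a rough, hedged illustration of the kind of scheme the abstract describes: a supervoxel basis is built from an anatomical prior image (here with SLIC from scikit-image) and standard MLEM updates are run on the basis coefficients, since the basis composes with the system matrix into just another linear forward model with a closed-form EM update. It is not the authors' implementation: the joint determination of the representation from both the prior image and the PET data is omitted, and the system matrix `P`, array shapes, and all parameter values are assumptions.

```python
# Minimal sketch (not the authors' code) of MLEM on a supervoxel basis
# derived from an anatomical prior image. Shapes and parameters are
# illustrative assumptions.
import numpy as np
from skimage.segmentation import slic  # SLIC supervoxels (scikit-image)

def supervoxel_basis(prior_image, n_segments=200):
    """Build a sparse representation B (voxels x supervoxels) from the
    anatomical prior: each column is the indicator of one supervoxel."""
    labels = slic(prior_image, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)                 # grayscale prior image
    labels = np.unique(labels.ravel(), return_inverse=True)[1]  # 0..K-1
    n_vox, n_sv = labels.size, labels.max() + 1
    B = np.zeros((n_vox, n_sv))
    B[np.arange(n_vox), labels] = 1.0
    return B

def mlem_on_basis(y, P, B, n_iter=50, eps=1e-12):
    """EM updates for coefficients a in the model y ~ Poisson(P @ B @ a).
    Since P @ B is itself a linear forward operator, the standard MLEM
    update applies in closed form to the supervoxel coefficients."""
    PB = P @ B                         # combined forward model
    sens = PB.sum(axis=0) + eps        # sensitivity: backprojection of ones
    a = np.ones(B.shape[1])            # non-negative initialisation
    for _ in range(n_iter):
        ratio = y / (PB @ a + eps)     # measured / estimated projections
        a *= (PB.T @ ratio) / sens     # multiplicative EM update
    return B @ a                       # coefficients mapped back to voxels
```

In this sketch each voxel belongs to exactly one supervoxel, so the reconstructed image is piecewise constant over the supervoxels; softer or overlapping bases, and a representation refined by the PET data itself, would be needed to approach the behaviour reported in the paper.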

Keywords

PET · Image reconstruction · Image prior · Supervoxels · EM


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Jieqing Jiao (1)
  • Pawel Markiewicz (1)
  • Ninon Burgos (1)
  • David Atkinson (2)
  • Brian Hutton (3)
  • Simon Arridge (4)
  • Sebastien Ourselin (1, 5)

  1. Translational Imaging Group, CMIC, University College London, London, UK
  2. Centre for Medical Imaging, University College London, London, UK
  3. Institute of Nuclear Medicine, University College London, London, UK
  4. Centre for Medical Image Computing, University College London, London, UK
  5. Dementia Research Centre, Institute of Neurology, University College London, London, UK
