
Sparse-View CT Reconstruction Based on Improved Residual Network

  • Yufei Qian (corresponding author)
  • Shipeng Xie
  • Wenqin Zhuang
  • Haibo Li
Conference paper
Part of the Mechanisms and Machine Science book series (Mechan. Machine Science, volume 75)

Abstract

With the development of CT imaging, the demands on the quality of reconstructed CT images have grown. It is desirable to use an X-ray dose as low as reasonably achievable while still meeting imaging-quality requirements, and sparse-view reconstruction is an effective way to address this radiation-dose problem. Because the angular range of the projection data does not satisfy the data completeness condition, however, sparse-view reconstruction has long been a difficult problem in CT image reconstruction. In this paper, we introduce a new sparse-view CT reconstruction algorithm based on a residual network. We optimize the traditional residual model by simplifying superfluous modules and reducing unnecessary computation. Compared with several other classical methods, our network achieves better experimental results in terms of artifact reduction, feature preservation, and computational speed.
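The abstract does not specify the exact architecture, but as a rough illustration of the kind of simplification described above, the following is a minimal PyTorch sketch of a residual block with its batch-normalization layers removed, a common way to strip superfluous modules and cut per-block computation in residual networks. The class name, channel count, and input size are illustrative assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn


class SimplifiedResidualBlock(nn.Module):
    """Stripped-down residual block: two 3x3 convolutions with a skip
    connection and no batch normalization (illustrative assumption,
    not the paper's exact block)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Learn only the residual component and add it back to the input.
        return x + self.conv2(self.relu(self.conv1(x)))


if __name__ == "__main__":
    # Dummy 64-channel feature map for a 256x256 CT slice.
    block = SimplifiedResidualBlock(channels=64)
    features = torch.randn(1, 64, 256, 256)
    print(block(features).shape)  # torch.Size([1, 64, 256, 256])
```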

Keywords

CT image reconstruction · Sparse-view · Residual network


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Yufei Qian (1) (corresponding author)
  • Shipeng Xie (1)
  • Wenqin Zhuang (1)
  • Haibo Li (1, 2)

  1. College of Telecommunications & Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, China
  2. Department of Media Technology and Interaction Design, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
