Multi-Modal Super-Resolution with Deep Guided Filtering

  • Bernhard Stimpel
  • Christopher Syben
  • Franziska Schirrmacher
  • Philip Hoelter
  • Arnd Dörfler
  • Andreas Maier
Conference paper
Part of the Informatik aktuell book series (INFORMAT)

Abstract

Despite visually appealing results, most deep learning-based super-resolution approaches lack the comprehensibility that is required for medical applications. We propose a modified version of the locally linear guided filter for super-resolution in medical imaging. The guidance map itself is learned end-to-end from multi-modal inputs, while the actual image data is processed only with known operators. This ensures comprehensibility of the results and simplifies the implementation of guarantees. We demonstrate our approach on multi-modal MR data and on cross-modal CT and MR data. For both datasets, it clearly outperforms bicubic upsampling. For four-fold upsampling, given an image of the respective other modality at full resolution, we achieve SSIMs of up to 0.99 for projection images and up to 0.98 for slice image data. In addition, learning the guidance map end-to-end considerably improves the quality of the results.
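The guided filter used here is a fixed, known operator: within each local window, the output is modeled as an affine function of the guidance image, and only the guidance map is produced by a learned network. The following NumPy sketch illustrates guided joint upsampling in this spirit. It assumes a full-resolution guidance map is already available (in the paper it would be predicted from the other modality); the function name guided_upsample and the parameters r and eps are illustrative choices, not the authors' implementation.

    from scipy.ndimage import uniform_filter, zoom

    def box(x, r):
        # Mean over a (2r + 1) x (2r + 1) window, reflecting at the borders.
        return uniform_filter(x, size=2 * r + 1, mode="reflect")

    def guided_upsample(p_lr, g_lr, g_hr, r=4, eps=1e-4):
        # Locally linear guided filter used for joint upsampling:
        # fit q ~= a * g + b per window at low resolution, then apply the
        # (bilinearly upsampled) coefficients to the high-resolution guidance.
        # p_lr: low-resolution image; g_lr / g_hr: guidance at low / high
        # resolution (float arrays; g_hr dimensions are assumed to be integer
        # multiples of g_lr's).
        mean_g = box(g_lr, r)
        mean_p = box(p_lr, r)
        cov_gp = box(g_lr * p_lr, r) - mean_g * mean_p
        var_g = box(g_lr * g_lr, r) - mean_g ** 2
        a = cov_gp / (var_g + eps)          # local slope
        b = mean_p - a * mean_g             # local offset
        a, b = box(a, r), box(b, r)         # smooth the coefficients
        scale = [h / l for h, l in zip(g_hr.shape, g_lr.shape)]
        a_hr = zoom(a, scale, order=1)      # bilinear upsampling
        b_hr = zoom(b, scale, order=1)
        return a_hr * g_hr + b_hr

In this construction only the guidance map is data-driven; the box filters and the local affine model remain transparent operators, which is what keeps the filtering step comprehensible.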


Copyright information

© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019

Authors and Affiliations

  • Bernhard Stimpel 1, 2
  • Christopher Syben 1, 2
  • Franziska Schirrmacher 1
  • Philip Hoelter 2
  • Arnd Dörfler 2
  • Andreas Maier 1

  1. Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  2. Department of Neuroradiology, University Clinics Erlangen, Erlangen, Germany