Journal of Real-Time Image Processing

pp 1–12

GPU-accelerated denoising of 3D magnetic resonance images

Original Research Paper

DOI: 10.1007/s11554-014-0436-8

Cite this article as:
Howison, M. & Bethel, E.W. J Real-Time Image Proc (2014). doi:10.1007/s11554-014-0436-8


Abstract

The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to give practitioners better guidance in choosing appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate parameter choices, especially for the scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
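To make the quality-metric side of the study concrete, the sketch below computes MSE against a reference volume and applies a naive, brute-force 3D bilateral filter in NumPy. This is a minimal CPU illustration, not the authors' CUDA implementation: the stencil `radius` and the two Gaussian widths `sigma_spatial` and `sigma_range` (the "scaling parameters" the abstract warns about) are hypothetical stand-ins for the parameters the paper tunes.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two volumes."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def bilateral_3d(vol, radius=1, sigma_spatial=1.0, sigma_range=0.1):
    """Naive 3D bilateral filter over a (2*radius+1)^3 stencil.

    Each voxel becomes a weighted average of its neighbors, with weights
    that fall off with both spatial distance (sigma_spatial) and intensity
    difference (sigma_range), so edges are smoothed less than flat regions.
    """
    pad = np.pad(vol, radius, mode='edge')
    out = np.zeros(vol.shape, dtype=np.float64)
    norm = np.zeros(vol.shape, dtype=np.float64)
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = pad[radius + dz: radius + dz + vol.shape[0],
                              radius + dy: radius + dy + vol.shape[1],
                              radius + dx: radius + dx + vol.shape[2]]
                w_s = np.exp(-(dz*dz + dy*dy + dx*dx) / (2.0 * sigma_spatial**2))
                w_r = np.exp(-((shifted - vol) ** 2) / (2.0 * sigma_range**2))
                w = w_s * w_r
                out += w * shifted
                norm += w
    return out / norm

# Toy demonstration on a synthetic phantom: denoising should reduce MSE
# relative to the clean reference while preserving the bright cube's edges.
rng = np.random.default_rng(0)
clean = np.zeros((16, 16, 16))
clean[4:12, 4:12, 4:12] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = bilateral_3d(noisy, radius=2, sigma_spatial=1.5, sigma_range=0.2)
assert mse(denoised, clean) < mse(noisy, clean)
```

In a CUDA port, the triple stencil loop is what each thread block evaluates over a shared-memory tile; the tile dimensions are exactly the kind of parameter the paper's autotuning study searches over.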


Keywords

MRI denoising · Non-local means · Bilateral filter · Anisotropic diffusion · CUDA

Copyright information

© Springer-Verlag Berlin Heidelberg (outside the USA) 2014

Authors and Affiliations

  1. Center for Computation and Visualization, Brown University, Providence, USA
  2. Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, USA