
Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12443)

Abstract

Uncertainty quantification in inverse medical imaging tasks with deep learning has received little attention. However, deep models trained on large data sets tend to hallucinate and create artifacts in the reconstructed output that are not anatomically present. We use a randomly initialized convolutional network as parameterization of the reconstructed image and perform gradient descent to match the observation, which is known as deep image prior. In this case, the reconstruction does not suffer from hallucinations as no prior training is performed. We extend this to a Bayesian approach with Monte Carlo dropout to quantify both aleatoric and epistemic uncertainty. The presented method is evaluated on the task of denoising different medical imaging modalities. The experimental results show that our approach yields well-calibrated uncertainty. That is, the predictive uncertainty correlates with the predictive error. This allows for reliable uncertainty estimates and can tackle the problem of hallucinations and artifacts in inverse medical imaging tasks.

Keywords

Variational inference · Hallucination · Deep learning

1 Introduction

Noise in medical imaging affects all modalities, including X-ray, magnetic resonance imaging (MRI), computed tomography (CT), ultrasound (US), and optical coherence tomography (OCT), and can obscure important details for medical diagnosis [1, 7, 16]. Besides “classical” approaches with linear and non-linear filters, such as the Wiener filter, or wavelet denoising [3, 22], convolutional neural networks (CNNs) have proven to yield superior performance in denoising natural and medical images [16, 28].

The task of denoising is an inverse imaging problem and aims at reconstructing a clean image \( \hat{\textit{\textbf{x}}} \) from a noisy observation \( \tilde{\textit{\textbf{x}}} = \textit{\textbf{c}} \circ \textit{\textbf{x}} \). A common assumption of the noise model \( \textit{\textbf{c}} \) of the image \( \tilde{\textit{\textbf{x}}} \) is additive white Gaussian noise with zero mean and standard deviation \( \sigma \) [23, 28]. Given a noisy image \( \tilde{\textit{\textbf{x}}} \), denoising can be expressed as an optimization problem of the form
$$\begin{aligned} \hat{\textit{\textbf{x}}} = \mathop {\text {arg min}}\limits _{\hat{\textit{\textbf{x}}}} \Big \{ \mathcal {L}(\tilde{\textit{\textbf{x}}}, \hat{\textit{\textbf{x}}}) + \lambda \mathcal {R}(\hat{\textit{\textbf{x}}}) \Big \} ~ . \end{aligned}$$
(1)
The reconstruction \( \hat{\textit{\textbf{x}}} \) should be close to \( \tilde{\textit{\textbf{x}}} \) by means of a similarity metric \( \mathcal {L} \), but with substantially less noise. The regularizer \( \mathcal {R} \) expresses a prior on the reconstructed images, which leads to \( \hat{\textit{\textbf{x}}} \) having less noise than \( \tilde{\textit{\textbf{x}}} \). One usually imposes a smoothness constraint by penalizing first- or higher-order spatial derivatives of the image [24]. More recently, denoising autoencoders have successfully been used to implicitly learn a regularization prior from a data set with corrupted and uncorrupted data samples [11]. Autoencoders are usually composed of an encoding and a decoding part with a data bottleneck in between. The encoder extracts important visual features from the noisy input image and the decoder reconstructs the input from the extracted features using learned image statistics.
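To make Eq. (1) concrete, the following sketch minimizes the criterion by plain gradient descent with a first-order smoothness regularizer. The function, its parameters, and the particular choice of regularizer are our own illustration, not the method of this paper:

```python
import numpy as np

def denoise(x_noisy, lam=1.0, lr=0.05, steps=200):
    """Minimize ||x_hat - x_noisy||^2 + lam * R(x_hat) by gradient descent,
    where R penalizes squared differences of neighboring pixels."""
    x_hat = x_noisy.copy()
    for _ in range(steps):
        # Gradient of the data-fidelity term L.
        grad = 2.0 * (x_hat - x_noisy)
        # Gradient of the smoothness regularizer R (discrete Laplacian).
        lap = (np.roll(x_hat, 1, 0) + np.roll(x_hat, -1, 0)
               + np.roll(x_hat, 1, 1) + np.roll(x_hat, -1, 1) - 4.0 * x_hat)
        grad -= 2.0 * lam * lap
        x_hat -= lr * grad
    return x_hat
```

The weight \( \lambda \) trades noise suppression against loss of detail; a value that is too large blurs anatomical edges.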
This, however, creates the root problem of medical image denoising with deep learning that is addressed in this paper. The reconstruction is in accordance with the expectation of the denoising autoencoder based on previously learned information. At worst, the reconstruction can contain false image features that look like valid features but are not actually present in the input image. Due to the excellent denoising performance of autoencoders, those false features can be indistinguishable from valid features to a layperson and are embedded in an otherwise visually appealing image. This phenomenon is known as hallucination and, while acceptable in the reconstruction of natural images [25], must be avoided at all costs in medical imaging (see Fig. 1). Hallucinations can lead to false diagnoses and thus severely compromise patient safety.
Fig. 1.

Hallucinations in a reconstructed retinal OCT scan from a CNN trained with supervision. (Left) Ground truth OCT scan. (Right) The white arrow denotes a hallucinated retinal layer that is anatomically incorrect. Hallucinations are the result of reconstructing an unseen noisy input using previously learned image statistics.

To further increase the reliability in the denoised medical images, the reconstruction uncertainty has to be considered. Bayesian autoencoders provide the mathematical framework to quantify a per-pixel reconstruction uncertainty [2, 4, 14]. This allows the detection of hallucinations and other artifacts, given that the uncertainty is well-calibrated; i. e. the uncertainty corresponds well with the reconstruction error [15].

In this work, we employ deep image prior [18] to cope with hallucinations in medical image denoising and provide a Bayesian approach with Monte Carlo (MC) dropout [6] that yields well-calibrated reconstruction uncertainty. We present experimental results on denoising images from low-dose X-ray, ultrasound and OCT. Compared to previous work, our approach leads to better uncertainty estimates and is less prone to overfitting of the noisy image. Our code is publicly available at github.com/mlaves/uncertainty-deep-image-prior.

2 Related Work

Image Priors. Besides manually crafted priors such as 3D collaborative filtering [5], convolutional denoising autoencoders have been used to implicitly learn an image prior from data [7, 11]. Lempitsky et al. have recently shown that the excellent performance of deep networks for inverse imaging tasks, such as denoising, is based not only on their ability to learn image priors from data, but also on the structure of a convolutional image generator itself [18]. An image generator network \( \hat{\textit{\textbf{x}}} = \textit{\textbf{f}}_{\varvec{\theta }}(\textit{\textbf{z}}) \) with randomly initialized parameters \( \varvec{\theta } \) is interpreted as a parameterization of the image. The parameters \( \varvec{\theta } \) of the network are found by minimizing the pixel-wise squared error \( \Vert \tilde{\textit{\textbf{x}}} - \textit{\textbf{f}}_{\varvec{\theta }}(\textit{\textbf{z}}) \Vert ^{2} \) with stochastic gradient descent (SGD). The input \( \textit{\textbf{z}} \) is sampled from a uniform distribution with additional perturbations by normally distributed noise in every iteration. This is referred to as deep image prior (DIP). They provided empirical evidence that the structure of a CNN alone is sufficient to capture enough image statistics to provide state-of-the-art performance in inverse imaging tasks. During the process of SGD, low-frequency image features are reconstructed first, followed by higher frequencies, which makes human supervision necessary to retrieve the optimal denoised image. Therefore, this approach heavily relies on early stopping in order to not overfit the noise. However, a key advantage of deep image prior is the absence of hallucinations, since there is no prior learning. A Bayesian approach could alleviate overfitting and additionally provide reconstruction uncertainty.

Bayesian Deep Learning. Bayesian neural networks allow estimation of predictive uncertainty [2] and we generally differentiate between aleatoric and epistemic uncertainty [12]. Aleatoric uncertainty results from noise in the data (e. g. speckle noise in US or OCT). It is derived from the conditional log-likelihood under the maximum likelihood estimation (MLE) or maximum posterior (MAP) framework and can be captured directly by a deep network (i. e. by subdividing the last layer of an image generator network). Epistemic uncertainty is caused by uncertainty in the model parameters. In deep learning, we usually perform MLE or MAP inference to find a single best estimate \( \hat{\varvec{\theta }} \) for the network parameters. This does not allow estimation of epistemic uncertainty and we therefore place distributions over the parameters. In Bayesian inference, we want to consider all possible parameter configurations, weighted by their posterior. Computing the posterior predictive distribution involves marginalization of the parameters \( \varvec{\theta } \), which is intractable. A common approximation of the posterior distribution is variational inference with Monte Carlo dropout [6]. It allows estimation of epistemic uncertainty by Monte Carlo sampling from the posterior of a network that has been trained with dropout.

Bayesian Deep Image Prior. Cheng et al. recently provided a Bayesian perspective on the deep image prior in the context of natural images, which is most related to our work [4]. They interpret the convolutional network as spatial random process over the image coordinate space and use stochastic gradient Langevin dynamics (SGLD) as Bayesian approximation [26] to sample from the posterior. In SGLD, an MC sampler is derived from SGD by injecting Gaussian noise into the gradients after each backward pass. The authors claim to have solved the overfitting issue with DIP and to be able to provide uncertainty estimates. In the following, we will show that this is not the case for medical image denoising, even when using the code provided by the authors. Further, the uncertainty estimates from SGLD do not reflect the predictive error with respect to the noise-free ground truth image.

3 Methods

3.1 Aleatoric Uncertainty with Deep Image Prior

We first revisit the concept of deep image prior for denoising and subsequently extend it to a Bayesian approach with Monte Carlo dropout to estimate both aleatoric and epistemic uncertainty. Let \( \tilde{\textit{\textbf{x}}} \) be a noisy image, \( \textit{\textbf{x}} \) the true but generally unknown noise-free image, and \( \textit{\textbf{f}}_{\varvec{\theta }} \) an image generator network with parameter set \( \varvec{\theta } \) that outputs the denoised image \( \hat{\textit{\textbf{x}}} \). In deep image prior, the optimal parameter point estimate \( \hat{\varvec{\theta }} \) is found by maximum likelihood estimation with gradient descent, which results in minimizing the squared error
$$\begin{aligned} \hat{\varvec{\theta }} = \mathop {\text {arg min}}\limits _{\varvec{\theta }} \Vert \tilde{\textit{\textbf{x}}} - \textit{\textbf{f}}_{\varvec{\theta }}(\textit{\textbf{z}}) \Vert ^{2} \end{aligned}$$
(2)
between the generated image \( \textit{\textbf{f}}_{\varvec{\theta }} \) and the noisy image \( \tilde{\textit{\textbf{x}}} \). The input \( \textit{\textbf{z}} \sim \mathcal {U}(0, 0.1) \) of the neural network has the same spatial dimensions as \( \tilde{\textit{\textbf{x}}} \) and is sampled from a uniform distribution. To ensure that \( \hat{\textit{\textbf{x}}} \) has less noise, carefully chosen early stopping must be applied (see Sect. 5).
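As an illustration, the optimization of Eq. (2) can be sketched in a few lines of PyTorch. The generator below is a deliberately tiny stand-in; the layer sizes, learning rate, and iteration count are our own choices and not those of the paper:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in for a noisy scan; the experiments use 256 x 256 images.
x_tilde = torch.rand(1, 1, 16, 16)

# A deliberately small convolutional generator f_theta; the published
# DIP architecture is a much deeper encoder-decoder network.
f = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

# Fixed input z ~ U(0, 0.1) with the same spatial size as x_tilde.
z = 0.1 * torch.rand(1, 1, 16, 16)

opt = torch.optim.Adam(f.parameters(), lr=1e-2)
losses = []
for _ in range(200):
    opt.zero_grad()
    # Eq. (2): squared error between generated and noisy image,
    # with the input perturbed by Gaussian noise in every iteration.
    loss = ((x_tilde - f(z + 0.01 * torch.randn_like(z))) ** 2).mean()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Run long enough, this loop would eventually fit the noise as well, which is exactly the overfitting behavior discussed in Sect. 5.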
To quantify aleatoric uncertainty, we assume that the image signal \( \tilde{\textit{\textbf{x}}} \) is sampled from a spatial random process and that each pixel i follows a Gaussian distribution \( \mathcal {N}(\tilde{x}_{i}; \hat{x}_{i}, \hat{\sigma }^{2}_{i}) \) with mean \( \hat{x}_{i} \) and variance \( \hat{\sigma }^{2}_{i} \). We split the last layer such that the network outputs these values for each pixel
$$\begin{aligned} \textit{\textbf{f}}_{\varvec{\theta }} = \left[ \hat{\textit{\textbf{x}}}, \hat{\varvec{\sigma }}^{2} \right] ~ . \end{aligned}$$
(3)
Now, MLE is performed by minimizing the full negative log-likelihood, which leads to the following optimization criterion [12, 15]
$$\begin{aligned} \mathcal {L}(\varvec{\theta }) = \frac{1}{N}\sum _{i=1}^{N} \hat{\sigma }_{i}^{-2} \big \Vert \tilde{x}_{i} - \hat{x}_{i} \big \Vert ^{2} + \log \hat{\sigma }_{i}^{2} ~ , \end{aligned}$$
(4)
where N is the number of pixels per image. In this case, \( \hat{\varvec{\sigma }}^{2} \) captures the pixel-wise aleatoric uncertainty and is jointly estimated with \( \hat{\textit{\textbf{x}}} \) by finding the \( \varvec{\theta } \) that minimizes Eq. (4) with SGD. For numerical stability, Eq. (4) is implemented such that the network directly outputs \( -\log \hat{\varvec{\sigma }}^{2} \).
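Eq. (4) can be written directly in terms of the network's raw output. The sketch below (function and argument names are our own) takes \( s = -\log \hat{\sigma }^{2} \) as input, matching the stability trick just described, so that \( \hat{\sigma }^{-2} = \exp (s) \):

```python
import numpy as np

def gaussian_nll(x_tilde, x_hat, s):
    """Eq. (4) with the network emitting s = -log(sigma^2) directly,
    so sigma^{-2} = exp(s) and log(sigma^2) = -s."""
    return np.mean(np.exp(s) * (x_tilde - x_hat) ** 2 - s)
```

Per pixel, this loss is minimized exactly when \( \hat{\sigma }^{2} \) equals the squared error, which is what makes the estimated variance interpretable as aleatoric uncertainty.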

3.2 Epistemic Uncertainty with Bayesian Deep Image Prior

Next, we move towards a Bayesian view to additionally quantify the epistemic uncertainty. The image generator \( \textit{\textbf{f}}_{\varvec{\theta }} \) is extended into a Bayesian neural network under the variational inference framework with MC dropout [6]. A prior distribution \( p(\varvec{\theta }) = \mathcal {N}(\textit{\textbf{0}}, \lambda ^{-1} \textit{\textbf{I}}) \) is placed over the parameters and the network \( \textit{\textbf{f}}_{\tilde{\varvec{\theta }}} \) is trained with dropout by minimizing Eq. (4) with added weight decay. For inference, T stochastic forward passes with applied dropout are performed to sample from the approximate Bayesian posterior \( \tilde{\varvec{\theta }} \sim q(\varvec{\theta }) \). This allows us to approximate the posterior predictive distribution
$$\begin{aligned} p(\hat{\textit{\textbf{x}}} \vert \tilde{\textit{\textbf{x}}}) = \int p(\hat{\textit{\textbf{x}}} \vert \varvec{\theta }, \tilde{\textit{\textbf{x}}}) p(\varvec{\theta } \vert \tilde{\textit{\textbf{x}}}) \, \mathrm {d}\varvec{\theta } ~ , \end{aligned}$$
(5)
which is wider than the distribution from MLE or MAP, as it accounts for uncertainty in \( \varvec{\theta } \). We use Monte Carlo integration to estimate the predictive mean
$$\begin{aligned} \hat{\textit{\textbf{x}}} = \frac{1}{T} \sum _{t=1}^{T} \hat{\textit{\textbf{x}}}_{t} \end{aligned}$$
(6)
and predictive variance [12, 15]
$$\begin{aligned} \hat{\varvec{\sigma }}^{2} = \underbrace{ \frac{1}{T} \sum _{t=1}^{T} \left( \hat{\textit{\textbf{x}}}_{t} - \frac{1}{T} \sum _{t=1}^{T} \hat{\textit{\textbf{x}}}_{t} \right) ^{2}}_{\mathrm {epistemic}} + \underbrace{ \frac{1}{T} \sum _{t=1}^{T} \hat{\varvec{\sigma }}^{2}_{t} }_{\mathrm {aleatoric}} \end{aligned}$$
(7)
with \( \textit{\textbf{f}}_{\tilde{\varvec{\theta }}_t} = [ \varvec{\hat{x}}_{t}, \varvec{\hat{\sigma }}^{2}_{t} ] \). In this work, we use \( T=25 \) MC samples with dropout probability of \( p = 0.3 \). The resulting \( \hat{\textit{\textbf{x}}} \) is used as estimation of the noise-free image and \( \hat{\varvec{\sigma }}^{2} \) is used as uncertainty map. We use the mean over the pixel coordinates as scalar uncertainty value U.
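Eqs. (6) and (7) amount to simple averages over the T stochastic forward passes. A minimal sketch (the array shapes and names are our own):

```python
import numpy as np

def predictive_moments(x_hat_samples, sigma2_samples):
    """Eqs. (6)-(7): combine T stochastic forward passes, given as arrays
    of shape (T, H, W), into the predictive mean and the predictive
    variance (epistemic + aleatoric)."""
    x_hat = x_hat_samples.mean(axis=0)                       # Eq. (6)
    epistemic = ((x_hat_samples - x_hat) ** 2).mean(axis=0)  # Eq. (7), 1st term
    aleatoric = sigma2_samples.mean(axis=0)                  # Eq. (7), 2nd term
    return x_hat, epistemic + aleatoric
```

If all T samples agree, the epistemic term vanishes and only the aleatoric uncertainty remains, which matches the decomposition in Eq. (7).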

3.3 Calibration of Uncertainty

Following recent literature, we define predictive uncertainty to be well-calibrated if it correlates linearly with the predictive error [8, 15, 19]. More formally, miscalibration is quantified with
$$\begin{aligned} \mathbb {E}_{\hat{\sigma }^{2}} \left[ \big \vert \big ( \Vert \tilde{\textit{\textbf{x}}} - \hat{\textit{\textbf{x}}} \Vert ^{2} \, \big \vert \, \hat{\sigma }^{2} = \sigma ^{2} \big ) - \sigma ^{2} \big \vert \right] \quad {{\,\mathrm{\forall }\,}}\left\{ \sigma ^{2} \in \mathbb {R} \, \vert \, \sigma ^{2} \ge 0 \right\} ~ . \end{aligned}$$
(8)
That is, if all pixels in a batch were estimated with uncertainty of 0.2, we expect the predictive error (MSE) to also equal 0.2. To approximate Eq. (8) on an image with finite pixels, we use the uncertainty calibration error (UCE) metric presented in [15], which involves binning the uncertainty values and computing a weighted average of absolute differences between MSE and uncertainty per bin.
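A sketch of the UCE computation just described (the binning scheme and names are our own; see [15] for the exact definition):

```python
import numpy as np

def uce(sq_err, uncertainty, n_bins=10):
    """Bin pixels by uncertainty, then average |per-bin MSE - per-bin
    mean uncertainty|, weighted by the fraction of pixels per bin."""
    edges = np.linspace(uncertainty.min(), uncertainty.max(), n_bins + 1)
    # Assign each pixel to one bin; values at the top edge go to the last bin.
    bins = np.clip(np.digitize(uncertainty, edges) - 1, 0, n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            weight = mask.sum() / sq_err.size
            total += weight * abs(sq_err[mask].mean()
                                  - uncertainty[mask].mean())
    return total
```

Perfectly calibrated uncertainty (per-bin MSE equal to per-bin mean uncertainty) yields a UCE of zero; a constant over- or underestimation shows up directly in the value.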
Fig. 2.

Images used to evaluate the denoising performance. The task is to reconstruct a noise-free image from \( \tilde{\textit{\textbf{x}}} \) without having access to \( \textit{\textbf{x}} \). OCT and US images are characterized by speckle noise which can be simulated by additive Gaussian noise. Low-dose X-ray shows uneven photon density that can be simulated by Poisson noise.

4 Experiments

We refer to the presented Bayesian approach to deep image prior with Monte Carlo dropout as MCDIP and evaluate its denoising performance and the calibration of uncertainty on three different medical imaging modalities (see Fig. 2). The first test image \( \textit{\textbf{x}}_{\mathrm {OCT}} \) shows an OCT scan of a retina affected by choroidal neovascularization. Next, \( \textit{\textbf{x}}_{\mathrm {US}} \) shows an ultrasound of a fetal head for gestational age estimation. The third test image \( \textit{\textbf{x}}_{\mathrm {xray}} \) shows a chest X-ray for pneumonia assessment. All test images are arbitrarily sampled from public data sets [9, 13] and have a resolution of \( 512 \times 512 \) pixels.

Images from optical coherence tomography and ultrasound are prone to speckle noise due to interference phenomena [21]. Speckle noise can obscure small anatomical details and reduce image contrast. It is worth mentioning that speckle patterns also contain information about the microstructure of the tissue. However, this information is not perceptible to a human observer, therefore the denoising of such images is desirable. Noise in low-dose X-ray originates from an uneven photon density and can be modeled with Poisson noise [17, 27]. In this work, we approximate the Poisson noise with Gaussian noise since \( \mathsf {Poisson}(\lambda ) \) approaches a normal distribution as \( \lambda \rightarrow \infty \) (see Appendix A.5). We first create a low-noise image \( \textit{\textbf{x}} \) by smoothing and downsampling the original image to \( 256 \times 256 \) pixels using the ANTIALIAS filter from the Python Imaging Library (PIL). Downsampling involves averaging over highly correlated neighboring pixels affected by uncorrelated noise. This decreases the observation noise by sacrificing image resolution (see Appendix A.4). The downsampled image acts as ground truth to which we compute the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) of the denoised image \( \hat{\textit{\textbf{x}}} \). Further, we compute the UCE and provide calibration diagrams (MSE vs. uncertainty) to show the (mis-)calibration of the uncertainty estimates.
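The Gaussian approximation of the Poisson noise can be checked numerically: for large \( \lambda \), samples from \( \mathsf {Poisson}(\lambda ) \) have mean and variance both close to \( \lambda \), as \( \mathcal {N}(\lambda , \lambda ) \) would. A small sketch (the sample count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1000.0  # expected photon count per detector element
counts = rng.poisson(lam, size=100_000)

# Poisson(lam) approaches N(lam, lam) for large lam, so the empirical
# mean and variance of the counts should both be close to lam.
mean, var = counts.mean(), counts.var()
```

This also illustrates the signal-dependence of the noise: brighter regions (larger \( \lambda \)) carry proportionally larger noise variance.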

We compare the results from MCDIP to standard DIP and to DIP with SGLD from Cheng et al. [4]. SGLD posterior inference is performed by averaging over T posterior samples \( \hat{\textit{\textbf{x}}} = \frac{1}{T} \sum _{t=1}^{T} \hat{\textit{\textbf{x}}}_{t} \) after a “burn in” phase. The posterior variance is used as an estimator of the epistemic uncertainty \( \frac{1}{T} \sum _{t=1}^{T} \left( \hat{\textit{\textbf{x}}} - \hat{\textit{\textbf{x}}}_{t} \right) ^{2} \). Cheng et al. claim that their approach does not require early stopping and yields better denoising performance. Additionally, we train the SGLD approach with the loss function from Eq. (4) to consider aleatoric uncertainty and denote this with SGLD+NLL. We implement SGLD using the Adam optimizer, which works better in practice and is more related to preconditioned SGLD [20].
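For reference, the SGLD update can be sketched on a toy one-dimensional posterior (function names and the constant step size are our own simplifications; the original method [26] decays the step size over time):

```python
import numpy as np

def sgld_samples(grad_log_p, theta0, eps, n_steps, seed=0):
    """Plain SGLD: each update adds eps/2 times the gradient of the
    log-posterior plus injected Gaussian noise with variance eps."""
    rng = np.random.default_rng(seed)
    theta, samples = theta0, []
    for _ in range(n_steps):
        theta = theta + 0.5 * eps * grad_log_p(theta) \
            + np.sqrt(eps) * rng.standard_normal()
        samples.append(theta)
    return np.array(samples)
```

On a standard normal target (gradient of the log-density is \( -\theta \)), the chain's samples after burn-in have variance close to one, which is the sense in which the injected gradient noise turns SGD into an approximate posterior sampler.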

5 Results

The results are presented threefold: We show (1) possible overfitting in Fig. 3 by plotting the PSNR between the reconstruction \( \hat{\textit{\textbf{x}}} \) and the ground truth image \( \textit{\textbf{x}} \); (2) denoising performance by providing the denoised images in Fig. 4 and PSNR in Table 1 after convergence (i. e. after 50k optimizer steps); and (3) goodness of uncertainty in Fig. 5 by providing calibration diagrams and uncertainty maps.

Our experiments confirm what is already known: The non-Bayesian DIP quickly overfits the noisy image. The narrow peaks in PSNR values during optimization show that manually performed early stopping is essential to obtain a reconstructed image with less noise (see Fig. 3). The PSNR between \( \hat{\textit{\textbf{x}}} \) and the ground truth \( \textit{\textbf{x}} \) approaches the value of the PSNR between the noisy image \( \tilde{\textit{\textbf{x}}} \) and the ground truth, thus reconstructing the noise as well. However, the SGLD approach shows almost identical overfitting behavior in our experiments. This is in contrast to what is stated by Cheng et al., even when using the original implementation of SGLD provided by the authors [4]. SGLD+NLL additionally considers aleatoric uncertainty and converges to a higher PSNR level. This indicates that SGLD+NLL does not overfit the noisy image completely. MCDIP on the other hand does not show a sharp peak in Fig. 3 and safely converges to its highest PSNR value. This requires no manual early stopping to obtain a denoised image. The reconstructed X-ray images after convergence in Fig. 4 underline this: MCDIP does not reconstruct the noise. The PSNR values in Table 1 confirm these observations. Although it was not the intention of this work to reach highest-possible PSNR values, MCDIP even outperforms the other methods with early stopping applied (see Appendix A.2).
Fig. 3.

Peak signal-to-noise ratio between denoised image \( \hat{\textit{\textbf{x}}} \) and ground truth \( \textit{\textbf{x}} \) vs. number of optimizer iterations. DIP and SGLD(+NLL) quickly overfit the noisy image. MCDIP converges to its highest PSNR value and does not overfit \( \tilde{\textit{\textbf{x}}} \). The plots show means from 3 runs with different random initialization.

Fig. 4.

Denoised X-ray images after convergence. Only MCDIP does not show overfitted noise. Additional reconstructions can be found in Appendix A.1.

Table 1.

PSNR values after convergence (at least 50k iterations). Note that our goal was not to reach the highest possible PSNR, but to show overfitting in convergence.

| PSNR | DIP | SGLD | SGLD+NLL | MCDIP |
| --- | --- | --- | --- | --- |
| OCT | \( 23.64 \pm 0.19 \) | \( 23.58 \pm 0.12 \) | \( 24.82 \pm 0.12 \) | \( \mathbf {29.88} \pm 0.03 \) |
| US | \( 23.55 \pm 0.11 \) | \( 23.81 \pm 0.15 \) | \( 24.55 \pm 0.08 \) | \( \mathbf {29.67} \pm 0.07 \) |
| X-ray | \( 23.28 \pm 0.08 \) | \( 23.50 \pm 0.12 \) | \( 24.60 \pm 0.04 \) | \( \mathbf {31.19} \pm 0.10 \) |

Fig. 5.

Calibration diagrams and uncertainty maps for SGLD+NLL with early stopping and MCDIP after convergence on the X-ray image (best viewed with digital zoom). (Left) The calibration diagrams show MSE vs. uncertainty and provide mean uncertainty (U) and UCE values. (Right) Uncertainty maps show per-pixel uncertainty.

The calibration diagrams and corresponding UCE values in Fig. 5 suggest that SGLD+NLL is better calibrated than MCDIP. However, due to overfitting the noisy image without early stopping, the MSE from SGLD+NLL concentrates around 0.0, which results in low UCE values. On the US and OCT image, the uncertainty from SGLD+NLL collapses to a single bin in the calibration diagram and does not allow reasoning about the validity of the reconstructed image (see Fig. 9 in Appendix A.1). The uncertainty map from MCDIP shows high uncertainty at edges in the image and the mean uncertainty value (denoted by U) is close to the noise level in all three test images.

6 Discussion and Conclusion

In this paper, we provided a new Bayesian approach to the deep image prior. We used variational inference with Monte Carlo dropout and the full negative log-likelihood to quantify both epistemic and aleatoric uncertainty. The presented approach is applied to medical image denoising of three different modalities and provides state-of-the-art performance in denoising with deep image prior. Our Bayesian treatment does not need carefully applied early stopping and yields well-calibrated uncertainty. We observe the estimated mean uncertainty value to be close to the noise level of the images.

The question remains why Bayesian deep image prior with SGLD does not work as well as expected and is outperformed by MC dropout. First, SGLD as described by Welling and Teh requires a strong decay of the step size to ensure convergence to a mode of the posterior [26]. Cheng et al. did not implement this and we followed their approach [4]. After implementing the described step size decay, SGLD did not overfit the noisy image (see Appendix A.3). However, this requires a carefully chosen step size decay, which is equivalent to early stopping.

The deep image prior framework is especially interesting in medical imaging as it does not require supervised training and thus does not suffer from hallucinations and other artifacts. The presented approach can further be applied to deformable registration or other inverse image tasks in the medical domain.

References

  1. Agostinelli, F., Anderson, M.R., Lee, H.: Adaptive multi-column deep neural networks with application to robust image denoising. In: Advances in Neural Information Processing Systems, pp. 1493–1501 (2013)
  2. Bishop, C.M.: Pattern Recognition and Machine Learning. Springer, Boston (2006). https://doi.org/10.1007/978-1-4615-7566-5
  3. Chang, S.G., Yu, B., Vetterli, M.: Adaptive wavelet thresholding for image denoising and compression. IEEE Trans. Image Process. 9(9), 1532–1546 (2000). https://doi.org/10.1109/83.862633
  4. Cheng, Z., Gadelha, M., Maji, S., Sheldon, D.: A Bayesian perspective on the deep image prior. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5443–5451 (2019)
  5. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007). https://doi.org/10.1109/TIP.2007.901238
  6. Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In: ICML, pp. 1050–1059 (2016)
  7. Gondara, L.: Medical image denoising using convolutional denoising autoencoders. In: International Conference on Data Mining Workshops, pp. 241–246 (2016). https://doi.org/10.1109/ICDMW.2016.0041
  8. Guo, C., Pleiss, G., Sun, Y., Weinberger, K.Q.: On calibration of modern neural networks. In: ICML, pp. 1321–1330 (2017)
  9. van den Heuvel, T.L., de Bruijn, D., de Korte, C.L., van Ginneken, B.: Automated measurement of fetal head circumference using 2D ultrasound images. PLoS One 13(8), e0200412 (2018). https://doi.org/10.1371/journal.pone.0200412
  10. Hogg, R.V., McKean, J., Craig, A.T.: Introduction to Mathematical Statistics, 8th edn. Pearson, New York (2018)
  11. Jain, V., Seung, S.: Natural image denoising with convolutional networks. In: Advances in Neural Information Processing Systems, pp. 769–776 (2009)
  12. Kendall, A., Gal, Y.: What uncertainties do we need in Bayesian deep learning for computer vision? In: NeurIPS, pp. 5574–5584 (2017)
  13. Kermany, D.S., et al.: Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172(5), 1122–1131 (2018). https://doi.org/10.1016/j.cell.2018.02.010
  14. Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. In: ICLR (2014)
  15. Laves, M.H., Ihler, S., Fast, J.F., Kahrs, L.A., Ortmaier, T.: Well-calibrated regression uncertainty in medical imaging with deep learning. In: Medical Imaging with Deep Learning (2020)
  16. Laves, M.H., Ihler, S., Kahrs, L.A., Ortmaier, T.: Semantic denoising autoencoders for retinal optical coherence tomography. In: SPIE/OSA European Conference on Biomedical Optics, vol. 11078, pp. 86–89 (2019). https://doi.org/10.1117/12.2526936
  17. Lee, S., Lee, M.S., Kang, M.G.: Poisson-Gaussian noise analysis and estimation for low-dose X-ray images in the NSCT domain. Sensors 18(4), 1019 (2018)
  18. Lempitsky, V., Vedaldi, A., Ulyanov, D.: Deep image prior. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9446–9454 (2018). https://doi.org/10.1109/CVPR.2018.00984
  19. Levi, D., Gispan, L., Giladi, N., Fetaya, E.: Evaluating and calibrating uncertainty prediction in regression tasks. arXiv:1905.11659 (2019)
  20. Li, C., Chen, C., Carlson, D., Carin, L.: Preconditioned stochastic gradient Langevin dynamics for deep neural networks. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pp. 1788–1794 (2016)
  21. Michailovich, O.V., Tannenbaum, A.: Despeckling of medical ultrasound images. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 53(1), 64–78 (2006). https://doi.org/10.1109/TUFFC.2006.1588392
  22. Rabbani, H., Nezafat, R., Gazor, S.: Wavelet-domain medical image denoising using bivariate Laplacian mixture model. IEEE Trans. Biomed. Eng. 56(12), 2826–2837 (2009). https://doi.org/10.1109/TBME.2009.2028876
  23. Salinas, H.M., Fernandez, D.C.: Comparison of PDE-based nonlinear diffusion approaches for image enhancement and denoising in optical coherence tomography. IEEE Trans. Med. Imaging 26(6), 761–771 (2007). https://doi.org/10.1109/TMI.2006.887375
  24. Sotiras, A., Davatzikos, C., Paragios, N.: Deformable medical image registration: a survey. IEEE Trans. Med. Imaging 32(7), 1153–1190 (2013). https://doi.org/10.1109/TMI.2013.2265603
  25. Wang, N., Tao, D., Gao, X., Li, X., Li, J.: A comprehensive survey to face hallucination. Int. J. Comput. Vis. 106(1), 9–30 (2014). https://doi.org/10.1007/s11263-013-0645-9
  26. Welling, M., Teh, Y.W.: Bayesian learning via stochastic gradient Langevin dynamics. In: ICML, pp. 681–688 (2011)
  27. Žabić, S., Wang, Q., Morton, T., Brown, K.M.: A low dose simulation tool for CT systems with energy integrating detectors. Med. Phys. 40(3), 031102 (2013). https://doi.org/10.1118/1.4789628
  28. Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26(7), 3142–3155 (2017). https://doi.org/10.1109/TIP.2017.2662206

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Leibniz Universität Hannover, Hanover, Germany
