Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation

Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an effective solution to this problem, as it provides a measure of confidence in the segmentation results. Current uncertainty estimation methods based on quantile regression, Bayesian neural networks, ensembles, and Monte Carlo dropout are limited by their high computational cost and inconsistency. To overcome these challenges, Evidential Deep Learning (EDL) was developed in recent work, but primarily for natural image classification, and it showed inferior segmentation results. In this paper, we propose a region-based EDL segmentation framework that can generate reliable uncertainty maps and accurate segmentation results and is robust to noise and image corruption. We used the Theory of Evidence to interpret the output of a neural network as evidence values gathered from input features. Following Subjective Logic, evidence was parameterized as a Dirichlet distribution, and predicted probabilities were treated as subjective opinions. To evaluate the performance of our model on segmentation and uncertainty estimation, we conducted quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrated the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, our proposed new framework maintained the advantages of low computational cost and easy implementation and showed the potential for clinical application.


Introduction
Automated brain tumor segmentation promises to provide more reliable measurements for cancer diagnosis and assessment, establishing new possibilities for high-throughput analysis [1]. Segmentation enables clinicians to determine tumor location, extent, and subtype. Additionally, brain tumor segmentation on longitudinal MRI scans can facilitate monitoring tumor growth or shrinkage. In current clinical practice, accurate segmentation of brain tumor regions is usually done manually by experienced radiologists, which is time-consuming and labor-intensive. Furthermore, manual labels may involve human bias, as they rely on the physician's experience and subjective decision-making. On the other hand, automated segmentation techniques can reduce labor and human bias to provide efficient, objective, and reproducible results for tumor diagnosis and monitoring.
The performance of automated brain tumor segmentation methods has grown rapidly over the past few years. This development is due to the growth of annotated datasets and the advent of deep learning models that can leverage large amounts of data [2]. Most methods are based on fully convolutional neural networks (FCN) [3], like U-Net [4] and its variants [5], for improving the performance of brain tumor segmentation. Recently, Transformers and self-attention, originating from natural language processing (NLP), have also been applied to medical image segmentation [6].
Although the segmentation results of deep neural networks are reported to be close to or comparable to human performance [7], their robustness levels are low, and concerns remain about their clinical acceptability [8]. Possible reasons include the large variability in imaging properties, such as artifacts and magnetic field strength, as well as the inherent heterogeneity of brain tumors, which are beyond the training dataset. Furthermore, human bias in dataset annotations can cause models to inherit this bias. One possible direction to alleviate the reliability problem of deep neural networks is to use uncertainty estimation. The uncertainty reflects how confidently the network predicts the class labels. Confidence studies can help identify areas of data dominated by lack of annotations (epistemic uncertainty) or noisy annotations (aleatoric uncertainty). Additional information from uncertainty estimation can be used to quantify segmentation performance or as a post-processing step to correct automatic segmentation. Clinically, uncertainty estimates can feed back potential error regions to guide or automate corrections, or be used for patient-level segmentation failure detection [1]. Therefore, reliably quantifying the uncertainty of segmentation performance is critical in clinical applications.
Popular methods for quantifying uncertainty in neural networks include quantile regression (QR) [9], Bayesian neural networks (BNN) [10][11][12], ensemble-based [13], dropout-based [14][15][16], and evidential deep learning (EDL) [17][18][19]. Simply interpreting the confidence scores of softmax/sigmoid outputs as event probabilities in a categorical distribution can lead to overconfident wrong predictions [20,21]. The classical BNN aims to capture uncertainty by learning the weight distribution of the network and approximates the integral over parameters by variational inference or Laplace approximation to estimate the posterior predictive distribution [18]. However, most BNNs are challenging to implement and train, since model parameters have to be explicitly modeled as random variables [22], which lacks scalability in both architecture and data size [12]. Hence, subsequent approaches focused on reusing the training pipeline and maintaining scalability while providing reasonable uncertainty estimates. To this end, more intuitive and simple methods, such as learning an ensemble of deterministic networks [15,21] and introducing Monte Carlo dropout [13], have been proposed for brain tumor segmentation. On the downside, ensemble-based methods need to train multiple models from scratch, which is computationally expensive, and the introduction of dropout results in inconsistent outputs [23].
On the other hand, EDL has been gradually developed in recent studies, demonstrating more promising and reliable performance in uncertainty estimation. Based on the Dempster-Shafer Evidence Theory (DST) [24], EDL uses the Dirichlet distribution to model the categorical distribution of the output given the input to the network. This class of methods produces closed-form prediction distributions and outperforms BNNs in adversarial queries and out-of-distribution uncertainty quantification [18]. Compared to ensemble-based and dropout-based methods, EDL showed more robust results with lower computational costs [25]. However, most recent works focus on natural image classification and segmentation, leaving the application of EDL to medical image segmentation to be studied further.
In this paper, we propose a region-based EDL network for accurate brain tumor segmentation.
The network learns the classification distribution by minimizing a region-based prediction error under the Dirichlet prior distribution. This enables the proposed network to provide accurate and robust segmentation results and reliable uncertainty estimates simultaneously. Our method improved the mean squared error (MSE) loss used for simple natural image classification [17], making it more suitable for semantic segmentation of medical images. The main contributions of our work can be summarized as follows:
• An EDL framework was adopted for accurate brain tumor segmentation, which can quantify the uncertainty of segmentation results and improve the reliability and robustness of segmentation.
The rest of the paper is structured as follows: Section 2 briefly introduces EDL and recent developments. Section 3 details our segmentation framework, including EDL and the novel loss functions. Section 4 illustrates the experimental setup and evaluation metrics, and the results are analyzed and discussed in Section 5. The conclusion and future research directions are given in Section 6.

Related Work
Among the many uncertainty estimation methods mentioned, our proposed framework resorts to arguably the most cutting-edge methodology, EDL, for this purpose. The rest of the section presents the principles of EDL (Subsection 2.1) and a brief overview (Subsection 2.2) of the scarce contributions in which EDL has been utilized for tumor segmentation.

Principles of EDL
Evidential Deep Learning (EDL) is based on the Dempster-Shafer Evidence Theory (DST) [24], which is a generalization of Bayesian theory to subjective probability. It assigns belief masses to subsets of a discerning frame, which represents a set of exclusive potential states, such as the possible class labels for a voxel. A belief mass can be assigned to any subset of the frame. Assigning all belief masses to the entire frame represents the opinion that the truth can be any possible state, e.g., that any label is equally likely.
The belief distribution of DST over the discerning frame can be formalized as a Dirichlet distribution through Subjective Logic (SL) [26]. For a voxel i, the Dirichlet distribution Dir(α_i) is parameterized by a vector of Dirichlet parameters α_ij over K classes, where j denotes the j-th class. (These denotations of the subscripts i and j hold for the entire paper.) The neural network collects evidence e_ij from the input data, a measure of support that favors classifying the sample into class j. The belief mass distribution, i.e., the subjective opinion, in [17] corresponds to a Dirichlet distribution with parameters α_ij = e_ij + 1.
As a result, this is equivalent to placing a Dirichlet distribution on the predicted categorical distribution, allowing a single network to output different predictions. The output layer of an EDL-based neural network parameterizes a simplex distribution representing the probability distribution of class assignments. The softmax/sigmoid classification layer is replaced with a ReLU activation layer that outputs non-negative continuous values, yielding e_ij. The vector of predicted classification probabilities can be computed by p̂_ij = α_ij / S_i, where S_i = Σ_{j=1}^{K} α_ij is called the Dirichlet strength. The class probability vector p_i for voxel i is modeled as a random vector drawn from the Dirichlet distribution [18].
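The mapping from evidence to subjective opinion described above can be sketched in a few lines (an illustrative implementation of the standard EDL parameterization with α_ij = e_ij + 1, not the authors' code; the vacuity u = K/S_i is the SL measure of overall uncertainty):

```python
# Illustrative sketch of the standard EDL parameterization for one voxel.
# evidence e_j >= 0  ->  alpha_j = e_j + 1,  S = sum_j alpha_j,
# expected probability p_j = alpha_j / S, belief b_j = e_j / S,
# vacuity (overall uncertainty) u = K / S, with sum_j b_j + u = 1.
def edl_opinion(evidence):
    K = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    S = sum(alpha)
    probs = [a / S for a in alpha]
    beliefs = [e / S for e in evidence]
    uncertainty = K / S
    return probs, beliefs, uncertainty
```

With zero evidence the opinion is vacuous: `edl_opinion([0.0, 0.0])` returns uniform probabilities and u = 1, while accumulating evidence for one class shifts belief toward it and shrinks u.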
Let y_i be the one-hot encoded label with y_ik = 1 for the correct class k and y_ij = 0 for all j ≠ k. Treating the Dirichlet distribution Dir(p_i | α_i) as a prior on the multinomial likelihood Mult(y_i | p_i), one can minimize the negative logarithm of the marginal likelihood:

L_ML,i = Σ_{j=1}^{K} y_ij (log S_i − log α_ij),

which follows from integrating out p_i with the Dirichlet normalizing constant B(α_i), where B is the multinomial beta function [27]. Alternatively, one can minimize the Bayes risk of the cross-entropy loss:

L_CE,i = Σ_{j=1}^{K} y_ij (ψ(S_i) − ψ(α_ij)),

or of the mean squared error:

L_MSE,i = Σ_{j=1}^{K} [(y_ij − p̂_ij)² + p̂_ij (1 − p̂_ij) / (S_i + 1)],

where ψ refers to the digamma function [28]. Sensoy et al. [17] observed that L_ML,i and L_CE,i produced excessively high belief masses and were less stable than L_MSE,i. This can be attributed to the fact that these two loss functions encourage maximizing the likelihood of the correct class.
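Of the three objectives, the Bayes risk of the MSE under the Dirichlet prior admits a particularly simple closed form in the Dirichlet parameters alone. A minimal sketch following Sensoy et al. [17] (variable names are ours):

```python
def edl_mse_loss(alpha, y):
    """Closed-form Bayes risk of the squared error under Dir(alpha):
    sum_j (y_j - p_j)^2 + p_j * (1 - p_j) / (S + 1), with p_j = alpha_j / S."""
    S = sum(alpha)
    loss = 0.0
    for a_j, y_j in zip(alpha, y):
        p_j = a_j / S
        loss += (y_j - p_j) ** 2            # prediction error term
        loss += p_j * (1 - p_j) / (S + 1)   # variance term
    return loss
```

The loss decreases as evidence for the correct class grows and increases as evidence for an incorrect class grows, which is the behavior the theorems in Section 3 establish for the region-based counterpart.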

Related Work of EDL
Sensoy et al. [17] used the MSE loss for natural image classification. They showed that the loss decreases as the correct class parameter grows and as the largest incorrect parameter decays. Furthermore, they integrated a KL divergence loss to shrink the incorrect class parameters further. However, the properties of the aggregated loss function were not shown, and the behavior of the loss was not studied for all parameters. Also, for the image classification problem, [18] improved the square-norm of the MSE loss to a max-norm and achieved higher performance. This is because the max-norm minimizes the largest class prediction error, whereas the square-norm minimizes the total sum of squared errors and is thus more susceptible to outliers. However, this may not be applicable to tumor segmentation with severe class imbalance. The TBraTS network [25] attempted to apply EDL's CE loss to brain tumor segmentation. To improve segmentation accuracy, the network output was additionally passed through a softmax layer to calculate a soft Dice loss, which was added to the CE loss. However, this increases training cost and complexity, and an incomplete deployment of the EDL framework may cause the network to fail to produce true evidence values. Different from these methods that employed the MSE or CE loss, our method minimizes the region-based prediction error (soft Dice loss) under the Dirichlet prior distribution to improve the tumor segmentation performance of the EDL framework.

Method
This section details our approach, a novel region-based EDL framework for 3D brain tumor segmentation (Subsection 3.1), and describes how we quantify the uncertainty (Subsection 3.2).

Region-Based Evidential Deep Learning
For semantic segmentation of medical images, it is important to consider the accuracy of segmented regions in addition to standard classification errors. Hence, we proposed a region-based objective to minimize the expected prediction error in the EDL framework while maintaining high segmentation accuracy. Unlike Zou et al. [25], who added the soft Dice (sDice) loss based on the result of softmax activation to L_CE,i, we directly minimized the Bayes risk of the sDice loss, i.e., its expectation under the Dirichlet distribution:

L_DICE = E_{p_i ∼ Dir(p_i | α_i)} [L_sDice(p_i, y_i)].

By using the identity E[p_ij²] = E[p_ij]² + Var(p_ij), the expectation can be formulated in an easily interpretable closed form.
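As an illustration of how such an expectation can be evaluated, the sketch below plugs the Dirichlet moments E[p_ij] = α_ij/S_i and E[p_ij²] = E[p_ij]² + Var(p_ij) into a binary soft Dice form. This is our own simplified reading for a single foreground class, not the authors' exact closed form:

```python
def expected_soft_dice_loss(alphas, labels, eps=1e-8):
    """Illustrative region-based EDL loss for one binary class over all voxels.

    alphas: per-voxel Dirichlet parameter pairs (alpha_fg, alpha_bg).
    labels: per-voxel foreground labels y_i in {0, 1}.
    Uses E[p] = alpha/S and Var(p) = alpha*(S - alpha) / (S^2 * (S + 1))
    for the Dirichlet marginal; the variance enters the denominator via E[p^2].
    """
    num = 0.0  # 2 * sum_i E[p_i] * y_i
    den = 0.0  # sum_i E[p_i^2] + sum_i y_i^2
    for (a_fg, a_bg), y in zip(alphas, labels):
        S = a_fg + a_bg
        mean = a_fg / S
        var = a_fg * (S - a_fg) / (S * S * (S + 1))
        num += 2.0 * mean * y
        den += (mean * mean + var) + y * y
    return 1.0 - num / (den + eps)
```

Because the variance sits in the denominator, the loss is reduced both by fitting the labels and by concentrating the Dirichlet distribution, mirroring the joint goal described above.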
By factoring the result into the denominator of sDice (sDice-Den) and a variance term (Var), the loss can be seen to pursue the joint goal of minimizing the region-based prediction error and the variance of the Dirichlet distributions generated by the neural network for the training set.
In order to ensure an effective EDL framework that allows the network to learn to generate subjective opinions from evidence correctly, the loss function needs to have the following properties.
Hypothesis 1 When the network optimizes, the loss function prioritizes data fitting over variance estimation.
Hypothesis 2 The loss function has a tendency to fit the data.
Hypothesis 3 The loss function avoids generating evidence for all observations it cannot explain.
These properties of the proposed DICE loss (L_DICE) are guaranteed by the following theorems, each numbered one-to-one with the corresponding hypothesis. The proofs of all theorems are presented in Appendix A.
Theorem 1 For any α_ij ≥ 1, the inequality sDice-Den > Var is satisfied.
Theorem 2 For a given voxel p with the correct label c, L_DICE decreases when new evidence is added to α_pc and increases when evidence is removed from α_pc.
Theorem 3 For a given voxel p with the correct label c, L_DICE decreases when evidence is removed from all incorrect Dirichlet parameters α_pw, for all w ≠ c.
To summarize, Theorems 1 to 3 demonstrate that the proposed loss function can optimize the neural network to provide more evidence for the correct class of each voxel while avoiding misclassification by discarding misleading evidence.By accumulating evidence, the loss also tends to reduce the variance of its predictions on the training set, but only if the additional evidence leads to a better fit to the data.
Furthermore, to further minimize the contribution of parameters associated with incorrect classes, a KL divergence loss is introduced to shrink their evidence to 0:

L_KL,i = log( Γ(Σ_{j=1}^{K} α̃_ij) / ( Γ(K) Π_{j=1}^{K} Γ(α̃_ij) ) ) + Σ_{j=1}^{K} (α̃_ij − 1) [ψ(α̃_ij) − ψ(Σ_{k=1}^{K} α̃_ik)],

where Γ(·) is the gamma function [28] and α̃_i = y_i + (1 − y_i) ⊙ α_i denotes the Dirichlet parameters after removal of the non-misleading evidence. The following theorem shows a desirable monotonicity property of this regularization loss as a supplement to [17].
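For reference, this KL term, KL(Dir(α̃_i) || Dir(1, ..., 1)), can be evaluated with log-gamma and digamma functions alone. The sketch below uses the standard closed form for the KL divergence between Dirichlet distributions, approximating the digamma numerically to stay within Python's standard library (an illustration, not the authors' implementation):

```python
import math

def digamma(x, h=1e-5):
    # Numerical digamma via a central difference of lgamma (illustrative only;
    # a library routine such as scipy.special.digamma would be used in practice).
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def kl_to_uniform_dirichlet(alpha_tilde):
    """KL( Dir(alpha_tilde) || Dir(1, ..., 1) ) for one voxel."""
    K = len(alpha_tilde)
    S = sum(alpha_tilde)
    kl = math.lgamma(S) - math.lgamma(K)
    kl -= sum(math.lgamma(a) for a in alpha_tilde)
    kl += sum((a - 1.0) * (digamma(a) - digamma(S)) for a in alpha_tilde)
    return kl
```

The term vanishes when α̃_i is the uniform Dirichlet (no misleading evidence) and grows as evidence accumulates on incorrect classes, which is exactly the monotonicity stated in Theorem 4.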
Theorem 4 For a voxel i with the correct label c, L_KL,i increases in α̃_iw for all w ≠ c.
Theorems 3 and 4 show that the strength of parameters associated with misleading results is expected to decrease during training. Since these parameters are all expected to be minimized, the preferred behavior of the proposed loss function results in higher uncertainty for misclassifications.
The final loss function is defined as:

L = L_DICE + λ L_KL,mean,

where L_KL,mean is the mean KL divergence loss over all voxels and λ is an annealing coefficient.
The KL divergence loss is gradually introduced through λ for stable training, due to its strong regularization effect. The annealing scheme is set so that λ reaches its maximum of 1 at epoch 10, i.e., λ = min(1, n/10), where n is the current epoch.
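Under our reading of the schedule, λ ramps linearly and saturates at 1 after 10 epochs; the combined objective then looks like the following (the `min(1, n/10)` form is our assumption inferred from the text):

```python
def annealing_coefficient(epoch, ramp_epochs=10):
    # lambda grows linearly with the epoch and saturates at 1
    # (assumed schedule; ramp_epochs = 10 is inferred from the text).
    return min(1.0, epoch / ramp_epochs)

def total_loss(l_dice, l_kl_mean, epoch):
    # L = L_DICE + lambda * L_KL,mean
    return l_dice + annealing_coefficient(epoch) * l_kl_mean
```

Early in training λ is near 0, so the network first learns to fit the data before the KL regularizer starts penalizing misleading evidence at full strength.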
In addition, a weighted sDice loss, L_wDICE, is also proposed to ease the class imbalance between tumor and background voxels. The weight for each segmentation class is one minus the ratio of foreground voxels to background voxels. Since the weights are all positive and class-wise, all theoretical properties of the loss function still hold.
Furthermore, the parameter of the Dirichlet distribution in our framework is re-defined. Unlike [17], which defined the Dirichlet parameters as α_ij = e_ij + 1, the alternative formulation allows the network to output high Dirichlet parameters more easily. This avoids the defect that it is almost impossible for the network to express a high degree of uncertainty for a particular outcome, since each outcome contributes a minimal evidence of one, i.e., α_ij ≥ 1.

Uncertainty Quantification
Calculating the predictive entropy (PE) is a common way to quantify uncertainty. Based on information theory, PE uses the confidence scores of predictions to calculate the total uncertainty for a voxel i, defined as:

PE_i = −Σ_{j=1}^{K} p_ij log p_ij,

where p_i is the confidence score vector [29]. To better compare different methods, we normalized the PE by its maximum possible value:

NPE_i = PE_i / log K.

As a result, the value range of the normalized predictive entropy (NPE) is [0, 1], where 1 implies maximum uncertainty and 0 implies an absolutely confident prediction.
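Both quantities follow directly from the per-voxel probability vector (a straightforward sketch; the small eps guards against log 0):

```python
import math

def normalized_predictive_entropy(p, eps=1e-12):
    """NPE_i = ( -sum_j p_j * log(p_j) ) / log(K), normalized to [0, 1]."""
    K = len(p)
    pe = -sum(p_j * math.log(p_j + eps) for p_j in p)
    return pe / math.log(K)
```

A uniform probability vector yields NPE = 1 (maximum uncertainty), while a near one-hot vector yields a value close to 0.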

Experiment Setup
Experiments on the standard benchmark (BraTS 2020) were conducted to compare different techniques for uncertainty quantification and to qualitatively evaluate the produced segmentations along with the uncertainty associated with each voxel. We first present the implementation details (Subsection 4.1) and then introduce the models (Subsection 4.2) and evaluation metrics (Subsection 4.3) for the comparative experiments.

Data Acquisition and Processing
The BraTS 2020 [7,30,31] dataset comprises brain MRI images from various scanners and protocols. The ground truth (GT) labels include the GD-enhancing tumor (ET), peritumoral edema (ED), and the necrotic and non-enhancing tumor core (NCR + NET). The segmentation masks were evaluated on three tumor subregions: the ET, the tumor core (TC = ET + NCR + NET), and the whole tumor (WT = ET + NCR + NET + ED).
Four MRI modalities (T1, T1ce, T2, and T2-FLAIR) were co-registered with a size of 240 × 240 × 155 and interpolated to 1 mm isotropic resolution. All images were cropped to 160 × 192 × 128 to reduce computational waste in the background and then preprocessed by intensity normalization. During training, various data augmentation techniques were applied on-the-fly as in [32] to artificially increase the dataset size and minimize the risk of overfitting.
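The cropping and normalization steps might look roughly as follows (a sketch of a typical BraTS-style pipeline, not the authors' exact code; we assume a center crop and z-score normalization over nonzero brain voxels):

```python
import numpy as np

def center_crop(volume, target=(160, 192, 128)):
    """Center-crop a 3D volume (e.g. 240 x 240 x 155) to the target size."""
    slices = []
    for dim, t in zip(volume.shape, target):
        start = (dim - t) // 2
        slices.append(slice(start, start + t))
    return volume[tuple(slices)]

def zscore_normalize(volume):
    """Z-score intensity normalization computed over nonzero (brain) voxels only,
    leaving the zero background untouched."""
    mask = volume > 0
    mean, std = volume[mask].mean(), volume[mask].std()
    out = volume.astype(np.float32).copy()
    out[mask] = (out[mask] - mean) / (std + 1e-8)
    return out
```

Restricting the statistics to nonzero voxels avoids the large zero background skewing the mean and standard deviation, a common choice for skull-stripped BraTS volumes.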

Model Training and Optimization
We chose LKAU-Net [32,33] as our Base network model. All softmax/sigmoid layers in the Base network were replaced with ReLU activation layers, as described in the previous section. For comparison, we used different loss functions to optimize the network: L_CE,i, L_MSE,i, L_DICE, and L_wDICE. Since the evaluation would be based on the more meaningful tumor subregions, the network was trained to segment each overlapping subregion separately. However, we also trained the network for multi-class segmentation of the basic labels using L_DICE with K = 4 for an ablation study.
In addition, we also employed the training strategies of Ensemble [15], Dropout [14], and TBraTS [25]. For Ensemble, we trained five networks with differently initialized weights, which has proven to be sufficient in practice [34]. Dropout layers (factor of 0.5) were added to the deepest three layers of the Base network to handle high-level features, which is the most efficient placement [16]. These layers remained active during inference, and the same images were passed through 10 times to quantify the prediction uncertainty; previous research has found that a sampling rate of 10 is adequate for reasonable uncertainty estimation [14]. Moreover, we used the strategy of TBraTS, which combined existing losses for multi-label segmentation.
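The Monte Carlo dropout inference loop amounts to keeping dropout active and aggregating T stochastic forward passes. A framework-agnostic sketch (here `stochastic_forward` is a stand-in for the dropout-enabled network, and the per-voxel variance across passes serves as the uncertainty signal):

```python
def mc_dropout_predict(stochastic_forward, x, T=10):
    """Average T stochastic forward passes (dropout kept active at inference).

    Returns the mean prediction per voxel and the variance across passes,
    which is commonly used as the Monte Carlo dropout uncertainty estimate.
    """
    samples = [stochastic_forward(x) for _ in range(T)]
    n = len(samples[0])
    mean = [sum(s[i] for s in samples) / T for i in range(n)]
    var = [sum((s[i] - mean[i]) ** 2 for s in samples) / T for i in range(n)]
    return mean, var
```

This also makes the inconsistency noted above concrete: because every pass samples a different dropout mask, repeated inferences on the same image yield different outputs, and T full forward passes multiply the inference cost accordingly.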
The adaptive moment estimation (Adam) optimizer was used to optimize all networks for 200 epochs, with a batch size of 1 and an initial learning rate of 0.0003. Experiments were implemented using PyTorch 1.10 on NVIDIA GeForce RTX 3090 GPUs.

Evaluation Metrics
Our method was evaluated using the independent test set of BraTS 2020 (74 cases). The segmentation performance was evaluated using the Dice score, defined as:

Dice(X, Y) = 2 |X ∩ Y| / (|X| + |Y|),

where X and Y are the sets of GT and predicted voxels. The Dice score measures the spatial overlap between the GT and the segmentation results, where a score of 1 indicates complete overlap.
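For binary masks the score reduces to a few lines (sketch):

```python
def dice_score(gt, pred):
    """Dice = 2 |X intersect Y| / (|X| + |Y|) for flat binary masks."""
    inter = sum(g * p for g, p in zip(gt, pred))
    total = sum(gt) + sum(pred)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / total if total > 0 else 1.0
```

Identical masks score 1, disjoint masks score 0, and partial overlap falls in between.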
In addition, the following metrics were utilized to evaluate uncertainty estimation: expected calibration error (ECE) [1], soft uncertainty-error overlap (sUEO), and the BraTS score (BraS) [35]. ECE is defined as the absolute calibration error between the confidence and the accuracy (c_m and a_m, where m is the m-th bin defined on the interval [0, 1]), weighted by the number of voxels (n_m) in each bin:

ECE = Σ_{m=1}^{M} (n_m / N) |a_m − c_m|,

where N and M are the total numbers of voxels and bins, and the confidence is calculated as one minus the uncertainty. ECE ranges from 0 to 1, where lower values indicate better calibration. To reduce the effect of the large, confident, and accurate extracranial regions typically found in brain tumor MRI, we only considered voxels within the brain. Improving on the uncertainty-error overlap (UEO) [1], we proposed the soft uncertainty-error overlap (sUEO), which directly uses the uncertainty quantities (u_i) in a soft Dice overlap with the voxel-wise error map. sUEO does not require thresholding the uncertainty map, which saves the time of optimizing the threshold over the validation set. It shows whether a model can precisely localize segmentation errors. Moreover, the comprehensive BraS is defined as:

BraS = [AUC_Dice + (1 − AUC_FTP) + (1 − AUC_FTN)] / 3,

where AUC_Dice, AUC_FTP, and AUC_FTN are the areas under three curves: 1) Dice vs. confidence threshold, 2) ratio of filtered True Positives (FTP) vs. confidence threshold, and 3) ratio of filtered True Negatives (FTN) vs. confidence threshold. The curves are plotted for segmentations filtered at different confidence levels, in which only voxels with confidence greater than the threshold are retained. This metric rewards uncertainty estimates that yield high confidence for correct segmentations or assign low confidence to incorrect segmentations, and it penalizes uncertainty measures that result in a higher percentage of under-confident correct segmentations.
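The ECE computation amounts to binning voxels by confidence and comparing each bin's mean confidence against its accuracy. A sketch with M equal-width bins (the binning conventions here are our assumption):

```python
def expected_calibration_error(confidences, correct, num_bins=10):
    """ECE = sum_m (n_m / N) * |a_m - c_m| over equal-width confidence bins.

    confidences: per-voxel confidence in [0, 1] (one minus the uncertainty).
    correct: per-voxel 0/1 indicator of whether the prediction was right.
    """
    N = len(confidences)
    ece = 0.0
    for m in range(num_bins):
        lo, hi = m / num_bins, (m + 1) / num_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (m == 0 and c == 0.0)]
        if not idx:
            continue
        c_m = sum(confidences[i] for i in idx) / len(idx)  # mean confidence
        a_m = sum(correct[i] for i in idx) / len(idx)      # bin accuracy
        ece += (len(idx) / N) * abs(a_m - c_m)
    return ece
```

A perfectly calibrated model has a_m ≈ c_m in every bin and hence an ECE near 0; an overconfident model accumulates mass where c_m exceeds a_m.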

Results and Discussion
This section first evaluates the performance of the novel region-based EDL framework for brain tumor segmentation and uncertainty quantification through experiments on the original dataset (Subsection 5.1).It then examines its robustness by applying various image processing techniques to the test image data (Subsection 5.2).

Segmentation and Uncertainty Estimation
Our method generated segmentation results comparable with the GT labels, as visualized in Figure 1. The quantitative results of our methods, averaged over the three tumor subregions, are compared in Table 1. As for uncertainty estimation, Ensemble and Dropout still showed their advantages in accuracy. They obtained the lowest ECE metrics of 0.009 and 0.010, which indicated they were well-calibrated. However, our methods achieved the highest sUEO and BraS of 0.420 and 0.875, showing their ability to precisely locate errors and assign uncertainty. The EDL model optimized by the proposed wDice loss generated the most accurate uncertainty map for indicating potential false predictions. On the other hand, the proposed EDL (DICE) model made the most reliable uncertainty estimation, maintaining the lowest error while thresholding along the uncertainty dimension. The advantages of our region-based EDL methods are also shown in Figure 2. The proposed methods had the most precise uncertainty maps, consistent with the error maps. Ideally, a learned model only gives high uncertainty for a possibly erroneous prediction. Despite their high segmentation accuracy, both Ensemble and Dropout methods generated more uncertainty around mask boundaries and other correct regions, leading to inferior uncertainty estimation performance in terms of sUEO and BraS. It is also worth mentioning that EDL models trained to segment each tumor subregion separately outperformed the ones trained with multi-class labels (DICE-M).
In addition, the inference runtimes of the uncertainty estimation methods on one sample are reported in Table 2.

Robustness Experiment
To verify the robustness of the segmentation model, we applied several image processing techniques to simulate the low-quality acquisition that often occurs in actual practice. We first blurred the four modalities of the MRI images using a Gaussian filter with sigma = 1.5. Subsequently, we re-evaluated the performance of all methods for segmentation and uncertainty estimation, as shown in Table 3. We can observe that with the addition of Gaussian blur, the segmentation performance of all methods dropped significantly, especially Ensemble and Dropout.
Our method leaped to the highest Dice metric of 0.572 for blurry images, demonstrating its robustness. By comparing the segmentation results with the original input and the high-noise input in Figure 2, it can be seen that the EDL using our loss function segmented the WT region more accurately than all other methods. This is because the subjective opinions are produced from evidence extracted directly from the data.
Furthermore, our method exhibited the most reliable uncertainty quantification on blurred images compared to the other uncertainty estimation methods. As shown in Table 3, all uncertainty evaluation metrics decreased, except for the sUEO of the EDL-based methods. This might benefit from the robust evidence captured by the EDL segmentation framework, whose advantage is more noticeable with larger error regions. The proposed EDL (DICE) method achieved top performance, especially for generating reliable uncertainty maps. Besides showing the robustness of our method, this again demonstrated the importance of region-based loss for locating semantic segmentation errors. As shown in the right half of Figure 2, the uncertainty map generated by the EDL (DICE) method is the most relevant to the error map. Unlike the other methods, which only generate high uncertainty at the edges of the predicted mask, the proposed method can indicate potential error regions inside masks. It is true that regional boundaries should a priori carry higher uncertainty, but this high uncertainty should not be assumed to be exclusive to these regions. This is what is favored by our proposed methods, as they declare high uncertainty in both delimiting and non-delimiting regions of the segmented image. This demonstrates the potential of our proposed method for clinical application. Potential error regions fed back by the model can assist in automatic correction or quality quantification of tumor segmentation.
To enrich the robustness experiment, we further applied Gaussian noise and Gamma correction to the original input to simulate the noise and contrast variability introduced by imaging or enhancement techniques. With Gaussian noise of variance = 1.5, the segmentation performance of all methods was superior to that on blurred images, as shown in Table 3. The proposed EDL (wDICE) method remained the top in terms of the Dice metric (0.771), followed by the proposed EDL (DICE) with a Dice of 0.769. The metrics for uncertainty estimation resembled those for blurred input, although the sUEO of EDL (wDICE) became 0.002 higher than that of EDL (DICE). However, for the results after Gamma correction with gamma = 5, the proposed EDL (DICE) method excelled in all metrics, which showed its robustness to unexpected contrast variance.
These observations can also be visually inspected in Figure 3. Compared to the other methods, our methods provided more precise uncertainty maps. The Ensemble and Dropout models had trouble handling the boundaries, especially for the Gamma-corrected input. The extracranial area is no longer zero after Gamma correction, which might cause problems when applying zero-padding. Moreover, their overconfident predictions using softmax/sigmoid are evident for the Gamma-corrected input, where the main error regions were not indicated in the uncertainty map. Non-region-based EDL methods also showed inaccurate uncertainty maps. The shortcoming of using the MSE loss in EDL to quantify the uncertainty of medical image segmentation can be seen in Figure 3, where the uncertainty map was significantly biased by the interference.

Conclusions and Future Research Directions
In this paper, we proposed a region-based EDL framework to segment brain tumors and quantify their uncertainty reliably and robustly. By demonstrating four theoretical properties, we showed that the proposed region-based loss can generate reliable prediction confidence by gathering evidence from the output image. Our method produced voxel-level uncertainty maps for brain tumor segmentation, which provide additional information on segmentation confidence for cancer diagnosis. Extensive experiments showed that the proposed method is more robust than previous methods on the BraTS 2020 dataset and achieves the best performance in segmentation uncertainty estimation. Furthermore, the novel framework maintains the low computational cost of EDL and can be easily integrated into any neural network. However, our method was currently inferior to the Ensemble and Dropout methods in the segmentation performance on raw images. EDL frameworks with higher segmentation accuracy are worthy of further study. In addition, since predictive uncertainty can be separated into epistemic and aleatoric uncertainty, future work can also focus on the inherent value for automated diagnosis that uncertainty estimation brings when differentiating between the two sources of uncertainty. A third direction is validating this framework in other diagnostic applications, possibly favoring the fusion of more multimodal information sources. Then, we can assess whether the fusion of different information modes permits a decrease in the overall uncertainty of the model estimated by our EDL segmentation framework.

Appendix A: Proofs of Theorems 1-4

Fig. 1 :
Fig. 1: Representative visual segmentation results of the proposed region-based EDL method on the BraTS 2020 test set. The labels are enhancing tumor (yellow), edema (green), and necrotic and non-enhancing tumor (red).

Fig. 2 :
Fig. 2: Representative visual results of the whole tumor (WT) produced by different uncertainty estimation methods on the BraTS 2020 test set. The right half of the figure was evaluated on the test images blurred by a Gaussian filter of sigma = 1.5.

Fig. 3 :
Fig. 3: Representative visual results of the whole tumor (WT) produced by different uncertainty estimation methods on the BraTS 2020 test set. The left half of the figure was evaluated on the test images with added Gaussian noise of variance = 1.5. The right half was evaluated on the test images after Gamma correction with gamma = 5.

Table 1 :
Quantitative comparisons of different uncertainty estimation methods on the BraTS 2020 test set.(Bold numbers: best results)

Table 2 :
Inference runtimes of different uncertainty estimation methods for one image.

Table 3 :
Quantitative comparisons of different uncertainty estimation methods on preprocessed BraTS 2020 test set.(Bold numbers: best results)