Performance of machine learning algorithms for glioma segmentation of brain MRI: a systematic literature review and meta-analysis

Objectives Different machine learning algorithms (MLAs) for automated segmentation of gliomas have been reported in the literature. Automated segmentation of different tumor characteristics can be of added value for the diagnostic work-up and treatment planning. The purpose of this study was to provide an overview and meta-analysis of different MLA methods. Methods A systematic literature review and meta-analysis was performed on the eligible studies describing the segmentation of gliomas. Meta-analysis of the performance was conducted on the reported dice similarity coefficient (DSC) scores of both the aggregated results and two subgroups (i.e., high-grade and low-grade gliomas). This study was registered in PROSPERO prior to initiation (CRD42020191033). Results After the literature search (n = 734), 42 studies were included in the systematic literature review. Ten studies were eligible for inclusion in the meta-analysis. The MLAs from the included studies showed an overall DSC score of 0.84 (95% CI: 0.82–0.86). In addition, DSC scores of 0.83 (95% CI: 0.80–0.87) and 0.82 (95% CI: 0.78–0.87) were observed for the automated segmentation of the high-grade and low-grade gliomas, respectively. However, heterogeneity was considerably high between included studies, and publication bias was observed. Conclusion MLAs facilitating automated segmentation of gliomas show good accuracy, which is promising for future implementation in neuroradiology. However, before actual implementation, a few hurdles are yet to be overcome. It is crucial that quality guidelines are followed when reporting on MLAs, which includes validation on an external test set. Key Points • MLAs from the included studies showed an overall DSC score of 0.84 (95% CI: 0.82–0.86), indicating a good performance. • MLA performance was comparable when comparing the segmentation results of the high-grade gliomas and the low-grade gliomas.
• For future studies using MLAs, it is crucial that quality guidelines are followed when reporting on MLAs, which includes validation on an external test set. Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-08035-0.


Introduction
Gliomas are the most frequently occurring primary tumor of the brain [1]. Accurate segmentation of gliomas on clinical magnetic resonance imaging (MRI) scans plays an important role in the quantification and objectification of diagnosis, treatment decision, and prognosis [2][3][4]. In current clinical practice, T1-weighted, post-contrast T1-weighted, T2-weighted, and T2-fluid attenuated inversion recovery (FLAIR) sequences are required to characterize the different components and to assess the infiltration of the surrounding brain parenchyma [5,6]. Glioma segmentation requires the radiologist to distinguish tumor tissue from healthy surrounding tissue [7], and the segmented region of interest or volume of interest can be used to compute feature-based radiomics and quantifiable measurements [8,9]. However, segmentation is a time-consuming task with high inter-observer variability [10,11]. Therefore, automatic segmentation methods have been sought, as these could facilitate consistent measures and simultaneously reduce the time spent on the task by radiologists in their daily practice. These developments have been powered by the organization of the annual multimodal Brain Tumor Segmentation (BraTS) challenge (http://braintumorsegmentation.org/). Within the BraTS challenges, the organization committee released multimodal scan volumes of a relatively large number of patients suffering from glioma, after which different research groups aimed to construct machine learning algorithms (MLAs) to automatically segment the gliomas. The BraTS data were accompanied by corresponding segmentations which served as the ground truth [11]. Recent developments in automatic segmentation by the use of MLAs helped to achieve higher precision [12]. Within the BraTS challenges, the MLAs which yielded the most accurate results included different 2D and 3D convolutional neural networks (CNNs) [13][14][15][16][17], including 3D U-Nets [18,19].
Despite the large body of scientific literature covering this topic, a comprehensive overview and meta-analysis of the accuracy of MLAs in glioma segmentation is still lacking [20,21]. Therefore, factors which enable the further development of MLAs for glioma segmentation remain partially elusive. The aim of the current study therefore was to provide a systematic review and meta-analysis of the accuracy of MLA-based glioma segmentation tools on multimodal MRI volumes. By providing this overview, the strengths and limitations of this field of research were highlighted and recommendations for future research were made.

Methods
The systematic review and meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [22]. Prior to initiation of the research, the study protocol was registered in the international open-access Prospective Register of Systematic Reviews (PROSPERO) under number CRD42020191033.
Papers that developed or validated MLAs for the segmentation of gliomas were reviewed. Literature was searched for in MEDLINE (accessed through PubMed), Embase, and The Cochrane Library, between April 1, 2020, and June 19, 2020. No language restrictions were applied. The full search strings, including keywords and restrictions, are available in the Appendix. Studies describing MLA-based segmentation methodologies on MR images in glioma patients were included. Additional predefined inclusion criteria were as follows: (1) mean results were reported as dice similarity coefficient (DSC) scores; (2) study results needed to be validated internally and/or externally. Letters, preprints, scientific reports, and narrative reviews were included. Studies based on animals or non-human samples or that presented non-original data were excluded.
Two researchers screened the papers on title, abstract, and full text independently. Discussions between both researchers were held to resolve all disagreements about non-consensus papers. The investigators independently extracted relevant data from the included papers using a predefined data extraction sheet, after which the data were cross-checked. Data extracted from the included studies comprised the following: (a) first author and year of publication; (b) size of training set; (c) mean age of participants in the training set; (d) gender of participants in the training set; (e) size of internal test set; (f) whether there was an external validation; (g) study design, including the used MRI sequences and the segmentations which formed the ground truth; (h) architecture of the AI algorithm(s); (i) target condition; (j) performance of the algorithm(s) in terms of DSC score, sensitivity, and specificity for both the training and the internal and/or external test sets. When studies performed external validation of the described AI system(s), externally validated data were included in the data extraction tables. Data from the internal validation were used when studies solely carried out internal validation of the reported MLAs.
The quality of the included studies was not formally assessed, as a formal quality assessment is a well-known challenge in this area of research [23][24][25]. Nevertheless, Collins and Moons (2019) announced their initiative to develop a version of the transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD) statement tailored to machine learning methods [26]. Pinto dos Santos suggested various items on the European Society of Radiology website to take into consideration when reviewing literature regarding machine learning [27]. These items were included in this review.

Statistical assessment
An independent statistician was consulted to discuss the statistical analyses and approaches with regard to the meta-analysis. To estimate the overall accuracy of the current MLAs, a random effects model meta-analysis was conducted. To be included in the meta-analysis, studies needed to have reported the outcome of interest (i.e., DSC score) in combination with a standard deviation (SD), standard error (SE), and/or the 95% confidence interval (95% CI). For studies reporting the SE and/or the 95% CI, the SD was derived statistically [28]. Meta-analysis was performed on the aggregated data of all studies providing suitable outcomes. Then, subgroup analyses were conducted on two separate target conditions, for studies describing the segmentation of either high-grade gliomas (HGGs) or low-grade gliomas (LGGs).
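The SD recovery mentioned above follows the standard conversions described in [28]. As a minimal sketch (not the authors' actual code; function names are illustrative, and a normal approximation of the 95% CI is assumed), this can be expressed as:

```python
import math

def sd_from_se(se, n):
    """Recover the SD from a reported standard error: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    """Recover the SD from a reported 95% CI of a mean,
    assuming a normal approximation: SE = (upper - lower) / (2 * z)."""
    se = (upper - lower) / (2 * z)
    return se * math.sqrt(n)

# Example: a study reporting a mean DSC with 95% CI 0.80-0.88 in n = 96 cases
sd = sd_from_ci(0.80, 0.88, 96)  # roughly 0.20
```

For large samples this normal-approximation conversion is standard; for very small studies, t-distribution quantiles would be more appropriate than z = 1.96.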
Statistical analyses were carried out using IBM SPSS Statistics (IBM Corp. Released 2017. IBM SPSS Statistics for Windows, Version 25.0. IBM Corp.). Variables and outcomes of the statistical assessment were presented as mean ± SD when normally distributed. When data were not normally distributed, they were presented as the median with range (minimum–maximum). Statistical tests were two-sided and significance was assumed when p < 0.05.
The DSC score represents an overlap index and is the most widely used metric for validating segmentations: it equals twice the number of voxels shared by the automated and ground truth segmentations, divided by the total number of voxels in both. In addition to the direct comparison between automated and ground truth segmentations, the DSC score is a common measure of reproducibility [29,30]. The DSC score ranges from 0.0 (no overlap) to 1.0 (complete overlap). In this meta-analysis, a DSC score of ≥ 0.8 was considered good overlap, and a DSC score of ≤ 0.5 was considered poor.
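As an illustration of the metric (not code from any of the included studies; the flat-list mask representation is a simplification of 3D voxel volumes), the DSC of two binary masks can be computed as:

```python
def dice(seg, gt):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|).
    `seg` and `gt` are flat sequences of 0/1 voxel labels."""
    inter = sum(1 for a, b in zip(seg, gt) if a and b)
    total = sum(1 for a in seg if a) + sum(1 for b in gt if b)
    # Convention: two empty masks count as perfect overlap
    return 1.0 if total == 0 else 2.0 * inter / total

# Example: 1 shared foreground voxel out of 2 + 2 -> DSC = 0.5
print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

Note that the DSC weighs over- and under-segmentation symmetrically, which is why it is reported alongside sensitivity and specificity in Table 1.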
The quantitative meta-analysis was partially carried out using OpenMeta[Analyst] software, a visual front-end for the R package metafor (www.r-project.org) [31]. Forest plots were created to depict the estimated DSC scores from the included studies, along with the overall DSC score performance. When the 95% CIs of the different subgroup analyses overlapped, no further statistical analysis was carried out.
The heterogeneity of the included studies was tested with the Higgins I² test, which quantifies inconsistency between included studies: a value > 75% indicates considerable heterogeneity between groups, while low heterogeneity corresponds with an I² between 0 and 40% [28]. Both the meta-analysis of the aggregated group and the meta-analyses of the subgroups were performed using a random effects model, due to the observed high heterogeneity (Higgins I² > 75%) between included studies [32].
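For illustration only (the actual analyses used OpenMeta[Analyst]/metafor; function name and example inputs here are hypothetical), a DerSimonian-Laird random-effects pooling of per-study mean DSC scores, with Higgins' I² derived from Cochran's Q, can be sketched as:

```python
import math

def dersimonian_laird(means, sds, ns):
    """Pool per-study means with a DerSimonian-Laird random-effects
    model; returns (pooled mean, 95% CI, Higgins' I2 in percent)."""
    variances = [sd ** 2 / n for sd, n in zip(sds, ns)]  # SE^2 per study
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * m for wi, m in zip(w, means)) / sum(w)
    # Cochran's Q and between-study variance tau^2
    q = sum(wi * (m - fixed) ** 2 for wi, m in zip(w, means))
    df = len(means) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights incorporate tau^2
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * m for wi, m in zip(w_re, means)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, i2

# Hypothetical example: three studies reporting mean DSC, SD, and n
pooled, ci, i2 = dersimonian_laird([0.84, 0.80, 0.88],
                                   [0.10, 0.12, 0.08],
                                   [50, 60, 40])
```

I² = max(0, (Q − df)/Q) expresses the share of total variability attributable to between-study heterogeneity rather than chance, which is why values above 75% motivated the random-effects model here.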
To assess possible publication bias, a funnel plot was created by means of Stata (StataCorp. 2019. Stata Statistical Software: Release 16. StataCorp LLC.).

Results
Initially, 1094 publications were retrieved through database searching. An additional ten publications were identified through cross-referencing. After removing duplicates, the remaining 734 publications were screened. Based on the title and abstract, 509 papers were excluded. A total of 225 full-text articles were assessed for eligibility and 42 studies were included in the systematic review. Ten studies were eligible for inclusion in the meta-analysis as they provided sufficient quantitative data (i.e., only these studies provided the DSC score along with SD for the performance of the MLA) (Fig. 1). Publications describing the use of (automated) segmentations to apply MLAs to classify molecular characteristics of gliomas (n = 135) were excluded. Fourteen papers were excluded as they described the use of MLAs on gliomas to perform texture analyses. Eleven papers did not report the DSC score and another 11 studies reported data unclearly; contacting the authors of these papers did not result in the acquisition of the needed data. Five studies did not report results of internal or external validation steps, whereas an additional three studies did not report data from the training group. Three studies described separate combined features, instead of a coherent MLA methodology. One study was excluded due to the inclusion of other brain tumors next to gliomas (e.g., metastases) (Fig. 1).
Thirty-eight studies used different combinations of MRI sequences for brain tumor segmentation (Table 1) [13, 17, 33-42, 44, 45, 47-57, 59-72]. Only 3 studies used a single MRI sequence for the algorithm to segment [43,46,58]. One conference paper did not report on the used MRI sequences [56]. Four studies reported not to have used (any part of) the BraTS datasets [36,46,50,51]. Two of these papers used original data [46,51]. The other two papers used either data from The Cancer Imaging Archive (TCIA) [50] or a combination of TCIA data and original data [36].
In 36 studies, the ground truth (i.e., segmentations) was derived from the BraTS dataset [13, 17, 33-36, 38-45, 47-49, 52-55, 57-72]. In two of these studies, the researchers added segmentations of additional original data. Segmentations were manually annotated by two experienced professionals independently following the BraTS segmentation protocol [54,64]. In one paper, only original data with corresponding segmentations were used. These segmentations were made independently by two experienced professionals following the BraTS segmentation protocol [51]. Three papers used segmentations which were obtained without adhering to the BraTS segmentation protocol [36,46,50]. In one conference paper, the segmentation methodology was not described [56]. Please note that the ground truth segmentations of BraTS 2015 were first produced by algorithms and then verified by annotators, whereas the ground truth of BraTS 2013 fused multiple manual annotations.
The performance of the MLAs, in terms of sensitivity, specificity, and DSC score, is displayed in Table 1. All studies used retrospectively collected data. Nine studies focused specifically on the segmentation of HGGs, whereas seven studies focused on the segmentation of LGGs. The remaining studies (n = 31) described the segmentation of gliomas in general without the subdivision of LGG and HGG. Five of the included studies [33,35,38,62,65] described segmentation of multiple target conditions (i.e., segmentation of both HGG and LGG). For these studies, the results of each different target are displayed in Table 1 as well. All of the included studies conducted some version of cross-validation on the MLAs; however, only four studies [35,36,51,64] performed an external validation of performance.

Meta-analysis of the included studies
The aggregated meta-analysis comprised twelve MLAs, described in ten individual studies [33, 36, 44, 47, 51, 54, 58, 62, 66, 72], and showed an overall DSC score of 0.84 (95% CI: 0.82–0.86) (Fig. 2). Heterogeneity was 80.4%, indicating that studies differed significantly (p < 0.001).

(Table 1 legend) Studies included in the meta-analysis are italicized. BraTS, Brain Tumor Image Segmentation Benchmark; CNN, convolutional neural network; DSC, dice similarity coefficient; kNN-CRF, k-nearest neighbor conditional random fields; KSVM-CRF, kernel support vector machine with RBF kernel conditional random fields; LSTM, long short-term memory; MLA, machine learning algorithm; N, no; NR, not reported; PKSVM-CRF, proposed product kernel support vector machine conditional random fields; SD, standard deviation; SK-TPCNN (+RF), small kernels two-path convolutional neural network (+ random forests); SN, sensitivity; SP, specificity; TCIA, The Cancer Imaging Archive; TCGA, The Cancer Genome Atlas; Y, yes. *The deep learning model is based on the recently published DeepMedic architecture, which provided top-scoring results on the BraTS dataset [17]. **Data separated by LGG and HGG for each network available in the original paper. For more information on the multivendor BraTS dataset, see Menze et al [11]. Please note that the ground truth of BraTS 2015 was first produced by algorithms and then verified by annotators; in contrast, the ground truth of BraTS 2013 fused multiple manual annotations.
For the subgroup analysis of segmentation studies focusing on HGGs, the results are depicted in Fig. 3. The overall DSC score for the five included studies [33,36,51,62,72] was 0.83 (95% CI: 0.80–0.87). The estimated I² heterogeneity between groups was 81.9% (p = 0.001). Two studies [33,62] focusing on the segmentation of LGGs were included in another subgroup meta-analysis. The overall DSC score was 0.82 (95% CI: 0.78–0.87) (Fig. 4). The estimated heterogeneity of included groups was 83.62% (p = 0.013). Hence, the heterogeneity was determined as high for both subgroup meta-analyses.

Publication bias
Studies included in the funnel plot were the ten studies that were meta-analyzed (Fig. 5). The funnel plot showed an asymmetrical shape, giving an indication of publication bias among the included studies. Moreover, not all studies were plotted within the bounds of the pseudo-95% CI, supporting the indication of possible publication bias [28].

Discussion
Various MLAs for the automated segmentation of gliomas were reviewed. Although heterogeneous, the MLAs showed a good DSC score, with no differences between the segmentation of LGGs and HGGs. However, there were some indications of publication bias within this field of research.
Currently, segmentation of tumor lesions is a subjective and time-consuming task [58]. By replacing the current manual methods with an automated computer-aided approach, improvement of glioma quantification and subsequently radiomics can be achieved. However, automated segmentation of gliomas is a challenging task, due to the large variety of morphological tumor characteristics among patients [11]. As HGGs usually show more heterogeneous MRI characteristics, their automated segmentation could be expected to be more challenging compared to LGGs. Furthermore, the low proliferative state of LGGs likely results in lower perfusion and higher diffusion values in affected tissue [73,74]. No performance difference was observed between the segmentation of HGGs and LGGs. Given the differences between HGGs and LGGs, it was expected that significant differences would arise in automatic segmentation tasks. Nevertheless, the ground truth segmentations were based on manual delineation by a (neuro)radiologist, indicating that the performance of automatic segmentation could only be as good as the ground truth segmentations. In addition, the ground truth of BraTS 2015 was first produced by algorithms and then verified by annotators, whereas the ground truth of BraTS 2013 fused multiple manual annotations.
Although MLAs performing automated segmentation show quite promising results (overall DSC score of 0.84; 95% CI: 0.82-0.86), there is still no wide acceptance and implementation of these methodologies in daily clinical practice. One of the explanations for this can be found in the different MLA methodologies; different MLA approaches and their exact details have a significant impact on the outcomes, even when applied to the same dataset. For example, in the BraTS 2019 challenge, the top three with regard to the segmentation task comprised a two-stage cascaded U-Net [75], a deep convolution neural network [76], and an ensemble of 3D-to-2D CNNs [77].
Another reason may be the absence of standardized procedures on how to properly use these segmentation systems. There are substantial differences between advanced systems that offer computer-aided segmentation and the current standards for neuroradiologists, which impedes the integration of MLA methods. In addition, CE-certified software is only limitedly available in clinical practice, further hampering integration. Also, the purpose of MLA use varies: whereas radiologists mainly use these techniques for follow-up, neurosurgeons mostly use MLAs for therapeutic planning. In addition, direct integration into the neuroradiologist's daily practice without extra time spent on the task will be needed to make automatic glioma segmentation feasible. Moreover, the current automated segmentations still need to be supervised by trained observers. It seems more likely that implementation of MLAs in neuroradiology will lead to an interaction between doctor and computer, so that neuroradiologists will utilize more advanced technologies in the establishment of diagnoses [78]. The future implementation of MLAs in the diagnosis of glioma is of great clinical relevance, as these algorithms can support the non-invasive analysis of tumor characteristics without the need for histopathological tissue assessment. More specifically, automatic segmentations form the basis of further sophisticated analyses to clarify meaningful and reliable associations between neuroimaging features and survival rate [79,80]. In conclusion, as automated segmentation of glioma is considered to be the first step in this process, the implementation of MLAs holds great potential for the future of neuroradiology.
Various publications were found with regard to the automated segmentation of gliomas in the post-operative setting [81][82][83][84]. Quantitative metrics are believed to be needed for therapy guidance, risk stratification, and outcome prognostication in the post-operative setting, and MLAs could represent a potential solution for automated quantitative measurement of the burden of disease there. As shown in Table 2, however, the DSC scores of these studies are lower than the DSC scores of the pre-operative MLA-based segmentations [81][82][83][84]. An explanation for these differences in performance could be the post-surgical changes of the brain parenchyma and the presence of air and blood products in the post-operative setting; together, these factors have been reported to affect the performance of MLAs [81]. Several methodological shortcomings of the present meta-analysis should be considered. First, various studies were excluded from the quantitative synthesis due to missing data. Second, heterogeneity of all analyses was considerably high, probably caused by technical variances of the different MLA methodologies for segmentation. Third, only four out of 42 studies performed an out-of-sample external validation, emphasizing the importance of external validation to assess robustness. It is also probable that publication bias was present, as there is no interest in the publication of poorly performing MLAs. In addition, differences in MR sequence input, ground truth, and other variables could play a role with regard to the outcomes, although this was considered a minor limitation as the source data were similar in most studies.
Future research on this topic may include an ensemble approach, as this might significantly boost segmentation performance. Thus, in addition to focusing current research on training individual segmentation systems, it may be interesting to investigate the fusion of multiple systems as well (i.e., segmentation of different imaging features in order to obtain different imaging biomarkers) [11]. Lastly, all included studies used retrospectively collected data, most of which came from the BraTS databases. In order to further validate the performance of segmentation systems in clinical practice, larger-scale and externally validated studies are preferred. In addition, data availability and the provision of online tools or downloadable scripts of the used MLAs could significantly enhance future developments within this field of research.

Conclusion
In this systematic review and meta-analysis, MLAs for glioma segmentation showed good performance. However, external validation was often not carried out, which should be regarded as a significant limitation in this field of research. Therefore, further verification of the accuracy of these models is recommended. It is crucial that quality guidelines are followed when reporting on MLAs, which includes validation on an external test set.
Acknowledgements The authors would like to acknowledge Dr. Rogier Donders for his statistical insights.
Funding The authors state that this work has not received any funding.

(Table 2 legend) BraTS, Brain Tumor Image Segmentation Benchmark; CNN, convolutional neural network; DSC, dice similarity coefficient; MLA, machine learning algorithm; N, no; NA, not applicable; NR, not reported; SD, standard deviation; SN, sensitivity; SP, specificity; Y, yes.