Prostate cancer is a heterogeneous disease, with a strong relationship between aggressiveness, as characterized by Gleason grade, and survival [1]. More recently, the concept of Gleason 3 and Gleason 4 tumors representing distinct disease states has emerged [2], owing to their different genomic signatures [3] and the distinct survival rates observed in large long-term follow-up studies [4, 5]. Indeed, percentage Gleason 4 has been shown to outperform traditional Gleason grading as a prognostic marker in a multivariate study of 379 prostatectomy specimens [6].

A reliable, quantitative, and non-invasive test to identify patients at risk of aggressive disease (those with a potential Gleason 4 component) would therefore have significant clinical value but does not currently exist.

Clinical parameters such as tumor volume [7] and prostate-specific antigen density (PSAd) have been shown to correlate with Gleason grade [8].

While there is some evidence that the subjective opinion of radiologists interpreting multiparametric (mp) MRI can be used to estimate Gleason grade [9], quantitative measurements of signal intensity including normalized T2 signal intensity and apparent diffusion coefficient (ADC) also moderately correlate with Gleason grade [10, 11] and have been shown to differ in peripheral zone (PZ) vs. transition zone (TZ) tumors [12].

The purpose of this study was (i) to test whether machine learning classifiers for TZ and PZ (based on clinical and quantitative mp-MRI parameters) can correctly classify tumors into those with/without a Gleason 4 component and (ii) to compare the performance of the best performing classifiers against the subjective opinion of three board-certified radiologists.

Materials and methods

Our Institutional Review Board approved the study and waived the requirement for individual consent for retrospective analysis of prospectively acquired patient data collected as part of clinical trials/routine care (R&D No: 12/0195, 16 July 2012).

Patient cohorts

Two temporally separated cohorts were built: one for generating models (training cohort) and another for temporal validation (validation cohort).

For the training cohort, a trial dataset of 330 patients was interrogated. Full details of the trial have been previously reported [13]. In brief, inclusion criteria were (i) men who underwent previous transrectal ultrasound biopsy whereby suspicion remained that cancer was either missed or misclassified and (ii) men suitable for further characterization using transperineal template prostate mapping (TPM) biopsy. Exclusion criteria were (i) previous history of prostate cancer treatment and (ii) lack of complete gland sampling or inadequate sampling density at TPM.

Selection criteria for building the training cohort were (i) 3-T mp-MRI, performed between February 2012 and January 2014; (ii) Likert [14] ≥ 3/5 index lesion on mp-MRI, defined on the trial proforma following multidisciplinary tumor board discussion, whereby lesions were assigned to be of TZ or PZ origin; and (iii) TPM and targeted index lesion biopsy confirming tumor (defined as Gleason score 3 + 3 or greater). Gleason pattern 5 was not found in any samples. The index lesion was defined as the most conspicuous lesion with the highest Likert score (3, 4, or 5). This cohort consisted of 72 Gleason 4 containing lesions (38 Gleason 3 + 4, 34 Gleason 4 + 3), and 27 Gleason 3 + 3 lesions for the PZ whereas the TZ had 22 Gleason 4 containing lesions (20 Gleason 3 + 4, 2 Gleason 4 + 3), and 27 Gleason 3 + 3 lesions. A flow diagram for patient selection is shown in Fig. 1.

Fig. 1
figure 1

Flow diagram of patient selection for the training cohort

The validation cohort consisted of 30 consecutive men: 20 PZ (6 Gleason 3 + 4, 4 Gleason 4 + 3, and 10 Gleason 3 + 3) and 10 TZ (3 Gleason 3 + 4, 1 Gleason 4 + 3, and 5 Gleason 3 + 3) with the same selection criteria and scanning protocol as in the training cohort, performed between June and December 2015.

Table 1 shows the age, PSA, gland volume, and tumor volume of the patients in the two cohorts.

Table 1 Clinical characteristics

Multiparametric MRI protocol

Mp-MRI was performed using a 3-T scanner (Achieva, Philips Healthcare) and a 32-channel phased-array coil. Prior to imaging, 0.2 mg/kg (up to 20 mg) of a spasmolytic agent (Buscopan; Boehringer Ingelheim) was administered intravenously to reduce bowel peristalsis. Mp-MRI was compliant with the European Society of Urogenital Radiology (ESUR) guidelines [14]. Full acquisition parameters are shown in Table 2.

Table 2 Description of mp-MRI parameters

Targeted biopsy

Ultrasound-guided TPM, with or without cognitively registered MRI-targeted biopsy, acted as the reference standard for the training cohort. A systematic biopsy of the whole gland was performed through a brachytherapy template-grid placed on the perineum using a 5-mm sampling frame. Focal index lesions underwent cognitive MRI-targeted biopsies at the time of TPM. A genitourinary pathologist with 12 years of experience analyzed biopsy cores blinded to the MRI results. There were no instances of non-targeted samples yielding higher Gleason grades than targeted specimens.

TPM and targeted biopsies were chosen as the reference standard because they are superior to transrectal ultrasound biopsy, are the sampling method of choice in the active surveillance population, and avoid the spectrum bias associated with a prostatectomy reference standard [15], which favors patients with aggressive disease.

Multiparametric MRI review

Mp-MRI images were qualitatively assessed on an Osirix workstation by three board-certified radiologists independently (readers SP, MA, and SP). Radiologists were fellowship-trained, with 10, 2, and 3 years of experience in the clinical reporting of mp-prostate MRI, each reporting more than 100 mp-MRIs per year with regular attendance at weekly multidisciplinary tumor board meetings [16]. Radiologists were informed of the PSA level and subjectively evaluated whether the index lesion contained a Gleason pattern 4 component or not (i.e., a binary classification), based on their personal evaluation of imaging characteristics, as developed from years of prostate MRI reporting and pathological feedback at multidisciplinary tumor board meetings.

Radiologists were aware that high signal on b = 2000 s/mm2 DWI with corresponding low ADC value, low T2W signal, and avid early contrast enhancement compared with normal prostatic tissue suggest higher grade disease [10, 17].

Extraction of mp-MRI-derived quantitative parameters

MR datasets were analyzed with MIM Symphony Version 6.1 (MIM Software Inc), which performs rigid translational co-registration of volumetric and axial T2W, ADC, and DCE images for semi-automatic registration, followed by manual refinement where needed.

A fourth board-certified radiologist (EJ), with 3 years of experience in the quantitative analysis of mp-prostate MRI and blinded to the histopathology results and the opinions of the other radiologists, manually contoured a volume of interest for each index lesion and recorded the mean signal intensity (SI) of each volume on the axial T2W, ADC, and DCE images at all time points. Contouring was performed on T2WI and manually adjusted on the DCE images and ADC maps to account for distortion and registration errors. A typical contoured lesion is shown in Fig. 2. To standardize signal intensity between subjects, normalized T2 signal intensity metrics were calculated by dividing the signal intensity of the lesion by that of the bladder urine [18].

Fig. 2
figure 2

a Axial T2 TSE of a 64-year-old male showing the volumetric contour of a TZ prostate tumor for extraction of mp-MRI parameters. b Axial post-gadolinium dynamic contrast-enhanced image. c Axial DWI at b = 2000 s/mm2. d ADC map

Early enhancement (EE) and maximum enhancement (ME) metrics were derived from the DCE-MRI signal enhancement time curves. EE was defined as the first strongly enhancing postcontrast SI divided by the precontrast SI, and ME as the difference between the peak enhancement SI and the baseline SI normalized to the baseline SI [19].
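The EE and ME definitions above can be expressed in a few lines of code. The following is an illustrative pure-Python sketch (function and argument names are our own, and the "first strongly enhancing" time point is simplified to the first postcontrast acquisition):

```python
def enhancement_metrics(si_curve, n_precontrast=1):
    """Compute early (EE) and maximum (ME) enhancement from a DCE-MRI
    signal-intensity time curve (illustrative sketch, not the study code).

    si_curve: mean lesion signal intensities over time; the first
    n_precontrast points are treated as the precontrast baseline.
    """
    baseline = sum(si_curve[:n_precontrast]) / n_precontrast
    postcontrast = si_curve[n_precontrast:]
    # EE: first (strongly) enhancing postcontrast SI divided by precontrast SI
    ee = postcontrast[0] / baseline
    # ME: (peak SI - baseline SI) normalized to the baseline SI
    me = (max(postcontrast) - baseline) / baseline
    return ee, me

# e.g. baseline 100, postcontrast curve 150 -> 200 -> 180:
# EE = 150/100 = 1.5, ME = (200-100)/100 = 1.0
```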

Tumor volume, gland volume, and PSAd were also included as clinical features in model development; the first two were measured using tri-planar measurements and the prolate ellipsoid formula [20].
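As a worked illustration of these clinical features, the prolate ellipsoid volume (V = π/6 × length × width × height) and PSAd (serum PSA divided by gland volume) can be computed as follows (a sketch; function names are illustrative):

```python
import math

def ellipsoid_volume(length_cm, width_cm, height_cm):
    # Prolate ellipsoid formula from tri-planar measurements:
    # V = pi/6 * L * W * H (result in ml when inputs are in cm)
    return math.pi / 6.0 * length_cm * width_cm * height_cm

def psa_density(psa_ng_ml, gland_volume_ml):
    # PSAd: serum PSA divided by the MRI-derived gland volume
    return psa_ng_ml / gland_volume_ml
```

For example, a gland measuring 4 × 3 × 5 cm has an estimated volume of about 31.4 ml, so a PSA of 6.3 ng/ml would give a PSAd of roughly 0.2 ng/ml/ml.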

Machine learning models

Five classification models were tested, namely logistic regression (LR) [21], naïve Bayes (NB) [21], support vector machine (SVM) [21], random forest (RF) [22], and feed-forward neural network (FFNN) [21].

To validate each model, fivefold cross-validation was applied, whereby the data was split into five folds, with four folds used for training and one for testing the classifiers. This was repeated for five trials, with each fold used once as a test set. At each trial, a receiver operating characteristic (ROC) curve was built for both the training and test set and the corresponding AUC calculated. The AUC values for the five trials were averaged to produce a single estimate, and the process was repeated for 100 rounds using a different partitioning of the data for each repetition.
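The repeated cross-validation scheme above can be sketched in pure Python. The `fit`/`predict` hooks stand in for any of the five classifiers and are purely illustrative (this is a sketch of the validation scheme, not the study's MATLAB code):

```python
import random

def auc(scores, labels):
    # AUC via the Mann-Whitney U statistic: the probability that a random
    # positive case is scored above a random negative case (ties count 0.5).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def repeated_cv_auc(X, y, fit, predict, k=5, rounds=100, seed=0):
    # Repeated k-fold cross-validation: each round partitions the data into
    # k folds, each fold serves once as the test set, the k test AUCs are
    # averaged, and the procedure repeats with a fresh partition per round.
    rng = random.Random(seed)
    idx = list(range(len(y)))
    round_aucs = []
    for _ in range(rounds):
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        fold_aucs = []
        for test in folds:
            train = [i for i in idx if i not in set(test)]
            model = fit([X[i] for i in train], [y[i] for i in train])
            scores = [predict(model, X[i]) for i in test]
            labels = [y[i] for i in test]
            if 0 < sum(labels) < len(labels):  # skip single-class folds
                fold_aucs.append(auc(scores, labels))
        round_aucs.append(sum(fold_aucs) / len(fold_aucs))
    return sum(round_aucs) / rounds
```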

Since the performance of machine learning classifiers decreases when the data used to train the model is imbalanced [23], which applies to the PZ cohort in our study (72 Gleason 4, vs. 27 Gleason 3 + 3), a resampling technique called Synthetic Minority Over-sampling TEchnique (SMOTE) [24] was applied to the PZ training cohort. Here, the minority class is over-sampled by introducing synthetic examples along the line segments joining any/all of the k minority class nearest neighbors of each minority class sample. After applying SMOTE to the PZ training cohort, 45 synthetic samples belonging to the class of 3 + 3 Gleason cancers were added and this new re-balanced data was used to generate the classifiers. SMOTE was not applied to the TZ training cohort as this cohort was sufficiently balanced.
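The SMOTE interpolation step described above can be sketched as follows (an illustrative pure-Python re-implementation, not the reference code; parameter names are our own):

```python
import random

def smote(minority, n_synthetic, k=5, seed=0):
    """Minimal SMOTE sketch: create synthetic minority-class samples by
    interpolating between a minority sample and one of its k nearest
    minority-class neighbours (illustrative only)."""
    rng = random.Random(seed)

    def dist2(a, b):  # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_synthetic):
        base = rng.choice(minority)
        neighbours = sorted((s for s in minority if s is not base),
                            key=lambda s: dist2(s, base))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random point along the segment base -> nb
        synthetic.append([b + gap * (n - b) for b, n in zip(base, nb)])
    return synthetic
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays within the region spanned by the original data.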

The Statistics and Machine Learning Toolbox of MATLAB (version R2017b, MathWorks) was used for all algorithms, using one hidden layer of 20 neurons for FFNN.

Model feature selection and internal validation

The best combination of features was derived from the training cohort dataset using the correlation feature selection (CFS) algorithm [25] for TZ and PZ lesions, denoted as SELTZ and SELPZ respectively. CFS determines (i) how each feature correlates with the presence of Gleason 4 tumor, and (ii) whether any of the selected features are redundant due to correlations between them. Redundant features were removed from the SELTZ and SELPZ feature sets.
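CFS scores a candidate feature subset by its "merit", rewarding correlation with the target while penalizing redundancy between features. A minimal sketch of Hall's merit formula, using Pearson correlation as the association measure (our own simplification, not the study code):

```python
import math

def pearson(a, b):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def cfs_merit(feature_cols, target):
    """Merit of a feature subset per Hall's CFS:
    k * r_cf / sqrt(k + k*(k-1)*r_ff), where r_cf is the mean
    feature-target correlation and r_ff the mean feature-feature
    correlation (redundancy)."""
    k = len(feature_cols)
    r_cf = sum(abs(pearson(f, target)) for f in feature_cols) / k
    if k == 1:
        return r_cf
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    r_ff = sum(abs(pearson(feature_cols[i], feature_cols[j]))
               for i, j in pairs) / len(pairs)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)
```

Note that adding a fully redundant copy of a perfect feature does not raise the merit, which is exactly why CFS discards correlated features.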

As Fig. 3 shows, to test whether CFS was effective, we compared the performance of classifiers trained using all features (denoted ALL) with the performance of the classifiers trained using only SELTZ and SELPZ.

Fig. 3
figure 3

Flow diagram outlining the feature selection validation strategy used in the study. CFS, correlation feature selection; ALL, set containing all the features; SELTZ, subset of features selected for the TZ; SELPZ, subset of features selected for the PZ; AUCALLPZ, area under the curve obtained on PZ using all the features; AUCALLTZ, area under the curve obtained on TZ using all the features; LR, logistic regression; FFNN, feed-forward neural network; SVM, support vector machine; NB, naïve Bayes; RF, random forest; AUCSELPZ, area under the curve obtained on PZ using the selected features; AUCSELTZ, area under the curve obtained on TZ using the selected features

Best model selection and temporal validation

Using SELTZ and SELPZ, we applied a fivefold cross-validation to compare the classifiers and to select the best performing zone-specific models, defined by the highest AUC. A flow diagram of the comparisons is shown in Fig. 4.

Fig. 4
figure 4

Flow diagram outlining the model validation strategy used in the study. SELTZ, subset of features selected for the TZ; SELPZ, subset of features selected for the PZ; AUCSELPZ, area under the curve obtained on PZ using the selected features; AUCSELTZ, area under the curve obtained on TZ using the selected features; LR, logistic regression; FFNN, feed-forward neural network; SVM, support vector machine; NB, naïve Bayes; RF, random forest; PZ, peripheral zone; TZ, transition zone

Once the best performing models were selected for each zone, their performance was compared with that of the three radiologists. The mean sensitivity and specificity of the radiologists were compared with those obtained by the classifiers at three cut-off points of interest on the ROC curves. In particular, we considered:

(i) The point characterized by a specificity of 50% (point_50). This point is of clinical interest because we can tolerate classifying 50% of patients as false-positives provided a high level of sensitivity (i.e., a low number of false-negatives) is maintained.

(ii) The point characterized by a specificity equal to the mean specificity of the three radiologists who assessed the images (point_RAD). We used this point to compare our models with the performance of experienced radiologists.

(iii) The point closest to the point with sensitivity and specificity equal to 1 (point_01). We chose this point because it represents the best trade-off between sensitivity and specificity.

For all three points, we derived the corresponding thresholds on the ROC curve obtained on the training set and then applied these thresholds to compute the sensitivity/specificity of the classifiers on the test set.
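The three cut-off points can be located on a training ROC curve as sketched below (illustrative pure Python; function and variable names are our own):

```python
import math

def roc_points(scores, labels, thresholds):
    """Return (threshold, sensitivity, specificity) for each threshold,
    predicting positive when score >= threshold."""
    pts = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((t, tp / (tp + fn), tn / (tn + fp)))
    return pts

def pick_thresholds(pts, radiologist_specificity):
    # point_50: specificity closest to 0.50
    t50 = min(pts, key=lambda p: abs(p[2] - 0.50))
    # point_RAD: specificity closest to the radiologists' mean specificity
    trad = min(pts, key=lambda p: abs(p[2] - radiologist_specificity))
    # point_01: closest to the ideal corner (sensitivity = specificity = 1)
    t01 = min(pts, key=lambda p: math.hypot(1 - p[1], 1 - p[2]))
    return t50, trad, t01
```

In use, the thresholds would be derived from the training-set ROC curve and then applied unchanged to the test set, as described above.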

Finally, we applied a temporally separated validation whereby the best performing classifier was trained on the training cohort and tested on the validation cohort. For all analyses performed in the PZ cohort, SMOTE was applied to the training set before it was used to train the classifier.


Results

Model selection and internal validation

CFS selected SELPZ = {ADC, ME, PSAd} and SELTZ = {ADC, ME}. Table 3 shows the mean and standard deviation of the AUCs obtained on the test set by the classifiers using all the parameters (ALL) and the selected features (SELTZ and SELPZ).

Table 3 Mean and standard deviation (in brackets) of the AUC obtained on the test set by the five classifiers following the fivefold cross-validation, when all the features (ALL) and only the features selected by CFS (SEL) are used

For the PZ, the mean AUC on the test set of the models trained with SELPZ was greater than that of the models trained using ALL. However, for the TZ, only LR and NB benefitted from feature selection, while FFNN, SVM, and RF obtained slightly better AUC values when trained with ALL.

To statistically validate the comparison between classifiers trained with and without CFS, we applied the Wilcoxon signed-rank test for pairwise comparison [26] between the AUC values obtained on the test set by each classifier trained with and without feature selection. For the PZ, all the classifiers except RF obtained statistically better AUC values when trained with SELPZ than the ones trained with ALL (p value < 0.05), while for the TZ, only LR and NB obtained statistically better AUC values when trained with SELTZ.

Although some TZ classifiers did not show a statistically significant improvement when trained with SELTZ, the best performing models (LR, NB) obtained better results when CFS was applied. For this reason, we built the models for both TZ and PZ by training the classifiers with SELTZ and SELPZ, respectively.

For the PZ, LR and NB had higher AUC values followed very closely by FFNN and RF, with SVM obtaining the worst results.

The results obtained by the five classifiers on TZ were similar to the PZ, although the AUC values were generally lower. NB and LR were the best performing models, and SVM and RF were the classifiers with the lowest mean AUC values.

To compare AUC distributions obtained by the different classifiers for TZ and PZ, the Friedman [26] and Iman and Davenport tests [26] were applied. If a statistical difference was detected, the Holm test [26] was performed to compare the best performing classifier (with the lowest Friedman rank) and the remaining ones. The results for these tests are shown in Table 4, whereby the best performing classifiers were NB and LR for TZ and PZ, respectively. The Iman and Davenport statistical hypothesis of equivalence was rejected in both cases.

Table 4 Results of the statistical tests on AUC distributions obtained on the test set by the 5 classifiers, trained with SEL following fivefold cross-validation

For the TZ, the Holm post hoc procedure stated that the AUC distributions on the test set obtained by NB are statistically better than those of all the other classifiers. For the PZ, the LR classifier achieved statistically better performance.

Radiologist comparison and temporal validation

Figure 5 shows, for the test set, the mean ROC curves obtained over 100 rounds of fivefold cross-validation by the NB and LR classifiers for TZ and PZ, along with the mean sensitivity and specificity values obtained by the three radiologists and those computed at the three cut-off points.

Fig. 5
figure 5

Mean ROC curve, along with the mean sensitivity and specificity values obtained by the three radiologists and computed at the three cut-off points, generated on the test set following the fivefold cross-validation by the best performing classifiers (NB and LR) on TZ (left) and PZ (right). PZ, peripheral zone; TZ, transition zone; LR, logistic regression; NB, naïve Bayes; ROC, receiver operating characteristic

In Table 5, the mean values of sensitivity and specificity calculated on the test set at point_50, point_01, and point_RAD are shown along with those obtained by the three radiologists.

Table 5 Mean values of sensitivity (SN) and specificity (SP) at the three cut-off points obtained by the three radiologists and the best performing classifiers following fivefold cross-validation

For the PZ, at all three cut-off points, LR achieved higher sensitivity (0.93, 0.76, and 0.88, respectively) and specificity (0.53, 0.73, and 0.65, respectively) than the three radiologists' mean sensitivity and specificity of 0.72 and 0.40.

Although the radiologists had a higher specificity in the TZ than in the PZ, their mean performance (specificity 0.82, sensitivity 0.44) was still lower than that achieved by NB, whose sensitivity at point_50, point_01, and point_RAD was 0.88, 0.75, and 0.92, respectively, with corresponding specificities of 0.51, 0.57, and 0.44.

Finally, for temporal validation, the best performing classifiers (NB for TZ and LR for PZ) were trained using the training cohort and tested using the validation cohort. The AUC values obtained on the validation cohort for TZ and PZ were 0.85 and 1.00, respectively.


Discussion

Our results show that classifiers designed to predict a Gleason 4 component in known prostate cancer are zone specific, although they use a similar set of features, namely {ADC, ME, PSAd} for the PZ and {ADC, ME} for the TZ. Furthermore, the best performing models were superior to the subjective opinion of radiologists at all probability thresholds and maintained their performance at temporal validation.

Several studies have previously reported logistic regression and mp-MRI-derived parameters for the prediction of Gleason grade in prostate cancer [27,28,29,30]. While our study agrees that ADC is a useful parameter for this purpose, it differs from the literature in a number of ways. Firstly, all other studies excluded tumors < 0.5 ml, meaning their findings may not generalize to smaller index lesions, which can be aggressive [31] and are often followed in active surveillance programs.

Hötker et al [27] studied 195 patients and reported that the best performing univariate parameter (ADC) achieved an AUC of 0.69 for distinguishing 3 + 3 tumors from those containing a Gleason 4 component. A possible explanation of their lower reported AUC could be the multiscanner nature of the study and the combination of PZ and TZ cancers into a single model. Furthermore, the authors showed that Ktrans failed to add value for discriminating such tumors, and the models did not undergo external validation.

The other studies in the literature [27,28,29,30] derived models from fewer than 60 patients and combined DWI with spectroscopic metrics, which necessitate specialist equipment and knowledge. Indeed, all of our metrics can be extracted from the minimum protocol requirements recommended by international consensus guidelines [14] and are thus more generalizable to non-specialist centers. Since our model uses PSAd as a predictor of Gleason 4 tumor, our study affirms that serum and imaging biomarkers can be synergistic [32]. Our results are also consistent with another group who found no additive value of tumor volume in Gleason grade prediction [27].

In this study, we chose to analyze index lesions only, to avoid statistical clustering and because index lesions usually drive management strategy and patient outcome [33], particularly in the context of focal therapy and active surveillance. We also chose to exclude patients without evidence of cancer at biopsy since we wished to build a tool which could be used in patients who undergo MRI surveillance and would benefit from quantitative estimates of Gleason grade.

While we did not derive Tofts' model parameters, owing to our institutional preference for higher spatial resolution of DCE-MRI over the temporal resolution required for Tofts' fitting, we demonstrated that ME, a robust semiquantitative metric [34], can improve the discriminatory ability for the prediction of Gleason 4 cancer components above ADC alone. While enhancement characteristics play a limited role in PI-RADSv2, the present study suggests these characteristics may be more beneficial for the characterization of Gleason grade than for tumor detection.

With further work, machine learning classifiers could be used in active surveillance programs to non-invasively detect whether tumors have transformed to a higher Gleason grade and thereby prompt biopsy or intervention. This potential application is particularly pertinent in light of the findings of the ProtecT study [35], which showed no significant difference in survival outcomes at 10-year follow-up in patients randomized to active surveillance, surgery, or radiotherapy, a finding likely to increase the uptake of active surveillance as a management strategy. Indeed, mp-MRI is already advocated by NICE in the UK as part of active surveillance programs [36].

One possible limitation is the unbalanced nature of the PZ cohort, due to a higher natural incidence of Gleason 4 containing tumors. However, this was addressed by the use of SMOTE. Although the TZ cohort was balanced, a larger cohort would be required to confirm the performance of classifiers, especially for the validation cohort. While the size of the two cohorts was limited, overfitting was avoided by feature selection which reduced the number of input variables, and by regularization which permitted a small percentage of misclassification in the training dataset to produce a less complex model. Finally, although TPM biopsy offers several advantages over transrectal ultrasound-guided biopsy, it may not be as accurate as whole-mount prostatectomy [37].

Further work could therefore consider both prospective large-scale external validation (e.g., at other centers) and the impact these predictive models have on patient outcome.


Conclusion

Machine learning classifiers combining PSAd and quantitative multiparametric MRI parameters outperform experienced radiologist opinion for the prediction of Gleason pattern 4 in prostate cancer. These classifiers could therefore harbor great potential when making management decisions in the prostate cancer pathway and would be particularly useful to inform decisions regarding patients on active surveillance programs.