Sepsis is a dangerous clinical condition that occurs when the body over-reacts to an infection, and its mortality is closely related to its severity: the more severe the sepsis, the higher the risk for the patient.

Predicting the severity of a sepsis episode and whether a patient will survive it are urgent tasks, because of the riskiness of this condition. The most severe form of sepsis is called septic shock. Septic shock requires the prompt use of vasopressors, and must be treated immediately to improve the survival chances of the patient [1].

In addition to sepsis severity and survival prediction, another important task for doctors and physicians is to anticipate the degree of organ failure that the patient will experience as a consequence of the sepsis episode. To assess the level of organ failure happening in the body, the biomedical community takes advantage of the sequential organ failure assessment (SOFA) score [1], which is based upon six different sub-scores, one for each of six systems (respiratory, cardiovascular, hepatic, coagulation, renal, and neurological) [1].

In this context, machine learning and artificial intelligence applied to electronic health records (EHRs) of patients diagnosed with sepsis can provide cheap, fast, non-invasive, and effective methods able to predict the aforementioned targets (septic shock, survival, and SOFA score), and to detect the most predictive symptoms and risk factors among the features available in the electronic health records. Scientists have, in fact, already taken advantage of machine learning for survival or diagnosis prediction and for clinical feature ranking several times in the past [2], for example to analyze datasets of patients with heart failure [3, 4], mesothelioma [5], neuroblastoma [6–8], and breast cancer [9].

Several researchers have applied computational intelligence algorithms to the medical records of patients diagnosed with sepsis, too, especially for clinical decision-making purposes.

Gultepe and colleagues [10] applied machine learning to the EHRs of 741 adults diagnosed with sepsis at the University of California Davis Health System (California, USA) to predict the lactate levels and mortality risk of the patients. Tsoukalas et al. [11] employed several pattern recognition algorithms to analyze the medical record data of 1,492 patients diagnosed with sepsis at the same health centre; their data-derived antibiotic administration policies improved the conditions of the patients. Taylor and colleagues [12] analyzed the medical records of a cohort of approximately 260 thousand individuals from three hospitals in the USA. They used machine learning to predict the in-hospital mortality of patients diagnosed with sepsis, and to show the superior results of machine learning over traditional univariate biostatistics techniques. Horng et al. [13] applied computational intelligence techniques to the medical records of 230,936 patient visits containing heterogeneous data: free text, vital signs, and demographic information. The dataset was collected at the Beth Israel Deaconess Medical Center (BIDMC) of Boston (Massachusetts, USA). Shimabukuro and colleagues [14] employed machine learning techniques on the clinical records of 142 patients with severe sepsis from the University of California San Francisco Medical Center (California, USA) to predict the in-hospital length of stay and mortality rate. Burdick et al. [15] used several computational intelligence methods on the sepsis-related medical records of 2,296 patients provided by Cabell Huntington Hospital (Huntington, West Virginia, USA); their goal was to predict the patients’ mortality and in-hospital length of stay. Calvert and colleagues [16] merged together several datasets of clinical records of sepsis-related patients to create a large cohort of approximately 500 thousand individuals, and then used machine learning to forecast which high-risk patients are likely to have a sepsis episode. Barton et al. [17], lastly, re-analyzed two datasets previously exploited [13, 14] to predict sepsis up to 48 hours in advance.

Scientists have employed machine learning for the prediction of sepsis in infants in the neonatal intensive care unit (NICU) as well. In 2014, Mani and colleagues [18] applied nine machine learning methods to the records of 299 infants admitted to the neonatal intensive care unit of the Monroe Carell Junior Children’s Hospital at Vanderbilt (Nashville, Tennessee, USA). Barton et al. [19] took advantage of data mining classifiers to analyze the EHRs of 11,127 neonatal patients collected at the University of California San Francisco Medical Center (California, USA). More recently, Masino and his team [20] applied computational intelligence classifiers to the data of infants admitted to the neonatal intensive care unit of the Children’s Hospital of Philadelphia (Pennsylvania, USA).

To recap, four studies applied machine learning to minimal electronic health records to diagnose sepsis or predict the survival of patients [10, 11, 21, 22], while six other studies applied it to complete electronic health records for the same goals [12–14, 16, 17, 19]. The study of Burdick and colleagues [15] even reported an observed decrease in mortality at the hospital where the computational intelligence methods were applied to recognize sepsis. Only two articles, additionally, added a feature ranking phase to the binary classification: Mani et al. [18] identified hematocrit or packed cell volume, chorioamnionitis, and respiratory rate as the most predictive variables, while Masino and coauthors [20] highlighted central venous line, mean arterial pressure, respiratory rate difference, and systolic blood pressure.

Our present study fits in the latter category: we use several machine learning methods not only to predict survival, SOFA score, and septic shock, but also to detect the most relevant and predictive variables in the electronic health records. Moreover, we also perform a feature ranking through traditional biostatistics rates, and compare the results obtained with these two different approaches. Additionally, unlike all the studies mentioned earlier, we do not focus only on predicting survival and diagnosing sepsis: we also make computational predictions of the SOFA score, that is, of how much and how many organs will fail because of the septic episode.

Regarding scientific challenges and competitions, in 2019 PhysioNet [23, 24], an online platform for physiologic data sharing, launched an online scientific challenge for the prediction of early sepsis in medical records [25].

On the business side, the San Francisco bay area startup company Dascena Inc. recently released InSight, a machine learning tool able to computationally predict sepsis in EHR data [26]. Desautels et al. [21] applied InSight to predict sepsis in the medical records of the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset [27].

In the present study, we analyzed a dataset of electronic health records of patients diagnosed with sepsis [28]: each patient profile has 29 clinical features, including a binary value for survival, a binary value for septic shock, and a numerical value for the sequential organ failure assessment (SOFA) score. We separately used each of these three features as an independent target, and employed several machine learning classifiers to predict it with high accuracy and precision. Afterwards, we employed machine learning to detect the most important features of the dataset for the three targets separately, and compared its results with the results obtained through traditional univariate biostatistics techniques.


The original dataset contains electronic health records (EHRs) of 29 features for 364 patients, and was first analyzed by Yunus and colleagues to investigate the role of procalcitonin in sepsis [29]. These 364 patients with a sepsis diagnosis entered the general medical ward and intensive care unit between September 2014 and December 2016 at the Methodist Medical Center and Proctor Hospital (today called UnityPoint Health – Methodist ∣ Proctor) in Peoria, Illinois, USA [29]. The group of patients includes 189 men and 175 women, aged 20–86 years [29, 30].

Each patient stayed at the hospital for a period between 1 and 48 days, and her/his dataset profile represents the corresponding clinical record at the moment of discharge or death. Since the maximum observation window was 48 days, we consider our binary predictions in reference to the same time frame.

The dataset collectors defined septic shock “as a condition that requires the use of vasopressors in order to maintain a mean arterial pressure (MAP) of 65 mm Hg or above, and a persistent lactate greater than 2 mmol/L in spite of adequate fluid resuscitation” [29, 30].

We report the quantitative characteristics of the dataset (number and percentage of individuals for each binary feature condition; median and mean of each numeric or category feature) in Table 1, and the interpretation details (meaning, measurement unit, and value range in the dataset) in Table 2. More information about the analyzed dataset can be found in the original dataset curators’ publications [29, 30].

Table 1 Statistical quantitative description of the features. Binary and category features on the left, and numeric features on the right
Table 2 Dataset feature description. Meanings, measurement units, and intervals of each feature of the dataset

We derived the survival feature from the outcome feature of the original dataset (Supplementary information) [31]. The extent of the infection feature can have 3 values that represent bacteremia, focal infection, or both. The urine output 24 hours feature can have 3 values that represent >500 mL, [200,500] mL, or <200 mL.
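Before being fed to the classifiers, the two multi-valued categorical features need an integer encoding. The Python sketch below (our actual pipeline was written in R; the string labels and integer codes here are illustrative assumptions, not the dataset's original encoding) shows one possible ordinal mapping:

```python
# Ordinal encoding for the two 3-valued categorical features.
# Labels and integer codes are illustrative assumptions.

EXTENT_OF_INFECTION = {"bacteremia": 0, "focal infection": 1, "both": 2}

# Urine output over 24 hours: lower output indicates worse renal function,
# so the three bands are encoded in decreasing order of output.
URINE_OUTPUT_24H = {">500 mL": 0, "200-500 mL": 1, "<200 mL": 2}

def encode_patient(extent: str, urine: str) -> tuple:
    """Map the two categorical features of one patient to integer codes."""
    return (EXTENT_OF_INFECTION[extent], URINE_OUTPUT_24H[urine])

print(encode_patient("both", "<200 mL"))  # (2, 2)
```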

Regarding the dataset imbalance, considering septic shock as the target, there are 297 individuals without septic shock (having value 0 for the vasopressors feature), corresponding to 81.59% of the total size, and 67 individuals with septic shock (having value 1 for the vasopressors feature), corresponding to 18.41% of the total size.

When we consider the survival as target, instead we observe 48 deceased patients (class 0, corresponding to 13.19% of all the individuals), and observe 316 survived patients (class 1, corresponding to 86.81% of all the individuals).

The dataset with septic shock as target is therefore negatively imbalanced, while the dataset with survival as target is positively imbalanced.
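The percentages above follow directly from the raw counts over the 364 patients; a quick arithmetic sanity check:

```python
# Class-imbalance percentages for the two binary targets, computed
# from the raw counts reported in the text (364 patients in total).
def class_percentage(n_class: int, n_total: int) -> float:
    return round(100.0 * n_class / n_total, 2)

N = 364
no_shock, shock = 297, 67        # septic shock target (classes 0 and 1)
deceased, survived = 48, 316     # survival target (classes 0 and 1)

print(class_percentage(no_shock, N))   # 81.59
print(class_percentage(shock, N))      # 18.41
print(class_percentage(deceased, N))   # 13.19
print(class_percentage(survived, N))   # 86.81
```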


We implemented our computational pipeline in the free, open-source R programming language, using common machine learning packages (randomForest, caret, e1071, keras, ROSE, DMwR, mltools, DescTools). We also released all our code scripts publicly online (“Availability of data and materials” section).

As described in subsection S7.2, we can recap the computational pipeline of the analysis with the following steps:

  1. construction of the dataset (“Dataset” section);

  2. definition of the three tasks:

     a. the binary classification problem of predicting septic shock (vasopressors);

     b. the regression problem of predicting the SOFA score;

     c. the binary classification problem of predicting survival;

     based on a subset of the available variables selected as input variables (Table 2);

  3. for each of these three tasks (septic shock, survival, and SOFA score) and for each of the algorithms (DT, RF, SVM (linear), SVM (kernel), NN, NB, k-NN, LR, and DL, noting that NB and LR can be used only for classification problems), we built a model using the MS strategy (“Methods” section), where we set the number of folds k=10. During the MS we searched the hyper-parameters over the following ranges:

     a. DT: \(\mathcal {H} = \{ d \} \in \{ 2, \, 4, \, 6, \, 8, \, 10, \, 12, \, 14 \}\);

     b. RF: we set nt=1000, since increasing it does not increase the accuracy;

     c. SVM (linear): \(\mathcal {H} = \{ C \} \in \mathcal {R}\);

     d. NB: we used a kernel density estimate, no Laplace correction, and no adjustment (R library caret, nb algorithm);

     e. k-NN: \(\mathcal {H} = \{ k \} \in \{ 1, \, 3, \, 5, \, 11 \}\);

     f. LR: \(\mathcal {H} = \{ \lambda \} \in \mathcal {R}\);

     g. DL: \(\mathcal {H} = \{ l_{1}, \, l_{2}, \, l_{3}, \, wd \} \in \{ 2, \, 4, \, 8, \, 16, \, 32 \} \times \{ 2, \, 4, \, 8, \, 16, \, 32 \} \times \{ 2, \, 4, \, 8, \, 16, \, 32 \} \times \{ 0.001, \, 0.01, \, 0.1, \, 1 \}\);

     h. SVM (kernel): \(\mathcal {H} = \{ C, \, \gamma \} \in \mathcal {R} \times \mathcal {R}\);

     i. NN: \(\mathcal {H} = \{ n_{h}, \, p_{d}, \, p_{b}, \, r_{l}, \, \rho, \, r_{d} \} \in \{ 5, \, 10, \, 20, \, 40, \, 80, \, 160 \} \times \{ 0, \, 0.001, \, 0.01, \, 0.1 \} \times \{ 0.1, \, 1 \} \times \{ 0.001, \, 0.01, \, 0.1, \, 1 \} \times \{ 0.9, \, 0.09 \} \times \{ 0.001, \, 0.01, \, 0.1, \, 1 \}\), with the rectified linear unit (ReLU) [32] as activation function;

     where \(\mathcal {R} = \{ 0.0001, \, 0.0005, \, 0.001, \, 0.005, \, 0.01, \, 0.05, \, 0.1, \, 0.5, \, 1, \, 5, \, 10, \, 50 \}\);

  4. for each of the constructed models, we reported the results using the EE strategy and the previously introduced metrics (“Methods” section), together with the standard deviation, where we set nr=100;

  5. for each of the tasks, we reported the ranking of the features selected by the two feature ranking procedures (MDI and MDA, “Methods” section), together with the mode of the ranking position, where we set pFR=0.7 and nFR=100, and aggregated the rankings through Borda’s method [33].

We report and discuss the results in the next sections.


In this section we show the results of applying the classification and regression methods (“Methods” section) on the described dataset (“Dataset” section).

Target predictions

In this section, we describe the results obtained for the binary prediction of septic shock, for the SOFA score regression estimation, and for the binary prediction of survival in the ICU. For the two binary classifications (septic shock prediction and survival prediction), we used τ=0.5 as the cut-off threshold for the confusion matrices. We chose this value because it corresponds to the value 0 of the Matthews correlation coefficient (MCC) [34], which means the prediction is no better than random guessing.

We focused on and ordered our results by the MCC, because this rate provides a high score only if the classifier correctly predicted both the majority of the positive data instances and the majority of the negative data instances, regardless of the dataset imbalance [35, 36].
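The MCC can be computed directly from the four confusion-matrix counts; a minimal sketch in Python (the counts below are made up for illustration, not taken from our tables):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from confusion-matrix counts.
    Ranges over [-1, +1]; 0 means no better than random prediction."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0

# A classifier that gets most of both classes right scores high...
print(mcc(tp=60, tn=280, fp=17, fn=7))
# ...while one that answers at random scores zero.
print(mcc(tp=5, tn=5, fp=5, fn=5))  # 0.0
```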

In the interest of providing fuller information, we also reported the values of ROC AUCs [37] and PR AUCs [38], which are computed considering all the possible confusion matrix thresholds.

Septic shock prediction

We report the performance of the learned models for the septic shock (vasopressors) prediction with the different methods evaluated with the different metrics in Table 3, ranked by the MCC.

Table 3 Septic shock (vasopressors) prediction

Our methods obtained high prediction results overall and showed the ability of machine learning to predict septic shock (positive data instances), but a low ability to identify patients without septic shock (negative data instances). In particular, Random Forests and the Multi-Layer Perceptron Neural Network outperformed the other methods (Table 3), achieving average MCCs equal to +0.32 and +0.31, respectively. All the classifiers obtained high scores for the true positive rate, accuracy, and F1 score, but achieved low scores on the true negative rate (Table 3). Decision Tree, kernel SVM, Logistic Regression, Deep Learning, and Naive Bayes were the only methods able to correctly classify at least half of the negative instances, achieving an average specificity equal to 0.50.

Regarding the ROC AUC, it is interesting to notice that the standard deviations of all the methods are high (from 0.20 to 0.31, Table 3).

To check the predictive efficiency of the algorithms in making positive calls, we reported the positive predictive value (PPV, or precision). From a clinical perspective, the PPV represents the likelihood that patients with a positive screening test truly have the septic shock [39]. The PPV results show that Random Forests achieved the top performance among the methods tried, but was unable to correctly make the majority of the positive calls (PPV=0.47 in Table 3). This result means that, for each patient predicted to have septic shock, we cannot be sure that she/he will actually have a septic shock: there is an average top probability of 47% that she/he might have it, which leaves large room for uncertainty.

From a clinical perspective, the negative predictive value (NPV) represents the probability that a patient who got a negative screening test will truly not suffer from a septic shock [39]. Regarding this ratio of correct negative predictions, all the methods achieved good results, with Logistic Regression outperforming the other ones (NPV=0.90 in Table 3). This result means that, for each patient predicted not to have septic shock, we can be 90% confident that he/she will not have a septic shock, which leaves small room for uncertainty.
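Both rates come straight from the confusion matrix: PPV = TP/(TP+FP) and NPV = TN/(TN+FN). A short sketch (the counts below are illustrative numbers chosen to reproduce the best Table 3 values, not our actual confusion matrices):

```python
def ppv(tp: int, fp: int) -> float:
    """Positive predictive value (precision):
    P(truly positive | predicted positive)."""
    return tp / (tp + fp)

def npv(tn: int, fn: int) -> float:
    """Negative predictive value:
    P(truly negative | predicted negative)."""
    return tn / (tn + fn)

# Illustrative counts yielding PPV ~0.47 and NPV ~0.90, the best
# values reported in Table 3 (not the actual confusion matrices).
print(round(ppv(tp=47, fp=53), 2))   # 0.47
print(round(npv(tn=225, fn=25), 2))  # 0.9
```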

SOFA score prediction

We report the performance of the learned models for the SOFA score prediction with the different methods, evaluated with the different metrics, in Table 4, ordered by the coefficient of determination R2, and the scatterplot of the actual and predicted SOFA score values of the ne test sets in Fig. 1. We used R2 for sorting the methods because this rate incorporates the SOFA score distribution.
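The three regression rates used for Table 4 can be sketched in a few lines of Python (toy actual/predicted values for illustration only):

```python
import math

def mae(y, p):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def rmse(y, p):
    """Root mean squared error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def r2(y, p):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, p))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

actual, predicted = [1.0, 2.0, 3.0], [1.0, 2.0, 4.0]  # toy SOFA-like scores
print(round(r2(actual, predicted), 2))    # 0.5
print(round(mae(actual, predicted), 2))   # 0.33
print(round(rmse(actual, predicted), 2))  # 0.58
```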

Fig. 1
figure 1

SOFA score regression. Scatterplot of the actual and predicted value of the ne test sets for the SVM (linear) method

Table 4 SOFA score prediction

Our results show that machine learning can predict the SOFA score with low error rates (Table 4). Differently from the septic shock prediction, here the Deep Learning model resulted in being the top regressor, outperforming the other methods on R2 and MAE. The linear SVM, the Multi-Layer Perceptron, the kernel SVM, and Random Forests obtained similar results, close to those of the top method for this task. It is interesting to notice that the linear SVM resulted in being the top method when its predictions were measured through RMSE and MSE, but not in the other cases.

Survival prediction

We report the performance of the learned models for the survival prediction with the different methods evaluated with the different metrics in Table 5, ranked by the MCC.

Table 5 Survival prediction

Our results show that it is possible to use machine learning to predict the survival of sepsis patients with high accuracy (Table 5). In this case, the MLP neural network outperformed the other classifiers by obtaining higher scores for MCC, F1 score, and true positive rate. All the methods obtained high true negative rates, but only the MLP neural network and Random Forests were able to predict most of the positive data instances, obtaining average sensitivities equal to 0.75 and 0.58, respectively.

Regarding correct positive predictions (PPV), all the methods were able to correctly make positive predictions (Table 5), while they obtained low results for the ratio of correct negative predictions (NPV).

Contrary to what happened for the septic shock (“Septic shock prediction” section), here we can be confident that the patients predicted to survive will actually survive (top PPV=0.94 for MLP). However, the low NPV values indicate that the probability that a patient predicted as “non-survival” actually deceases is just 0.31 on average for the best method (Naive Bayes), making our predictions less trustworthy in this case.

Feature rankings

In this section, we present the feature ranking results for the three targets (septic shock, SOFA score, and survival), obtained through Random Forests and through traditional univariate biostatistics approaches.

For complete information, we also reported the feature rankings measured through Random Forests as barcharts in the Supplementary information (Figure S3, Figure S4, and Figure S2).
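The two Random Forests rankings correspond to the mean decrease in impurity (MDI) and the mean decrease in accuracy (MDA, i.e. permutation importance). Our pipeline used R's randomForest package; the scikit-learn sketch below, on synthetic stand-in data, illustrates the two procedures:

```python
# Sketch of the two Random Forests feature-ranking procedures:
# MDI (mean decrease in impurity) and MDA (permutation importance).
# Synthetic stand-in data; our pipeline used R's randomForest, and
# fewer trees are grown here than the paper's nt=1000, for speed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=364, n_features=10,
                           n_informative=4, random_state=42)
forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

mdi = forest.feature_importances_                    # MDI scores
mda = permutation_importance(forest, X, y,           # MDA scores
                             n_repeats=10, random_state=42).importances_mean

print(np.argsort(mdi)[::-1][:3])  # indices of the top-3 MDI features
print(np.argsort(mda)[::-1][:3])  # indices of the top-3 MDA features
```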

Septic shock feature ranking

We reported the feature ranking for the septic shock obtained by the two feature selections performed through Random Forests (Methods) in Table 6, and the feature rankings obtained through traditional biostatistics coefficients (Pearson correlation coefficient, Student’s t-test, p-values) in Table 7.

Table 6 Septic shock (vasopressors) feature ranking – Random Forests
Table 7 Septic shock (vasopressors) features ranking – biostatistics analysis

Random Forests identified creatinine, Glasgow coma scale, mean arterial pressure, and initial procalcitonin as the most important features to identify septic shock (Table 6); these features reached top positions also in the traditional univariate biostatistics rankings (Table 7). The Student’s t-test and the p-values identified age as the most important feature, which instead obtained the 10th position in the Pearson correlation coefficient ranking (Table 7) and the 14th position in the Random Forests ranking (Table 6).

Overall, with the significant exception of age, the Random Forests ranking and the traditional univariate biostatistics rankings showed similar positions for the feature importance, also confirming the relevance of the Glasgow coma scale value and of the blood creatinine levels to recognize patients having septic shock.

SOFA score feature ranking

We reported the feature ranking for SOFA score obtained by the two feature selections performed through Random Forests (Methods) in Table 8, and the feature rankings obtained through traditional biostatistics coefficients (Pearson correlation coefficient, Student’s t-test, p-values) in Table 9.

Table 8 SOFA score feature ranking – Random Forests
Table 9 SOFA score features ranking – biostatistics analysis

Random Forests selected Glasgow coma scale, creatinine, and platelets as the most important features for the SOFA score (Table 8). While all the biostatistics rates recognized the Glasgow coma scale and platelets as relevant features too (Table 9), the Student’s t-test and the p-values ranked creatinine only as the 22nd most important feature.

Similarly to the septic shock case, the biostatistics techniques ranked age as a top feature, while Random Forests put it in the 11th position of its ranking. All the other features obtained similar positions in all the rankings.

Survival feature ranking

We reported the feature ranking for survival obtained by the two feature selections performed through Random Forests (Methods) in Table 10, and the feature rankings obtained through traditional biostatistics coefficients (Pearson correlation coefficient, Student’s t-test, p-values) in Table 11.

Table 10 Survival feature ranking – Random Forests
Table 11 Survival features ranking – biostatistics analysis

The feature ranking results obtained for the survival target generated the largest divergence between Random Forests and the traditional biostatistics methods among the three target feature rankings.

Random Forests identified platelets as the most important feature (Table 10), which also reached a top position in the Pearson correlation coefficient ranking, but not in the rankings of the Student’s t-test and of the p-values (Table 11). Random Forests then selected creatinine and respiration (PaO2) as the next most relevant features for survival, but these features were ranked in low positions by the traditional biostatistics techniques (Table 11).

Another difference regarded chronic kidney disease (CKD) without dialysis. While the Student’s t-test, the p-values, and the PCC ranked this feature in mid-high positions (7th, 7th, and 11th position, respectively) (Table 11), Random Forests considered CKD without dialysis the second least important feature (Table 10).

All the ranking methods, in this case, ranked age as a top feature.


Our results showed that machine learning can be employed efficiently to predict septic shock, SOFA score, and survival of patients diagnosed with sepsis from their electronic health records. In particular, Random Forests resulted in being the top method in correctly classifying septic shock patients, even if no method achieved good prediction performance in correctly identifying patients without septic shock (“Septic shock prediction” section). The Deep Learning model outperformed the other methods in the SOFA score regression (“SOFA score prediction” section). Regarding the survival prediction, the Multi-Layer Perceptron Neural Network achieved the top prediction score among all the classifiers (“Survival prediction” section).

This difference in the top performing methods might be due to the different kinds and different ratios of the dataset targets (negative imbalance for septic shock, regression for the SOFA score, and positive imbalance for survival, “Dataset” section), and to the different data processing performed by each algorithm.

Regarding feature ranking, Random Forests feature selection identified several unexpected symptoms and clinical components as relevant for septic shock, SOFA score, and survival.

For septic shock, Random Forests selected creatinine as a top feature, differently from the traditional univariate biostatistics approaches (“Septic shock feature ranking” section). Recent scientific discoveries confirm this trend: the level of creatinine in the blood is often used as a biomarker for sepsis [42], especially in presence of a serious kidney injury [43].

Random Forests also ranked initial procalcitonin (PCT) as a top feature, confirming the relationship between this protein and septic shock found by Yunus and colleagues [29].

Regarding the SOFA score prediction, the ranking positions of the Random Forests feature selection resulted in being consistent with those of the traditional univariate biostatistics analysis. Also in this case, Random Forests ranked initial procalcitonin (PCT) as a mid-top feature, confirming the weak positive relationship between this protein and the SOFA score found by Yunus and colleagues [29].

On the contrary, Random Forests labeled as important several features that were not ranked in top positions by the Student’s t-test, p-values, and Pearson correlation coefficient rankings. Unlike the univariate biostatistics analysis, in fact, Random Forests identified creatinine and respiration (PaO2) as top components in the classification of survived versus deceased sepsis patients. Kang et al. [44] recently confirmed the strong association between serum creatinine level and mortality. Regarding respiration (PaO2), Santana and colleagues [45] recently showed how the SaO2/FiO2 ratio (a rate strongly correlated with the PaO2/FiO2 ratio) is associated with mortality from sepsis. This aspect suggests the need for additional studies and analyses in this direction.

Additionally, the Random Forests feature ranking differed from the biostatistics rankings in the last ranking positions. Random Forests, in fact, considered having chronic kidney disease (CKD) without dialysis a scarcely important component for survival, while the traditional biostatistics rates ranked that element in top positions. Maizel and colleagues [46] confirmed our finding in 2013 by stating: “Non-dialysis CKD appears to be an independent risk factor for death after septic shock in ICU patients” [46].


Sepsis is still a widespread lethal condition nowadays, and the identification of its severity can require a lot of effort. In this context, machine learning can provide effective tools to speed up the prediction of an upcoming septic shock, the prediction of the sequential organ failure, and the prediction of survival or mortality of the patient by processing large datasets in a few minutes.

In this manuscript, we presented a computational system for the prediction of these three aspects, the feature ranking of their clinical features, and the interpretation of the results we obtained. Our system consists of classifiers able to read the electronic health records of patients diagnosed with sepsis, and to computationally predict the three targets for each of them (septic shock, SOFA score, and survival) in a few minutes. Additionally, our computational intelligence system can identify the most important input features of the electronic health records for each of the three targets, again in a few minutes. We then compared the feature ranking results obtained through machine learning with the feature rankings obtained with traditional univariate biostatistics coefficients. The machine learning feature rankings highlighted the importance of some features that traditional biostatistics failed to underline. We found confirmation of the importance of these factors in the biomedical literature, which suggests the need for additional investigation on these aspects in the future.

Our discoveries can have strong implications on biomedical research and clinical practice.

First, medical doctors and clinicians can take advantage of our methods to predict survival, septic shock, and SOFA scores from any available electronic health record having the same variables as the dataset used in this study. This prediction can help doctors understand the survival chances and the septic shock risk of each patient, and how many organs are at risk of failing because of the septic episode. Doctors could use this information to decide the following steps of the therapy.

Additionally, the results of the machine learning feature ranking suggest additional, more thorough investigations on some factors of the electronic health records that would have gone unnoticed otherwise: creatinine for septic shock, procalcitonin for SOFA score, and respiration (PaO2) for survival. We believe these discoveries could orient the scientific debate regarding sepsis, and suggest that medical doctors pay more attention to these three variables in the clinical records.

Regarding limitations, we have to report that our machine learning classifiers were unable to efficiently predict the patients without septic shock in the dataset, and therefore obtained low true negative rates. We believe this drawback is due to the imbalance of the dataset, which contains only 18.41% positive data instances (patients with septic shock) and 81.59% negative data instances (patients without septic shock). In the future, we aim at exploring several over-sampling techniques to deal with this data imbalance problem [47].
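A common way to deal with such an imbalance is to over-sample the minority class before training; the R packages ROSE and DMwR used in our pipeline provide the ROSE and SMOTE techniques for this, which synthesize new minority samples. A minimal random-oversampling sketch in Python, shown here only to illustrate the idea of rebalancing the classes:

```python
import random

def random_oversample(X, y, minority_label, seed=42):
    """Duplicate random minority-class samples until the two classes
    have the same size (simple random oversampling; ROSE and SMOTE
    are more sophisticated variants that synthesize new samples)."""
    rng = random.Random(seed)
    minority = [(x, t) for x, t in zip(X, y) if t == minority_label]
    majority = [(x, t) for x, t in zip(X, y) if t != minority_label]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    balanced = majority + minority + extra
    rng.shuffle(balanced)
    Xb, yb = zip(*balanced)
    return list(Xb), list(yb)

# Toy example mimicking the septic shock imbalance (67 positives, 297 negatives)
X = [[i] for i in range(364)]
y = [1] * 67 + [0] * 297
Xb, yb = random_oversample(X, y, minority_label=1)
print(yb.count(0), yb.count(1))  # 297 297
```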

Another limitation of our study was the employment of a single dataset: having an alternative dataset where to confirm our findings would make our results more robust. We looked for alternative datasets with the same clinical features to use as validation cohorts, but unfortunately could not find them. Because of this issue and of the small size of our dataset (364 patients), we cannot confirm that our approach is generalizable to other cohorts.

In the future, we plan to employ alternative methods for feature ranking, to compare their results with the results we obtained through Random Forests. We also plan to employ similarity measures to analyze the semantic similarity between patients [48].