Background

Early warning scores (EWS) are simple tools that help detect clinical deterioration and thereby improve patient safety in hospitals. EWS are often implemented as part of a wider early warning system, also known as a “rapid response system”, whereby detection of a likely deterioration triggers an alert or a pre-planned escalation of care by healthcare providers [1, 2]. EWS take objective parameters such as vital signs and laboratory results as input, sometimes supplemented by subjective parameters (e.g. “nurses’ worry”) [3], and output an integer score. A higher score generally indicates a higher likelihood of clinical deterioration, but is not a direct estimate of risk.
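
To make this input-to-score mapping concrete, the sketch below scores a single observation set of vital signs against made-up thresholds; the thresholds and weights are purely illustrative and do not correspond to MEWS, NEWS or any other published EWS.

```python
def toy_ews(respiratory_rate: float, heart_rate: float, systolic_bp: float) -> int:
    """Map one observation set of vital signs to an integer score.

    The thresholds and weights are illustrative only, not those of any
    published EWS.
    """
    score = 0
    if respiratory_rate >= 25 or respiratory_rate <= 8:
        score += 3
    elif respiratory_rate >= 21:
        score += 2
    if heart_rate >= 130 or heart_rate <= 40:
        score += 3
    elif heart_rate >= 110:
        score += 2
    if systolic_bp <= 90:
        score += 3
    elif systolic_bp <= 100:
        score += 1
    return score  # a higher score suggests greater likelihood of deterioration, not a risk estimate

# Example: a tachypnoeic, mildly tachycardic, hypotensive patient scores 3 + 2 + 3 = 8.
print(toy_ews(respiratory_rate=26, heart_rate=115, systolic_bp=88))
```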

The first EWS was published in 1997 [4], and the concept gradually gained traction, with the National Institute for Health and Clinical Excellence (NICE) recommending in a 2007 guideline that early warning systems be used to monitor all adult patients in acute hospital settings [5]. Many different EWS are now available and routinely used in hospitals worldwide, including in the USA, the UK, the Netherlands, Denmark and South Korea [6,7,8]. Recent advances in machine learning (ML) have also opened up a new paradigm of EWS development, with techniques such as random forests and deep neural networks giving rise to arguably better-performing EWS [9,10,11].

As EWS have an impact on patient care, it is critical that they are rigorously validated [12]. Several systematic reviews have already examined the performance of various EWS [6,7,8]. Notably, these reviews drew conflicting conclusions: Smith ME et al. concluded that EWS perform well, while Gao et al. and Smith GB et al. found their performance inconsistent and unacceptable. A possible reason for this disagreement is a lack of consistency in the methods used to validate EWS [8].

The contrast between a study by researchers from Google and a study validating the National Early Warning Score (NEWS) illustrates how validation methods can differ [13, 14]. Although both teams validated the ability of their respective EWS to predict inpatient mortality, the former evaluated its EWS used once, at the start of the admission (AUROC 0.93–0.94), while the latter evaluated its EWS using the score computed for every observation set of vital signs recorded over the entire admission (AUROC 0.89–0.90). In terms of case definition, we consider the former to have used the “patient episode” definition and the latter the “observation set” definition. Case definition is only one of several differences in validation methods.

The aim of this review is to examine the different methodologies and performance metrics used in the validation of EWS, so that readers pay attention to specific aspects of the validation when comparing EWS performance across published studies. To our knowledge, no similar work has been done before. It is not this review’s intention to identify better-performing EWS or to assess study quality or risk of bias.

Methods

Search strategy

We used PubMed to search the MEDLINE database from inception to 22 Feb 2019 for studies of EWS in adult populations. We used the keywords “early warning score”, “predict” and “discriminate”, and excluded “paediatrics”, “children” and “systematic review”. For completeness, we also included relevant publications that were not returned by the PubMed search, identified by reviewing the studies cited in other EWS review papers [6,7,8] and by consulting with experts.
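
As an illustration only, the keywords above could be combined into a single PubMed query along the following lines; this rendering is our reconstruction of the search terms described, not the exact query string that was executed.

```python
# A hypothetical reconstruction of the described PubMed search terms.
# Illustrative only; it may not match the exact query used in the review.
query = (
    '"early warning score" AND (predict OR discriminate) '
    'NOT (paediatrics OR children OR "systematic review")'
)
print(query)
```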

To assess the validation of EWS, we included only articles that performed validation of at least one EWS and reported associations between EWS scores and inpatient mortality, intensive care unit (ICU) transfers, or cardiac arrest (CA). Systematic review papers were excluded as they lacked granularity in their description of data handling and statistical analysis. We also excluded prospective validation studies in which the EWS was already in operation and influencing care decisions and patient outcomes. In these cases, the validation did not purely evaluate the discriminative ability of the EWS, but also reflected other factors such as staff compliance and the availability of rapid response resources.

Investigators then reviewed titles and abstracts of citations identified through literature searches, and eligible articles were selected for full-text review and data abstraction.

Data abstraction

Pre-defined data for abstraction were largely based on the TRIPOD Checklist for Prediction Model Validation [15], with additional elements that the study team felt were pertinent to EWS.

A full-text review of each eligible study was performed by two investigators independently. Data for abstraction included the specific EWS validated, the validation dataset used, number of subjects, population characteristics, outcome of interest (inpatient mortality, ICU transfer, cardiac arrest), method of validation (case definition, time of EWS use, type of aggregation for methods with multiple scores), method of handling missing values, and reported metrics. Discrepancies in the abstracted data were resolved by the two investigators re-reviewing the paper together to reach a consensus.
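
To make the abstraction fields concrete, one record of the abstraction table could be structured roughly as sketched below; the field names and example values are our own shorthand for the items listed above, not the actual abstraction form.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AbstractedStudy:
    """One row of the data abstraction table (field names are our own shorthand)."""
    ews_validated: List[str]               # e.g. ["NEWS", "MEWS"]
    validation_dataset: str                # internal or external, and its source
    n_subjects: int
    population: str                        # e.g. "general admissions", "obstetrics"
    outcomes: List[str]                    # any of "inpatient mortality", "ICU transfer", "cardiac arrest"
    case_definition: str                   # "patient episode" or "observation set"
    time_of_ews_use: Optional[str] = None  # "single" or "multiple" (patient episode definition only)
    aggregation: Optional[str] = None      # e.g. "maximum score" when multiple scores are used
    missing_value_handling: str = "not stated"
    reported_metrics: List[str] = field(default_factory=list)  # e.g. ["AUROC", "sensitivity"]
```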

Results

The PubMed search yielded a list of 125 study abstracts. From reviewing these abstracts, 47 studies were selected for full-text review (Fig. 1). Of these, a further 12 studies were excluded: 11 because the full article could not be accessed and 1 because the full article was in Korean with only the abstract available in English. We included 13 additional relevant studies identified from review papers and from consulting with experts. In total, 48 studies were included in the final review [3, 9, 11, 14, 16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59].

Fig. 1 Flow chart describing inclusion of articles for full-text review from the search result list

A summary of the selected studies and the abstracted data is provided in Table 1 (see Additional file 1). Eight of the 48 studies were published in 2018 or later. The majority of the study populations were from the UK (23) and the USA (10), with 5 from South Korea and one each from Canada, China, Denmark, Hong Kong, Israel, Italy, the Netherlands, Singapore, Sweden and Turkey.

Altogether, 54 unique EWS were reviewed across the studies, excluding the 33 other EWS assessed in the 2013 study by Smith [14] and the 44 MET criteria assessed in the 2016 study by Smith [27]. The most frequently reviewed EWS were the Modified Early Warning Score (MEWS) and the National Early Warning Score (NEWS), included in 22 and 16 studies respectively.

Validation dataset

16 of the studies performed an internal validation, in which a proportion of the study dataset was used to develop the EWS (training set) and the remainder was used for validation (validation set) [9, 11, 17, 18, 20, 22, 24, 28, 32, 34, 35, 38, 39, 41, 44, 46, 57]. The proportion of data used for the validation set varied, ranging from 25.0 to 100%. The other studies performed an external validation using a study population different from that used to develop the EWS.
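
A minimal sketch of such an internal validation split, assuming a pandas DataFrame with one row per patient episode, is shown below; the function name, column layout and 25% validation fraction are our own illustrative choices.

```python
import pandas as pd

def split_for_internal_validation(episodes: pd.DataFrame,
                                  validation_fraction: float = 0.25,
                                  seed: int = 42):
    """Randomly hold out a fraction of patient episodes for validation.

    The 0.25 default mirrors the lower end of the validation-set proportions
    reported above; the reviewed studies used a range of splitting strategies.
    """
    validation = episodes.sample(frac=validation_fraction, random_state=seed)
    training = episodes.drop(validation.index)
    return training, validation
```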

Study sizes ranged from 43 to 269,999 subjects. Slightly over half (25 of 48) of the studies were performed on general admission cases, with the others focused on populations with specific conditions (e.g. chorioamnionitis [48], community-acquired pneumonia [31, 50, 51]), patients admitted to a specific specialty (e.g. Obstetrics [39], Haematology [28]), or a subset of the general admission population (e.g. those reviewed by a MET [33, 56], those with NEWS ≥1 [21]).

Outcomes of interest

All studies included at least one of the following outcomes: inpatient mortality, ICU transfer or cardiac arrest, either individually or in combination (Fig. 2).

Fig. 2 Summary of studies with various combinations of outcomes

Of the 24 studies that evaluated more than one outcome, 17 validated the EWS against a composite of all the outcomes as the endpoint, while the remainder validated the EWS against each type of outcome individually.

Case definition, time of EWS use and aggregation method

A case was defined in one of two ways, as a patient episode or as an observation set, and this definition affected the subsequent validation steps (Fig. 3). The patient episode definition considered an entire admission as a single case and used either a single recording or multiple recordings of vital signs and other parameters from the admission, while the observation set definition considered each observation set of vital signs and other parameter recordings from the same admission as independent of one another.

Fig. 3 Illustration of how different case definitions affect EWS validation for 2 patients

Under the observation set definition, the recordings from each observation set were used to compute a score, and the EWS was evaluated against whether an outcome occurred within a certain time window from the time that observation set was recorded. Under the patient episode definition, because multiple recordings were available, study teams chose to use either a single score or multiple scores to validate the EWS. This choice reflected how the study teams intended the EWS to be used in practice, so we termed this component “time of EWS use”. Where the time of EWS use was multiple, the series of scores had to be aggregated before the EWS could be evaluated against the outcome of the patient episode.

Among the 48 studies, 34 used the patient episode method while 12 used the observation set method, and 2 did the validation using both methods.

Of the studies that used the patient episode method, 18 used a score from a single point in time to validate the EWS, most often the first recorded observation. Of the 18 studies that used multiple recordings, 13 aggregated the scores for each patient episode by taking the maximum score and compared it with the outcome. Most studies used all recordings from the patient episode, but 2 studies excluded readings taken just prior to the outcome to account for the “predictive” ability of the EWS [28, 44].
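
A sketch of this most common aggregation choice is shown below, assuming a long-format table of scores with columns episode_id, charttime and score, and an episode-level outcome table; the column names, and the optional “blackout” window for excluding readings just before the outcome, are our own illustrative constructions.

```python
from typing import Optional

import pandas as pd

def aggregate_episode_scores(scores: pd.DataFrame,
                             outcomes: pd.DataFrame,
                             blackout: Optional[pd.Timedelta] = None) -> pd.DataFrame:
    """Aggregate observation-level scores to a single score per patient episode.

    Assumed columns: `scores` has episode_id, charttime, score (one row per
    observation set); `outcomes` has episode_id, outcome (0/1) and outcome_time
    (NaT when no event occurred).
    """
    merged = scores.merge(outcomes, on="episode_id", how="left")
    if blackout is not None:
        # Drop readings recorded within the blackout window before the outcome,
        # as two of the reviewed studies did.
        too_close = (merged["outcome"] == 1) & (
            merged["outcome_time"] - merged["charttime"] < blackout
        )
        merged = merged[~too_close]
    return (merged.groupby("episode_id")
                  .agg(max_score=("score", "max"), outcome=("outcome", "max"))
                  .reset_index())
```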

For the studies that used the observation set method, all validated the EWS using scores generated within the 24 hours prior to the outcome, with the exception of one study [11] that used scores from the 30-minute to 24-hour window prior to the outcome.
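
A corresponding sketch for the observation set definition is given below: each observation set is treated as an independent case and labelled positive if the episode’s outcome occurs within a prediction window after the score (24 hours here, with an optional 30-minute gap as in the one exception noted). The column names and function signature are illustrative assumptions.

```python
import pandas as pd

def label_observation_sets(scores: pd.DataFrame,
                           outcomes: pd.DataFrame,
                           window: pd.Timedelta = pd.Timedelta(hours=24),
                           gap: pd.Timedelta = pd.Timedelta(0)) -> pd.DataFrame:
    """Label every observation set as an independent case.

    A score is labelled positive when the episode's outcome occurs between
    `gap` and `window` after the observation set was recorded.
    """
    merged = scores.merge(outcomes, on="episode_id", how="left")
    lead_time = merged["outcome_time"] - merged["charttime"]
    merged["label"] = (
        (merged["outcome"] == 1) & (lead_time >= gap) & (lead_time <= window)
    ).astype(int)
    return merged[["episode_id", "charttime", "score", "label"]]
```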

Handling of missing values

As the validation datasets were obtained from real-world data, missing values were inevitable. In general, missing values were handled in one of two ways: exclusion or imputation. 19 studies excluded missing values, 15 used imputation, 4 used a combination of both methods, and in 10 studies the approach was not stated. A variety of imputation methods were used, most commonly carrying forward the last observation (6 studies), imputing a median value (3 studies), combining the last observation with a median (5 studies) and imputing a normal value (4 studies). Some studies used more sophisticated imputation methods, such as random forest imputation [17] and multiple imputation [23].
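
The two simplest strategies described above, carrying the last observation forward within an episode and falling back to a “normal” value, can be sketched with pandas as follows; the vital-sign columns and the normal values shown are illustrative assumptions, not taken from any reviewed study.

```python
import pandas as pd

# Illustrative "normal" fallback values; an actual study would define its own.
NORMAL_VALUES = {"respiratory_rate": 16, "heart_rate": 75, "systolic_bp": 120}

def impute_vitals(obs: pd.DataFrame) -> pd.DataFrame:
    """Impute missing vital signs within each patient episode.

    Strategy (one of several reported in the reviewed studies): carry forward
    the last observation within the same episode, then fall back to a normal
    value when no earlier observation exists.
    """
    obs = obs.sort_values(["episode_id", "charttime"]).copy()
    vitals = list(NORMAL_VALUES)
    obs[vitals] = obs.groupby("episode_id")[vitals].ffill()
    return obs.fillna(NORMAL_VALUES)
```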

Performance metrics

We grouped the performance metrics into two types: discrimination and calibration metrics. Discrimination measures the ability of the EWS to differentiate between cases with and without the outcome. Calibration evaluates the extent to which the estimated probabilities from the EWS agree with observed outcome rates [60]. Where an EWS outputs only an integer score rather than an estimated probability, calibration cannot be assessed directly.

The most commonly reported discrimination metric was the area under the receiver operating characteristic curve (AUROC); only 6 studies did not use this metric. Twenty-two studies reported at least one of sensitivity, specificity, or positive and negative predictive values. A less commonly used alternative to the AUROC was the area under the precision-recall curve (AUPRC), used in two more recent studies by Kwon et al. [11] and Watkinson et al. [20]; the authors reasoned that it is a more suitable metric for verifying false-alarm rates at varying levels of sensitivity.
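
Given per-case scores and binary outcome labels (under either case definition), both headline metrics can be computed with scikit-learn, as in the generic sketch below; this is our own illustration, not code from any reviewed study.

```python
from sklearn.metrics import roc_auc_score, average_precision_score

def discrimination_metrics(labels, scores):
    """AUROC and AUPRC for EWS scores against binary outcome labels.

    average_precision_score summarises the precision-recall curve and is
    commonly reported as the AUPRC.
    """
    return {
        "AUROC": roc_auc_score(labels, scores),
        "AUPRC": average_precision_score(labels, scores),
    }

# Toy example: four cases, with the outcome occurring in the two highest-scoring cases.
print(discrimination_metrics([0, 0, 1, 1], [1, 3, 6, 8]))  # both metrics equal 1.0
```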

The EWS efficiency curve, used in 8 studies, was another measure for visualizing the discriminatory ability of an EWS. It was first introduced in the study by Smith et al. to provide a graphical depiction of the proportion of triggers that would be generated at varying EWS scores [13].
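
As we understand the concept, the curve can be tabulated by computing, for each candidate trigger threshold, the proportion of cases that would generate an alert; the short sketch below is our interpretation, not the original authors’ implementation.

```python
import numpy as np
import pandas as pd

def efficiency_curve(scores) -> pd.DataFrame:
    """Proportion of cases that would trigger an alert at each score threshold."""
    scores = np.asarray(scores)
    thresholds = np.sort(np.unique(scores))
    triggered = [(scores >= t).mean() for t in thresholds]
    return pd.DataFrame({"threshold": thresholds, "proportion_triggering": triggered})

# Example: with scores [0, 1, 1, 2, 3, 5], a trigger threshold of 2 would
# generate alerts for 50% of cases.
print(efficiency_curve([0, 1, 1, 2, 3, 5]))
```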

Six studies performed a statistical test of model calibration. Four used the Hosmer-Lemeshow goodness-of-fit test, one calculated the calibration slope and one used both metrics. Another six studies did not perform a statistical test of calibration, but provided a visualization of the model calibration.
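
Where an EWS (or a conversion of its scores) yields predicted probabilities, the calibration slope mentioned above can be estimated by regressing the observed outcomes on the logit of the predictions, with a slope near 1 indicating good calibration. The sketch below is a generic illustration using scikit-learn (penalty=None requires scikit-learn 1.2 or later); it assumes predicted probabilities are already available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibration_slope(y_true, p_pred) -> float:
    """Estimate the calibration slope of predicted probabilities.

    Fits an unpenalised logistic regression of the outcome on the logit of the
    predictions; a slope close to 1 suggests good calibration, while a slope
    below 1 suggests predictions that are too extreme.
    """
    p = np.clip(np.asarray(p_pred, dtype=float), 1e-6, 1 - 1e-6)
    logit = np.log(p / (1 - p)).reshape(-1, 1)
    model = LogisticRegression(penalty=None).fit(logit, y_true)  # needs scikit-learn >= 1.2
    return float(model.coef_[0][0])
```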

Discussion

Studies validating EWS used a wide variety of validation methods and performance metrics [4, 9, 11, 14, 16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45]. Given that these variations have a bearing on measured EWS performance, one should be mindful of them when interpreting and comparing bottom-line metrics such as AUROC values.

While the TRIPOD checklist for prediction model validation provides a standardized framework for reporting the validation of multivariable prediction models [15], it lacks the finer details needed for EWS, which are more multi-faceted than typical prediction models. Unlike many clinical prediction tools, EWS may be intended for use at multiple time points over a patient episode. The key differences in validation methodology that we found in our review, and that we propose EWS evaluators take note of, are the validation dataset, outcomes of interest, case definition, time of EWS use, aggregation method, and handling of missing values. These differences could explain the conflicting opinions on whether EWS perform well [6,7,8].

In terms of EWS performance reporting, our review echoed the findings of previous reviews that studies tend to give more prominence to discrimination and rarely assess model calibration [12, 61]. We concur with the TRIPOD recommendation that both discrimination and calibration should be considered when judging a model’s accuracy [15]. This review also identified some promising alternatives and complements to the AUROC for EWS performance reporting. One is the AUPRC mentioned earlier, which was noted to be suitable for verifying false-alarm rates at varying levels of sensitivity [21, 22]. Another is to exclude measurements within a time window just prior to the outcome, to better reflect the “predictive” ability of the EWS [21, 27, 44].

We acknowledge that a limitation of our review is the fairly narrow search strategy, which required the keywords “predict” and “discriminate” and may therefore have unwittingly excluded other studies that performed EWS validation. Future reviews may consider broadening the scope of the initial search.

Conclusions

Current EWS validation methods are heterogeneous, and this probably contributes to conflicting conclusions regarding their ability to discriminate or predict which patients are at risk of clinical deterioration. A standardized method of EWS validation and reporting could address this issue.