Background

Systematic reviews of clinical prediction model studies are becoming increasingly popular. Prediction models fall under the type III prognostic research studies proposed by the PROGRESS (PROGnosis RESearch Strategy) partnership [1, 2]. The most common aims of these systematic reviews are to identify and summarize all available models for a particular target population, condition or outcome, and to summarize the predictive performance of a specific prognostic model while identifying potential sources of heterogeneity [3]. During the systematic review process, it is crucial for reviewers to extract key data from the relevant studies. Data extraction provides the reviewer with the information needed to describe and summarize the findings, and to examine the risk of bias and any applicability concerns of the models. Risk of bias refers to the likelihood that a primary prediction model study leads to a distorted, usually overly optimistic, estimate of predictive performance. Applicability concerns arise when a primary study question differs from the specific review question in terms of population, predictors or outcomes. Several checklists and toolkits have been developed to guide the process of data extraction and risk of bias assessment for different types of review questions [4].

The CHARMS checklist (CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies) provides guidance both for formulating the review question and for extracting data from the primary studies reporting prediction models [5].

The PROBAST tool (Prediction model Risk Of Bias ASsessment Tool) is a checklist for assessing the risk of bias and the applicability of prognostic model studies [6, 7]. PROBAST includes four domains: participants, predictors, outcome, and analysis. For each domain, the tool provides signalling questions to help determine whether the risk of bias and the applicability concerns should be graded as low, high or unclear.

With the aim of facilitating the use of these two tools (i.e. CHARMS and PROBAST) for reviewers performing a systematic review of clinical prediction model studies, we have created an Excel template for extracting data and assessing the risk of bias and the applicability of predictive models.

Implementation

The Excel file (named CHARMS and PROBAST template.xls) consists of eight sheets. The first sheet, “Home”, provides a description of the Excel file, instructions for its use and links to relevant papers and forms. The next three sheets (“Summary”, “CHARMS” and “PROBAST”) are used to collect data from the studies included in the systematic review, and the following three sheets (“Study characteristics”, “Model characteristics” and “PROBAST summary”) contain the tables and figures generated from the collected data. The final sheet (“CHARMS. Drop-down response lists”) allows tailoring of the template to the systematic review. A more detailed description of each sheet is presented next.

To start the data extraction process, for each predictive model presented in each study included in the systematic review, the user should tick the “new model” box on the “Summary” sheet. This operation enables the CHARMS and PROBAST forms for the new model in the corresponding sheets. The Excel template assumes that each study in the review reports a single prognostic model, but it easily generalizes to studies reporting two or more models: in that case, the reviewer should enable as many rows in the template as there are models reported in that study. In the “Summary” sheet, the following basic information about the new study should be filled in: author, year, title or an identifier (e.g. PMID or DOI), journal of publication and, if applicable, name of the model. An identifier for each model is automatically created from the author name and year. In the last two columns of this summary sheet, the reviewer finds information on the status (i.e. complete or incomplete) of the CHARMS and PROBAST sheets.
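Purely as an illustration of the kind of logic behind these columns (the cell references and ranges below are hypothetical, and the exact formulae used in the template may differ), the automatically created identifier and the status columns can be built with standard Excel formulae.

Model identifier combining the author (column B) and year (column C) entered in the “Summary” sheet:

=IF(OR(B2="",C2=""),"",B2&" "&C2)

Status of the CHARMS extraction for the model in row 2, assuming its yellow cells occupy the range C4:C60 of the “CHARMS” sheet:

=IF(COUNTBLANK(CHARMS!C4:C60)=0,"Complete","Incomplete")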

The “CHARMS” sheet contains the template from Moons et al. [5]. The data extraction sheet is structured according to the eleven CHARMS domains: source of data, participants, outcome to be predicted, candidate predictors, sample size, missing data, model development, model performance, model evaluation, results and interpretation. To complete the data extraction process, reviewers should fill in all the cells shaded in yellow. Depending on the item, reviewers can either choose from a drop-down list of options or enter a free-text response. The items with available drop-down lists are shown in the last sheet of the Excel file (named “CHARMS. Drop-down response lists”), and the categories of these default lists can be tailored by the reviewer. When the information is not available in the study report, the reviewer should fill in the cell with “No information”. In the participant description section, reviewers can specify the relevant characteristics that they plan to extract from the primary studies, tailored to the target population in the review; these characteristics will be the same for all models included in the review. The status of each CHARMS domain is incomplete whenever any cell within that domain (marked in yellow) remains empty. In the observations section of the CHARMS checklist table (bottom part of the “CHARMS” sheet), the reviewer will find a status line that flags each model as “All information has been successfully registered” when all domains are complete, or “Incomplete data extraction” otherwise. Additional information about the model can be entered as free text in the additional information field at the bottom. When all relevant information for a model has been extracted for all domains in the form, the CHARMS checklist for that model is flagged as complete in the “Summary” sheet.
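The domain status flags and the overall status line rely only on built-in Excel functions. A minimal sketch, assuming the yellow cells of one domain occupy the hypothetical range C10:C18 and all yellow cells of the form occupy C4:C60 (the actual ranges and formulae in the template may differ):

Status of a single CHARMS domain:

=IF(COUNTBLANK(C10:C18)=0,"Complete","Incomplete")

Status line shown in the observations section:

=IF(COUNTBLANK(C4:C60)=0,"All information has been successfully registered","Incomplete data extraction")

Drop-down responses of this kind are typically implemented with Excel’s data validation feature, using a list whose source range points at the corresponding column of the “CHARMS. Drop-down response lists” sheet; this is why editing that sheet tailors the categories offered to the reviewer.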

The “PROBAST” sheet contains the template from Wolff et al. [6]. To make information about the model accessible to the reviewers, relevant information from the CHARMS domains (such as source of data, inclusion and exclusion criteria, validation methods and performance measures) is automatically transferred into the “PROBAST” sheet. Reviewers should answer the signalling questions for all PROBAST domains: participants, predictors, outcome and analysis. These questions are shaded in yellow and responses should be selected from a drop-down list with the following categories: “Yes”, “Probably yes”, “Probably no”, “No” or “No information”. Once all signalling questions for a domain have been answered, the risk of bias and applicability assessment cells become editable, and reviewers should rate both the risk of bias and the concerns about applicability of the model as “Low”, “High” or “Unclear”. When the risk of bias and the applicability of a model have been rated for all domains in the form, the PROBAST assessment for that model is flagged as complete in the “Summary” sheet.
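The automatic transfer of CHARMS information and the gating of the assessment cells can likewise be expressed with plain cell formulae. The sketch below uses hypothetical cell references and only illustrates the mechanism; the template may implement it differently (for example, with conditional formatting or sheet protection).

Transfer of the source of data extracted in the “CHARMS” sheet (hypothetical cell C6) into the “PROBAST” sheet:

=IF(CHARMS!C6="","No information",CHARMS!C6)

Helper cell that reminds the reviewer to answer all signalling questions of a domain (hypothetical range D5:D9) before rating its risk of bias and applicability:

=IF(COUNTBLANK(D5:D9)>0,"Answer all signalling questions first","")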

Results

In this section we present a worked example of the use of the template. The example is based on data from a systematic review of prognostic models for mortality after cardiac surgery in patients with infective endocarditis [8].

Once the data of the models included in the review have been extracted using the “CHARMS” sheet (see Table 1 for the data extracted from one of the models as an example) and the risk of bias assessment has been completed using the “PROBAST” sheet (see Table 2 for the risk of bias assessment of the same model), reviewers can obtain a number of tables and figures intended to assist in adequately reporting the review findings. All tables and figures can be copied and pasted for further editing.

Table 1 Example of CHARMS sheet using data from a primary study included in the systematic review of prognostic models for mortality after cardiac surgery in patients with infective endocarditis [8]
Table 2 Example of PROBAST sheet using data from a primary study included in the systematic review of prognostic models for mortality after cardiac surgery in patients with infective endocarditis [8]

The first results table, automatically created in the sheet named “Study characteristics”, shows a summary of the characteristics of the included studies listed in the “Summary” sheet. It presents the information covered by the methods section (items 4 and 5) and the results section (item 13) of the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) statement [9]. The headers of the table include the source of data, the enrolment period, the study setting and regions, and the participant characteristics previously defined in the CHARMS sheet; in our example, these characteristics include age, specification of native valve endocarditis and the valves affected (see Table 3 for the characteristics of the studies included in the review).

Table 3 Example of the table with study characteristics automatically produced by the Excel file using data from the systematic review of prognostic models for mortality after cardiac surgery in patients with infective endocarditis [8]

The second results table, in the sheet named “Model characteristics”, shows the relevant information of the predictive models included in the review. It presents the information covered by the methods section (items 7, 8 and 10) and the results section (item 14) of the TRIPOD statement. In addition, for each included model, a summary of the risk of bias and applicability assessments is shown (see Table 4 for the characteristics of the models reviewed).

Table 4 Example of the table with model characteristics automatically produced by the Excel file, using data from the systematic review of prognostic models for mortality after cardiac surgery in patients with infective endocarditis [8]

The sheet named “PROBAST summary” presents a table and a graph with the results of the risk of bias and applicability assessments (Table 5 and Fig. 1).

Table 5 Example of the table with the summary of the PROBAST tool automatically produced by the Excel file using data from the systematic review of prognostic models for mortality after cardiac surgery in patients with infective endocarditis [8]
Fig. 1 Example of the graph with the summary of the PROBAST tool automatically produced by the Excel file using data from the systematic review of prognostic models for mortality after cardiac surgery in patients with infective endocarditis [8]
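The counts underlying such a summary table and graph can be obtained with simple COUNTIF formulae. Purely as an illustration, assuming the overall risk of bias ratings of one domain for the (at most 30) models are gathered in the hypothetical range B2:B31 of the “PROBAST summary” sheet, the number of models rated “Low”, “High” or “Unclear” for that domain would be:

=COUNTIF(B2:B31,"Low")
=COUNTIF(B2:B31,"High")
=COUNTIF(B2:B31,"Unclear")

Analogous counts for the remaining domains and for the applicability concerns provide the data plotted in the graph.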

The template, as well as a filled-in example file, is provided as supplementary material; this version and further updates can be downloaded from https://github.com/Fernandez-Felix/CHARMS-and-PROBAST-template.

Discussion

In this manuscript we present an Excel template for extracting data and assessing the risk of bias and applicability of prediction modelling studies.

This template is the first to combine the CHARMS and PROBAST tools into one file. The template simplifies and standardizes the tasks of data extraction and risk of bias assessment, reducing the risk of errors and increasing reliability between data extractors. Having the relevant information at hand while assessing the risk of bias makes the review process more efficient. The template is easy to use and allows reviewers to fill in the forms using drop-down lists that are easily customisable. Such customisation makes the template versatile and adaptable to users' needs. The template also generates several summary tables that can be used directly for publication with minor edits. All these features should speed up several steps of conducting a systematic review and reporting its findings, and we expect systematic reviewers will find the template useful.

There are some limitations to our template. First, it has been designed to include at most 30 models (or 30 validation studies of a model). Second, the summary tables produced are generic and might not fit every purpose; however, they can be edited outside the template to incorporate other aspects of interest for a specific review.

Conclusion

We have designed a useful template for extracting data and assessing the risk of bias and the applicability of clinical prediction models using the CHARMS and PROBAST checklists. The template makes it easier for reviewers to manage these tools and to produce results tables ready for publication with minor edits. We hope this template will promote better and more comprehensive reporting of systematic reviews of prediction models. We encourage users to pilot the template and provide feedback to improve it in future versions.

Availability and requirements

Project name: CHARMS and PROBAST template.

Project home page: https://github.com/Fernandez-Felix/CHARMS-and-PROBAST-template

Operating system(s): Any operating system that supports Microsoft Office (Excel).

Programming language: Only formulae available in Excel are employed.

Other requirements: None.

License: None required.

Any restrictions to use by non-academics: None.