Introduction

Clinical predictive models (CPMs) are clinically usable mathematical equations that relate multiple predictors for a particular individual to the probability of the presence (diagnosis) or future occurrence (prognosis) of a particular outcome [1]. They are an increasingly common and important methodological tool for patient-centered outcomes research and for clinical care. By providing evidence-based estimates of a patient’s probability of health outcomes, CPMs enable clinicians and patients to make decisions that are more rational and more consistent with the patient’s own risks, values, and preferences.

It is no surprise that many researchers use multicenter datasets to develop clinical prediction models. To be useful, a clinical prediction model must be reliable in new patients, potentially including patients from different hospitals, countries, or care settings. Recruitment at multiple sites also makes it easier to collect sufficient data to estimate model parameters reliably, especially when the outcome of interest is rare.

Currently, the TRIPOD guidelines do not mention any requirements that are specific to multicenter studies, other than reporting the number and location of centers [1, 2]. Nonetheless, multicenter studies pose particular challenges to statistical data analysis and offer new opportunities. On the one hand, a key assumption underlying common regression techniques is violated. Observations are not independent as they are clustered within hospitals; patients within a hospital may be more alike than patients from different hospitals. On the other hand, the fact that hospital populations differ from one another may shed light on the generalizability of the model.

Studies have shown that mixed effects (i.e., random intercept) and fixed effects (i.e., center indicator variables) regression make it possible to study differences between centers in event rates and predictor effects and may even provide better predictions [3,4,5,6,7,8]. Leave-center-out cross-validation has been proposed to efficiently assess the generalizability of the model to centers not included in the development set [9, 10]. For example, small center effects and successful cross-validation of a model developed in multiple tertiary care centers in one country strongly indicate generalizability to other tertiary centers in that country. However, transportability of the model to clinical care settings or distinct populations not represented in the data can never be guaranteed [11,12,13]. In the example above, predictive performance in primary or secondary care or in foreign centers (“transportability”) remains to be assessed in external validation studies.

Hence, multicenter studies are particularly valuable if they are representative of the settings in which the model is intended to be used. This is uncertain if centers that participate in studies differ from those that do not, for example, because they have an academic interest or specialization in the disease under study (selection bias) [14]. Cases identified from specialist centers may not be representative of all cases in the general population, and patients without the condition may have been referred there because they presented with many risk factors, distorting regression estimates (referral bias). Moreover, the predictive performance of a model may differ between subgroups in a population (spectrum bias), which has led to the recommendation to use the prevalence in the studied setting as a guide when evaluating whether the reported predictive performance is applicable to a particular clinical setting [15,16,17,18].

The purpose of this research is to investigate the properties, analysis, and reporting of multicenter studies in the Tufts PACE Clinical Prediction Model Registry, a comprehensive database of clinical prediction models for cardiovascular disease [19, 20]. In addition, we provide simulated illustrations of consequences of common design and analysis choices.

Methods

We searched the Tufts PACE Clinical Prediction Model Registry for multicenter studies. This registry contains published clinical prediction models for patients at risk for and with known cardiovascular disease. The inclusion and exclusion criteria and electronic search strategy are published in detail elsewhere [19, 20]. Briefly, the registry was constructed from a PubMed search for English-language articles containing newly developed prediction models published from January 1990 to March 2015. It includes prognostic and diagnostic models to predict binary outcomes (e.g., myocardial infarction or death). Only articles presenting the model in a format that allows readers to make individual predictions (e.g., an equation, a point score, an online calculator) were included.

We considered studies labeled in the registry as multicenter and published between January 2000 and March 2015. From these, a randomly selected subset of 50 papers underwent full-text screening. When a paper presented multiple prediction models, we selected the primary model as identified by the authors. Where no primary model was identified, we selected the one with the smallest number of events per variable (EPV). We extracted sample size and other dataset characteristics for the development dataset. In two studies, the development dataset was single center (due to a geographical train-test split of multicenter data). In these cases, we described the complete multicenter dataset.

One researcher (LW) extracted the following items and entered them into an Excel spreadsheet:

  • The total sample size of the development dataset;

  • The number of events of interest in the development dataset;

  • The ratio of the number of patients with events to the total sample size (ignoring censoring in time-to-event data);

  • The number of centers;

  • The ratio of the sample sizes of the largest and smallest centers;

  • Center sizes, or any descriptive information related to center size;

  • A description of the setting in which the model was developed (e.g., all tertiary/academic centers, a community cohort, mixed settings);

  • The type of study data used for model development (registry/cohort, trial, individual patient data meta-analysis (IPD-MA));

  • The regression technique used for model development;

  • Whether an external validation was included in the study (a random train-test split is not considered external validation);

  • How many external validation datasets were used;

  • Whether external validation happened in the same center (or community) as model development;

  • Whether external validation happened in the same country as model development;

  • Whether external validation happened in the same care setting as model development;

  • Whether external validation used data from the same time period as model development;

  • Statements regarding the generalizability of the prediction model (including selection, referral, and spectrum bias).

When a publication did not report on these items, but the authors referred to earlier publications or a study website when describing the methodology, we screened these sources. The retrieved information was cross-checked with recorded information in the Tufts PACE database, and conflicts were resolved [19, 20]. We summarized all extracted data as frequencies, medians, ranges, and interquartile ranges.

We did not register this review, nor did we publish a review protocol. We used the PRISMA checklist to prepare this manuscript [21].

To highlight potential issues with common design and analysis choices for multicenter prediction models, we supplemented the review with illustrations, the detailed methods of which are provided in the Appendix. Briefly, in the first illustration, we compare marginal predicted probabilities (obtained by using standard logistic regression ignoring clustering) to conditional predicted probabilities from a mixed effects model. We simulated a dataset with a binary outcome with event rate 0.33 and a single standard normal predictor (with the same distribution across centers) with an effect of 0.8 on the logit scale. We generated data for 500 centers, with 1000 observations each, and let the intercepts vary per center with a standard deviation of 1 or 0.5. We fitted a standard logistic regression model and a mixed effects logistic regression model and evaluated predictions in terms of discrimination and calibration.
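A minimal sketch of this first illustration is given below, on a smaller scale than in the Appendix (50 centers of 200 patients instead of 500 centers of 1000) and with the conditional model approximated by fixed center effects (one dummy per center) rather than the mixed effects model used in the paper; all parameter values and variable names are illustrative.

```python
# Illustrative sketch: marginal (clustering ignored) vs. conditional predictions
# in simulated multicenter data with a binary outcome. Not the authors' code.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_centers, n_per_center = 50, 200        # scaled down from the paper's 500 x 1000
sd_u, beta_x = 1.0, 0.8                  # between-center intercept SD, predictor effect

center = np.repeat(np.arange(n_centers), n_per_center)
u = rng.normal(0.0, sd_u, n_centers)                 # center-specific intercept deviations
x = rng.normal(size=n_centers * n_per_center)        # standard normal predictor
lin_pred = -0.9 + u[center] + beta_x * x             # -0.9 gives an event rate near 0.33
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin_pred)))

# Marginal model: standard logistic regression ignoring the clustering
X_marg = sm.add_constant(x)
marg = sm.GLM(y, X_marg, family=sm.families.Binomial()).fit()

# Conditional model, approximated here by fixed center effects (one dummy per center)
X_cond = np.column_stack([x, np.eye(n_centers)[center]])
cond = sm.GLM(y, X_cond, family=sm.families.Binomial()).fit()

print("slope ignoring centers:   ", round(marg.params[1], 2))  # attenuated below 0.8
print("slope with center effects:", round(cond.params[0], 2))  # close to 0.8
print("C-statistic, marginal:    ", round(roc_auc_score(y, marg.predict(X_marg)), 3))
print("C-statistic, conditional: ", round(roc_auc_score(y, cond.predict(X_cond)), 3))
```

In such data, the slope of the marginal model is attenuated relative to the data-generating value, and the conditional model discriminates better because its predictions also exploit between-center differences in baseline risk.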

In the second illustration, we exemplify non-transportability with a published example of a risk score to predict recurrent cardiovascular events [22] and by using a real multicenter clinical dataset that was collected to develop preoperative prediction models for ovarian cancer diagnosis [23]. The dataset contains information on 3439 patients from 12 oncology referral centers and 1664 patients from 8 general hospitals. We consider a binary outcome Y (ovarian malignancy) and two predictors: age and the log-transformed CA125 biomarker value. The observed relations (from logistic regression) between the outcome and predictors in tertiary care and secondary care are taken to be the true models in each setting. Using resampling techniques, we generated a typical multicenter model development dataset in tertiary care, a large external validation set in tertiary care, and a large external validation set in secondary care. We made calibration plots and computed C-statistics to assess the generalizability and transportability of the model.
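The second illustration rests on standard external validation metrics. The hedged sketch below (not the authors’ code; function and variable names are illustrative) computes the C-statistic together with calibration-in-the-large and the calibration slope, two summaries that complement the calibration plots described above, given a model’s linear predictor evaluated in an external validation set.

```python
# Illustrative sketch: external validation of a fitted risk model, given the
# observed outcomes and the model's linear predictor (log-odds) per patient.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def external_validation_metrics(y_val, lp_val):
    """y_val: observed binary outcomes; lp_val: linear predictor from the model."""
    y_val = np.asarray(y_val)
    lp_val = np.asarray(lp_val, dtype=float)
    c_stat = roc_auc_score(y_val, lp_val)                       # discrimination
    ones = np.ones((len(y_val), 1))
    citl = sm.GLM(y_val, ones, family=sm.families.Binomial(),   # calibration-in-the-large
                  offset=lp_val).fit().params[0]
    slope = sm.GLM(y_val, sm.add_constant(lp_val),              # calibration slope
                   family=sm.families.Binomial()).fit().params[1]
    return {"c_statistic": c_stat,
            "calibration_in_the_large": citl,
            "calibration_slope": slope}
```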

Results

The number of clinical prediction models published per year increased strongly over time, and the proportion of models built using multicenter data increased slightly. Sixty-four percent of models published after 2000 (602/944) used multicenter data (see Fig. 1).

Fig. 1 Evolution of the yearly total number of prediction models (upper line) and the yearly number of prediction models built with multicenter data (shaded area) in the Tufts PACE Clinical Prediction Model Registry

Of all studies included in the Tufts PACE Clinical Prediction Model Registry, 52% (390/747 studies) were multicenter studies published after 2000 (see flowchart in Fig. 2). We discuss the details of a random subset of 50 studies (Table 1) [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73]. Forty of them analyzed observational multicenter data. Six created a prediction model making secondary use of multicenter trial data. For one study, the reporting on the data source was equivocal. Three studies were individual patient data meta-analyses (IPD-MA), of which two combined datasets from multiple community cohorts and one combined data from multiple trials (some of which were themselves multicenter trials). In IPD-MA of existing study data, the primary source of heterogeneity is at the study level rather than the center level. Hence, for these three studies, study-level information was extracted and analyzed instead of center-level information (e.g., the number of studies in the IPD-MA, the sample size of each original study, and the statistical handling of clustering within studies).

Fig. 2 The Tufts PACE Clinical Prediction Model Registry was searched for relevant multicenter articles published after 2000

Table 1 Key characteristics of 50 selected publications

The sample size for model development in the 50 selected studies varied from 155 to 834,696 with a median of 2263 (IQR 667 to 13,705). Sixteen studies had between 100 and 1000 observations, 18 studies had between 1000 and 10,000 observations, 13 studies had between 10,000 and 100,000 observations, and 3 studies had more than 100,000 observations.

Multicenter: a catch-all term

The number of centers contributing data to a multicenter study varied widely, from just two centers to 5473, with a median of 10 (IQR 3 to 76) (Fig. 3). As such, “multicenter” is a term covering anything from two researchers joining forces to large international registries. There was a moderate positive correlation between the total sample size and the number of contributing centers (Fig. 3; Spearman ρ = 0.4). Four studies did not report the number of participating centers. Two of them were large studies (N = 562,791 and 9556) using data from a quality of care improvement program that mentioned that more than a hundred centers participated, one was a secondary analysis of a multicenter trial (N = 1692), and one was a cohort study (N = 747).

Fig. 3 Total sample size and number of centers (logarithmic axis scales)

The median number of patients per center was 133 (range 1 to 16,295; IQR 62 to 530), but imbalance in center contributions was very common. Sample sizes for each included center were reported in only 9/50 studies. The ratio of the largest to the smallest center size could be computed for 10 studies; among these, the median ratio was 4 (range 1 to 34, IQR 3 to 7). Twelve additional studies contained summary statements on sample sizes (e.g., interquartile ranges) or hospital volume (e.g., number of beds, surgical volume, catchment population sizes). These indicated imbalance in center sizes in nine studies, were equivocal in one study, and, interestingly, described active control to limit imbalance in two studies. These controls consisted of including only the first 10 to 20 consecutive patients from each center and of setting an upper inclusion limit per center. The majority (28/50) did not report on center size imbalance; these tended to be smaller studies (median N 1654 versus 4294).

Multicenter collaborations were not exclusively used to study rare outcomes. The median ratio of the number of events to the total sample size was 0.07 (range 0.001 to 0.60; IQR 0.03 to 0.16; not reported in one study), and the median number of events was 177 (range 6 to 48,412; IQR 98 to 497; not reported in one study).

Clustered data is commonly ignored during analysis

In the vast majority of studies (39/50), the analysis completely ignored the clustered data structure that is typical of multicenter studies (Fig. 4; 24 used logistic regression models, 13 Cox models, and 2 Fine and Gray models). The other studies took clustering into account when building the prediction model in a variety of ways: standard Cox or logistic regression with a center-level covariate (e.g., a fixed center effect, a region effect, a physician experience effect), mixed effects logistic regression, or stratified regression. Two studies split their dataset into a single-center model development set and used the other centers for geographical validation.

Fig. 4 Analysis techniques used for model building in 50 randomly selected multicenter studies

Fig. 5 Didactic example of predictions made ignoring center effects, when differences between centers are very large

Validation of the model

In leave-center-out cross-validation, each cluster is left out of the dataset once as a validation set, while the model is fitted on all other clusters [9]. Only one study used this technique, an IPD-MA that used leave-one-study-out cross-validation.

Ten studies included an external validation in the model development study, two of which used two separate external validation datasets. In 3/10 studies, validation was merely temporal. In 6/10 studies, validation was a mixture of temporal and geographical validation (in a different center or community); four of these used validation data from a different country. Validation in data collected at the same time as model development, but in another center, occurred in 1/10. All validations took place in hospitals or care settings similar to the ones in which the model was developed.

Multicenter data does not guarantee generalizability

To evaluate the types of settings in which a prediction model could be applied, it is crucial that researchers describe the settings from which the study data was obtained. In our sample, 23/50 studies failed to do so.

Among the studies that did report on the settings from which data was obtained, 16/27 were single-setting studies: 13 collected all data in tertiary/academic centers, 2 in community hospitals, and 1 in Veterans Affairs hospitals. Data from multiple settings (8/27; e.g., a mix of academic and community hospitals, or a mix of teaching and non-teaching hospitals) and use of population or community cohort data (3/27) were rare.

Most studies used data from only one country; 11/50 were international studies, and 3/50 did not report where data was collected.

Only 17/50 studies included a direct or indirect statement on the generalizability of their findings to the target population or the potential applicability in other centers. Eleven studies addressed potential selection bias (including referral bias and spectrum bias). Five studies critically reflected on generalizability in light of a high disease prevalence in their study population. One study questioned generalizability due to the use of specific devices in the participating centers. More than half of the studies (27/50) did not discuss the representativeness of the study sample or selection bias, but simply stated that external validation is needed. A few studies (4/50) discussed neither generalizability nor external validity. Only two studies claimed generalizability of their model to the target population. The first did so on the basis of using data from a large, multinational registry (N = 63,118; 126 centers) covering a heterogeneous population, accrued through consecutive enrolment with a limit on the number of inclusions per month per site to ensure representativeness, and supported by standardized definitions, quality control efforts, and audits. The second (N = 1048, 288 centers, a mix of teaching and non-teaching hospitals) did so based on its use of registry data rather than clinical trial data, a claimed absence of selection, recall, and respondent bias, and successful external validation in an independent registry with a lower proportion of severely ill patients than the development dataset.

Fig. 6 Example of a prediction model built in tertiary care that is generalizable to a new tertiary care center but not transportable to a new secondary care center

Discussion

This review of the Tufts PACE Clinical Prediction Model Registry indicated that 64% of clinical prediction models published after 2000 were derived from multicenter data. Our survey of published studies resulted in a number of important observations.

Firstly, “multicenter” is a broad term, covering small-scale studies conducted at only two collaborating centers as well as very large international registries. Secondly, all studies had in common that the collected patient data cluster within centers, yet this data structure was most often ignored during analysis. In the minority of studies that did acknowledge the data structure, a broad array of analysis techniques was used, indicating that there is no commonly accepted way to build prediction models in clustered data. Thirdly, even though multicenter studies hold the promise of better generalizability to the clinical target population than single-center studies, reporting on potential generalizability and external validity was not transparent. Nearly one in two studies failed to describe the care settings from which data was obtained, and nearly two thirds of studies either failed to critically reflect on the potential generalizability of the developed model to new centers or resorted to vague statements that further validation was needed.

Our findings are in line with those of Kahan, who reviewed multicenter randomized trials and found that the reporting of key aspects was poor and that only 29% of studies adjusted for center in the analysis [77]. They are also in line with numerous reviews that signaled incomplete reporting and poor design and analysis of prediction models for diabetes [78], kidney disease [79, 80], cancer [81], neurological disorders [82, 83], and fetal and maternal outcomes in obstetrics [84]. Some have criticized the vagueness of reported selection criteria, such that it remained unclear whether participants were selected in an unbiased way [85], or the fact that specialist fields contribute the majority of models, which are unlikely to be generalizable to general hospital populations [80]. A common grievance is the paucity of external validations [79,80,81,82,83,84,85]. The proportion of multicenter studies was rarely reported in published reviews but seems to vary by specialty [80, 82, 83]. Researchers in neurology have identified the common use of single-center data as one of the main causes of the lack of generalizability of models in their field [82, 83].

An obvious limitation of our review is that it covers only 50 risk prediction studies in cardiovascular disease. We expect that in other fields, multicenter studies also differ widely in terms of the number of included centers and the analysis techniques used. A second limitation is that publication bias may have influenced the results. Large-scale studies with many participating centers may have had a higher likelihood of being published, potentially leading to overestimated study sizes and numbers of participating centers in this review.

Recommendations for research practice

The strengths of a multicenter design are the ability to speed up data collection and the coverage of a broader population. However, the successful conduct of prospective multicenter studies requires careful study organization and coordination, motivated study staff at the participating centers, and a dedicated and experienced method center [86]. Based on our findings, we can make some suggestions to improve the analysis and reporting of multicenter studies.

Model development

We recommend using appropriate statistical techniques to analyze clustered data. Studies have shown that standard logistic regression yields suboptimal results when data is clustered [3,4,5]. First, mixed and fixed effects regression provide valuable insights into the differences in event rate between centers, after adjustment for the patient-level predictors [6]. Second, it is well known that ignoring clustering yields incorrect (often underestimated) standard errors. This also affects stepwise variable selection, which remains common despite being known to induce “testimation bias” even in unclustered data [87]. Third, standard regression methods yield regression coefficients and predicted probabilities with a marginal interpretation, averaged over the population and ignoring centers [74, 75]. As illustrated here and elsewhere [4, 76], this has a negative impact on model calibration, and the miscalibration worsens as the differences between centers increase. In contrast, fixed (with center dummies) and mixed effects regression methods yield correct standard errors for patient-level predictors and better-calibrated predictions to support decision-making at the center level, even if average center effects are used in new centers [3, 4]. Fourth, modeling center effects also improves discrimination between two patients from different centers [4, 88].
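As a concrete, hedged sketch, the snippet below fits both a fixed effects and a random intercept logistic regression to a small simulated multicenter dataset using statsmodels; the data-generating values, sample sizes, and variable names are illustrative and not taken from any of the reviewed studies.

```python
# Illustrative sketch: fixed vs. random intercept logistic regression on
# simulated clustered data (all parameter values are arbitrary choices).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n_centers, n_per = 20, 150
df = pd.DataFrame({
    "center": np.repeat(np.arange(n_centers), n_per),
    "x": rng.normal(size=n_centers * n_per),
})
u = rng.normal(0.0, 0.7, n_centers)                     # center-specific intercepts
p = 1.0 / (1.0 + np.exp(-(-1.0 + u[df["center"]] + 0.8 * df["x"])))
df["y"] = rng.binomial(1, p)

# Fixed effects: one dummy per center absorbs between-center baseline risk.
fixed = sm.Logit.from_formula("y ~ x + C(center)", data=df).fit(disp=False)
print(fixed.params["x"])                                # patient-level predictor effect

# Random intercept (mixed effects) model, fitted here by variational Bayes:
# the estimated intercept SD quantifies between-center heterogeneity in event rate.
mixed = BinomialBayesMixedGLM.from_formula(
    "y ~ x", vc_formulas={"center": "0 + C(center)"}, data=df
).fit_vb()
print(mixed.summary())
```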

During model building, center-level covariates can be used to tailor predictions to centers with specific characteristics. Such covariates account for the combined effects of omitted predictors that vary by center and are not fully captured by the patient-level covariates (e.g., regional differences in population health, or different patient spectra due to referral mechanisms), effects that may be extremely difficult to capture otherwise. For example, one may incorporate a predictor in the model that specifies the center type or specialty (e.g., tertiary centers versus others) [8, 89, 90]. Note, however, that standard regression techniques that ignore clustering yield standard error estimates for the effects of center characteristics that are typically too small.

When using fixed or mixed effects regression, one may check for center-predictor interaction (or random slopes). This type of interaction may be rare and have little influence on predictions [91].

Fixed and mixed effects regression models can easily be fitted with commonly used statistical software. To apply such models in a new center, an average intercept can be used (which is a random intercept of zero in a mixed effects model; in a fixed effects model, this is not straightforward) and simple updating techniques can yield predictions tailored to new centers [13, 92].
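A hedged sketch of one such simple updating step, intercept recalibration, is given below: the published coefficients are kept fixed and only the intercept is re-estimated from a sample of the new center’s own data. Function and variable names are illustrative.

```python
# Illustrative sketch of intercept recalibration for a new center.
import numpy as np
import statsmodels.api as sm

def recalibrate_intercept(y_new, X_new, coef, intercept):
    """Re-estimate only the intercept from new-center data (y_new, X_new),
    keeping the original coefficients (coef) fixed via an offset."""
    lp = intercept + np.asarray(X_new) @ np.asarray(coef)   # original linear predictor
    ones = np.ones((len(y_new), 1))
    fit = sm.GLM(y_new, ones, family=sm.families.Binomial(), offset=lp).fit()
    return intercept + fit.params[0]                        # updated, center-specific intercept
```

More extensive updating strategies (e.g., also re-estimating a calibration slope) follow the same pattern of keeping the published model fixed and estimating only a few correction terms in the new center.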

Model validation

A geographical split into a single-center development dataset and a validation set consisting of data from the other centers may alleviate the need to address clustering during model building, but it is very inefficient because only part of the available data is used to develop the model.

In contrast, leave-center-out cross-validation (also known as internal-external cross-validation) makes efficient use of all available data and offers an excellent opportunity to test the generalizability of the model to new centers [9]. To summarize the predictive performance in each center, a simple average performance [9] or an optimism-corrected performance measure can be computed [93]. At the very least, however, the difference in performance between centers should be inspected [94]. Meta-analytic techniques summarize performance and simultaneously quantify between-center heterogeneity, by distinguishing between sampling variability within centers and variance in predictive performance between centers [18, 95]. Note that leave-center-out cross-validation does not resolve data clustering in the development sets and does not guarantee transportability to distinct care settings or populations not represented in the multicenter dataset.
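A minimal sketch of leave-center-out cross-validation is shown below, assuming a pandas DataFrame with one row per patient, a binary outcome, patient-level predictors, and a center identifier (all names illustrative). Here only the C-statistic is collected per held-out center, but any performance measure can be substituted and the per-center results can then be pooled meta-analytically.

```python
# Illustrative leave-center-out (internal-external) cross-validation.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def leave_center_out_cv(df, formula="y ~ x1 + x2", center_col="center"):
    """Fit on all-but-one center; evaluate discrimination in the held-out center."""
    outcome = formula.split("~")[0].strip()
    performance = {}
    for c in df[center_col].unique():
        train = df[df[center_col] != c]
        test = df[df[center_col] == c]
        if test[outcome].nunique() < 2:      # C-statistic undefined without both classes
            continue
        model = sm.Logit.from_formula(formula, data=train).fit(disp=False)
        performance[c] = roc_auc_score(test[outcome], model.predict(test))
    return pd.Series(performance)            # inspect the per-center spread, not only the mean
```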

Reporting

We recommend identifying the patients and care settings for which the model is intended and transparently reporting the threats to generalizability to this target population. This includes careful reporting of center-level characteristics such as the types of centers (e.g., secondary versus tertiary care) and the sample sizes per center. Describing the limitations of the study, including non-representativeness of study samples, is already a key element of the TRIPOD checklist [1]. The STROBE checklist includes the requirement to address potential sources of bias and states that the direction and magnitude of any potential biases should be discussed [96]. We fully endorse these recommendations. Selection bias is of special interest in prediction model studies. The study sample may not be representative of the clinical population in which the developed model is intended to be applied, for example, because data was collected in teaching hospitals by physicians with a research interest in the disease under study (referral bias). The performance of a prediction model is influenced by the case-mix and the disease prevalence [15, 97, 98]. Hence, if specialized centers collected the study data, selection bias may lead to overestimated disease probabilities (miscalibration-in-the-large) or lower specificity. Note that in half of the studies reporting information on study settings, all participating centers were tertiary or academic centers. Arguably, prediction models and decision-support systems are most useful at lower levels of care (secondary and primary), where the patient case-mix is broad and practitioners need to triage and refer patients efficiently.

Design for generalizability

Lastly, multicenter studies have the most potential if generalizability is not an afterthought but is considered at the study design stage. Frequently, researchers develop prediction models from databases collected for other purposes. However, if prediction is considered in the design phase, centers from diverse settings can be invited to participate to ensure the dataset is representative of the intended target population. Moreover, researchers can aim to collect consecutive patients from each participating center, whilst maintaining a good balance in center sample sizes.

Conclusion

Multicenter designs are very common in prediction model development. Although multicenter studies may provide better insight into the generalizability of developed prediction models, this potential often remains untapped due to the use of unsuitable analysis methods and a lack of transparent reporting.