Background

Patient-reported outcome measures (PROMs) are now frequently used in randomised controlled trials (RCTs) as primary endpoints. All RCTs are longitudinal, and many have a baseline, or pre-randomisation (PRE), assessment of the outcome and one or more post-randomisation assessments of outcome (POST).

For such pre-test post-test RCT designs, using a continuous primary outcome, the sample size estimation and the analysis of the outcome can be done using one of the following methods:

  1. Analysis of post-randomisation treatment means (POST)

  2. Analysis of mean changes from pre- to post-randomisation (CHANGE)

  3. Analysis of covariance (ANCOVA).

For brevity (and following Frison and Pocock’s nomenclature [1]), these methods will be referred to as POST, CHANGE and ANCOVA respectively.

Sample size calculations are now mandatory for many research protocols and are required to justify the size of clinical trials in papers before they will be accepted for publication by journals [2]. Thus, when an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre-determined difference (effect size) in the outcome variable, when the intervention is actually effective, at a given level of significance. Sample size is critically dependent on the type of summary measure, the proposed effect size and the method of calculating the test statistic [3]. For example, for a given power and significance level, the sample size is inversely proportional to the square of the effect size, so halving the effect size will quadruple the sample size. For simplicity, this paper will assume that we are interested in comparing the effectiveness (or superiority) of a new treatment compared to a standard treatment, at a single point in time post-randomisation.

Sample size

In a two-group study with a Normally distributed outcome, comparing POST-randomisation mean outcomes between two groups, the number of subjects per group, \( n_{POST} \), assuming equal sample sizes and equal standard deviations (SDs) per group, for a two-sided significance level α and power 1 – β, is [4]:

$$ {n}_{POST}\ \mathrm{per}\ \mathrm{group}=\frac{2{\sigma}^2{\left[{Z}_{1-\alpha /2}+{Z}_{1-\beta}\right]}^2}{\delta^2}, $$

where:

δ is the target or anticipated difference in mean outcomes between the two groups

σ is the SD of the outcome post-randomisation (which is assumed to be the same in both groups)

\( Z_{1-\alpha /2} \) and \( Z_{1-\beta} \) are the appropriate values from the standard Normal distribution for the 100(1 – α/2) and 100(1 – β) percentiles respectively.
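
For readers who wish to evaluate this formula directly, the following is a minimal sketch in Python (standard library only); the function name, parameter names and defaults are illustrative and are not taken from any of the cited trials.

```python
from math import ceil
from statistics import NormalDist

def n_post_per_group(delta, sd, alpha=0.05, power=0.90):
    """Per-group sample size for a comparison of post-randomisation means (POST)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # Z_{1 - alpha/2}
    z_beta = NormalDist().inv_cdf(power)            # Z_{1 - beta}
    return ceil(2 * sd ** 2 * (z_alpha + z_beta) ** 2 / delta ** 2)

# Standardised difference of 0.5 (delta = 0.5, SD = 1.0), 90% power,
# 5% two-sided significance: roughly 85 participants per group.
print(n_post_per_group(delta=0.5, sd=1.0))
```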

Now consider a two-group study with a Normally distributed outcome, with a single baseline and a single post-randomisation assessment of the outcome. When mean outcomes are compared between the two groups, adjusted for the baseline or pre-randomisation value of the outcome using an ANCOVA model, the number of subjects per group, \( n_{ANCOVA} \) (assuming equal sample sizes and equal SDs, at baseline and post-randomisation, per group), for a two-sided significance level α and power 1 – β is:

$$ {n}_{ANCOVA}\ \mathrm{per}\ \mathrm{group}=\frac{2{\sigma}^2{\left[{Z}_{1-\alpha /2}+{Z}_{1-\beta}\right]}^2}{\delta^2}\left\{1-{\rho}^2\right\}. $$

Here, ρ denotes the correlation between the baseline and post-randomisation outcomes and σ is the post-randomisation SD, which is assumed to be the same as the baseline SD [1, 5]. Machin et al. [5] refer to the (1 – ρ²) term as the 'design effect' (DE).

In a two-group study with a Normally distributed outcome, comparing the mean change in outcomes (i.e. post-randomisation outcome – baseline) between two groups, the number of subjects per group, \( n_{CHANGE} \) (assuming equal sample sizes and equal SDs, at baseline and post-randomisation, per group), for a two-sided significance level α and power 1 – β is:

$$ {n}_{CHANGE}\ \mathrm{per}\ \mathrm{group}=\frac{2{\sigma}^2{\left[{Z}_{1-\alpha /2}+{Z}_{1-\beta}\right]}^2}{\delta_c^2}\left\{2-2\rho \right\}. $$

Here, \( \delta_c \) is the target or anticipated difference in mean change in outcomes between the two groups, and σ is the post-randomisation SD, which is assumed to be the same as the baseline SD. If the expected mean values of the baseline outcomes are the same in both groups, which is likely in an RCT, then \( \delta_c \) is the same as δ.
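
The ANCOVA and CHANGE formulae differ from the POST formula only by their design-effect multipliers, (1 – ρ²) and (2 – 2ρ) respectively. The sketch below makes this explicit; again the function and parameter names are illustrative assumptions rather than code from the cited trials.

```python
from math import ceil
from statistics import NormalDist

DESIGN_EFFECTS = {
    "POST":   lambda rho: 1.0,
    "ANCOVA": lambda rho: 1 - rho ** 2,
    "CHANGE": lambda rho: 2 - 2 * rho,
}

def n_per_group(delta, sd, rho, method="ANCOVA", alpha=0.05, power=0.90):
    """Per-group sample size: the POST formula multiplied by the method's design effect."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    n_post = 2 * sd ** 2 * z ** 2 / delta ** 2
    return ceil(n_post * DESIGN_EFFECTS[method](rho))

# With a baseline to follow-up correlation of 0.5, ANCOVA needs about 25% fewer
# participants than POST, while CHANGE needs the same number (2 - 2*0.5 = 1).
for method in ("POST", "ANCOVA", "CHANGE"):
    print(method, n_per_group(delta=0.5, sd=1.0, rho=0.5, method=method))
```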

Figure 1 shows the relationship between the total sample size and the correlation between the baseline and post-randomisation outcomes, for the three methods of sample size estimation (POST, CHANGE and ANCOVA), with a 5% two-sided significance level, 90% power, a target difference (a difference in post-treatment means or a difference in mean changes) of 0.50 and an SD of 1.0. Figure 1 shows that the total sample size is constant for POST, irrespective of the baseline to post-randomisation follow-up correlation; that the sample size declines as the correlation increases for ANCOVA and CHANGE; that the sample size for ANCOVA is never larger than that for either POST or CHANGE; and that CHANGE requires fewer participants than POST only when the correlation exceeds 0.5.

Fig. 1

Relationship between the total sample size and the correlation between the baseline and post-randomisation outcomes for the three methods of sample size estimation (POST, CHANGE and ANCOVA)

Example

The SELF study [6] was a multicentre, pragmatic, unblinded, parallel-group randomised controlled superiority trial designed to evaluate the clinical effectiveness of a self-managed single exercise programme versus usual physiotherapy treatment for rotator cuff tendinopathy (pain or weakness in the shoulder muscles). The intervention was a programme of self-managed exercise prescribed by a physiotherapist in relation to the most symptomatic shoulder movement. The control group received usual physiotherapy treatment. The primary outcome measure was the total score on the Shoulder Pain and Disability Index (SPADI) at 3 months post-randomisation. The SPADI score ranges from 0 (the best outcome, less disability) to 100 (the worst, greater disability).

The original sample size calculation for the SELF trial assumed that a 10-point difference in the mean 3-month post-randomisation SPADI scores between the intervention and control groups would be regarded as the minimum clinically important difference (MCID). It assumed an SD of 24 points, a power of 80% and a (two-sided) significance level of 5%, meaning that, using the POST sample size formula, 91 participants per group were required (182 in total). However, in light of new information from an external pilot study, the investigators undertook a sample size re-estimation (SSR) calculation, which was approved by the ethics committee. The new information was a narrower estimate of the population SD, of 16.8 points on the SPADI, from an external pilot RCT (n = 24), together with a correlation between baseline and 3-month SPADI scores of 0.5. Using the ANCOVA sample size formula, with an SD of 17 points, a correlation between baseline and 3-month SPADI scores of 0.50, 80% power, 5% two-sided significance and an MCID (as before) of 10 points, it was estimated that 34 participants per group were required (68 in total). This contrasts with a sample size of 45 per group using the POST means formula with the revised SD of 17 points. Thus, with a correlation of 0.50 between baseline and follow-up, using the ANCOVA method for sample size estimation, we can reduce the sample size by approximately 25% (i.e. 1 − 0.5² = 0.75) compared to the POST treatment means method.
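
As a rough arithmetic check of the re-estimated figure (using \( Z_{0.975}\approx 1.96 \) and \( Z_{0.80}\approx 0.84 \)):

$$ {n}_{ANCOVA}\ \mathrm{per}\ \mathrm{group}=\frac{2\times {17}^2\times {\left(1.96+0.84\right)}^2}{10^2}\times \left(1-{0.5}^2\right)\approx 45.3\times 0.75\approx 34. $$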

Should the method of sample size estimation mirror the proposed method of statistical analysis (of the outcome data)? That is, if an ANCOVA model is likely to be used in the statistical analysis of the collected outcome data, should an ANCOVA method that allows for the correlation also be used in the sample size estimation? And if so, what correlation (between baseline and follow-up outcomes) should be assumed? With the other factors/parameters in the sample size estimation unchanged, an assumed correlation of 0.70 (between baseline and follow-up outcomes) means that we can approximately halve the required sample size at the study design stage (since 1 − 0.70² = 0.51) if we use an ANCOVA method rather than a comparison of POST treatment means. It is, however, paramount to assess how realistic a correlation of 0.50 or 0.70 between baseline and post-randomisation outcomes is, and to make evidence-based assumptions about these values, as an overestimated correlation could result in an underpowered study. The aim of this paper is to estimate the observed correlations between baseline and post-randomisation follow-up PROMs from a number of RCTs, bridging a gap in the evidence.

Methods

Data sources

This was a secondary analysis of RCTs with continuous patient-reported outcomes (both primary and secondary) undertaken in the School of Health and Related Research (ScHARR) at the University of Sheffield and published between 1998 and 2019. Secondary ethics approval was gained through the University of Sheffield ScHARR Ethics Committee (Reference 024041).

Statistical analysis

For each included trial, the correlation between baseline and post-randomisation outcomes was calculated using the Pearson correlation coefficient [7]. Given a set of n pairs of observations (x1, y1), (x2, y2), …, (xn, yn), with means \( \overline{x} \) and \( \overline{y} \) respectively, then the Pearson correlation coefficient r is given by:

$$ r=\frac{\sum \limits_{i=1}^n\left({y}_i-\overline{y}\right)\left({x}_i-\overline{x}\right)}{\sqrt{\sum \limits_{i=1}^n{\left({y}_i-\overline{y}\right)}^2\sum \limits_{i=1}^n{\left({x}_i-\overline{x}\right)}^2}} $$

with standard error \( SE(r)=\sqrt{\frac{1-{r}^2}{n-2}} \).
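
As an illustration (not the trials' analysis code), the coefficient and its standard error can be computed directly; the sketch below assumes two lists of paired, hypothetical baseline and follow-up scores.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient for paired observations x and y."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    syy = sum((yi - mean_y) ** 2 for yi in y)
    return sxy / sqrt(sxx * syy)

def se_r(r, n):
    """Standard error of the Pearson correlation coefficient."""
    return sqrt((1 - r ** 2) / (n - 2))

# Hypothetical baseline and follow-up scores for six participants.
baseline = [40, 55, 62, 30, 48, 51]
follow_up = [35, 50, 70, 28, 45, 58]
r = pearson_r(baseline, follow_up)
print(round(r, 2), round(se_r(r, len(baseline)), 2))
```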

A variety of summary statistics for the baseline to post-randomisation correlations were calculated, including (1) the unweighted sample mean and median; (2) a weighted sample mean, using the fixed-effect inverse variance method [4]; and (3) a sample mean allowing for clustering by trial, derived from a multilevel mixed-effects linear model with a random effect for the trial, estimated using restricted maximum likelihood (REML) [8]. The correlations were calculated overall and then split by trial, outcome and time point.
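
A sketch of how these summaries might be computed is given below, assuming a long-format pandas DataFrame with one row per estimated correlation and columns r (correlation), n (sample size) and trial (trial identifier); the column names and the use of statsmodels are assumptions rather than a description of the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def summarise_correlations(df: pd.DataFrame) -> dict:
    """Unweighted, inverse-variance weighted and trial-clustered mean correlations."""
    se = np.sqrt((1 - df["r"] ** 2) / (df["n"] - 2))   # SE(r), as defined above
    weights = 1 / se ** 2                              # fixed-effect inverse-variance weights
    weighted_mean = np.average(df["r"], weights=weights)
    # Multilevel model: random intercept for trial, fitted by REML.
    mixed = smf.mixedlm("r ~ 1", df, groups=df["trial"]).fit(reml=True)
    return {
        "unweighted_mean": df["r"].mean(),
        "median": df["r"].median(),
        "weighted_mean": weighted_mean,
        "clustered_mean": mixed.params["Intercept"],
    }
```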

Results

Trials

Table 1 shows a summary of the 20 RCTs included in the analysis. Various outcome measures were used in the trials for both the primary and secondary outcomes. Table 2 provides a brief description of the outcome measures and how they were scaled. Three of the outcome measures, the Clinical Outcomes in Routine Evaluation - Outcome Measure (CORE-OM), Pelvic Organ Prolapse/Urinary Incontinence Sexual Questionnaire (PISQ-31) and SPADI, have a total score and various subscales: both the total and subscale scores were included in the analysis. The 20 included RCTs had sample sizes (at baseline) ranging from 49 to 2659 participants. The time points for the post-randomisation follow-up assessments ranged from 7 days to 24 months. The maximum sample size for the baseline to follow-up correlations ranged from 39 to 2659 participants. Four hundred and sixty-four correlations between baseline and follow-up were estimated in the 20 trials. Table 1 shows, for example, that the Leg Ulcer trial (Trial 1) had 9 outcomes, all assessed at 2 post-randomisation time points (3 and 12 months), giving a total of 2 × 9 = 18 correlations. The median number of outcomes per trial was 9 and ranged from 1 (in the 3Mg trial) to 15 (AIM-High, PLINY and IPSU). The median number of correlations calculated per trial was 16.5 and ranged from 1 (in the 3Mg trial) to 65 (in the DiPALS trial). The median number of post-randomisation follow-up time points across the 20 trials was 2.5 and ranged from 1 to 6.

Table 1 Summary of the 20 randomised controlled trials
Table 2 Description of the outcome measures used in 20 randomised controlled trials and how they are scaled/scored

Correlation

Figure 2 shows a histogram of the 464 estimated baseline to follow-up correlations. The histogram is reasonably symmetrical, and the overall mean correlation was 0.50 (median 0.51). The baseline to follow-up correlations ranged from −0.13 to 0.91, with an interquartile range of 0.41 to 0.60. Because the sample sizes for the trials varied from 49 to 2659 participants, a weighted estimate of the mean correlation was also calculated, using the inverse variance method; this was 0.51. Because the 464 correlation estimates came from 20 trials and were therefore nested or clustered within trials, the mean correlation was also estimated after allowing for clustering by trial, using a multilevel mixed-effects linear regression model (with a random effect or intercept for the trial); this was 0.49 (95% confidence interval [CI] 0.45 to 0.53). These other summary estimates were very similar to the simple unweighted mean value of 0.50.

Fig. 2

Histogram of n = 464 correlations with overall median, 25th and 75th percentiles

Table 3 shows the baseline to post-randomisation follow-up correlations aggregated by trial. The largest mean correlation per trial, 0.67, was observed in the PLINY trial; the lowest mean correlations were observed in the POLAR trial. The trial with the widest range of correlations was the PRACTICE trial. Figure 3 shows a box and whisker plot of how the observed baseline to follow-up correlations varied across the 20 RCTs, along with the overall median correlation. There was considerable intertrial variation in the correlations, and it should be noted that some of the trials had six or fewer baseline to follow-up correlations estimated (3Mg [N = 1 outcome and correlation], BEADS [N = 3], Homeopathy [N = 5] and PRACTICE [N = 6]).

Table 3 Baseline to post-randomisation follow-up correlations by trial
Fig. 3

Box and whisker plot of (n = 464) correlations by trial

The time points for the post-randomisation follow-up assessments ranged from 7 days to 24 months. Table 4 shows the baseline to post-randomisation follow-up correlations by post-randomisation follow-up time point. Figure 4 shows a scatter plot of the baseline to follow-up correlations by post-randomisation follow-up time point for the 464 correlations from the 20 trials. Although it is not obvious from the scatter plot, a multilevel mixed-effects linear regression model (with a random intercept for the trial) suggests a small decline in the baseline to post-randomisation follow-up correlation as the time between the assessments increases. The estimated regression coefficient from the model was −0.003 (95% CI −0.006 to −0.001; P = 0.005), implying that for every 1-month increase in the time from baseline to the post-randomisation follow-up the correlation declines by 0.003. Figures 5 and 6 show how the correlations change over time for the Short Form Health Survey (SF-36) outcomes (282 correlations from 12 trials) and the EuroQol five-dimension scale (EQ-5D) Utility score outcome (29 correlations from 12 trials). A similar pattern to the overall pattern is observed for these specific outcomes, with a small decline over time in the baseline to follow-up correlations (0.003 per month for the SF-36 and 0.002 for the EQ-5D).
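
A sketch of how such a model might be fitted is shown below, assuming the same long-format data frame as in the Methods sketch with an additional months column for the baseline to follow-up interval; again the column names and use of statsmodels are illustrative assumptions, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def correlation_time_slope(df: pd.DataFrame):
    """Slope of baseline-to-follow-up correlation per month of follow-up time."""
    # Random intercept for trial; fixed-effect slope for follow-up time in months.
    fit = smf.mixedlm("r ~ months", df, groups=df["trial"]).fit(reml=True)
    return fit.params["months"], fit.conf_int().loc["months"]
```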

Table 4 Baseline to post-randomisation follow-up correlations by post-randomisation follow-up time point
Fig. 4

Scatter plot of correlations against post-randomisation follow-up time point with regression line (464 correlations from 20 trials)

Fig. 5

Scatter plot of correlations against post-randomisation follow-up time point with regression line, SF-36 outcomes (282 correlations from 12 trials)

Fig. 6

Scatter plot of correlations against post-randomisation follow-up time point with regression line, EQ-5D Utility outcome (29 correlations from 12 trials)

Table 5 shows the baseline to post-randomisation correlations by outcome. The SF-36 was the most popular outcome, used in 12 of the 20 trials. The correlations for the SF-36 and its various dimensions (12 trials and n = 282 correlations) had a mean of 0.51 (median 0.53) and ranged from 0.06 to 0.91. The second most popular outcome was the EQ-5D, which was also used in 12 of the trials. The correlations for the EQ-5D outcomes only (12 trials and n = 50 correlations) had a mean of 0.49 (median 0.51) and ranged from −0.13 to 0.87. Three of the outcome measures in Table 5, the CORE-OM, PISQ-31 and SPADI, have a total score and various subscales. There was no clear pattern in the correlations and no reliable evidence that the total scale score correlated more highly than an individual subscale score.

Table 5 Baseline to post-randomisation follow-up correlations by outcome

Discussion

The 20 reviewed RCTs had sample sizes, at baseline, ranging from 49 to 2659 participants. The time points for the post-randomisation follow-up assessments ranged from 7 days to 24 months; 464 correlations between baseline and follow-up were estimated; the mean correlation was 0.50 (median 0.51; SD 0.15; range − 0.13 to 0.91).

The 20 RCTs included in this study were a convenience sample of trials and data and may not be representative of the population of all trials with PROMs. However, they include a wide range of populations and disease areas, and a variety of interventions and outcomes that are not untypical of other published trials. We also reviewed detailed reports of 181 RCTs published in the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) journal from 2004 to the end of July 2017 and found 11 NIHR HTA reports (and 12 outcomes) that had a sample size calculation based on the ANCOVA model [50]. For these 12 outcomes, the mean baseline to follow-up correlation assumed and used in the subsequent sample size calculation was 0.49 (SD 0.09) and ranged from 0.31 to 0.60. Thus, our results, with a mean correlation of 0.50, are consistent with the correlations used and published in the NIHR HTA journal.

We observed a small decline in baseline to follow-up correlations over time, of −0.003 per month. That is, for every 1-month increase in the time from baseline to the post-randomisation follow-up, the correlation declines by 0.003. Frison and Pocock [1] also report a slight decline in correlation amongst more distant pairs of time points post-randomisation, with an estimated slope of −0.009 per month apart. So our results are also consistent with a slight decline.

It is important to make maximum use of the information available from other related studies or extrapolation from other unrelated studies. The more precise the information, the better we can design the trial. We would recommend that researchers planning a study with PROMs as the primary outcome pay careful attention to any evidence on the validity and frequency distribution of the PROM and its dimensions.

Strictly speaking, our results and conclusions only apply to the study population and the outcome measures used in the 20 RCTs. Further empirical work is required to see whether these results hold true for other outcomes, populations and interventions. However, the PROMs in this paper share many features in common with other PROM outcomes, i.e. multidimensional, ordinal or discrete response categories with upper and lower bounds, and skewed distributions; therefore, we see no theoretical reasons why these results and conclusions may not be appropriate for other PROMs.

Throughout this paper, we only considered the situation where a single dimension of the PROM is used at a single endpoint. Sometimes there is more than one endpoint of interest; PROMs are typically multidimensional (e.g. the SF-36 has eight dimensions). If one of these dimensions is regarded as more important than the others, it can be named as the primary endpoint and the sample size estimates calculated accordingly. The remainder should be consigned to exploratory analyses or descriptions only.

We have also assumed a rather simple form of the alternative hypothesis: that the new treatment/intervention would improve patient-reported outcomes compared to the control/standard therapy. In practice, the hypothesis may be more complicated than this (for example, equivalence or non-inferiority rather than superiority). However, this simple form of the alternative hypothesis is not unrealistic for most superiority trials and is frequently used for other clinical outcomes. Walters [4] gives a more comprehensive discussion of multiple endpoints and suggests several methods for analysing PROMs.

Overall, 5 of the 464 observed correlations were small (less than 0.10). Two of these small correlations came from the PRACTICE trial [26]. In this trial we observed a correlation of −0.13 (n = 36 participants) between baseline and the 3-month post-randomisation follow-up for the EQ-5D visual analogue scale (VAS), and of 0.09 (n = 42 participants) between baseline and the 1-month follow-up. These correlations were based on small sample sizes (n = 36 and 42), and examination of the scatter plots suggested no outlying values and a random scatter. The EQ-5D VAS asks respondents to rate their health today on a visual analogue scale from 0 (the worst health you can imagine) to 100 (the best health you can imagine). It may be that there genuinely is no correlation in the population (of chronic obstructive pulmonary disease [COPD] patients) for this outcome.

We calculated several summary correlations to allow for clustering of the outcomes by trial and the variance or standard error of the correlation estimate. The overall summary correlation for the 464 correlations was robust to the summary measure (mean, median, weighted mean, clustered mean) and was around 0.50.

Clifton and Clifton [51] comment that baseline imbalance may occur in RCTs and that ANCOVA should be used to adjust for baseline in the analysis. Clifton et al. [52] also point out the following theoretical assumptions for using the ANCOVA method for sample size estimation: (1) the pairs of baseline and post-randomisation outcomes follow a bivariate Normal distribution; (2) the baseline to post-randomisation follow-up correlation, r, is the same in both groups; (3) the variances or SDs of the outcomes are the same in both groups. However, ANCOVA is known to be robust to departures from the assumption of Normality. The work of Heeren and D'Agostino [53] and Sullivan and D'Agostino [54] supports the robustness of the two independent samples t test and ANCOVA when applied to three-, four- and five-point ordinal scaled data using assigned scores (like PROMs), in sample sizes as small as 20 subjects per group.

Conclusions

There is a general consistency in the correlations between baseline and follow-up PROMs, with the majority being in the range 0.4 to 0.6. The implication is that we can reduce the sample size in an RCT by 25% if we use an ANCOVA model, with a correlation of 0.50, for the design and analysis. When allowing for the correlation between baseline and follow-up outcomes in the sample size calculation, it is preferable, first, to be conservative and to use existing data that are relevant to your outcome and your population if they are available; and second, to be wary of applying an 'automatic' rule of adjusting the required sample size downwards by 25% just because you have a baseline assessment.

There is a slight decline in correlation between baseline and more distant post-randomisation follow-up time points. Finally, we would stress the importance of a sample size calculation (with all its attendant assumptions) and also stress that any such estimate is better than no sample size calculation at all, particularly in a trial protocol [55, 56]. The mere fact of calculation of a sample size means that a number of fundamental issues have been considered: what is the main outcome variable, what is a clinically important effect, and how is it measured? The investigator is also likely to have specified the method and frequency of data analysis. Thus, protocols that are explicit about sample size are easier to evaluate in terms of scientific quality and the likelihood of achieving objectives.