Background

Many randomized trials involve measuring a continuous outcome at baseline and after treatment. Typical examples include trials of pravastatin for hypercholesterolemia [1], exercise and diet for obesity in osteoarthritis patients [2] and acupuncture for pain in athletes with shoulder injuries [3]. In each trial, the outcome measure used to determine the effectiveness of treatment - cholesterol, body weight or shoulder pain - was measured both before treatment had started and after it was complete.

In the case of a single post-treatment outcome assessment, there are four common ways such data can be entered into the statistical analysis of the trial. One can use the baseline score solely to check baseline comparability and enter only the post-treatment score into the analysis (I will describe this method as "POST"). Alternatively, one can analyze the change from baseline, either as an absolute difference ("CHANGE") or as a percentage change from baseline ("FRACTION"). The most sophisticated method is to construct a regression model that adjusts the post-treatment score for the baseline score ("ANCOVA"). Figure 1 describes each of these methods in mathematical terms. Figure 2 gives examples of the results of each method described in ordinary language.

Figure 1. Mathematical description of the four methods
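As a rough restatement of the definitions summarized in Figure 1 (my notation, not reproduced from the figure), writing Y_base and Y_post for a patient's baseline and post-treatment scores, the four outcome measures are approximately:

```latex
\begin{aligned}
\text{POST:}     &\quad Y_{\text{post}} \\
\text{CHANGE:}   &\quad Y_{\text{post}} - Y_{\text{base}} \\
\text{FRACTION:} &\quad (Y_{\text{post}} - Y_{\text{base}}) / Y_{\text{base}} \\
\text{ANCOVA:}   &\quad Y_{\text{post}} = \beta_0 + \beta_1 Y_{\text{base}} + \beta_2 G + \varepsilon
\end{aligned}
```

Here G is the treatment-group indicator, so that β2 is the baseline-adjusted estimate of the treatment effect in the ANCOVA model.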

Figure 2. Examples of the results of a trial analyzed by each method, described in ordinary language terms

Some trials assess outcome several times after treatment, a design known as "repeated measures." Each of the four methods described above can be used to analyze such trials by using a summary statistic such as a mean or an area under the curve [4]. There are several more complex methods of analyzing such data, including repeated measures analysis of variance and generalized estimating equations [5]. These methods are of particular value when the post-treatment scores have a predictable course over time (e.g. quality of life in late-stage cancer patients) or when it is important to assess interactions between treatment and time (e.g. long-term symptomatic medication). This paper will concentrate on the simpler case, where time is not an important independent variable.
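As an illustrative sketch (not part of the original analysis; the scores and visit times are hypothetical), a repeated-measures outcome can be collapsed to one summary value per patient, such as the mean of the follow-up scores or a trapezoidal area under the curve:

```python
import numpy as np

# Hypothetical repeated-measures data for one patient: pain (mm VAS)
# measured at weeks 0, 2, 4 and 8 after the start of treatment
weeks = np.array([0.0, 2.0, 4.0, 8.0])
pain = np.array([52.0, 40.0, 38.0, 35.0])

mean_score = pain.mean()  # simple mean of the repeated scores

# Area under the curve by the trapezoidal rule
auc = np.sum((pain[1:] + pain[:-1]) / 2 * np.diff(weeks))

print(f"mean = {mean_score:.1f} mm, AUC = {auc:.1f} mm*weeks")
```

Either summary (one number per patient) can then be analyzed by POST, CHANGE, FRACTION or ANCOVA exactly as a single post-treatment score would be.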

The choice of which method to use can be determined by analysis of the statistical properties of each. An important criterion for a good statistical method is that it should reduce the rate of false negatives (β). The β of a statistical test is usually expressed in terms of statistical power (1 − β). Power is normally fixed, typically at 0.8 or 0.9, and the required amount of data (e.g. number of evaluable patients) is calculated. A method that requires relatively few data to provide a given level of statistical power is described as efficient.
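To make the efficiency idea concrete (an illustration, not part of the original calculations), the sketch below uses the statsmodels power routines to ask how many patients per arm a two-sample t-test needs to reach 80% or 90% power; the standardized effect size of 0.5 corresponds to the 5 mm difference and 10 mm standard deviation used later in the Methods.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.5  # standardized difference, e.g. 5 mm effect / 10 mm SD

for power in (0.8, 0.9):
    # Solve for the number of patients per arm at a two-sided alpha of 0.05
    n_per_arm = analysis.solve_power(effect_size=effect_size, power=power, alpha=0.05)
    print(f"power {power:.0%}: about {n_per_arm:.0f} patients per arm")
```

A more efficient analysis method lowers the effective standard deviation of the comparison, and therefore the number of patients needed for the same power.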

The characteristics of the four methods - POST, CHANGE, FRACTION and ANCOVA - have been studied by statisticians for some time [6, 7, 8]. In this paper, I aim to provide statistical data that can guide clinical research yet is readily comprehensible by non-statisticians. Accordingly, I will compare the methods using a hypothetical trial and express results in terms of statistical power.

Methods

All calculations and simulations were conducted using the statistical software Stata 6.0 (Stata Corp., College Station, Texas). I created a hypothetical pain trial with patients divided evenly between a treatment and a control group. The pain score for any individual patient was sampled from a normal distribution. The mean score at baseline was 50 mm on a visual analog scale of pain (VAS); after treatment, mean pain was expected to be 50 mm in controls and 45 mm in treated patients. The standard deviation of all scores was 10. The text of the simulation is given in the appendix (appendix.doc).
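The original simulation was written in Stata (see the appendix); the sketch below is my reconstruction of the same design in Python, drawing correlated baseline and post-treatment scores from a bivariate normal distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n_per_arm=50, rho=0.5, sd=10.0, effect=5.0):
    """Simulate one trial: baseline mean 50 mm VAS in both groups; post-treatment
    mean 50 mm in controls and 45 mm in treated patients (an absolute 5 mm effect)."""
    cov = [[sd**2, rho * sd**2],
           [rho * sd**2, sd**2]]
    ctrl = rng.multivariate_normal([50, 50], cov, size=n_per_arm)
    trt = rng.multivariate_normal([50, 50 - effect], cov, size=n_per_arm)
    return ctrl, trt  # columns: baseline score, post-treatment score

ctrl, trt = simulate_trial()
print(ctrl.mean(axis=0), trt.mean(axis=0))  # approximately [50, 50] and [50, 45]
```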

I calculated the statistical power of the different methods of analysis for this trial given a sample size of 100 patients. As power varies according to the correlation between baseline and follow-up scores, a range of different possible correlations was used. The power for POST, CHANGE and ANCOVA was calculated using the "sampsi" function of Stata. This derives power analytically using formulae developed by Frison and Pocock [6]. The power for FRACTION was calculated by the simulation described above. The simulation was first validated against Stata's results for POST and CHANGE at a correlation of 0.5. It was then conducted using 1000 repetitions, calculating t-tests for FRACTION at a range of correlations between 0.2 and 0.8. The number of results in which p was less than 5% was counted.
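As a sketch of the corresponding calculations outside Stata (my reconstruction, assuming per-group variance expressions for the treatment-effect estimate of 2σ²/n for POST, 4σ²(1 − ρ)/n for CHANGE and 2σ²(1 − ρ²)/n for ANCOVA, in the spirit of Frison and Pocock), analytic power for POST, CHANGE and ANCOVA can be combined with simulated power for FRACTION:

```python
import numpy as np
from scipy import stats

def analytic_power(method, n_per_arm=50, sd=10.0, effect=5.0, rho=0.5, alpha=0.05):
    """Approximate power from the variance of the treatment-effect estimate."""
    var = {"POST": 2 * sd**2 / n_per_arm,
           "CHANGE": 4 * sd**2 * (1 - rho) / n_per_arm,
           "ANCOVA": 2 * sd**2 * (1 - rho**2) / n_per_arm}[method]
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf(effect / np.sqrt(var) - z_crit)

def simulated_power_fraction(rho, n_per_arm=50, sd=10.0, effect=5.0, reps=1000, alpha=0.05):
    """Power of FRACTION: proportion of simulated trials with p < alpha
    for a t-test on proportional change from baseline."""
    rng = np.random.default_rng(1)
    cov = [[sd**2, rho * sd**2], [rho * sd**2, sd**2]]
    hits = 0
    for _ in range(reps):
        ctrl = rng.multivariate_normal([50, 50], cov, size=n_per_arm)
        trt = rng.multivariate_normal([50, 50 - effect], cov, size=n_per_arm)
        frac_c = (ctrl[:, 1] - ctrl[:, 0]) / ctrl[:, 0]
        frac_t = (trt[:, 1] - trt[:, 0]) / trt[:, 0]
        hits += stats.ttest_ind(frac_c, frac_t).pvalue < alpha
    return hits / reps

for rho in (0.2, 0.5, 0.8):
    analytic = {m: round(analytic_power(m, rho=rho), 2)
                for m in ("POST", "CHANGE", "ANCOVA")}
    print(rho, analytic, "FRACTION:", round(simulated_power_fraction(rho), 2))
```

The analytic values are approximations based on the normal distribution and the assumed variance expressions; the published figures in Table 1 come from Stata's "sampsi" and from the Stata simulation in the appendix.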

Results and Discussion

The true positive rates of the four statistical methods at different correlations are shown in Table 1. These data are equivalent to statistical power, or 1 − β. As has been previously reported [6], ANCOVA has the highest statistical power. CHANGE has acceptable power when the correlation between baseline and post-treatment scores is high; when correlations are low, POST has reasonable power. FRACTION has poor statistical efficiency at all correlations.

Table 1 Statistical power of each method of analysis

Moreover, the power of FRACTION is sensitive to changes in the characteristics of the baseline distribution. If the range of baseline values is large, the variance of FRACTION increases disproportionately and power falls. Simulations were repeated with the standard deviations and difference between groups doubled. There was no difference in the power of POST, CHANGE or ANCOVA. The power of FRACTION fell dramatically: at correlations of 0.2, 0.35, 0.5, 0.65 and 0.8 respectively, power was 18%, 24%, 33%, 45% and 63%.

It is arguable that the method of simulation is biased against FRACTION because the treatment effect is additive, that is, the simulation models an absolute 5 mm difference between groups. In theory, the difference between FRACTION and CHANGE should decrease if the treatment effect is proportional. The simulation was therefore repeated with the treatment group experiencing an average 10% decrease from baseline. The correlation between baseline and follow-up scores was varied randomly between 0.2 and 0.8. The p values from a t-test of FRACTION and CHANGE were directly compared over 1000 simulations: p values were lower for CHANGE approximately 65% of the time.
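A sketch of this check (my reconstruction; the exact proportional-effect model is an assumption, here a 10% reduction applied to each treated patient's post-treatment score):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_per_arm, sd, reps = 50, 10.0, 1000
change_wins = 0

for _ in range(reps):
    rho = rng.uniform(0.2, 0.8)  # correlation varied at random between trials
    cov = [[sd**2, rho * sd**2], [rho * sd**2, sd**2]]
    ctrl = rng.multivariate_normal([50, 50], cov, size=n_per_arm)
    trt = rng.multivariate_normal([50, 50], cov, size=n_per_arm)
    trt[:, 1] *= 0.9  # proportional effect: each treated post score falls by 10%

    p_change = stats.ttest_ind(ctrl[:, 1] - ctrl[:, 0], trt[:, 1] - trt[:, 0]).pvalue
    p_frac = stats.ttest_ind((ctrl[:, 1] - ctrl[:, 0]) / ctrl[:, 0],
                             (trt[:, 1] - trt[:, 0]) / trt[:, 0]).pvalue
    change_wins += p_change < p_frac

print(f"CHANGE gave the smaller p value in {change_wins / reps:.0%} of simulated trials")
```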

Theoretical considerations suggest two further disadvantages to FRACTION. First, because it incorporates both baseline and post-treatment scores, it would appear to control for any chance baseline imbalance between groups. However, this is not the case because of regression to the mean: FRACTION will create a bias towards the group with poorer baseline scores (the same is true for CHANGE; POST causes bias in the opposite direction). Second, because it is calculated using a ratio, it may cause outcome data to be non-normally distributed. In a bivariate normal distribution (such as a baseline and post-treatment score), any statistic using either variable alone or combining both by addition or subtraction will be normally distributed. There is no analytic reason why a statistic created by multiplying or dividing one variable by the other should necessarily have a normal distribution.
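This last point is easy to illustrate numerically (a sketch, not part of the original analysis): with correlated bivariate normal scores, the simple difference remains symmetric while the percentage change tends to be right-skewed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
cov = [[100, 50], [50, 100]]  # SD 10 for both scores, correlation 0.5
scores = rng.multivariate_normal([50, 50], cov, size=5000)
base, post = scores[:, 0], scores[:, 1]

change = post - base            # linear combination of normals: still normal
fraction = (post - base) / base  # ratio: not guaranteed to be normal

print("skewness of CHANGE  :", round(stats.skew(change), 3))
print("skewness of FRACTION:", round(stats.skew(fraction), 3))
```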

Conclusion

Reporting a percentage change from baseline gives the results of a randomized trial in clinically relevant terms immediately accessible to patients and clinicians alike. This is presumably why researchers investigating issues such as the effects of medication on hot flashes [9], or of different chemotherapy regimes on quality of life [10], report this statistic.

However, percentage change from baseline is statistically inefficient. Perhaps counterintuitively, it does not correct for imbalance between groups at baseline. It may also create a non-normally distributed statistic from normally distributed data. Percentage change from baseline should therefore not be used in statistical analysis. Trialists wishing to report percentage change should first use another method, preferably ANCOVA, to test significance and calculate confidence intervals. They should then convert results to percentage change by using mean baseline and post-treatment scores. For an example of this approach, see Crouse et al. [11].
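A sketch of this two-step approach with hypothetical data (the conversion shown, dividing the adjusted difference by the mean baseline score, is one simple way to re-express the result as a percentage change):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 50
cov = [[100, 50], [50, 100]]
ctrl = rng.multivariate_normal([50, 50], cov, size=n)
trt = rng.multivariate_normal([50, 45], cov, size=n)

df = pd.DataFrame({
    "baseline": np.r_[ctrl[:, 0], trt[:, 0]],
    "post": np.r_[ctrl[:, 1], trt[:, 1]],
    "treated": np.r_[np.zeros(n), np.ones(n)],
})

# ANCOVA: post-treatment score adjusted for baseline, with treatment as the term of interest
fit = smf.ols("post ~ baseline + treated", data=df).fit()
effect = fit.params["treated"]
ci_low, ci_high = fit.conf_int().loc["treated"]

# Re-express the adjusted difference in percentage terms for reporting only
mean_baseline = df["baseline"].mean()
print(f"adjusted difference {effect:.1f} mm "
      f"({100 * effect / mean_baseline:.0f}% of mean baseline), "
      f"95% CI {ci_low:.1f} to {ci_high:.1f} mm")
```

The hypothesis test and confidence interval come from the ANCOVA model; the percentage figure is purely a presentation of the same estimate in more familiar clinical terms.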

The findings presented here reconfirm previously reported data suggesting that ANCOVA is the method of choice for analyzing the results of trials with baseline and post-treatment measurement. In cases where ANCOVA cannot be used, such as with small samples or where the assumptions underlying ANCOVA modeling do not hold, CHANGE or POST are acceptable alternatives, especially if baseline variables are comparable between groups (perhaps ensured by stratification) and if the correlation between baseline and post-treatment scores is either high (for CHANGE) or low (for POST). The use of FRACTION should be avoided.