Background

It has recently been suggested that many treatments are likely to have non-additive effects on costs and quality-adjusted life-years (QALYs), and that ignoring such interactions and making separate decisions on treatments which could in practice be used together may not achieve the best allocation of healthcare resources [1, 2]. Estimates of the incidence, magnitude and direction of interactions for economic endpoints are therefore required for decision-makers considering which interventions can be assessed independently and for researchers conducting factorial trials with economic evaluations or model-based economic evaluations on multiple treatments.

Factorial randomised controlled trials (RCTs) provide unbiased estimates of the magnitude of interactions. These studies randomise patients to different levels of at least two factors: for example, a 2 × 2 design may compare placebo, A, B and A + B. Taking account of interactions when analysing factorial trials avoids bias, but reduces statistical power, while omitting interaction terms and assuming that there is no interaction is more efficient, but introduces bias whenever the true interaction is non-zero [3,4,5,6]. In practice, researchers cannot know whether treatments genuinely interact and have only a single sample in which to decide which interactions matter and estimate treatment effects. Analysts must therefore pre-specify a decision rule or criterion that determines the circumstances in which interactions will be included in the base case analysis.
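To see why, write μ_0, μ_a, μ_b and μ_ab for the mean outcomes in the placebo, A, B and A + B groups. An analysis that ignores the interaction and compares all patients receiving A with all those not receiving A (an "at-the-margins" analysis) estimates

$$ \frac{\mu_a + \mu_{ab}}{2} - \frac{\mu_0 + \mu_b}{2} = \delta_A + \tfrac{1}{2} I_{AB}, $$

so it is biased by half the interaction term I_AB = μ_0 − μ_a − μ_b + μ_ab whenever that term is non-zero, whereas the "inside-the-table" simple effect δ_A = μ_a − μ_0 is unbiased but uses only half of the data.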

Analyses of primary clinical endpoints generally omit interactions that are not statistically significant [3, 4, 7]. Economic evaluation, however, focuses on estimating expected costs and benefits to inform decision-making, where statistical inference is arguably irrelevant [8]. It has therefore been suggested that it is important to avoid bias by including interactions in economic evaluations of factorial RCTs unless they are shown to be negligible [1]. However, even within this context there are several reasons to avoid conducting inefficient analyses. Firstly, inefficient analyses will over-estimate the value of further information, potentially displacing spending on healthcare today with over-investment in research. Secondly, small sample sizes or inefficient analyses may mean that (by chance) the treatment with highest expected net monetary benefit (NMB) in the sample being analysed is not the one that would genuinely maximise NMB in the population. However, it is not known what criteria achieve the best balance in minimising inefficiency and bias for economic evaluations.

This paper aims to assess the magnitude of interactions within a sample of published economic evaluations, evaluate the impact that different analytical methods would have had on the results and compare the performance of different criteria for identifying which interactions should be taken into account. We first conducted a systematic review of full economic evaluations conducted alongside factorial RCTs and reviewed the methods used in different studies and the incidence, magnitude, statistical significance, and type of interactions observed within the trials. As part of this review, we identified the existence of “mixed” interactions and developed the “interaction:effect ratio” as a measure of the magnitude and direction of interactions compared with main effects. For those studies reporting sufficient data, we assessed whether changing the form of analysis to ignore or include interactions would have changed the conclusions. We then evaluated how well different criteria for determining which interactions are considered in the analysis would perform in practice, using simulated data generated to match the summary statistics of the published examples.

Systematic review

Methods

A systematic review was conducted to identify studies for the simulation study. This aimed to identify all factorial RCTs with economic evaluations published before 2010 evaluating any intervention/comparator in any patient group. The protocol is available in Additional file 1. MEDLINE (including daily update and old MEDLINE), EMBASE, Econlit and Journals@Ovid were searched through Ovid on 9th February 2010. We also searched www.bmj.com, Tufts CEA registry (https://research.tufts-nemc.org/cear/Default.aspx), Wiley Interscience, National Institute for Health Research (NIHR) publications list (http://www.hta.ac.uk) and Centre for Reviews and Dissemination (CRD, http://www.crd.york.ac.uk/crdweb) Database on the same date. The review was not updated because the original review was sufficient to identify a representative sample of studies and provide the basis for the simulation study. The review followed PRISMA guidelines [9].

Search terms to identify factorial trials (e.g. “factorial”, “2 x 2”, “2 by 2”, “two by two”, or “2 x 3”) were combined with search terms to identify economic evaluations (“cost-effect*” or “economic evaluation”; see Additional file 1). Since some papers on factorial trial-based economic evaluations do not describe the design as factorial, clinical papers on factorial trials that were picked up in the main database searches and which mentioned plans for an economic evaluation or collection of cost data were flagged. Additional targeted literature searches were then conducted to identify papers reporting economic evaluations of these specific factorial trials.

One author (HD) examined titles and abstracts to assess whether they met all of the following inclusion criteria:

  • Described the methods and/or results of a cost-effectiveness, cost-utility, cost-consequence or cost-benefit analysis quantifying the costs and benefits of interventions designed to improve health or affect healthcare systems.

  • Used patient- or cluster-level data from a factorial RCT, as defined in Additional file 1.

  • Published at least brief details of the methods and/or results of the trial-based economic evaluation on/before 31st December 2009. Studies were not excluded from the review based on language, providing that at least an English abstract was available. For completeness, protocols published as journal articles by 31st December 2009 were also included, to give information on intended analytical methods.

The same author extracted data on study characteristics, study design, statistical methods and results (See Additional file 2). Mean costs and mean health benefits within each cell of the factorial design and their standard deviations were extracted if reported. These data were used in the simulation study and to estimate the magnitude, influence and (where possible) statistical significance of interactions.

Interactions were placed in one of four categories:

  • super-additive: where the effect of the combination is greater than the sum of the parts;

  • sub-additive: where the effect of the combination is less than the sum of the parts, but the interaction does not change the direction of effects;

  • qualitative: where at least one of the treatments under investigation changes sign (not just magnitude) depending on whether or not the other therapy is given; and

  • mixed: we developed the “mixed” category to reflect situations where one factor decreases outcome while the other increases it, such that the interaction has the same sign as one treatment effect, but the opposite sign from the other.

To measure the magnitude of interactions relative to between-group differences, we developed the interaction:effect ratio, which indicates both the size of interactions and whether the interaction is super-additive, sub-additive/mixed or qualitative. The interaction:effect ratio (IER_AB) equals the interaction term (I_AB = μ_0 − μ_a − μ_b + μ_ab) divided by the simple effect of A (δ_A):

$$ {IER}_{AB} = I_{AB} / \delta_A = \frac{\mu_0 - \mu_a - \mu_b + \mu_{ab}}{\mu_a - \mu_0} \qquad (1) $$

Simple effects comprise the difference in means between the group receiving one treatment and the group not receiving that treatment (δ_A = μ_a − μ_0). When both treatments have the same direction of effect (e.g. when A and B both increase cost, or both decrease cost), the factor defined as A is the one whose simple effect has the smaller absolute magnitude (i.e. |μ_a − μ_0| < |μ_b − μ_0|). For mixed interactions, factor A is the factor for which δ_A has the opposite sign to I_AB. These rules ensure that qualitative interactions (those changing the ranking of treatments) have interaction:effect ratios < −1. In all cases, interaction:effect ratios < −1 indicate qualitative interactions, ratios between −1 and 0 indicate sub-additive or mixed interactions, ratios equal to 0 indicate additive effects, while interaction:effect ratios > 0 indicate super-additive interactions.
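As a worked illustration (not part of the review itself; the function names and numbers are ours), the following Python sketch computes IER_AB from the four cell means and applies the classification rules above:

```python
def interaction_effect_ratio(mu_0, mu_a, mu_b, mu_ab):
    """Compute IER_AB = I_AB / delta_A (Eq. 1).

    Factor A must already be labelled according to the rules above:
    the smaller simple effect when both effects share a direction, or
    the factor whose simple effect opposes I_AB for mixed interactions.
    """
    i_ab = mu_0 - mu_a - mu_b + mu_ab      # interaction term
    delta_a = mu_a - mu_0                  # simple effect of A
    return i_ab / delta_a

def classify(ier):
    """Map an interaction:effect ratio to the categories used in the review."""
    if ier < -1:
        return "qualitative"
    if ier < 0:
        return "sub-additive or mixed"
    if ier == 0:
        return "additive"
    return "super-additive"

# Hypothetical cost example: A alone saves 100, B alone saves 300, but A + B
# saves only 250, so the combination is worth less than the sum of the parts
# and adding A on top of B actually increases cost.
ier = interaction_effect_ratio(mu_0=1000, mu_a=900, mu_b=700, mu_ab=750)
print(ier, classify(ier))   # -1.5 -> qualitative (the ranking of treatments changes)
```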

Results

Searches identified 1671 references (Fig. 1, Additional file 1). Of these, 40 complete studies presenting economic evaluation results, 13 published protocols and one prematurely-terminated study met the inclusion criteria. Additional file 2 gives details of all included studies.

Fig. 1 Flow diagram showing study identification

Of the completed studies, 23% (9/40) allowed for interactions between factors when analysing the primary clinical endpoint, 53% (21/40) assumed no interaction, while 25% (10/40) did not clearly state their methods (Table 1). Twenty studies (50%) used regression methods for the primary endpoint, of which five included interaction terms, seven did not and eight did not clearly describe their methods. Four studies used inside-the-table analysis and 14 used at-the-margins. Only three studies (8%) observed statistically significant interactions for the primary endpoint, although nine others (23%) observed large or qualitative interactions that did not reach statistical significance or for which significance was not reported. Interaction results were not clearly reported for 15 studies.

Table 1 Characteristics of the studies meeting inclusion criteria

By contrast, 53% (21/40) of completed studies allowed for interactions in their base case economic evaluation: more than twice the number allowing for interactions in the primary endpoint. Studies were also more likely to report sufficient information to identify whether interactions were taken into account for cost-effectiveness than primary endpoints, although in most cases it was necessary to infer the methods used from the tables reported. Only five studies analysed economic results using regression analyses, while two used event-based cost-effectiveness analysis, 17 inside-the-table and 14 at-the-margins; this may reflect the difficulties associated with regression-based economic evaluation identified previously [1].

Fifteen completed studies (38%) presented the probability of treatment being cost-effective within the text or as cost-effectiveness acceptability curves. Of these: nine studies presented pair-wise comparisons giving the probability that one treatment is cost-effective compared with a single comparator; three studies presented figures showing how the probability of each treatment evaluated in the trial having highest NMB varies with ceiling ratio; and a further three studies presented acceptability curves for both pair-wise and multiple comparisons. Six further studies quantified uncertainty in other ways (e.g. scatter graphs or confidence intervals). One study also presented the value of information [11,12,13].

Sixteen studies (40%) reported results inside-the-table in sufficient detail that interactions for both costs and health benefits could be directly evaluated (See Additional file 3). Large interactions arose frequently: 33% (24/72) of interactions had an absolute magnitude larger than one or more simple effect (interaction:effect ratios > 1 or < −1; Table 2). Interaction:effect ratios varied between −44 and 232. Overall, 33% of interactions were super-additive (23/72), 49% (35/72) were sub-additive or qualitative, while 17% (12/72) were mixed (Table 2). Large and qualitative interactions occurred at least as commonly for health benefits as for costs and NMB. Among the studies measuring health in units other than QALYs, 50% (7/14) of interactions were larger than simple effects. However, although 29% (7/24) of studies had qualitative interactions for NMB, the interaction changed the treatment adoption decision in only one case [15].

Table 2 Magnitude of interactions for the 16 studies reporting mean costs and mean health benefits for each cell within the factorial design

Six studies (reporting nine interactions) reported standard deviations around both costs and health benefits in each group [15,16,17,18,19,20]. Within these studies, 56% (5/9) of interactions for cost were statistically significant (p < 0.05), although there were no statistically significant interactions for health benefits or NMB.

Simulation study

Methods

The six studies reporting standard deviations for each group [15,16,17,18,19,20] were used in simulation work to evaluate the different criteria for identifying which interactions should be included in economic analyses. Using simulated data means that: (a) whereas for a real trial, we only see one sample, for simulated data, we can generate multiple samples and see how performance varies; (b) we specify the true data-generating mechanism and can compare the conclusions of each individual sample against the true answer; (c) we can vary the characteristics of the data-generating mechanism (e.g. interaction size and sample size) and see the impact on the results. For simplicity, simulations focused on balanced 2 × 2 full factorial designs with no covariates or missing data. We therefore only included the first two levels for each factor evaluated by Hollis et al. [20] and the Alexander Technique, Exercise And Massage (ATEAM) trial [17].

In addition to the original studies, five variants of each trial were simulated using interaction terms that were 0, 50% or 200% of the size observed in the original study, and using double the sample size with either the original interaction or zero interaction (See Additional file 3). The analysis used Stata version 12 (StataCorp, College Station, Texas) to simulate and analyse 300 samples of each of the 36 scenarios from the six trials. The data-generation methods and Stata code are shown in Additional file 3 and use the data in Additional file 5.
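The full data-generating mechanism and Stata code are given in Additional file 3; purely as an illustration of the approach, the Python sketch below (our own re-implementation, using placeholder summary statistics rather than values from any included trial) simulates balanced 2 × 2 samples with gamma-distributed costs and Gaussian health benefits, and rescales the cell-mean interaction to create the scenario variants:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2024)

def rescale_interaction(cell_means, scale):
    """Rescale the cell-mean interaction to `scale` times its observed size.

    `cell_means` maps (a, b) -> mean; only the (1, 1) cell is changed, so the
    simple effects of A and B are preserved.
    """
    m = dict(cell_means)
    i_ab = m[0, 0] - m[1, 0] - m[0, 1] + m[1, 1]
    m[1, 1] = m[1, 0] + m[0, 1] - m[0, 0] + scale * i_ab
    return m

def simulate_sample(cost_means, cost_sds, qaly_means, qaly_sds, n_per_cell):
    """Draw one balanced 2 x 2 sample: gamma-distributed costs, Gaussian QALYs."""
    rows = []
    for (a, b) in [(0, 0), (1, 0), (0, 1), (1, 1)]:
        mu_c, sd_c = cost_means[a, b], cost_sds[a, b]
        shape, scale = (mu_c / sd_c) ** 2, sd_c ** 2 / mu_c   # mean/SD -> gamma
        costs = rng.gamma(shape, scale, size=n_per_cell)
        qalys = rng.normal(qaly_means[a, b], qaly_sds[a, b], size=n_per_cell)
        rows.append(pd.DataFrame({"a": a, "b": b, "cost": costs, "qaly": qalys}))
    return pd.concat(rows, ignore_index=True)

# Placeholder summary statistics standing in for one of the six included trials.
cost_means = {(0, 0): 1200.0, (1, 0): 1500.0, (0, 1): 1400.0, (1, 1): 1900.0}
cost_sds   = {(0, 0): 600.0,  (1, 0): 700.0,  (0, 1): 650.0,  (1, 1): 800.0}
qaly_means = {(0, 0): 0.70,   (1, 0): 0.74,   (0, 1): 0.73,   (1, 1): 0.75}
qaly_sds   = {key: 0.10 for key in cost_means}

# One scenario variant: interaction halved, original sample size, 300 samples.
cost_means_50 = rescale_interaction(cost_means, 0.5)
qaly_means_50 = rescale_interaction(qaly_means, 0.5)
samples = [simulate_sample(cost_means_50, cost_sds, qaly_means_50, qaly_sds,
                           n_per_cell=100) for _ in range(300)]
```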

The costs and benefits for each sample were analysed using four mixed models with different combinations of interaction terms: no interactions; an interaction for costs only; an interaction for health benefits only; and interactions for both costs and benefits. The mixed models implemented seemingly-unrelated regression, allowing for correlation between costs and benefits by stacking both outcomes for each patient and including a random effect by patient; separate constants, treatment effects and (where appropriate) interactions were estimated for costs and benefits, and unstructured residuals were used. This approach gives identical results to the sureg command [21]. The log-likelihood, degrees of freedom, coefficients and their standard errors were recorded for each model.

The coefficients estimated in the mixed models were used to calculate NMB. For simplicity, all costs were interpreted as though they were in pounds Sterling. Results focus on ceiling ratios of £20,000/QALY [14] for the five studies measuring benefits in QALYs, and £5000 per unit of benefit for the remaining study.
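The original analysis fitted these models jointly in Stata; the sketch below (our own Python illustration with placeholder data and an assumed function name, expected_nmb) instead fits each outcome by ordinary least squares, which matches the joint point estimates when both equations contain the same regressors (the "neither" and "both" specifications) and is only an approximation otherwise, and then converts the fitted cell predictions into expected NMB at the £20,000/QALY ceiling ratio:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

LAMBDA = 20_000   # assumed ceiling ratio, GBP per QALY

# The four specifications considered in the study: with/without the A x B
# interaction for costs and for health benefits (fitted equation-by-equation here).
SPECS = {
    "neither":   ("cost ~ a + b", "qaly ~ a + b"),
    "cost only": ("cost ~ a * b", "qaly ~ a + b"),
    "qaly only": ("cost ~ a + b", "qaly ~ a * b"),
    "both":      ("cost ~ a * b", "qaly ~ a * b"),
}

def expected_nmb(df, spec):
    """Fit one specification and return expected NMB for each of the four arms."""
    cost_formula, qaly_formula = SPECS[spec]
    cost_fit = smf.ols(cost_formula, data=df).fit()
    qaly_fit = smf.ols(qaly_formula, data=df).fit()
    arms = pd.DataFrame({"a": [0, 1, 0, 1], "b": [0, 0, 1, 1]},
                        index=["control", "A", "B", "A+B"])
    nmb = (LAMBDA * np.asarray(qaly_fit.predict(arms))
           - np.asarray(cost_fit.predict(arms)))
    return pd.Series(nmb, index=arms.index)

# Small synthetic sample standing in for one simulated dataset.
rng = np.random.default_rng(1)
n = 100
df = pd.DataFrame({"a": np.repeat([0, 1, 0, 1], n), "b": np.repeat([0, 0, 1, 1], n)})
df["cost"] = (1200 + 300 * df.a + 200 * df.b + 250 * df.a * df.b
              + rng.gamma(2.0, 300.0, size=len(df)))
df["qaly"] = (0.70 + 0.04 * df.a + 0.03 * df.b - 0.02 * df.a * df.b
              + rng.normal(0.0, 0.10, size=len(df)))

for spec in SPECS:
    nmb = expected_nmb(df, spec)
    print(spec, nmb.idxmax())   # arm with highest expected NMB under each model
```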

We evaluated 15 criteria for determining which interactions should be taken into account (Table 3) and applied these to each simulated trial sample. We compared the results of each analysis against the “true” results for each dataset, which (for the purposes of this simulation study) were assumed to equal the mean values for treatment effects and interactions shown in Additional file 3, Table 3.3. The sensitivity and specificity for identifying interactions, the probability of adopting the best treatment and the opportunity cost of making the wrong decision [1] were evaluated for each of the 15 criteria (Table 4).

Table 3 List of the criteria for determining which interactions are taken into account that were evaluated in the study
Table 4 The measures used to assess performance of the criteria for deciding which interactions are considered

We used the opportunity cost as the primary measure of which criterion works best, since it focuses on the central question of economic evaluation: namely maximising health gains from a finite budget. Coverage, statistical power and bias were also calculated (Additional file 4).
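To make the primary measure concrete, the sketch below (our own illustration with hypothetical numbers, not the study's Stata code) computes the expected opportunity cost, and the probability of adopting the best treatment, from the arm adopted in each simulated sample under a given criterion:

```python
import numpy as np

def opportunity_cost(true_nmb, adopted_arms):
    """Mean shortfall in true NMB across the arms adopted in the simulated samples.

    `true_nmb` maps arm name -> true NMB computed from the data-generating means;
    `adopted_arms` lists the arm with highest estimated NMB in each simulated
    sample when analysed under a given criterion.
    """
    best = max(true_nmb.values())
    return float(np.mean([best - true_nmb[arm] for arm in adopted_arms]))

# Hypothetical example: A+B truly maximises NMB, but in 30 of 300 simulated
# samples the criterion-selected analysis adopts A instead.
true_nmb = {"control": 13_000.0, "A": 13_400.0, "B": 13_300.0, "A+B": 13_650.0}
adopted = ["A+B"] * 270 + ["A"] * 30

print(opportunity_cost(true_nmb, adopted))            # (13650 - 13400) * 30/300 = 25.0
best_arm = max(true_nmb, key=true_nmb.get)
print(np.mean([arm == best_arm for arm in adopted]))  # probability of adopting best arm = 0.9
```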

Results

The 15 criteria differed in the proportion and type of interactions that were correctly identified (Table 5). Other than the “always include interactions” criterion (criterion 1), including interactions where p < 0.25 (criterion 5) and including interactions that are statistically significant or greater than simple effects (criterion 10) resulted in the largest number of cost interactions being included. By contrast, criteria 5 and 9–12 included the largest number of benefit interactions. In general, specificity and sensitivity were inversely related; criteria based on information criteria or on statistical significance at α = 0.05 tended to have high specificity and low sensitivity.

Table 5 Comparison of the performance of the different criteria with regard to the probability and the opportunity cost associated with adopting a treatment that does not have the highest true NMB. The values shown in bold represent the most favourable of all criteria for this measure

Averaging across all 36 scenarios from the six trials, including interactions ≥0.25 or ≥ £250 minimised the opportunity cost from adopting treatments that do not in fact maximise true NMB, while the opportunity cost of “always include interactions” was £0.04 larger (Table 5). “Never include interactions” performed worst, while criteria 3–7 (based on statistical significance and information criteria) also performed poorly.

However, the criterion with the lowest opportunity cost differed between individual scenarios (See Additional file 4). As expected, “never include interactions” was, on average, the best criterion for the scenarios that did not have qualitative interactions, although no criterion had a high opportunity cost when interactions were zero. Across the 13 scenarios with qualitative interactions, “always include interactions” performed best, although criteria 11–13 (including qualitative interactions, including interactions larger than simple effects, or including interactions ≥0.25 or ≥£250) also performed well.

Across all scenarios, criterion 9 (including interactions larger than simple effects) had the highest probability of adopting the treatment with the highest true NMB (Table 5). “Never include interactions” performed worst overall on this measure, but performed best in scenarios without qualitative interactions for NMB. “Always include interactions” performed best when there were qualitative interactions. However, results differed substantially between scenarios (not shown).

Doubling the sample size reduced the opportunity cost and the probability of adopting the wrong treatment for all criteria. However, criteria based on statistical significance or information criteria (which explicitly take account of sample size) did not appear to perform any better relative to other criteria in larger studies. Furthermore, criterion 11 (including qualitative interactions for costs, benefits or NMB) performed best in scenarios with double the original sample size, whereas “always include interactions” performed best with the smaller, original sample size.

Including all interactions was also the only criterion for which the 95% confidence intervals achieved 95% coverage and which showed no bias (See Additional file 4). Excluding all interactions had the lowest coverage and the highest bias. Including all interactions had the lowest statistical power, while criteria 2, 8, 14 and 15 (never include interactions, include qualitative interactions, include interactions ≥0.5 or ≥£500, and include interactions ≥1 or ≥£1000) had the highest statistical power.

Discussion

Between-treatment interactions that can change the treatment adoption decision need to be taken into account in healthcare decision-making, model-based economic evaluations and economic evaluations based on factorial RCTs [1, 2]. However, to our knowledge, this is the first study to evaluate the magnitude of interactions within published economic evaluations or compare different criteria for determining which interactions should be included in economic analysis.

This systematic review found that 26% of all interactions in factorial trial-based economic evaluations published before 2010 were qualitative (i.e. change the ranking of treatments and render at-the-margins estimates misleading [5, 36]), although interactions changed the treatment adoption decision in only one study. This provides empirical evidence on the importance of taking account of interactions within economic evaluations based on factorial trials [1] and within decision-analytical models and health technology assessment [2]. Our results may also be useful for researchers defining informative priors for Bayesian analyses: one previous study assumed that the probability of a qualitative interaction is just 2.5% [12]: less than a tenth of the frequency that we observed in our review.

However, 60% of studies did not report mean costs and benefits for each group inside-the-table; such presentation is important to allow readers to assess the impact of interactions and the extent to which they may bias the results [1]. Furthermore, the 16 studies reporting costs and benefits inside-the-table may not be typical: studies may have reported results inside-the-table because interactions were large. Of the completed studies, 53% allowed for interactions in their base case economic evaluation, whereas only 23% considered interactions for the primary clinical endpoint; these figures are similar to those reported previously [10, 37]. The higher figure for economic evaluations could be due to interactions being smaller for the primary clinical analysis than the endpoint used in economic evaluation, or interactions being smaller when analysed on the logarithmic scale, which may be appropriate for many clinical endpoints but not economic evaluation [1]. Alternatively, the greater use of inside-the-table analysis within economic evaluation could reflect economic thinking: particularly the view that inference is irrelevant [8], or that treatment-combinations should be evaluated as mutually-exclusive alternatives.

Our review aimed to assess the magnitude of interactions in a representative sample of studies and provide data inputs for simulation work. Our literature search was conducted in 2010; a separate systematic review of economic evaluations of factorial trials, conducted in 2013 with a different search strategy, identified seven studies published since our search date [37]. However, there is no reason to expect the incidence of interactions or the performance of different criteria to have changed over time. Systematic identification of factorial trials is hindered by the absence of a medical subject heading (MeSH) term specific to this type of design. Our review may therefore have missed studies that did not mention the factorial design in the abstract, particularly if they presented results for only one factor; as a result, the review may underestimate the proportion of studies that ignored interactions. Nonetheless, our literature searches identified four times as many pre-2010 papers as the review by Frempong et al. [37], probably because we used more general search terms, which yielded 10 times as many hits in bibliographic databases.

In four studies [38,39,40,41], the interaction between factors was confounded because the treatment given to the ab group was not equal to the sum of the treatments given to the a and b groups. Giving an additional intervention (e.g. advice or training) to the control group or to the three active treatment groups, or varying how treatment is administered, confounds all estimates of the AB interaction with differences in treatment and makes analyses that ignore interactions questionable. Future studies should avoid such confounding. If it is essential to give an additional treatment (e.g. for ethical reasons), papers should justify this decision, discuss what effect it is likely to have had on outcomes and interactions, and (arguably) should not describe the study as factorial if the additional treatment is likely to influence outcomes.

Across all 36 scenarios, strategies of including all interactions, or of including interactions larger than an arbitrary but relatively low threshold, minimised the average opportunity cost associated with adopting the wrong treatment. Excluding all interactions, or using information criteria or statistical significance, generally performed poorly on the measures most relevant to economic evaluation. However, the best criterion depended on how criteria were evaluated. For example, the probability of adopting the treatment with the highest NMB was slightly higher for criterion 9 (including interactions larger than simple effects) than for “always include interactions”. There were also substantial variations in the relative performance of different criteria between trials and scenarios. In particular, criteria that excluded most interactions performed well in scenarios where interactions equalled zero or did not change the ranking of treatments. The performance of different criteria varied little with sample size and the best-performing criteria took no account of sample size, suggesting that avoiding bias is more important than avoiding inefficiency even when sample size is limited.

The simulation study was based on six factorial trials and a small range of variants on each study. Since the most appropriate criterion differs between studies, different results could have been obtained with a different set of trials or scenarios. The analysis focused on 2 × 2 full factorial trials and interactions between two factors. Although the same principles are likely to apply to larger factorial designs, higher-order interactions between three or more treatments may be harder to detect. Furthermore, all trials were simulated and analysed as though they measured health benefits on continuous scales and all costs and health benefits were analysed on a natural scale using arbitrary ceiling ratios. The data-generating mechanism also simulated trials with complete, uncensored data, equal numbers in each arm, gamma-distributed costs with predictable patterns of heteroskedasticity and Gaussian, homoskedastic health benefits. Interactions were assumed to affect all patients in the A + B group equally (which may not be the case for rare events). Mixed models and the criteria based on statistical significance may perform less well in real trials where these idealised data characteristics do not apply. The optimal choice of criteria may also be sensitive to these features common to all simulated datasets (see Additional file 3).

Conclusions

Large and qualitative interactions occur relatively commonly for costs, QALYs and net benefits. Future systematic review updates may help assess whether the conduct of economic evaluations of factorial trials has changed and quantify interactions in a wider sample of trials.

The simulation study demonstrated that it is better to include interactions that may have arisen by chance than risk ignoring genuine interactions that could change the conclusions. Researchers planning an economic evaluation based on a factorial trial should pre-specify and justify the criterion used to determine which interactions will be taken into account in the base case analysis [1]: e.g. in a health economics analysis plan [42]. The chosen criterion should balance the risk of bias from ignoring interactions against the loss of power from including interactions and the risk of drawing the wrong conclusions by chance.

Although the criteria that performed best in our study depended on the magnitude of the true interaction, minimising the risk of bias by including all interactions or excluding only small/quantitative interactions tended to perform best. Criteria relying on statistical significance or information criteria performed poorly. This differs from the approach currently used by statisticians, although at least one published economic evaluation has used a pre-specified rule that interactions larger than main effects would be taken into account [43]. Any prior evidence or beliefs about the size of interactions could be used to select the appropriate criterion or as informative priors in a Bayesian analysis. In particular, a strategy of including all interactions above a certain size may perform better if the threshold is based on the expected treatment effects or the amount of bias that is acceptable in a particular setting. In addition to the criteria considered here, researchers could exclude all interactions not hypothesised a priori, or those that do not have plausible biological explanations. Whenever the base case analysis excludes any interactions, researchers should always present a sensitivity analysis including all interactions to assess the risk of bias [1].