Introduction

Randomised controlled trials (RCTs) are the gold standard for the comparative evaluation of interventions. Their robust design protects against various biases, most importantly confounding by indication. However, RCTs often require large numbers of patients, and even then many appear to be underpowered, and thus inconclusive, because the original assumptions used for the sample size calculation were misspecified [1, 2]. Furthermore, especially in critically ill patients, it is difficult to obtain informed consent for interventions that must start immediately, such as treatment of infections. This may result in selected populations, reducing the generalisability of study findings [3]. Adaptive trials are RCTs that include pre-specified decision rules to change key trial design elements while the trial is ongoing. The promise of adaptive trials is to answer therapeutic research questions as efficiently as possible without compromising reliability. They can be designed such that a conclusive answer is always reached and that, during the course of the study, the proportion of patients receiving the most promising treatment increases [4]. This benefit for individual patients may overcome ethical barriers to applying deferred or waived consent for randomisation, and thereby increase the generalisability of the results. In this viewpoint we aim to elucidate the principles, advantages and pitfalls of adaptive trials.

The first adaptive trials were performed in the 1970s, but were not widely adopted because of methodological shortcomings, a lack of understanding among clinical investigators, and ethical concerns about weighted randomisation [5]. To the best of our knowledge, only five adaptive trials have been performed in critically ill patients (all using adaptive sample sizes [6,7,8,9,10]) and one is ongoing (ClinicalTrials.gov NCT02735707). As recent methodological and technological improvements overcome most of these shortcomings, adaptive designs are gaining renewed attention [11].

What is an adaptive trial?

Key trial design elements that can be adapted during the RCT are (1) the sample size, (2) the intervention arms, (3) the allocation ratio, and (4) the study population (Table 1). As a result, the final sample size of an adaptive trial is not known upfront. Importantly, adaptive trials do not provide a free ticket for trial adaptations: adaptations are based on analyses of the accumulating data, with the adaptation rules pre-specified in the study protocol.

Table 1 Most frequently involved design elements in adaptive trials

Changing the sample size

There are several methods to adapt the sample size during a study, for instance by conducting frequent interim analyses and continuing the trial until a reliable conclusion is reached. If done with a fixed maximum sample size, this allows early termination for superiority or futility (termed “group-sequential design”). It can also be done without a fixed maximum sample size (termed “adaptive group-sequential design”), in which case the maximum sample size is recalculated at each interim analysis. This implies that the trial does not stop as long as the interim result is inconclusive, and thus the planned maximum sample size can increase during the study. Adaptive sample sizes have rarely been applied in the ICU setting (Table 2), whereas they would have been beneficial in many studies in critical care medicine, such as the recent trial comparing hydrocortisone to placebo in patients with sepsis [12]. Although the difference in 90-day mortality was not statistically significant, the confidence interval included a relevant effect size (95% CI for the OR 0.82–1.10). In an adaptive design, randomisation could have continued (assuming sufficient funding) until a clinically relevant benefit was convincingly demonstrated or excluded. Arguably, the study would have been more expensive, but also more informative, with the research budget better spent.

Table 2 Examples of adaptive trials in critically ill patients, all using adaptive sample size only
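
To make the group-sequential idea concrete, the following minimal simulation sketch compares a two-arm trial with a binary outcome analysed at five interim looks against the same trial run to its fixed maximum size. The event rates, interim schedule and stopping boundary are assumptions chosen for illustration, not values from any of the cited trials; it is a sketch of the principle, not a validated design.

```python
import numpy as np

rng = np.random.default_rng(1)

def z_two_proportions(events_a, n_a, events_b, n_b):
    # Two-sample z statistic for a difference in proportions, using a pooled standard error.
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def group_sequential_trial(p_control, p_treatment, patients_per_look, boundary):
    # Analyse after each look; stop as soon as the test statistic crosses the boundary.
    events = np.zeros(2)
    n = np.zeros(2)
    for extra in patients_per_look:
        n += extra
        events[0] += rng.binomial(extra, p_control)
        events[1] += rng.binomial(extra, p_treatment)
        z = z_two_proportions(events[1], n[1], events[0], n[0])
        if abs(z) > boundary:
            return True, int(n.sum())   # difference demonstrated, trial stops early
    return False, int(n.sum())          # maximum sample size reached without a conclusion

# Assumed design parameters (illustration only): 30% vs 24% mortality, five equally
# spaced looks of 200 patients per arm, and a Pocock-type boundary of about 2.41.
looks = [200] * 5
runs = [group_sequential_trial(0.30, 0.24, looks, boundary=2.413) for _ in range(2000)]
power = np.mean([stopped for stopped, _ in runs])
mean_n = np.mean([total for _, total in runs])
print(f"probability of demonstrating the difference: {power:.2f}")
print(f"average total sample size: {mean_n:.0f} (a fixed design always uses {2 * sum(looks)})")
```

When a true difference exists, many simulated trials stop at an early look, which is how adaptive sample sizes reduce the expected number of randomised patients without sacrificing the ability to reach a conclusive answer.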

Changing the intervention

Adaptation can be suitable when comparing more than two drugs, dosages and/or treatment durations for the same indication. For instance, in a study of cryptococcal meningitis, three different dosing regimens of liposomal amphotericin B + fluconazole were compared with the standard dosing regimen in the first 160 patients (40 per arm), and only the best-performing dosage was compared with the standard dosage in the next 300 patients (150 per arm) [13]. This adaptation is referred to as a “drop-the-loser” or “pick-the-winner” design and is often applied in dose-finding studies.
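
A rough sketch of the “pick-the-winner” logic is shown below. Only the stage sizes mirror the meningitis example above; the response probabilities and the simple selection rule (carrying forward the arm with the highest observed stage-1 response rate) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed response probabilities, for illustration only: three experimental dosing
# regimens and a standard regimen.
p_standard = 0.55
p_experimental = {"dose_low": 0.50, "dose_mid": 0.62, "dose_high": 0.60}

# Stage 1: 40 patients per arm (160 in total, including the standard arm).
stage1 = {arm: rng.binomial(40, p) / 40 for arm, p in p_experimental.items()}
stage1_standard = rng.binomial(40, p_standard) / 40
winner = max(stage1, key=stage1.get)   # "pick the winner" among the experimental arms
print("stage-1 response rates:", stage1, "| standard:", stage1_standard)
print("regimen carried forward:", winner)

# Stage 2: only the selected regimen continues against the standard regimen, 150 per arm.
responses_winner = rng.binomial(150, p_experimental[winner])
responses_standard = rng.binomial(150, p_standard)
print(f"stage-2 responses: {winner} {responses_winner}/150 vs standard {responses_standard}/150")
```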

Changing the allocation ratio

Response-adaptive randomisation means that the allocation ratio of the randomised interventions is changed during the study based on the results of interim analyses. For instance, consider a three-arm trial with an initial allocation ratio of 1:1:1 for arms A, B, and C. At the first interim analysis, A and B have a better outcome, although C is not statistically significantly inferior. Based on a pre-defined plan, the allocation ratio could be changed to 2:2:1, with fewer patients being randomised to C. At a subsequent interim analysis C may be found inferior and will then be dropped, leaving more patients for the comparison of A versus B. This was applied in a trial of gepotidacin in three different dosage regimens for patients with acute bacterial skin infections [14]. After the first interim analysis, fewer patients were randomised to the highest dose regimen, and this arm was dropped at the fourth interim analysis.
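
A minimal sketch of such an allocation-ratio update is given below. The success probabilities, block size and the rule translating interim results into new allocation weights are all assumptions for illustration; real trials typically use formal, often Bayesian, response-adaptive rules rather than this crude halve-and-drop scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed true success probabilities, for illustration only.
p_true = {"A": 0.40, "B": 0.38, "C": 0.30}
weights = {"A": 1.0, "B": 1.0, "C": 1.0}   # initial 1:1:1 allocation ratio
successes = {arm: 0 for arm in p_true}
n = {arm: 0 for arm in p_true}

for interim in range(1, 5):                # four blocks of 150 patients, interim after each
    arms = list(weights)
    probs = np.array([weights[a] for a in arms])
    probs = probs / probs.sum()
    assigned = rng.choice(arms, size=150, p=probs)
    for arm in arms:
        k = int(np.sum(assigned == arm))
        n[arm] += k
        successes[arm] += int(rng.binomial(k, p_true[arm]))

    # Deliberately simple, pre-specified adaptation rule: halve the weight of the
    # worst-performing arm, and drop it once it falls 10 percentage points behind the best.
    rates = {a: successes[a] / n[a] for a in arms}
    worst = min(rates, key=rates.get)
    best = max(rates, key=rates.get)
    if rates[best] - rates[worst] > 0.10 and len(weights) > 2:
        weights.pop(worst)                 # the inferior arm is dropped
    else:
        weights[worst] = weights[worst] / 2
    summary = {a: round(r, 2) for a, r in rates.items()}
    print(f"after interim {interim}: n={n}, observed rates={summary}, new weights={weights}")
```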

Changing the study population

Many interventions have subgroup-specific effects, for example due to differences in pathophysiology, risk of side effects, or pharmacology. By estimating subgroup effects during interim analyses, all aforementioned adaptations can be applied to subgroups. An example is the I-SPY2 trial of chemotherapy regimens in patients with stage II/III breast cancer, which uses eight biomarker-based subgroups. The investigators recently published the results for one of these subgroups, while the trial continues in order to determine the optimal treatment for the other subgroups [15].
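
In code, the only change is that the interim decision rule is evaluated per subgroup rather than for the whole population. The sketch below uses invented subgroups, response rates and a deliberately crude decision rule purely for illustration, not the adaptive machinery of I-SPY2.

```python
import numpy as np

rng = np.random.default_rng(11)

# Assumed biomarker-defined subgroups with illustrative (treatment, control) response rates.
subgroups = {"marker_positive": (0.45, 0.25),
             "marker_negative": (0.30, 0.28)}

def subgroup_interim(p_treatment, p_control, n_per_arm=60, margin=0.15):
    # Simplified subgroup-level interim rule (assumption for illustration): report the
    # subgroup result and stop its enrolment if the observed difference exceeds the margin.
    rate_t = rng.binomial(n_per_arm, p_treatment) / n_per_arm
    rate_c = rng.binomial(n_per_arm, p_control) / n_per_arm
    decision = "report result" if rate_t - rate_c > margin else "continue enrolment"
    return rate_t, rate_c, decision

for name, (p_t, p_c) in subgroups.items():
    rate_t, rate_c, decision = subgroup_interim(p_t, p_c)
    print(f"{name}: treatment {rate_t:.2f} vs control {rate_c:.2f} -> {decision}")
```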

Advantages of adaptive designs

Adaptive designs may have many advantages, most of which are not specific to infectious diseases. Patients benefit from a higher chance of receiving the most promising treatment. For researchers and funders there is a reasonable chance (though no guarantee) that research questions can be answered with fewer patients, leading to more efficient use of research resources. Finally, in the case of infectious diseases, adaptive trials may include study domains that can be activated in the event of emerging diseases or epidemics.

Requirements for adaptive designs

The complexity of the statistical analysis of adaptive trials should not be underestimated. First, the frequent interim analyses require an adjustment for multiple testing. Second, because of low numbers within subgroups, imbalance of baseline characteristics is possible and needs to be corrected for at each interim analysis. Third, time trends may confound treatment effects, particularly if response-adaptive randomisation is used. Fourth, as more adaptations are implemented, operating characteristics such as the expected sample size and the probability of incorrect conclusions can no longer be calculated with standard approaches and instead require simulation studies. Therefore, involvement of qualified statisticians is required, and a detailed statistical analysis plan specifying all possible adaptations must be written before the study starts.
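
The first and fourth points can be illustrated with a short simulation, with design parameters chosen purely for illustration: repeatedly testing at the conventional 1.96 threshold inflates the overall type I error well above the nominal 5%, whereas an adjusted boundary keeps it close to the nominal level. Simulations of this kind are also how the operating characteristics of more complex adaptive designs are evaluated.

```python
import numpy as np

rng = np.random.default_rng(2024)

def any_boundary_crossing(boundary, per_look=200, n_looks=5, event_rate=0.30):
    # Under the null hypothesis (identical event rates in both arms), check whether any
    # of the repeated interim z-tests crosses the boundary: one simulated "trial".
    events = np.zeros(2)
    n = np.zeros(2)
    for _ in range(n_looks):
        n += per_look
        events += rng.binomial(per_look, event_rate, size=2)
        rates = events / n
        pooled = events.sum() / n.sum()
        se = np.sqrt(pooled * (1 - pooled) * (2 / n[0]))
        if abs((rates[1] - rates[0]) / se) > boundary:
            return True
    return False

n_simulations = 5000
naive = np.mean([any_boundary_crossing(1.96) for _ in range(n_simulations)])
pocock = np.mean([any_boundary_crossing(2.413) for _ in range(n_simulations)])
print(f"type I error with an unadjusted 1.96 threshold at every look: {naive:.3f}")
print(f"type I error with a Pocock-type boundary of 2.413:            {pocock:.3f}")
```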

Conclusion

Compared with classical RCTs, adaptive trials can answer research questions more efficiently and effectively, but they require more extensive and considerably more complex statistical preparation. Broader use of adaptive trials is expected to improve the cost–benefit ratio of clinical trials in critically ill patients.