
Practical Considerations and Recommendations for Master Protocol Framework: Basket, Umbrella and Platform Trials


A master protocol, categorized as a basket trial, umbrella trial, or platform trial, is an innovative clinical trial framework that aims to expedite clinical drug development, enhance trial efficiency, and ultimately bring medicines to patients faster. Despite clear conceptual and design advantages, master protocols have yet to be widely used. Part of that may be because the master protocol framework requires new statistical designs, new considerations for analyses, and solutions to operational challenges. In this article, we provide an overview of the master protocol framework, unify the definitions with some examples, review the statistical methods for design and analysis, and focus our discussion on practical considerations and recommendations to help practitioners better design and implement such studies.


The past decades have witnessed a massive revolution in biomedical technology, and the development of innovative treatments has benefited from these breakthroughs. Meanwhile, the challenges of limiting participant exposure to potentially inferior drugs, shortening development cycles, and lowering drug development costs have emerged alongside numerous exciting potential treatments and therapies. With these inspirations in mind, the ASA Biopharmaceutical (BIOP) Section Oncology Scientific Working Group (SWG) chartered a sub-team on Master Protocols to explore statistical designs and analysis methods within this innovative framework.

A master protocol, classified as a basket trial, umbrella trial, or platform trial, refers to a type of trial design that tests multiple therapies, either individually or in combination, and/or multiple diseases in parallel under a single overarching protocol, without the need to develop individual protocols for every sub-study [1]. Such an innovative framework can bring many advantages to drug development. First, the use of master protocols enhances operational efficiency because the same infrastructure is developed and implemented for multiple sub-studies, including site selection, patient screening, data management, institutional review board (IRB)/ethics committee review, and trial monitoring committees. Liu [2] demonstrated using statistical models that the variance of efficacy comparisons is reduced when the same set of sites is used in a master protocol. The ability of master protocol trials to include multiple diseases or multiple biomarkers/populations also enables recruiting broader patient populations than traditional clinical trials that accommodate only one disease/biomarker, thereby reducing the overall screen failure rate and increasing trial efficiency, especially for rare diseases. The NCI-COG Pediatric MATCH (Molecular Analysis for Therapy Choice) trial (NCT03155620) is an example [3].

Second, a master protocol with a common control can save patient resources and make the trial more appealing to participants, since each participant has a higher chance of being randomized to an experimental arm. For example, to evaluate treatments for patients with advanced renal cell carcinoma, five clinical trials were conducted almost simultaneously [4,5,6,7,8], evaluating pembrolizumab, avelumab, nivolumab, atezolizumab, and lenvatinib, each in combination with another agent, with sunitinib as the common choice of control. A master protocol evaluating all five combination treatments against a common control could have required fewer control-arm patients.

Lastly, such trials allow information borrowing across sub-studies through innovative statistical methods. Saville and Berry [9] and Hobbs et al. [10] conducted simulations quantifying the efficiencies of platform trials relative to traditional trials when borrowing information across sub-studies. The increased efficiencies resulting from the special features of the master protocol framework may reduce patient burden, expedite drug development, reduce cost, increase stakeholder (e.g., regulatory, payer, sponsor) engagement and, in turn, improve patient care.

Given these advantages, the master protocol framework gained popularity during the COVID-19 pandemic as the fastest path to evaluating multiple COVID-19 treatments simultaneously. Examples of master protocols for COVID-19 treatment include (1) the RECOVERY (Randomized Evaluation of COVID-19 Therapy) trial (NCT04381936) [11], (2) the I-SPY COVID trial (NCT04488081) [12], and (3) the COMMUNITY (COVID-19 Multiple Agents and Modulators Unified Industry Members) trial (NCT04590586) [13].

Terminology and Overview of Three Types of Master Protocol Trials

The definitions of basket trial and umbrella trial are mostly consistent in the literature, whereas the definition of platform trial varies somewhat [14,15,16,17] and can be confusing to readers. We provide our definition of platform trial below, while keeping the definitions of basket trial and umbrella trial consistent with the literature, aiming to be clear and comprehensive:

Basket Trial

A master protocol designed to evaluate a single investigational drug or drug combination in different disease populations, defined by disease stage, histology, number of prior therapies, genetic or other biomarkers, or demographic characteristics, is commonly referred to as a basket trial [1]. Examples include: (1) a BRAF V600 study evaluating vemurafenib in multiple nonmelanoma cancers with BRAF V600 mutations (NCT01524978) [18]; (2) KEYNOTE-158, evaluating pembrolizumab in patients with various types of advanced solid tumors who have progressed on standard-of-care therapy (NCT02628067) [19,20,21]; (3) NAVIGATE, evaluating larotrectinib for the treatment of advanced solid tumors harboring a fusion of neurotrophic tyrosine receptor kinase (NTRK) types 1–3 (NCT02576431) [22].

Umbrella Trial

A master protocol designed to evaluate multiple investigational drugs, administered as individual drugs or as drug combinations, in a single disease population is commonly referred to as an umbrella trial [1]. For example, ALCHEMIST (NCT04267848, etc.) is a series of umbrella trials conducted by the National Cancer Institute for patients with NSCLC harboring EGFR mutations or ALK gene alterations [23].

Platform Trial

A master protocol designed to incorporate design features of both basket and umbrella trials, or one focused on the perpetual manner of a basket and/or umbrella trial, is commonly referred to as a platform trial.

We propose this terminology for platform trials to cover the remaining master protocol design scenarios that are not strictly covered by basket or umbrella trials. Under this definition, platform trials may be conducted in a perpetual manner, where multiple drugs and/or multiple disease populations can be added to the platform trial for investigation at different times. Example platform trials with this perpetual feature include I-SPY 2 (NCT01042379) [24] and GBM-AGILE (NCT03970447) [25]. Some perpetual trials may incorporate design features of both basket and umbrella trials, such as NCI-MATCH, which is also conducted in a perpetual manner. In some of the literature, trials with this hybrid basket-and-umbrella feature are referred to as matrix trials [26]. In practice, the most common type of platform trial is an umbrella trial conducted in a perpetual manner. Since umbrella and platform trials share many practical attributes, they are discussed together as platform trials in the remainder of this article.

It is worth mentioning that although platform trials initially started as cross-company collaborations, more and more individual pharmaceutical companies have set up platform trials in recent years to evaluate multiple combination therapies in the same disease. A few examples of such company-sponsored trials are: the MORPHEUS trials (NCT03193190, etc.) by Roche [27], which include several Phase Ib/II trials in different cancers; FRACTION (NCT02750514) sponsored by BMS [28]; the MSD-sponsored KEYMAKER U01 (NSCLC; NCT04165798) [29] and U02 (melanoma; NCT04305054) [30]; the Pfizer-sponsored B8011011 (NSCLC; NCT04585815) [31]; and a GSK-sponsored platform study in NSCLC (NCT03739710) [32].

Structure of This Paper

While extensive efforts have been made by other authors to provide an overview of the master protocol framework with real-life examples [14,15,16,17], this article focuses on summarizing practical considerations for the design, implementation, and registration of master protocol trials. In sections "Practical considerations for basket trials" and "Practical considerations for umbrella/platform trials", we provide practical considerations for basket and umbrella/platform trials, while common statistical methods used in design and analysis are provided in Supplementary Materials 1 and 2. Type I error control and multiplicity in confirmatory master protocol trials are also discussed for each type of master protocol trial. Regulatory considerations are provided in section "Regulatory considerations", followed by conclusions.

Practical Considerations for Basket Trials

Compared to the traditional setting where different diseases treated with the same drug are studied in separate trials, treating patients from different diseases with the same treatment in one study allows efficacy and safety data to be borrowed across diseases for the evaluation of that treatment. That is a major advantage of basket trials. Supplementary Material 1 [10, 33,34,35,36,37,38,39,40,41] reviews various statistical methods for borrowing information across disease populations, with some real-life applications. Before implementing a design that borrows information across multiple diseases, extensive evaluation of the operating characteristics and underlying assumptions is required. Careful review of the statistical properties, to ensure adequate power, control of the false positive rate, and limited bias in estimation, along with ancillary variables such as enrollment rates, is required to design an efficient and robust study that fits the scientific needs of the trial. For example, each sub-study may accrue patients at a different speed depending on its prevalence in the overall population. If the statistical analysis evaluating stopping rules depends on a hierarchical model that borrows information across indications, estimates for sub-studies with small patient numbers can be heavily influenced by the population mean across the other groups in the trial, which could increase the false positive/negative rates in other sub-studies and in the overall study. Therefore, when designing interim analysis decision rules, one may consider either a weighted approach so that each sub-study is weighted appropriately or a minimum sample size requirement for each sub-study at the interim analysis. Because of different enrollment rates, the timing of the final analysis may also differ across sub-studies.
When a hierarchical model is used for the analysis, the decision for a completed sub-study should be considered "locked" at the time of its final analysis, and its final analysis results should not be updated based on sub-studies that continue to accrue data.
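To make the shrinkage concern concrete, the following sketch (our own illustrative example, not a method from this article; it uses a simple empirical-Bayes normal approximation with an assumed between-basket variance `tau2`) shows how a hierarchical model pulls the estimate of a small sub-study toward the overall mean much more strongly than the estimates of larger sub-studies:

```python
import numpy as np

def eb_shrinkage(rates, sizes, tau2=0.01):
    """Empirical-Bayes normal approximation to hierarchical borrowing.

    rates: observed response rates per sub-study (basket)
    sizes: sample sizes per sub-study
    tau2:  assumed between-basket variance (illustrative choice)
    Returns (posterior estimates, plug-in overall mean).
    """
    y, n = np.asarray(rates, float), np.asarray(sizes, float)
    se2 = np.clip(y * (1.0 - y) / n, 1e-6, None)  # within-basket variance
    w = 1.0 / (se2 + tau2)                        # precision weights
    mu = np.sum(w * y) / np.sum(w)                # plug-in overall mean
    shrink = tau2 / (tau2 + se2)                  # weight on each basket's own data
    return shrink * y + (1.0 - shrink) * mu, mu

# Hypothetical data: a small basket (n=10) with a high observed rate
# alongside three larger baskets with modest rates.
rates, sizes = [0.45, 0.20, 0.22, 0.25], [10, 60, 60, 60]
post, overall = eb_shrinkage(rates, sizes)
```

With this assumed `tau2`, the n=10 basket's apparent 45% response rate is pulled most of the way toward the overall mean of about 0.25, which is dominated by the larger baskets. This is exactly why a minimum per-basket sample size at interim analyses can be prudent.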

Depending on the goal of the basket trial, different multiplicity considerations may apply. If the purpose is to evaluate each disease population independently, without borrowing data across disease populations, each disease population can be considered independent with its own type I error rate [42]. Therefore, one may use the same type I error rate (usually two-sided 0.05 in a Phase 3 study) for each sub-study. Similarly, if the rules for pooling do not depend on the study data, no multiplicity adjustment is required as long as the hypotheses tested after pooling concern mutually exclusive populations. On the other hand, if the decision to pool is based on interim analysis results within the study, multiplicity adjustment is required at the final analysis because of the error inflation introduced by the interim analysis. The pruning-and-pooling approach mentioned in Supplementary Material 1 uses an analytical formula for type I error control at the interim and final analyses, based on estimates of the correlation of the test statistics at the interim and final analyses [34, 43]. For other methods, if type I error control is not demonstrated analytically, or for Bayesian methods where type I error is not part of the design, detailed simulations over comprehensive scenarios are required to evaluate the false positive rate (type I error).
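As a hypothetical illustration of why data-dependent pooling demands adjustment, the following Monte Carlo sketch (our own simplified setup, not the adjusted procedure of [34, 43]) estimates the false positive rate of a naive prune-and-pool analysis under the global null: baskets that happen to look promising at interim are pooled and tested without any adjustment, and the resulting type I error far exceeds the nominal level.

```python
import numpy as np

def prune_and_pool_type1(n_sims=20000, k=4, p0=0.20,
                         n_interim=10, n_final=15, keep_min=3, seed=1):
    """Monte Carlo type I error of a naive prune-and-pool analysis.

    All k baskets have true response rate p0 (global null). Baskets with
    fewer than keep_min interim responses are pruned; the rest enroll
    n_final more patients each, are pooled, and tested against p0 with
    an unadjusted one-sided z-test at nominal 5%.
    """
    rng = np.random.default_rng(seed)
    z_crit = 1.6449  # one-sided 5% normal critical value
    rejections = 0
    for _ in range(n_sims):
        x_int = rng.binomial(n_interim, p0, size=k)
        keep = x_int >= keep_min           # data-dependent pruning
        m = int(keep.sum())
        if m == 0:
            continue                       # nothing left to test
        x_fin = rng.binomial(n_final, p0, size=m)
        n_tot = m * (n_interim + n_final)
        p_hat = (x_int[keep].sum() + x_fin.sum()) / n_tot
        z = (p_hat - p0) / np.sqrt(p0 * (1.0 - p0) / n_tot)
        rejections += z > z_crit
    return rejections / n_sims

naive_type1 = prune_and_pool_type1()
```

In this configuration the empirical false positive rate comes out well above the nominal 5%, because pruning selects baskets whose interim data happened to look good and the pooled estimate inherits that selection bias.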

In oncology, the concept of basket trials has been extended to Phase Ib/II studies by including multiple tumor indications in the same study to establish proof of concept (PoC) before further evaluation in the confirmatory setting. Confirmatory basket trials usually include patients based on a single biomarker, while PoC basket trials are not necessarily biomarker driven. KEYNOTE-028 (NCT02054806) [44] and CheckMate 032 (NCT01928394) [45] were PoC basket trials in which the drug or combination had been evaluated in one or two lead tumor types before the basket trial was initiated to broaden the efficacy exploration. If the basket trial is the first efficacy PoC clinical trial for the drug or combination, then, given the limited understanding of the drug's mechanism of action, tolerability, and unknown efficacy [46], a limited number of tumor types should be explored in the first wave before broadening the efficacy search. Chen et al. [47] evaluated the question "How many tumor indications should be initially screened?" using a cost-benefit model and concluded that approximately three to five tumor indications in the first wave of an efficacy PoC basket trial would be optimal.

Practical Considerations for Umbrella/Platform Trials

Practical considerations for platform trials are discussed below, while statistical methods for common designs in umbrella and platform trials are provided in Supplementary Material 2 [42, 48,49,50,51,52,53,54,55,56,57,58,59]. Throughout the rest of this section, we use the term platform trials for simplicity, though most of the discussion applies to both umbrella and platform trials.

Consideration of Control

In general, it is recommended to include a control arm in a platform trial for accurate evaluation of the treatment effect. However, the decision to include a control arm depends on the nature of the trial (exploratory versus confirmatory), the rarity of the disease, the availability of a control, and ethical considerations. For example, if a platform trial is set up to replace traditional individual Phase Ib/II efficacy proof-of-concept studies, which are often designed as single-arm trials with small sample sizes and compared to a historical control, the platform trial may be conducted without a control arm when the uncertainty of the historical data is small or the expected treatment effect is large [60]. When the uncertainty of the historical data is large, a control arm should be considered. For confirmatory studies, a control arm is usually required whenever feasible to control the false positive rate and ensure the validity of the hypothesis testing. A platform trial without an appropriate control arm can make it difficult to attribute adverse events to the multiple experimental agents or combinations in the trial [1], as well as to determine efficacy, unless the effect is vastly superior to what has been seen historically, depending on the endpoint and potential influences such as patient selection bias and ascertainment bias.

If a control is needed in a platform trial, a common control for multiple experimental arms can increase the efficiency of the trial and make it more appealing to participants and sponsors. The common control is usually a standard of care (SoC) accepted by regulatory agencies and the medical community. When a platform trial is set up in a perpetual manner and remains open for many years, the SoC may change as new drugs are approved. As a result, the control arm in the platform trial would need to be updated to the new SoC. For example, when the STAMPEDE trial (NCT00268476) opened in 2005, the SoC in this population was androgen deprivation therapy (ADT) alone [61]. A few years later, the SoC changed to docetaxel plus ADT, and the control arm in STAMPEDE was updated accordingly. If the treatment in the control arm needs to be updated, it is recommended to pause the trial until the protocol, the statistical analysis plan (SAP), and the informed consent documents are modified to include the new SoC as the control. When a master protocol involves different lines of experimental treatment, the corresponding SoC may differ, and the master protocol may allow multiple common control arms in this situation; the GBM-AGILE study is an example [25]. In certain cases, the common control may be investigator's choice if more than one SoC is available. The proportions of investigator's-choice arms can be prespecified and may be determined by region, feasibility, or other design considerations, e.g., budget, desired efficacy improvements, and the safety profiles of the controls.

Consider a perpetual platform trial with a common control arm. For a new treatment arm added after the start of the platform trial, data from common control patients enrolled before the addition of the new treatment arm are called non-concurrent controls. Even though non-concurrent control patients were enrolled under the same inclusion/exclusion criteria, because they were not randomized concurrently, baseline characteristics may not be balanced between the non-concurrent controls and the experimental treatment. It is also possible that, due to changes in medical practice, the population may drift over time, yielding better or worse control arm outcomes. Lee and Wason [62] showed by simulation that the use of non-concurrent controls may inflate type I error, but may benefit estimation when the concern about drift is small. Whether to use non-concurrent controls in a platform trial aiming for market registration was discussed in a forum organized by the ASA BIOP Statistical Methods in Oncology Scientific Working Group in coordination with the US FDA Oncology Center of Excellence [63]. The general consensus is to include only the concurrently randomized control data in the comparison with the corresponding experimental arm for confirmatory trials with registration purposes, with an analysis including all controls as a potential sensitivity analysis [42]. In situations such as rare diseases, where conducting a large clinical trial is challenging, it may be more acceptable to use all available control arm data, including non-concurrent controls, after careful evaluation of the heterogeneity of the control arm data over time. Statistical models that account for potential heterogeneity over time, such as the Normal Dynamic Linear Model (NDLM) [64, 65], may be considered when non-concurrent controls are used.
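A minimal simulation in the spirit of Lee and Wason's finding (our own illustrative setup with normal outcomes and an assumed time trend, not their exact model) shows how pooling non-concurrent controls inflates the type I error for a later-entering arm, while the concurrent-only analysis stays near the nominal level:

```python
import numpy as np

def _welch_z(a, b):
    """Two-sample z statistic with unequal variances (Welch form)."""
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a)
                                           + b.var(ddof=1) / len(b))

def nonconcurrent_type1(n_sims=20000, n_per=50, drift=-0.4, seed=7):
    """Type I error for an arm added in period 2 (truly null versus its
    concurrent control) when analysed against all controls versus
    concurrent controls only. `drift` is a time trend that shifts the
    period-2 outcome mean for every arm; outcomes are N(mean, 1)."""
    rng = np.random.default_rng(seed)
    z_crit = 1.96  # two-sided 5%
    rej_all = rej_conc = 0
    for _ in range(n_sims):
        ctrl1 = rng.normal(0.0, 1.0, n_per)    # non-concurrent control
        ctrl2 = rng.normal(drift, 1.0, n_per)  # concurrent control
        arm_b = rng.normal(drift, 1.0, n_per)  # new arm, no true effect
        rej_all += abs(_welch_z(arm_b, np.concatenate([ctrl1, ctrl2]))) > z_crit
        rej_conc += abs(_welch_z(arm_b, ctrl2)) > z_crit
    return rej_all / n_sims, rej_conc / n_sims

t1_all, t1_conc = nonconcurrent_type1()
```

With this assumed drift, the analysis pooling all controls rejects the null for the truly ineffective arm far more often than 5%, whereas the concurrent-only comparison holds close to the nominal rate; with `drift=0` the two analyses behave similarly.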

Another practical consideration regarding the common control is that the inclusion/exclusion criteria in a platform trial may not always be the same for all experimental agents. If an experimental treatment arm requires additional inclusion/exclusion criteria, the comparability of the common control group must be assessed. This may happen when additional criteria for an experimental agent are needed for safety reasons, as the safety of trial participants must always be paramount. In such cases, the implications for the comparability of the arms should be carefully considered and accounted for in the analyses. If these factors are not associated with efficacy outcomes, one can still pool common control arms for comparison. If, however, these factors may be associated with efficacy outcomes, the control comparison could be specified to include only those control patients who meet the experimental agent's additional inclusion/exclusion criteria. Regardless, as elaborated in later sections, additional inclusion/exclusion criteria should be limited to cases with clear and acceptable justifications, such as safety considerations, to reduce these complications as well as other operational complications such as randomization.

In some therapeutic areas, such as psychiatry, different investigational drugs may have different routes of administration, while maintaining blinding to reduce the placebo effect is crucial; patients may then be randomized between an active arm and its matched placebo control with the same route of administration. This creates a common placebo control arm composed of a mixture of routes of administration. If the placebo is highly unlikely to affect efficacy, the primary analysis can use the full common control arm, pooling across placebo types. In settings with a more subjective endpoint, there may be concerns that the route of administration could have either a positive or negative effect. The primary analysis could then adjust for route of administration as a covariate, or still pool across placebo types and assess the impact of the different placebos in a prespecified sensitivity analysis. The DIAN-TU (Dominantly Inherited Alzheimer Network Trials Unit) trial (NCT01760005) is an example that included multiple experimental arms with different routes of administration and their paired placebo controls; the primary analysis pooled the control patients across all routes of administration [66].

Data Sharing and Transparency

Many platform trials are collaborations among multiple sponsors to maximize operational efficiency and are usually set up and conducted by a contract research organization (CRO). The ownership, sharing, and use of data from a platform trial should be specified and agreed upon by all trial stakeholders at the outset. What is shared, with whom it is shared, and when it is shared each carry different considerations. The COVID R&D Alliance provides some examples of how summary-level and individual-level data can be shared [67].

When an experimental arm graduates from a platform trial, the sponsor of that arm receives the individual-level data for patients in their experimental arm as well as in the corresponding control arm included in their final analysis. The datasets used for the final analysis could then be used to plan a future study or for registration purposes. Trial data disclosed to a trial stakeholder should be held confidential and should not pose any risk to the blinding of the remaining arms of the study or introduce operational bias.

In addition to individual-level data sharing with trial stakeholders, the trial may also share summary data on the common control arm with the public. For example, as trial results are published, the observed aggregated results of the common control arm to date may be included in the publication. Public release of data summaries carries a potential risk of unblinding ongoing or future experimental arms: if a sponsor is provided with blinded or pooled safety or efficacy data, and the performance of the common control arm is known, it is possible to derive estimates of the safety or efficacy of the ongoing arm. Therefore, knowledge of safety and efficacy, even in pooled form, for ongoing arms in the platform should be restricted to mitigate this risk.

Caution must be exercised when releasing de-identified patient-level data from a platform trial to the public. Patient-level data can become known either through displays of individual-level data, such as the trajectory of each patient's outcomes over time, or by contributing to publicly available datasets. When any given experimental arm is completed, some common control patients included in that arm's analysis may still be active in the platform's follow-up, and data from both ongoing and completed common control patients may be included in the primary analyses of future experimental agents. To avoid the risk that a patient still active in the trial could be identified and their blind broken, patient-level data should not be made available to the public before the completion of the trial.

Making the performance of the common control arm public carries another potential risk: a future experimental arm added to the platform trial could use the publicly available common control data to select a subgroup that performed more poorly than expected and propose to evaluate the new experimental agent in that subgroup. This behavior would affect the power, type I error, and interpretation of the results for that future comparison. If the historical patients experienced a randomly poor set of outcomes, then including them in the analysis has the potential to separate the control arm further from the experimental arm. However, the structure and perpetual nature of a platform trial can mitigate this to some extent. First, the platform trial should have a set of general inclusion/exclusion criteria, with limitations on additional efficacy-motivated inclusion/exclusion criteria for any particular experimental treatment; additional criteria should be limited to cases with clear and acceptable justifications, such as safety considerations, to ensure homogeneity of the population across treatment arms. An arm selection committee (ASC), set up for some platform trials, has been used to govern the inclusion and prioritization of new arms and to guide the arm-specific protocol appendices, including topics such as arm-specific inclusion/exclusion criteria. Second, if the platform trial is perpetual, additional control arm data accrue constantly; if the historical outcomes were randomly poor, the additional data are likely to bring the results back toward the true underlying rates. For a planned analysis that includes only concurrent controls, it would be very difficult to gain any advantage in a future randomized comparison from knowing the performance of past controls, as any future comparison has the protection of randomization. If the planned analysis includes non-concurrent controls, prespecified sensitivity analyses that include only concurrent controls should be considered.

Type I Error Consideration in Confirmatory Platform Trials

There has been a long debate about whether overall study-level type I error control is required for platform trials. A recent open forum discussed this topic thoroughly between the pharmaceutical industry and regulators [59]. The general consensus is that type I error control is needed if different treatment arms support a single claim or related objectives, while each arm may use the full alpha if different treatment arms have their own objectives. For example, if both drug X and drug X + SoC are studied with the goal of selecting either the monotherapy or the combination, the two experimental arms support the same claim and multiplicity adjustment is needed between them. If drug X and drug Y are two different drugs with different mechanisms of action (MoA) and different intents for efficacy and safety evaluation, their claims and evaluations are unrelated and multiplicity adjustment is not required. With a common control, the chance of falsely declaring at least one experimental arm positive is smaller than when separate two-arm trials are conducted, each with its own control arm [50, 68], while the chance of falsely declaring more than one experimental arm positive simultaneously is larger [42]. Supplementary Material 2 elaborates on the type I error debate in platform trials.
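The two probability statements about a common control can be checked with a quick simulation (an illustrative two-arm setup with normal outcomes and one-sided 2.5% tests; the exact figures depend on the design). Sharing a control arm induces positive correlation between the two test statistics, which lowers the chance of at least one false positive but raises the chance of both arms being falsely positive at once:

```python
import numpy as np

def shared_vs_separate_fwer(n_sims=200_000, n=100, seed=11):
    """Under the global null (all arms have true mean 0, sd 1), estimate
    P(at least one false positive) and P(both false positives) for two
    experimental arms tested one-sided at 2.5% against either a shared
    control or two separate controls (n patients per arm)."""
    rng = np.random.default_rng(seed)
    sd_mean = 1.0 / np.sqrt(n)   # sd of an arm's sample mean
    se_diff = np.sqrt(2.0 / n)   # se of a difference in sample means
    z_crit = 1.96                # one-sided 2.5% normal critical value
    a = rng.normal(0.0, sd_mean, n_sims)         # experimental arm A
    b = rng.normal(0.0, sd_mean, n_sims)         # experimental arm B
    c_shared = rng.normal(0.0, sd_mean, n_sims)  # shared control
    c1 = rng.normal(0.0, sd_mean, n_sims)        # separate control for A
    c2 = rng.normal(0.0, sd_mean, n_sims)        # separate control for B
    out = {}
    for name, (ca, cb) in {"shared": (c_shared, c_shared),
                           "separate": (c1, c2)}.items():
        pos_a = (a - ca) / se_diff > z_crit
        pos_b = (b - cb) / se_diff > z_crit
        out[name] = ((pos_a | pos_b).mean(), (pos_a & pos_b).mean())
    return out

fwer = shared_vs_separate_fwer()
```

With these settings, "at least one false positive" is slightly lower under the shared control (roughly 0.046 vs. 0.049), while "both falsely positive" is several times higher (roughly 0.004 vs. 0.0006), consistent with the qualitative claims above.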

Regulatory Considerations

About Tumor Agnostic Indications

In oncology, using a basket trial to demonstrate that a drug is efficacious based on a molecular signature, regardless of tumor type, may lead to tumor-agnostic indications. KEYNOTE-158 is an example of a biomarker-driven clinical trial that resulted in two tumor-agnostic regulatory approvals, one for microsatellite instability (MSI)-high solid tumors and the other for tumor mutational burden (TMB)-high solid tumors [21, 69]. All the existing studies leading to tumor-agnostic approvals are accelerated approvals based on estimates of efficacy rather than hypothesis testing. Because most tumor-agnostic indications target last-line patients, regulatory agencies (mainly the FDA) may allow study results to be evaluated based on a substantial observed objective response rate (ORR) and duration of response (DoR) in an accelerated approval setting. Requirements to convert the accelerated approval to full approval [70] commonly include the following: (1) to ensure the quality of the response data, response evaluation must be based on independent central review; (2) additional patients are required in some specific tumor types to confirm the initial efficacy evaluation; and (3) all responding patients must be followed for at least 12–24 months from the onset of response to obtain a more robust evaluation of the durability of response, in the absence of a clinical endpoint such as overall survival, which is not reliable in the single-arm setting. It appears to us that the agency is looking for solid evidence of ORR, DoR, and safety for a drug that can be evaluated across broader tumor types, supported by a scientific rationale based on the mechanism of action.

About Cross-Treatment Comparisons in Confirmatory Platform Trials

A general concern for pharmaceutical companies joining a platform trial is that cross-treatment comparisons may become available when multiple experimental treatments are included in one study, even though the platform study was neither set up nor powered for between-treatment comparisons. In fact, the focus of a platform trial is to compare each experimental treatment with the control, or simply to establish the activity of the experimental arm if no control arm is present. For legal reasons, it is also not in regulatory agencies' interest to obtain cross-treatment comparisons for market approvals: the main question is whether the drug is efficacious and safe compared to the SoC, and that should be the only hypothesis tested [1]. However, payers may be interested in comparisons across multiple on-market treatments for reimbursement purposes.

Some European Union (EU) Regulatory Feedback on Confirmatory Platform Trials

In the platform trial setting, the authors have observed from their experience that EU regulators seem to prefer a prespecified trial duration and number of sub-studies over a completely open-ended platform with no predefined end. In addition, EU regulators seem to have reservations about using simulations to control type I error, preferring analytical procedures. On the other hand, when a study design becomes so complex that analytical procedures are not readily available, the FDA guidance Adaptive Designs for Clinical Trials of Drugs and Biologics acknowledges that simulations are an acceptable tool for demonstrating that type I error is properly controlled [71]. It is worth mentioning that, to ease the regulatory concern that only a finite number of plausible scenarios for nuisance parameters can be simulated before trial conduct, sponsors may offer to provide simulations to the agency during trial conduct, when more operational nuisance parameters are known (such as actual accrual patterns and the availability of treatments to be included in the platform trial), especially when the type I error rate may be suspected to exceed the desired level under certain monotonicity assumptions [72].


The emergence of master protocols represents a paradigm shift for sponsors, regulatory agencies, policy makers, payers, and patients. In this article, the authors from the ASA Biopharmaceutical Section Oncology SWG master protocol sub-team have shared their collective experiences, focusing on practical considerations relevant to this innovative trial design framework.

Despite all the advantages that the master protocol framework may offer to sponsors and other stakeholders, it has limitations. Master protocol trials in many cases come with increased design complexity, prolonged preparation periods, non-standardized operational processes, greater uncertainty around regulatory endorsement, challenging trial monitoring processes, and complex statistical analyses and interpretations, which in some situations may actually reduce efficiency. Furthermore, when a master protocol involves portfolio prioritization leading directly to registration, pharmaceutical sponsors are cautious about its strategic implications and whether it is the optimal approach. Typically, the decision of which treatment/investigational drug or combination to advance to the confirmatory stage of a pharmaceutical company's portfolio is multifaceted, encompassing clinical efficacy, safety, internal available assets, the external competitive landscape, development cost, manufacturing, value and access, marketing, etc. In a Phase II/III master protocol trial, however, the selection of treatment arm(s) and/or indication(s) to move forward to the Phase III confirmatory stage must rely on prespecified rules that are mainly based on clinical efficacy and safety. Because these rules have to be implemented and governed by an independent body while sponsors remain blinded to treatment-level data throughout the Phase II and Phase III portions of the trial, the resulting prioritization of internal assets may be suboptimal; a different decision might be made under a traditional program outside the master protocol framework. This may be part of the reason that Phase II/III master protocol trials are not widely utilized for market registration by individual pharmaceutical companies.
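As a purely hypothetical illustration of such a prespecified selection rule, the sketch below advances an experimental arm to Phase III only if its observed response rate beats the shared control with a one-sided z-statistic above a prespecified threshold. The function name, the threshold, and the data structure are illustrative assumptions for this sketch, not the rule used in any actual trial.

```python
import math

def select_arms(arm_results, control, margin=0.0, z_threshold=1.2816):
    """Hypothetical prespecified Phase II -> III selection rule: advance an
    experimental arm if its observed response rate exceeds the shared
    control's by more than `margin`, with a one-sided z-statistic above
    `z_threshold` (1.2816 corresponds to a one-sided p < 0.10)."""
    pc = control["responders"] / control["n"]
    selected = []
    for name, arm in arm_results.items():
        pt = arm["responders"] / arm["n"]
        # normal-approximation standard error of the difference in proportions
        se = math.sqrt(pt * (1 - pt) / arm["n"] + pc * (1 - pc) / control["n"])
        z = (pt - pc - margin) / se if se > 0 else 0.0
        if z > z_threshold:
            selected.append(name)
    return selected
```

An independent body could apply such a rule to blinded-to-sponsor data, e.g. `select_arms({"A": {"responders": 30, "n": 60}, "B": {"responders": 18, "n": 60}}, control={"responders": 15, "n": 60})` advances only arm A. Real selection rules would also incorporate safety and other prespecified criteria.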
Non-profit-organization-sponsored platform trials, on the other hand, may design Phase II/III trials with market registration intentions, since such organizations usually collaborate with multiple companies while designing the trial, so internal portfolio prioritization is not a concern. For smaller companies without a large portfolio, participating in platform trials managed by a non-profit organization may also be a better choice to gain efficiency and cost savings for their Phase III programs.

Bretz et al. [73] used the analogy that an adaptive design in clinical trials is like a Swiss Army knife, and we believe the same analogy fits master protocol designs. The master protocol framework is most suitable when multiple experimental treatments and/or multiple diseases are to be developed without a clear priority or path forward for one treatment or indication. On the other hand, if a clinical development program team has a clear focus on one compound and/or one indication based on sufficiently promising preliminary data, a master protocol framework may not be the most efficient approach, just as a Swiss Army knife is the wrong tool for a heavy paper-cutting job. In practice, drug development efficiency has never been one-size-fits-all. Deciding on the most efficient approach for a particular program, at a particular time, requires much scrutiny and cross-functional effort, in a discipline many believe to be science with a tremendous amount of art.


  1. Master protocols: efficient clinical trial design strategies to expedite development of oncology drugs and biologics. Guidance for Industry. U.S. Department of Health and Human Services. Food and Drug Administration; 2018.

  2. Liu F, Li N, Li W, Chen C. Impact of clinical center variation on efficiency of exploratory umbrella design. Stat Biosci 2019:1–20.

  3. NCI-COG Pediatric MATCH. 2020.

  4. Motzer RJ, Penkov K, Haanen J, Rini B, Albiges L, Campbell MT. Avelumab plus axitinib versus sunitinib for advanced renal-cell carcinoma. N Engl J Med. 2019;380:1103–15.

  5. Rini BI, Plimack ER, Stus V, Gafanov R, Hawkins R, Nosov D. Pembrolizumab plus axitinib versus sunitinib for advanced renal-cell carcinoma. N Engl J Med. 2019;380:1116–27.

  6. Motzer RJ, Tannir NM, McDermott DF, Frontera OA, Melichar B. Nivolumab plus Ipilimumab versus sunitinib in advanced renal-cell carcinoma. N Engl J Med. 2018;378:1277–90.

  7. Rini B, Powles T, Atkins MB, Escudier B, McDermott DF, Suarez C. Atezolizumab plus bevacizumab versus sunitinib in patients with previously untreated metastatic renal cell carcinoma (IMmotion151): a multicentre, open-label, phase 3, randomised controlled trial. Lancet. 2019;393(10189):2404–15.

  8. Motzer JR, Alekseev B, Rha S-Y, Porta C, Eto M. Lenvatinib plus pembrolizumab or everolimus for advanced renal cell carcinoma. N Engl J Med. 2021;384:1289–300.

  9. Saville BR, Berry SM. Efficiencies of platform clinical trials: a vision of the future. Clin Trials. 2016;13(3):358–66.

  10. Hobbs BP, Landin R. Bayesian basket trial design with exchangeability monitoring. Stat Med. 2018;37(25):3557–72.

  11. RECOVERY: this national clinical trial aims to identify treatments that may be beneficial for people hospitalised with suspected or confirmed COVID-19. 2021

  12. I-SPY COVID. Quantum leap healthcare collaborative. 2021

  13. Study of multiple candidate agents for the treatment of COVID-19 in hospitalized patients. 2021

  14. Hirakawa A, Asano J, Sato H, Teramukai S. Master protocol trials in oncology: review and new trial designs. Contemp Clin Trials Commun. 2018;12:1–8.

  15. Woodcock J, LaVange LM. Master protocols to study multiple therapies, multiple diseases, or both. N Engl J Med. 2017;377(1):62–70.

  16. Berry SM, Connor JT, Lewis RJ. The platform trial: an efficient strategy for evaluating multiple treatments. JAMA. 2015;313(16):1619–20.

  17. Renfro LA, Sargent DJ. Statistical controversies in clinical research: basket trials, umbrella trials, and other master protocols: a review and examples. Ann Oncol. 2017;28(1):34–43.

  18. Hyman DM, Puzanov I, Subbiah V, Faris JE, Chau I, Blay J-Y, et al. Vemurafenib in multiple nonmelanoma cancers with BRAF V600 mutations. N Engl J Med. 2015;373:726–36.

  19. Chung H, Ros W, Delord J, Perets R, Italiano A. Efficacy and safety of pembrolizumab in previously treated advanced cervical cancer: results from the phase II KEYNOTE-158 study. J Clin Oncol. 2019;37(17):1470–8.

  20. Strosberg JR, Mizuno N, Doi T, Grande E, Delord JP, Shapira-Frommer R, et al. Efficacy and safety of pembrolizumab in previously treated advanced neuroendocrine tumors: results from the phase 2 KEYNOTE-158 study. Clin Cancer Res. 2020.

  21. Marabelle A, Le DT, Ascierto PA, Giacomo AM, Jesus-Acosta AD, P DJ. Efficacy of pembrolizumab in patients with noncolorectal high microsatellite instability/mismatch repair-deficient cancer: results from the phase II KEYNOTE-158 study. J Clin Oncol. 2020;38(1):1–10.

  22. Drilon A, Laetsch TW, Kummar S, DuBois SG, Lassen UN, Demetri GD. Efficacy of larotrectinib in TRK fusion-positive cancers in adults and children. N Engl J Med. 2018;378:731–9.

  23. ALCHEMIST (the adjuvant lung cancer enrichment marker identification and sequencing trials): National Cancer Institute. 2018

  24. Barker AD, Sigman CC, Kelloff GJ, Hylton NM, Berry DA, Esserman LJ. I-SPY 2: an adaptive breast cancer trial design in the setting of neoadjuvant chemotherapy. Clin Pharmacol Ther. 2009;86(1):97–100.

  25. Alexander BM, Ba S, Berger MS, Berry DA. Adaptive global innovative learning environment for glioblastoma: GBM AGILE. Clin Cancer Res. 2018;24:737–43.

  26. Report on terminology, references and scenarios for platform trials and master protocols. 2020

  27. Chau I, Haag GM, Rahma OE, Macarulla TM, McCune SL, Yardley DA, et al. MORPHEUS: a phase Ib/II umbrella study platform evaluating the safety and efficacy of multiple cancer immunotherapy (CIT)-based combinations in different tumour types. Ann Oncol. 2018;29:viii439–viii40.

  28. Simonsen KL, Fracasso PM, Bernstein SH, Wind-Rotolo M, Gupta M, Comprelli A, et al. The fast real-time assessment of combination therapies in immuno-oncology (FRACTION) program: innovative, high-throughput clinical screening of immunotherapies. Eur J Cancer. 2018;103:259–66.

  29. Umbrella master protocol: studies of investigational agents with either pembrolizumab (MK-3475) alone or with pembrolizumab PLUS chemotherapy in participants with advanced non-small cell lung cancer (NSCLC) (MK-3475-U01/KEYNOTE-U01).

  30. Substudy 02B: safety and efficacy of pembrolizumab in combination with investigational agents or pembrolizumab alone in participants with first line (1L) advanced melanoma (MK-3475-02B/KEYMAKER-U02). 2021

  31. Umbrella study of sasanlimab combined with targeted therapies in participants with non small cell lung cancer. 2021

  32. Platform trial of novel regimens versus standard of care (SoC) in non-small cell lung cancer (NSCLC). 2021

  33. Bunn V, Liu R, Lin J, Lin J. Flexible Bayesian subgroup analysis in early and confirmatory trials. Contem Clin Trials 2020;98.

  34. Chen C, Li X, Yuan S, Antonijevic Z, Kalamegham R, Beckman RA. Statistical design and considerations of a phase 3 basket trial for simultaneous investigation of multiple tumor types in one study. Stat Biopharm Res. 2016;8(3):248–57.

  35. Cunanan KM, Iasonos A, Shen R, Begg CB, Gonen M. An efficient basket trial design. Stat Med. 2017;36(10):1568–79.

  36. Liu R, Liu Z, Ghadessi M, Vonk R. Increasing the efficiency of oncology basket trials using a Bayesian approach. Contemp Clin Trials. 2017;63:67–72.

  37. Simon R, Geyer S, Subramanian J, Roychowdhury S. The Bayesian basket design for genomic variant-driven phase II trials. Semin Oncol. 2016;43(1):13–8.

  38. Thall PF, Wathen JK, Bekele BN, Champlin RE, Baker LH, Benjamin RS. Hierarchical Bayesian approaches to phase II trials in diseases with multiple subtypes. Stat Med. 2003;22(5):763–80.

  39. Zhou H, Liu F, Wu C, Rubin EH, Giranda VL, Chen C. Optimal two-stage designs for exploratory basket trials. Contemp Clin Trials. 2019;85:105807.

  40. Li M, Liu R, Lin J, Lin V. Bayesian semi-parametric design (BSD) for adaptive dose-finding with multiple strata. J Biopharm Stat. 2020.

  41. Neuenschwander B, Wandel S, Roychoudhury S, Bailey S. Robust exchangeability designs for early phase clinical trials with multiple strata. Pharm Stat. 2016;15:123–34.

  42. Collignon O, Gartner C, Haidich AB, James Hemmings R, Hofner B, Petavy F, et al. Current statistical considerations and regulatory perspectives on the planning of confirmatory basket, umbrella, and platform trials. Clin Pharmacol Ther. 2020;107(5):1059–67.

  43. Chen C, Beckman RA. Control of type I error for confirmatory basket trials. In: Antonijevic Z, Beckman RA, editors. Platform trial in drug development: umbrella trials and basket trials. Chapman & Hall/CRC Press; 2018.

  44. Study of pembrolizumab (MK-3475) in participants with advanced solid tumors (MK-3475-028/KEYNOTE-28). 2021

  45. A study of nivolumab by itself or nivolumab combined with ipilimumab in patients with advanced or metastatic solid tumors. 2020

  46. Expansion cohorts: use in first-in-human clinical trials to expedite development of oncology drugs and biologics guidance for industry. U.S. Department of Health and Human Services. Food and Drug Administration 2018.

  47. Chen C, Deng Q, He L, DV M, Rubin EH, Berry SM. How many tumor indications should be initially screened in development of next generation immunotherapies? Contemp Clin Trials 2017;59:113–7.

  48. Bai X, Deng Q, Liu D. Multiplicity issues for platform trials with a shared control arm. J Biopharm Stat. 2020;30(6):1–3.

  49. Bretz F, Koenig F. Commentary on Parker and Weir. Clin Trials. 2020;17(5):567–9.

  50. Howard DR, Brown JM, Todd S, Gregory WM. Recommendations on multiple testing adjustment in multi-arm trials with a shared control group. Stat Methods Med Res. 2018;27(5):1513–30.

  51. Korn EL, Freidlin B. Outcome-adaptive randomization: is it useful? J Clin Oncol. 2011;29(6):771–6.

  52. Lin J, Bunn V. Comparison of multi-arm multi-stage design and adaptive randomization in platform clinical trials. Contemp Clin Trials. 2017;54:48–59.

  53. Lin J, Li-An L, Sankoh S. A general overview of adaptive randomization design for clinical trials. J Biom Biostat. 2016;7(2):294.

  54. Parker RA, Weir CJ. Non-adjustment for multiple testing in multi-arm trials of distinct treatments: rationale and justification. Clin Trials. 2020;17(5):562–6.

  55. Viele K, Broglio K, McGlothlin A, BR S. Comparison of methods for control allocation in multiple arm studies using response adaptive randomization. Clin Trials. 2019.

  56. Wason JM, Robertson DS. Controlling type I error rates in multi-arm clinical trials: a case for the false discovery rate. Pharm Stat. 2021;20(1):109–16.

  57. Wathen JK, Thall PF. A simulation study of outcome adaptive randomization in multi-arm clinical trials. Clin Trials. 2017;14(5):432–40.

  58. Yuan Y, Yin G. On the usefulness of outcome-adaptive randomization. J Clin Oncol. 2011;29(13):390–2.

  59. Sridhara R, Marchenko O, Jiang Q, Pazdur R, Posch M, Redman M, et al. Type I error considerations in master protocols with common control in oncology trials: report of an American statistical association biopharmaceutical section open forum discussion. Stat Biopharm Res. 2021:1–7.

  60. Taylor JMG, Braun TM, Li Z. Comparing an experimental agent to a standard agent: relative merits of a one-arm or randomized two-arm Phase II design. Clin Trials. 2006;3(4):335–48.

  61. James N, Sydes M, Clarke N, Mason M, Dearnaley D, Spears M. Addition of docetaxel, zoledronic acid, or both to first-line long-term hormone therapy in prostate cancer (STAMPEDE): survival results from an adaptive, multiarm, multistage, platform randomised controlled trial. Lancet. 2016;387(10024):1163–77.

  62. Lee KM, Wason J. Including non-concurrent control patients in the analysis of platform trials: is it worth it? BMC Med Res Methodol. 2020;20(1):1–2.

  63. Sridhara R, Marchenko O, Jiang Q, Pazdur R. Use of non-concurrent common control for treatment comparisons in master protocols. ASA BIOP Biopharm Rep. 2021:12–4.

  64. Berry SM, Reese CS, Larkey PD. Bridging different eras in sports. J Am Stat Assoc. 1999;94(447).

  65. Viele K, Berry SM. Controls in platform trials joint statistical meeting. 2019

  66. Bateman RJ, L BT, Berry SM. The DIAN-TU next generation Alzheimer’s prevention trial: adaptive design and disease progression model. Alzheimer's Dementia. 2017;13:8–19.

  67. Pandemic response ushers in new era of biopharma data sharing: guest commentary. The COVID R&D alliance tells the story of a pharma data sharing initiative that could extend beyond COVID-19. 2021

  68. Proschan M, Follman D. Multiple comparisons with control in a single experiment versus separate experiments: why do we feel differently? Am Stat. 1995;49:144.

  69. FDA approves pembrolizumab for adults and children with TMB-H solid tumors.

  70. Postmarket requirements and commitments.

  71. Adaptive designs for clinical trials of drugs and biologics: guidance for industry. U.S. Department of Health and Human Services. Food and Drug Administration; 2019.

  72. The Adaptive Platform Trials Coalition. Adaptive platform trials: definition, design, conduct and reporting considerations. Nat Rev Drug Discov. 2019

  73. Bretz F, Gallo P, Maurer W. Adaptive designs: The Swiss Army knife among clinical trial designs? Clin Trials. 2017.


There were no funding sources for this manuscript.

Author information

Authors and Affiliations



All authors have contributed to drafting and/or revising the work critically for important intellectual content. All authors are responsible for final approval of the version to be published.

Corresponding author

Correspondence to Chengxing (Cindy) Lu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (PDF 114 kb)

Supplementary file2 (PDF 143 kb)


About this article

Cite this article

Lu, C., Li, X., Broglio, K. et al. Practical Considerations and Recommendations for Master Protocol Framework: Basket, Umbrella and Platform Trials. Ther Innov Regul Sci 55, 1145–1154 (2021).


  • Master protocol
  • Umbrella trial
  • Basket trial
  • Platform trial
  • Complex clinical trial
  • Adaptive design