Journal of General Internal Medicine, Volume 33, Issue 5, pp 759–763

Application and impact of run-in studies

  • Michael Fralick
  • Jerry Avorn
  • Jessica M. Franklin
  • Abdurrahman Abdurrob
  • Aaron S. Kesselheim
Review Paper

Background

A run-in phase is often employed prior to randomization in a clinical trial to exclude non-adherent patients, placebo responders, active-drug non-responders, or patients who do not tolerate the active drug. This may limit the generalizability of trial results.

Objective

To determine whether clinical outcomes differed between randomized controlled trials with run-in phases and randomized controlled trials of the same medication without run-in phases.

Design, participants

From 2006 to 2014, the Food and Drug Administration approved 258 new medications. Sitagliptin, saxagliptin, linagliptin, and alogliptin, which share a common mechanism of action (dipeptidyl peptidase-4 [DPP-4] inhibition), were among the only approved drugs for which multiple clinical trials existed, some with run-in phases and some without. We identified all published randomized controlled trials of these four medications from MEDLINE and EMBASE, as well as from prior systematic reviews.

Main measures

We extracted key measures of medication efficacy (reduction in hemoglobin A1C) and safety (serious adverse events) from qualifying trials. Study results were pooled for each medication using random effects meta-analysis.
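The pooling step can be illustrated with a minimal DerSimonian-Laird random-effects sketch. This is a generic textbook implementation under stated assumptions, not the paper's analysis code; the function name and any input data are hypothetical.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effect estimates with DerSimonian-Laird
    random-effects weights (illustrative sketch, not the paper's code)."""
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    # 95% CI via the normal approximation
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

With homogeneous studies (Q below its degrees of freedom), tau-squared is truncated to zero and the estimate reduces to the fixed-effect pooled mean.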

Key results

We identified 106 qualifying trials for DPP-4 inhibitors, of which 88 had run-in phases and 18 did not. The average run-in phase duration was 4.0 weeks (range 1–21), and 73% of run-in phases administered placebo rather than active drug. The reduction in hemoglobin A1C compared to baseline was similar for trials with and without run-in phases (0.70%, 95% confidence interval [CI] 0.65–0.75 vs 0.76%, 95% CI 0.69–0.84, p = 0.27). The proportion of patients with serious adverse events was also similar for trials with and without run-in phases (4%, 95% CI: 3–5% vs 3%, 95% CI: 1–4%, p = 0.35).
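The reported comparisons can be sanity-checked with a crude normal-approximation z-test that recovers standard errors from the 95% CI widths. This is a back-of-envelope sketch, not the paper's actual comparison method, and the function name is hypothetical.

```python
import math

def z_test_from_cis(est1, ci1, est2, ci2):
    """Approximate two-sample z-test from point estimates and 95% CIs
    (back-of-envelope check; the paper's p-values come from its own models)."""
    se1 = (ci1[1] - ci1[0]) / (2 * 1.96)  # SE recovered from CI half-width
    se2 = (ci2[1] - ci2[0]) / (2 * 1.96)
    z = (est1 - est2) / math.sqrt(se1 ** 2 + se2 ** 2)
    # Two-sided p-value via the normal CDF (erf-based)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

Applied to the A1C estimates above (0.70, CI 0.65–0.75 vs 0.76, CI 0.69–0.84), this rough check likewise finds no significant difference, consistent with the reported p = 0.27.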

Conclusions

Trials with run-in phases yielded efficacy and safety estimates similar to those of trials without run-in phases. Because run-in phases are costly and time-consuming, these results call their utility into question for clinical trials of short duration.

Key words: run-in; lead-in; clinical trial; study design



Acknowledgements

We thank Dr. Donald Redelmeier, Dr. Chana Sacks, Dr. Nicola Goldberg, and Kristina Stefanini for providing comments on earlier versions of our manuscript (none received any compensation for their work).

Authors’ contributions

Study concept and design: Fralick M, Kesselheim A, and Avorn J.

Acquisition of data: Fralick M and Abdurrob A.

Analysis/interpretation of data: Fralick M, Kesselheim A, Franklin J, and Avorn J.

Drafting of the manuscript: Fralick M.

Critical revision of the manuscript: Kesselheim A, Franklin J, Avorn J, and Abdurrob A.

Statistical analysis: Fralick M and Franklin J.


Funding

Dr. Fralick receives funding from the Eliot Phillipson Clinician-Scientist Training Program and the Clinician Investigator Program at the University of Toronto and from the Detweiler Traveling Fellowship funded by the Royal College of Physicians and Surgeons of Canada. Dr. Kesselheim’s work is supported by the Laura and John Arnold Foundation, with additional support from the Harvard Program in Therapeutic Science and the Engelberg Foundation.

Compliance with ethical standards

Conflict of interest

Dr. Franklin is the principal investigator on a grant from Merck. All other authors declare no conflicts of interest.



Copyright information

© Society of General Internal Medicine 2018

Authors and Affiliations

  • Michael Fralick (1, 2, 3)
  • Jerry Avorn (1, 3)
  • Jessica M. Franklin (1, 3)
  • Abdurrahman Abdurrob (3)
  • Aaron S. Kesselheim (1, 3)

  1. Program On Regulation, Therapeutics, And Law (PORTAL), Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital and Harvard Medical School, Boston, USA
  2. Eliot Phillipson Clinician-Scientist Training Program, University of Toronto, Toronto, Canada
  3. Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital and Harvard Medical School, Boston, USA