There have been ongoing efforts to understand when and how data from observational studies can be applied to clinical and regulatory decision making. The objective of this review was to assess the comparability of relative treatment effects of pharmaceuticals from observational studies and randomized controlled trials (RCTs).
We searched PubMed and Embase for systematic literature reviews published between January 1, 1990, and January 31, 2020, that reported relative treatment effects of pharmaceuticals from both observational studies and RCTs. We extracted pooled relative effect estimates from observational studies and RCTs for each outcome, intervention-comparator, or indication assessed in the reviews. We calculated the ratio of the relative effect estimate from observational studies over that from RCTs, along with the corresponding 95% confidence interval (CI) for each pair of pooled RCT and observational study estimates, and we evaluated the consistency in relative treatment effects.
Thirty systematic reviews across 7 therapeutic areas were identified from the literature. We analyzed 74 pairs of pooled relative effect estimates from RCTs and observational studies from 29 reviews. There was no statistically significant difference (based on the 95% CI) in relative effect estimates between RCTs and observational studies in 79.7% of pairs. There was an extreme difference (ratio < 0.7 or > 1.43) in 43.2% of pairs, and, in 17.6% of pairs, there was a significant difference and the estimates pointed in opposite directions.
Overall, our review shows that while there is no significant difference in the relative risk ratios between the majority of RCTs and observational studies compared, there is significant variation in about 20% of comparisons. The source of this variation should be the subject of further inquiry to elucidate how much of the variation is due to differences in patient populations versus biased estimates arising from issues with study design or analytical/statistical methods.
Health care decision makers, particularly regulators but also health technology assessment agencies, have depended upon evidence from randomized controlled trials (RCTs) to assess drug effectiveness and to make comparisons among treatment options. Widespread adoption of the RCT was the hallmark of progress in clinical research in the twentieth century and accelerated the development and approval of new therapeutics; confidence in RCTs derived from their experimental nature, designs that minimize bias, rigorous data quality, and analytic approaches that support causal inference.
In the last 30 years, we have witnessed an explosion of observational real-world data (RWD) and of evidence (RWE) derived from RWD, which has supplemented our understanding of the benefits and risks of treatments in broader patient populations. Regulators have largely leveraged RWE to assess the safety of marketed products and to support new drug approvals when RCTs are infeasible, such as in rare diseases, in oncology, or for long-term adverse effects. RCTs often lack sufficient sample size to detect rare adverse events or long enough follow-up to detect long-term adverse effects, and in such cases regulatory decisions are often supplemented by RWE. However, RWE has been embraced much more slowly than RCTs for a variety of reasons: causal inference is less certain in the absence of randomization, and RWD can be much sparser and often requires extensive curation before it can be analyzed. Skepticism about the robustness of observational RWD studies has therefore made decision makers, particularly regulatory bodies, cautious about relying solely upon such studies to render judgments about the availability and appropriate use of new therapeutics.
Moreover, observational studies examining the effectiveness of treatments in similar populations have not always produced results consistent with RCTs. Although many studies have found similar treatment effect estimates from RCTs and RWD analyses [1,2,3], other analyses have documented wide variation in results from RWD analyses within the same therapeutic areas, including analyses using propensity score-based methods. Nonetheless, public interest has grown in the routine leveraging of RWD to promote the creation of a learning healthcare system, and regulatory bodies and other decision makers are exploring ways to expand their use of RWE. This is partly due to increasing acknowledgement of the value of RWE, such as its ability to better reflect the actual environments in which interventions are used.
One promising approach to understanding the sources of variability between RCT and observational study results is to compare estimates obtained from RWD analyses that attempt to emulate the eligibility criteria, endpoints, and other features of trials as closely as possible. A small number of RWD analyses have generated findings similar to previous RCTs [6, 7], and the findings of other RWD analyses have been consistent with subsequent RCTs. In a small number of cases, RCTs and RWD studies have been published simultaneously; this has the advantage that the RCT estimate is unknown when the RWD study is conducted. There have also been disagreements between observational RWD analyses and RCTs that stemmed from avoidable errors in the RWD analysis design [7, 10], which has led to a focus on the importance of research design in observational RWD analyses attempting to draw causal inferences regarding treatment effects [11,12,13].

Emulation studies can improve understanding of when observational studies may reliably generate results consistent with RCTs; however, not all RCTs can be feasibly emulated using RWD due to limitations in observational datasets. Existing sources of observational data, such as health insurance claims and electronic health records (EHRs), may not routinely capture the intervention, indication, inclusion and exclusion criteria, and/or endpoints used in RCTs.
The objective of this paper is to provide further evidence on the comparability of RCTs and observational studies when the latter use a range of study designs and were not designed to emulate RCTs. We aim to quantify the extent of the difference in treatment effect estimates between RCTs and observational studies. We go beyond previous comparisons of RCTs and observational studies, with a focus purely on pharmaceuticals, and provide a systematic landscape review of the (in)consistency between RCT and observational study treatment effect estimates. The reasons for the variation in relative treatment effects are not assessed in this review but should be the subject of further study.
Published systematic literature reviews designed to compare relative treatment effects from observational studies with the corresponding effects from RCTs; or
Published systematic literature reviews that reported subgroup analyses stratified by RCT and observational study design; and
Observational studies included in these reviews had to be retrospective or prospective cohort studies, or case-control studies
Population: Human subjects
Intervention(s) and comparator(s): Any active or placebo-controlled pharmaceutical or biopharmaceutical intervention
Efficacy/effectiveness or safety outcomes
Pooled relative treatment effect estimates for both observational studies and RCTs
Systematic reviews that compared absolute outcomes, such as event rates, between non-comparative observational studies and RCTs
Non-pharmaceutical-based studies, e.g., surgical procedures, traditional medicine, vitamin/herbal supplements, etc.
Abstracts or conference proceedings
We searched PubMed and Embase to identify relevant systematic literature reviews published between January 1, 1990, and January 31, 2020. Anglemyer et al.’s search strategy was used as a template to develop the search strategy, which included a wide range of MeSH terms and relevant keywords. We updated Anglemyer et al.’s systematic review hedge and used the more recent CADTH systematic review/meta-analysis hedge, created in 2016, in both PubMed and Embase. We restricted our search to pharmaceuticals only. PubMed and Embase were searched for the following concepts: pharmaceuticals, study methodology, and comparisons (filters: Humans and English language). The PubMed search strategy, which was adapted for use in Embase, can be found in Additional File 1.
After removing duplicate references, three authors (JG, YH and LO) screened the titles and abstracts to identify relevant reviews. Once complete, LO verified the screening for accuracy. Following the title and abstract screen, full text articles were obtained for all potentially relevant reviews. Full text articles were then assessed to determine whether they met the selection criteria for final inclusion in the review.
A pilot extraction was first done by two authors (JG and YH) on a sample of three articles using a standardized extraction table. This was done to test the standardized extraction table and to ensure consistency between the authors performing the data extraction. JG and YH then independently extracted information from each review using the standardized extraction table. A third author (LO) verified the extraction for accuracy and identified any discrepancies. These discrepancies were discussed until resolved.
We focused on primary outcomes reported in the reviews and extracted information summarizing the scope of each of the identified systematic reviews. Extracted information included the following: review objective, population, disease/therapeutic area, interventions, outcome(s), number of included RCTs and observational studies, pooled relative treatment effect estimates for RCTs and observational studies along with the 95% confidence intervals (95% CI), and measures of heterogeneity.
Based on the extracted information, we calculated the ratio of the relative treatment effect estimate from observational studies over the relative treatment effect estimate from RCTs (e.g., RR_obs / RR_RCT), along with the corresponding 95% CI obtained via a Monte Carlo simulation, for each pair of pooled RCT and observational study estimates. Outcomes for which the relative treatment effect was not expressed as a relative risk (RR), odds ratio (OR), or hazard ratio (HR) were excluded from our analysis.
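The ratio and its Monte Carlo 95% CI can be sketched as follows. The analyses in this paper were run in R/RStudio; the Python sketch below is our own minimal illustration, assuming an approximately normal sampling distribution for each pooled estimate on the log scale (function names are ours, not the paper's code). The worked numbers are the DAPT vs. MAPT pair discussed in the Discussion (OR 3.02, 95% CI 1.91–4.76, from observational studies; OR 0.98, 95% CI 0.46–2.11, from RCTs).

```python
import numpy as np

def log_se_from_ci(lo, hi):
    # Standard error of the log effect estimate, recovered from a 95% CI:
    # (log(upper) - log(lower)) / (2 * 1.96)
    return (np.log(hi) - np.log(lo)) / (2 * 1.96)

def ratio_with_ci(obs, obs_lo, obs_hi, rct, rct_lo, rct_hi,
                  n_sim=200_000, seed=0):
    """Monte Carlo 95% CI for the ratio RR_obs / RR_RCT of two pooled estimates."""
    rng = np.random.default_rng(seed)
    # Draw log effects for each source, then take the ratio on the natural scale.
    log_obs = rng.normal(np.log(obs), log_se_from_ci(obs_lo, obs_hi), n_sim)
    log_rct = rng.normal(np.log(rct), log_se_from_ci(rct_lo, rct_hi), n_sim)
    draws = np.exp(log_obs - log_rct)
    lo, hi = np.percentile(draws, [2.5, 97.5])
    return obs / rct, lo, hi

# DAPT vs. MAPT pair: the simulated CI excludes 1, marking a statistically
# significant difference between the observational and RCT estimates.
ratio, lo, hi = ratio_with_ci(3.02, 1.91, 4.76, 0.98, 0.46, 2.11)
```

The same machinery applies to any pair of pooled RR/OR/HR estimates with reported 95% CIs, which is why outcomes on other scales were excluded.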
We expressed differences in pooled effect estimates with the following measures: ratios that were < 1, > 1, or = 1, and ratios indicating an “extreme difference” (< 0.70 or > 1.43) versus absence of an extreme difference. We evaluated (in)consistency between pooled RCT and observational study estimates with the following measures: opposite directions of effect; the RCT effect estimate falling outside the 95% CI of the observational study estimate, and vice versa; a statistically significant difference between RCT and observational study estimates; and a statistically significant difference together with opposite directions of effect. Statistical significance was determined by examining the 95% CI of the ratio of the relative treatment effect estimates from observational studies and RCTs derived from the Monte Carlo simulation. We also examined differences in relative effect measures from observational studies and RCTs by outcome type and therapeutic area.
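As an illustration, the consistency measures above can be applied to a single pair of pooled estimates. The sketch below is our own (the function and the effect estimates fed to it are hypothetical, not values from the review); it encodes the thresholds and checks exactly as defined in the text.

```python
# Classify one pair of pooled estimates (observational vs. RCT) using the
# measures described above. Effects are on a ratio scale (RR/OR/HR), so 1
# marks "no effect" and the extreme-difference thresholds are 0.70 and 1.43.

def classify_pair(obs, obs_ci, rct, rct_ci):
    ratio = obs / rct
    return {
        "ratio": ratio,
        "extreme_difference": ratio < 0.70 or ratio > 1.43,
        # Opposite directions: one estimate above 1, the other below 1.
        "opposite_direction": (obs - 1) * (rct - 1) < 0,
        "rct_outside_obs_ci": not (obs_ci[0] <= rct <= obs_ci[1]),
        "obs_outside_rct_ci": not (rct_ci[0] <= obs <= rct_ci[1]),
    }

# Hypothetical pair: a protective effect in observational studies (RR 0.85)
# versus a harmful one in RCTs (RR 1.10), with illustrative 95% CIs.
pair = classify_pair(0.85, (0.70, 1.03), 1.10, (0.90, 1.34))
```

Note that these are per-pair descriptive checks; the significance test itself uses the Monte Carlo CI of the ratio rather than the raw CIs.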
To test the robustness of our findings, we conducted two sensitivity analyses. First, as some reviews assessed more than one endpoint and therefore contributed more than one pair of pooled relative treatment effects from RCTs and observational studies to our analysis, we repeated the analysis with one endpoint per review, i.e., a single pair of pooled relative treatment effects from each review, selecting the most frequently used endpoints for inclusion whenever possible. Second, as some studies were included in more than one review, we repeated the analysis ensuring that there was no overlap of data between the included reviews, i.e., that each study contributed to only one review in our analysis. Details on the sensitivity analyses are included in Additional File 2. All analyses were conducted using RStudio, version 1.3.1073 (©2009-2020 RStudio, PBC).
Our search on PubMed and Embase yielded 3798 unique citations after removing duplicates. After screening titles and abstracts, we identified 93 full text articles for further review. Of these, 30 reviews met our inclusion criteria (Fig. 1).
Included systematic reviews
The characteristics of the included reviews and the pairs of pooled relative treatment effects from RCTs and observational studies reported in the reviews are summarized in Table 1. Thirty systematic reviews across 7 therapeutic areas (cardiovascular disease [15/30], infectious disease [6/30], oncology [3/30], mental health [2/30], immune-inflammatory [1/30], metabolic disease [1/30], and other [2/30]) were identified from the literature. These reviews included 519 RCTs and observational studies and provided 79 pairs of pooled relative treatment effects from RCTs and observational studies across multiple interventions, comparators, and outcomes. Five pairs were excluded from our assessment because they concerned continuous outcomes (n = 1) or no pooled effect estimate was reported for observational studies (n = 4). As a result, 74 pairs of pooled relative treatment effects from RCTs and observational studies from 29 reviews were available for assessment of consistency.
Ratio of relative effect measures from observational studies and RCTs
Figure 2 presents the scatterplot of relative effect measures from observational studies and RCTs across the 74 pairs of pooled relative treatment effects, with 95% CI bars. The ratio of the relative effect measure from observational studies over the corresponding relative effect measure from RCTs ranged from 0.09 to 6.50 (median = 0.92, interquartile range = 0.69–1.27). The ratio was greater than 1, i.e., the relative effect was larger in observational studies, in 31 of the 74 pairs (41.9%); less than 1, i.e., the relative effect was larger in RCTs, in 42 of the 74 pairs (56.8%); and equal to 1 in one of the 74 pairs (1.4%). The ratio was greater than 1.43 in 12 of the 74 pairs (16.2%) and less than 0.7 in 20 of the 74 pairs (27.0%), indicating an extreme difference; there was an absence of an extreme difference (0.7 ≤ ratio ≤ 1.43) in 42 of the 74 pairs (56.8%; Table 2). Sensitivity analyses including only one endpoint from each review and ensuring no overlap of data between the included reviews resulted in similar findings (Table 2). Scatterplots of relative effect measures from observational studies and RCTs by outcome type and therapeutic area can be found in Additional File 3: Figures S1 and S2.
Consistency of relative effect measures from observational studies and RCTs
In 30 of the 74 pairs (40.5%), effect estimates from observational studies and RCTs pointed in opposite directions of effect. The RCT point estimate was outside the 95% CI of the observational study in 35 of the 74 pairs (47.3%) and the observational study point estimate was outside the 95% CI of the RCT in 27 of the 74 pairs (36.5%). There was a statistically significant difference between relative effect estimates from observational studies and RCTs in 15 of the 74 pairs (20.3%). In 13 of the 74 pairs (17.6%), there was a statistically significant difference and the effect estimates of observational studies and RCTs pointed in opposite directions (Table 3). The results remained fairly consistent when the sensitivity analyses were conducted (Table 3).
Our analysis of 29 reviews comparing results of RCTs and observational studies of pharmaceuticals showed, on average, no significant differences in their relative risk ratios across all studies, but also considerable study-by-study variability. The median ratio of the relative effect measure from observational studies to RCTs was 0.92, indicating slightly lower effectiveness/safety estimates in observational studies than in the corresponding RCTs. This is in fact somewhat higher than the 0.80 ratio recently found in meta-research comparing effect estimates of randomized clinical trials that use routinely collected data (i.e., from traditional observational study sources such as registries, electronic health records, or administrative claims) for outcome ascertainment with those of traditional trials not using routinely collected data. However, whether judging by the frequency of “extreme” differences (43.2%) or by statistically significant differences in opposite directions (17.6%), one could not claim that observational study results consistently replicated RCT results on a study-by-study basis in our sample.
There are a number of reasons that any given observational study result may not replicate an RCT comparing the same treatments. First, it may not have been the intent of the observational study researchers to match a specific clinical trial—they may have intentionally studied a different treatment population, setting, or protocol in order to complement or test the RCT findings. In such cases, there would be variation in effect estimates due to estimating a different causal effect. Even if the researcher does attempt to match a specific RCT, the data may not have been available to closely match it, since patient histories, test results, etc., used for RCT inclusion criteria may not be observed, or outcomes may not be captured the same way. Even given similar data, non-randomized studies have the potential for selection/channeling bias into treatment determined by factors unobservable in either type of study, and analytic attempts to correct for such confounding may have limited success. In some cases, treatment conditions may differ enough between the RCT and real-world practice that replication of results should not be expected, e.g., due to careful safety monitoring that affects subsequent treatment in RCTs. Finally, it is possible that other pharmacoepidemiologic principles, beyond the study design considerations we already mentioned, were violated in the individual RWD studies, which could have caused disagreement between their results and the RCTs. While variation in treatment effect estimates due to estimating a different causal effect in a different study population is expected and valid, biased estimates arising from issues with study design or analytical methods may be problematic.
Details in these reviews were typically insufficient to distinguish among these possible explanations without detailed review of the individual studies, which we did not attempt here. However, some reviews did attempt to explain the differences they found. For example, in the review by Gandhi et al. (2015), which compared dual-antiplatelet therapy (DAPT) to mono-antiplatelet therapy (MAPT) following transcatheter aortic valve implantation, there was a statistically significant difference in pooled relative treatment effect estimates from observational studies and RCTs. The primary outcome was more likely to occur in the DAPT group than in the MAPT group in the observational studies (OR 3.02; 95% CI 1.91–4.76), whereas no statistically significant difference was found between DAPT and MAPT in the RCTs (OR 0.98; 95% CI 0.46–2.11). The authors explained that the RCTs (n = 2) and observational studies (n = 2) included in this review had variable patient inclusion/exclusion criteria and differed in the type of prosthetic aortic valve used, which may have introduced selection bias.
To allow for better use of individual observational studies to inform decision-making, their ability to replicate RCT results needs to become more reliable, and the “target trial” approach seems to be a path forward. Several systematic efforts using sophisticated observational data research designs to emulate multiple RCTs are underway [48, 49]. These efforts are intended to provide regulatory bodies and other decision makers with empirical evidence to support the development of a framework for assessing when and under what circumstances observational RWE can be used to support a wider range of regulatory decisions. RCT DUPLICATE is a collaboration between the Food and Drug Administration (FDA) and the Division of Pharmacoepidemiology at Brigham and Women’s Hospital and Harvard Medical School to replicate 30 completed Phase III or IV trials and to predict the results of seven ongoing Phase IV trials using Medicare and commercial claims data. The RCT DUPLICATE team has recently reported results for its first 10 trials, with hazard ratio estimates within the 95% CI of the corresponding trial for 8 of the 10 emulations.
The Multi-Regional Clinical Trials Center and OptumLabs are leading another effort, called Observational Patient Evidence for Regulatory Approval and Understanding Disease (OPERAND), which extends the trial emulation activity and relaxes the inclusion/exclusion criteria of the trials to examine treatment effects in the broader patient population treated in routine care. The FDA has also funded the Yale University-Mayo Clinic Center of Excellence in Regulatory Science and Innovation to predict the results of three to four ongoing safety trials using OptumLabs claims data.
It is important to understand that clinical trial emulation efforts are being conducted solely to improve understanding of when observational studies may be expected to produce robust results. Bartlett and colleagues found that, of 220 clinical trials published in high-impact medical journals in 2017, 15% could potentially be emulated using data available from medical claims or EHRs. For example, the inclusion/exclusion criteria for many oncology trials require data on genetic markers and on progression-free survival that are unavailable in EHRs. The estimate by Bartlett and colleagues may prove to be an underestimate as the ability to link different types of observational data continues to improve. Nevertheless, it is reasonable to assume that most trials cannot be emulated with existing observational datasets.
These efforts are critical to advancing our understanding of the strengths and limitations of observational RWE, identifying issues with study design, endpoint definition, data quality, and analytical methodology that may impact the consistency of findings between RWE and RCTs. While much attention has focused on differences in study populations between observational studies and RCTs as the reason for the inconsistency in effect estimates, emerging evidence suggests that issues with study design (e.g., establishing time zero of exposure) may be equally if not more important. The results of these efforts will not provide definitive guidance to decision makers, but they emphasize how even subtle differences in study design and endpoint definition can impact absolute estimates of treatment effect. Moreover, RWE studies answer a different question than RCTs, i.e., “Does it work?” versus “Can it work?” The former is important to a variety of stakeholders beyond regulators. Hence, RWE studies should not be expected to provide results identical to RCTs.
In conclusion, although our review shows no significant difference on average in relative risk ratios between published RCTs and observational studies, there is substantial study-to-study variation. It was impractical to review all individual observational study designs and examine their potential biases, but future work should elucidate how much of the variation is due to differences in study populations versus biased estimates arising from issues with study design or analytical methods. As more target trial replication attempts are conducted and published, more systematic evidence will emerge on the reliability of this approach and on the potential for observational studies to more routinely inform healthcare decisions.
Availability of data and materials
The data analyzed in this study are included in this published article.
EHR: Electronic health record
FDA: Food and Drug Administration
OPERAND: Observational Patient Evidence for Regulatory Approval and Understanding Disease
RCT: Randomized controlled trial
Anglemyer A, Horvath HT, Bero L. Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev. 2014;4(4):MR000034. https://doi.org/10.1002/14651858.MR000034.pub2.
Concato J, Shah N, Horwitz RI. Randomized controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342(25):1887–92. https://doi.org/10.1056/NEJM200006223422507.
Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000;342(25):1878–86. https://doi.org/10.1056/NEJM200006223422506.
Madigan D, Ryan P, Schuemie M, et al. Evaluating the impact of database heterogeneity on observational study results. Am J Epidemiol. 2013;178(4):645–51. https://doi.org/10.1093/aje/kwt010.
Forbes S, Dahabreh I. Benchmarking observational analyses against randomized trials: a review of studies assessing propensity score methods. J Gen Intern Med. 2020;35(5):1396–404. https://doi.org/10.1007/s11606-020-05713-5.
Seeger J, Bykov K, Bartels D, Huybrechts K, Zint K, Schneeweiss S. Safety and effectiveness of dabigatran and warfarin in routine care of patients with atrial fibrillation. Thromb Haemost. 2015;114(6):1277–89. https://doi.org/10.1160/TH15-06-0497.
Dickerman B, Garcia-Albeniz X, Logan R, et al. Avoidable flaws in observational analyses: an application to statins and cancer. Nat Med. 2019;25(10):1601–6. https://doi.org/10.1038/s41591-019-0597-x.
Schneeweiss S, Seeger JD, Landon J, Walker AM. Aprotinin during coronary-artery bypass grafting and risk of death. N Engl J Med. 2008;358(8):771–83. https://doi.org/10.1056/NEJMoa0707571.
Noseworthy P, Gersh B, Kent D, et al. Atrial fibrillation ablation in practice: assessing CABANA generalizability. Eur Heart J. 2019;40(16):1257–64. https://doi.org/10.1093/eurheartj/ehz085.
Hernán MA, Alonso A, Logan R, Grodstein F, Michels KB, Willett WC, et al. Observational studies analyzed like randomized experiments: an application to postmenopausal hormone therapy and coronary heart disease. Epidemiology. 2008 Nov;19(6):766–79. https://doi.org/10.1097/EDE.0b013e3181875e61.
Petersen M, van der Laan M. Causal models and learning from data: integrating causal modeling and statistical estimation. Epidemiology. 2014;25(3):418–26. https://doi.org/10.1097/EDE.0000000000000078.
Goodman S, Schneeweiss S, Baiocchi M. Using design thinking to differentiate useful from misleading evidence in observational research. JAMA. 2017;317(7):705–7. https://doi.org/10.1001/jama.2016.19970.
Franklin JM, Schneeweiss S. When and How can real world data analyses substitute for randomized controlled trials? Clin Pharmacol Ther. 2017 Dec;102(6):924–33. https://doi.org/10.1002/cpt.857.
Bartlett V, Dhruva S, Shah N, Ryan P, Ross J. Feasibility of using real-world data to replicate clinical trial evidence. JAMA Netw Open. 2019;2(10):e1912869. https://doi.org/10.1001/jamanetworkopen.2019.12869.
Strings attached: CADTH database search filters [Internet]. Ottawa: CADTH; 2016. Available from: https://www.cadth.ca/resources/finding-evidence
Dahabreh IJ, Sheldrick RC, Paulus JK, Chung M, Varvarigou V, Jafri H, et al. Do observational studies using propensity score methods agree with randomized trials? A systematic comparison of studies on acute coronary syndromes. Eur Heart J. 2012;33(15):1893–901. https://doi.org/10.1093/eurheartj/ehs114.
Abuzaid A, Ranjan P, Fabrizio C, et al. Single anti-platelet therapy versus dual anti-platelet therapy after transcatheter aortic valve replacement: a meta-analysis. Structural Heart. 2018;2(5):408–18. https://doi.org/10.1080/24748706.2018.1491082.
Agarwal N, Mahmoud AN, Mojadidi MK, Golwala H, Elgendy IY. Dual versus triple antithrombotic therapy in patients undergoing percutaneous coronary intervention-meta-analysis and meta-regression. Cardiovasc Revasc Med. 2019;20(12):1134–9. https://doi.org/10.1016/j.carrev.2019.02.022.
Agarwal N, Mahmoud AN, Patel NK, Jain A, Garg J, Mojadidi MK, et al. Meta-analysis of aspirin versus dual antiplatelet therapy following coronary artery bypass grafting. Am J Cardiol. 2018;121(1):32–40. https://doi.org/10.1016/j.amjcard.2017.09.022.
An KR, Belley-Cote EP, Um KJ, Gupta S, McClure G, Jaffer IH, et al. Antiplatelet therapy versus anticoagulation after surgical bioprosthetic aortic valve replacement: a systematic review and meta-analysis. Thromb Haemost. 2019;119(2):328–39. https://doi.org/10.1055/s-0038-1676816.
Chien HT, Lin YC, Sheu CC, Hsieh KP, Chang JS. Is colistin-associated acute kidney injury clinically important in adults? A systematic review and meta-analysis. Int J Antimicrob Agents. 2020;55(3):105889. https://doi.org/10.1016/j.ijantimicag.2020.105889.
Chopra V, Rogers MA, Buist M, Govindan S, Lindenauer PK, Saint S, et al. Is statin use associated with reduced mortality after pneumonia? A systematic review and meta-analysis. Am J Med. 2012;125(11):1111–23. https://doi.org/10.1016/j.amjmed.2012.04.011.
Desai RJ, Thaler KJ, Mahlknecht P, Gartlehner G, McDonagh MS, Mesgarpour B, et al. Comparative risk of harm associated with the use of targeted immunomodulators: a systematic review. Arthritis Care Res (Hoboken). 2016;68(8):1078–88. https://doi.org/10.1002/acr.22815.
Gandhi S, Schwalm JD, Velianou JL, Natarajan MK, Farkouh ME. Comparison of dual-antiplatelet therapy to mono-antiplatelet therapy after transcatheter aortic valve implantation: systematic review and meta-analysis. Can J Cardiol. 2015 Jun;31(6):775–84. https://doi.org/10.1016/j.cjca.2015.01.014.
Ge Z, Faggioni M, Baber U, Sartori S, Sorrentino S, Farhan S, et al. Safety and efficacy of nonvitamin K antagonist oral anticoagulants during catheter ablation of atrial fibrillation: a systematic review and meta-analysis. Cardiovasc Ther. 2018;36(5):e12457. https://doi.org/10.1111/1755-5922.12457.
Heffernan AJ, Sime FB, Sun J, Lipman J, Kumar A, Andrews K, et al. Beta-lactam antibiotic versus combined beta-lactam antibiotics and single daily dosing regimens of aminoglycosides for treating serious infections: a meta-analysis. Int J Antimicrob Agents. 2020;55(3):105839. https://doi.org/10.1016/j.ijantimicag.2019.10.020.
Ho ET, Wong G, Craig JC, Chapman JR. Once-daily extended-release versus twice-daily standard-release tacrolimus in kidney transplant recipients: a systematic review. Transplantation. 2013;95(9):1120–8. https://doi.org/10.1097/TP.0b013e318284c15b.
Khan SU, Lone AN, Asad ZUA, Rahman H, Khan MS, Saleem MA, et al. Meta-Analysis of efficacy and safety of proton pump inhibitors with dual antiplatelet therapy for coronary artery disease. Cardiovasc Revasc Med. 2019;20(12):1125–33. https://doi.org/10.1016/j.carrev.2019.02.002.
Kirson NY, Weiden PJ, Yermakov S, Huang W, Samuelson T, Offord SJ, et al. Efficacy and effectiveness of depot versus oral antipsychotics in schizophrenia: synthesizing results across different research designs. J Clin Psychiatry. 2013;74(6):568–75. https://doi.org/10.4088/JCP.12r08167.
Land R, Siskind D, McArdle P, Kisely S, Winckel K, Hollingworth SA. The impact of clozapine on hospital use: a systematic review and meta-analysis. Acta Psychiatr Scand. 2017;135(4):296–309. https://doi.org/10.1111/acps.12700.
Li L, Li S, Deng K, Liu J, Vandvik PO, Zhao P, et al. Dipeptidyl peptidase-4 inhibitors and risk of heart failure in type 2 diabetes: systematic review and meta-analysis of randomised and observational studies. BMJ. 2016;352:i610. https://doi.org/10.1136/bmj.i610.
Melloni C, Washam JB, Jones WS, Halim SA, Hasselblad V, Mayer SB, et al. Conflicting results between randomized trials and observational studies on the impact of proton pump inhibitors on cardiovascular events when coadministered with dual antiplatelet therapy: systematic review. Circ Cardiovasc Qual Outcomes. 2015;8(1):47–55. https://doi.org/10.1161/CIRCOUTCOMES.114.001177.
Miles JA, Hanumanthu BK, Patel K, Chen M, Siegel RM, Kokkinidis DG. Torsemide versus furosemide and intermediate-term outcomes in patients with heart failure: an updated meta-analysis. J Cardiovasc Med (Hagerstown). 2019;20(6):379–88. https://doi.org/10.2459/JCM.0000000000000794.
Mongkhon P, Naser AY, Fanning L, Tse G, Lau WCY, Wong ICK, et al. Oral anticoagulants and risk of dementia: a systematic review and meta-analysis of observational studies and randomized controlled trials. Neurosci Biobehav Rev. 2019;96:1–9. https://doi.org/10.1016/j.neubiorev.2018.10.025.
Raheja H, Garg A, Goel S, Banerjee K, Hollander G, Shani J, et al. Comparison of single versus dual antiplatelet therapy after TAVR: a systematic review and meta-analysis. Catheter Cardiovasc Interv. 2018;92(4):783–91. https://doi.org/10.1002/ccd.27582.
Ramjan R, Calmy A, Vitoria M, Mills EJ, Hill A, Cooke G, et al. Systematic review and meta-analysis: patient and programme impact of fixed-dose combination antiretroviral therapy. Trop Med Int Health. 2014;19(5):501–13. https://doi.org/10.1111/tmi.12297.
Shi M, Zheng H, Nie B, Gong W, Cui X. Statin use and risk of liver cancer: an update meta-analysis. BMJ Open. 2014;4(9):e005399. https://doi.org/10.1136/bmjopen-2014-005399.
Teo J, Liew Y, Lee W, Kwa AL. Prolonged infusion versus intermittent boluses of β-lactam antibiotics for treatment of acute infections: a meta-analysis. Int J Antimicrob Agents. 2014;43(5):403–11. https://doi.org/10.1016/j.ijantimicag.2014.01.027.
Vinceti M, Filippini T, Del Giovane C, Dennert G, Zwahlen M, Brinkman M, et al. Selenium for preventing cancer. Cochrane Database Syst Rev. 2018;1:CD005195. https://doi.org/10.1002/14651858.CD005195.pub4.
Wang CH, Li CH, Hsieh R, Fan CY, Hsu TC, Chang WC, et al. Proton pump inhibitors therapy and the risk of pneumonia: a systematic review and meta-analysis of randomized controlled trials and observational studies. Expert Opin Drug Saf. 2019;18(3):163–72. https://doi.org/10.1080/14740338.2019.1577820.
Wat R, Mammi M, Paredes J, Haines J, Alasmari M, Liew A, et al. The effectiveness of antiepileptic medications as prophylaxis of early seizure in patients with traumatic brain injury compared with placebo or no treatment: a systematic review and meta-analysis. World Neurosurg. 2019;122:433–40. https://doi.org/10.1016/j.wneu.2018.11.076.
Wong AYS, Chan EW, Anand S, Worsley AJ, Wong ICK. Managing cardiovascular risk of macrolides: systematic review and meta-analysis. Drug Saf. 2017;40(8):663–77. https://doi.org/10.1007/s40264-017-0533-2.
Yang J, Yu S, Yang Z, Yan Y, Chen Y, Zeng H, et al. Efficacy and safety of supportive care biosimilars among cancer patients: a systematic review and meta-analysis. BioDrugs. 2019;33(4):373–89. https://doi.org/10.1007/s40259-019-00356-3.
Yu W, Wang B, Zhan B, Li Q, Li Y, Zhu Z, et al. Statin therapy improved long-term prognosis in patients with major non-cardiac vascular surgeries: a systematic review and meta-analysis. Vascul Pharmacol. 2018;109:1–16. https://doi.org/10.1016/j.vph.2018.06.015.
Zhang C, Gu ZC, Ding Z, Shen L, Pan MM, Zheng YL, et al. Decreased risk of renal impairment in atrial fibrillation patients receiving non-vitamin K antagonist oral anticoagulants: a pooled analysis of randomized controlled trials and real-world studies. Thromb Res. 2019;174:16–23. https://doi.org/10.1016/j.thromres.2018.12.010.
Zhao Y, Peng H, Li X, Qin Y, Cao F, Peng D, et al. Dual antiplatelet therapy after coronary artery bypass surgery: is there an increase in bleeding risk? A meta-analysis. Interact Cardiovasc Thorac Surg. 2018;26(4):573–82. https://doi.org/10.1093/icvts/ivx374.
McCord KA, Ewald H, Agarwal A. Treatment effects in randomised trials using routinely collected data for outcome assessment versus traditional trials: meta-research study. BMJ. 2021;372:n450. https://doi.org/10.1136/bmj.n450.
Thompson D. Replication of randomized, controlled trials using real world data: what could go wrong? Value Health. 2021;24(1):112–5. https://doi.org/10.1016/j.jval.2020.09.015.
Crown W, Bierer B. Real-world evidence: understanding sources of variability through empirical analysis. Value Health. 2021 Jan;24(1):116–7. https://doi.org/10.1016/j.jval.2020.11.003.
FDA Prediction Project – RCT DUPLICATE [Internet]. Available from: www.rctduplicate.org.
Franklin J, Patorno E, Desai R, et al. Emulating randomized clinical trials with nonrandomized real-world evidence studies: first results from the RCT DUPLICATE initiative. Circulation. 2021;143(10):1002–13. https://doi.org/10.1161/CIRCULATIONAHA.120.051718.
Evaluating RWE from observational studies in regulatory decision-making: lessons learned from trial replication analyses. Trial Emulation Studies and OPERAND. Duke-Margolis Center for Health Policy Virtual Meeting, February 16-17, 2021.
Yale School of Medicine. Center for Outcomes Research and Evaluation (CORE). Current Projects. https://medicine.yale.edu/core/current_projects/cersi/research/.
No funding was received for this study.
Yoon Duk Hong, John Guerino, Marc L. Berger, William Crown, Richard J. Willke, Wim G. Goettsch, and Lucinda S. Orsini have no conflicts of interest to report. Jeroen P. Jansen is a part-time employee of Precision Medicine Group (PMG) (PRECISIONheor) and holds stock options in Precision Medicine Group. PMG provides contracted research services to the pharmaceutical and biotech industries. C. Daniel Mullins has received consulting fees from AstraZeneca, Bayer, Incyte, Merck, Pfizer, and Takeda and has received support from Bayer and Pfizer for attending meetings and/or travel.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure S1. Relative effect measures from observational studies versus corresponding relative effect measures from randomized controlled trials by outcome type. Figure S2. Relative effect measures from observational studies versus corresponding relative effect measures from randomized controlled trials by therapeutic area.
Hong, Y.D., Jansen, J.P., Guerino, J. et al. Comparative effectiveness and safety of pharmaceuticals assessed in observational studies compared with randomized controlled trials. BMC Med 19, 307 (2021). https://doi.org/10.1186/s12916-021-02176-1