Interpretation and Impact of Real-World Clinical Data for the Practicing Clinician


Real-world studies have become increasingly important in providing evidence of treatment effectiveness in clinical practice. While randomized clinical trials (RCTs) are the “gold standard” for evaluating the safety and efficacy of new therapeutic agents, their necessarily strict inclusion and exclusion criteria mean that trial populations are often not representative of the patient populations encountered in clinical practice. Real-world studies may use information from electronic health and claims databases, which provide large datasets from diverse patient populations, and/or may be observational, collecting prospective or retrospective data over a long period of time. They can therefore provide information on the long-term safety (particularly pertaining to rare events) and effectiveness of drugs in large heterogeneous populations, as well as information on utilization patterns and health and economic outcomes. This review focuses on how evidence from real-world studies can be utilized to complement data from RCTs to gain a more complete picture of the advantages and disadvantages of medications as they are used in practice.

Funding: Sanofi US, Inc.


Real-world studies seek to provide a line of complementary evidence to that provided by randomized controlled trials (RCTs). While RCTs provide evidence of efficacy, real-world studies produce evidence of therapeutic effectiveness in real-world practice settings [1]. The RCT is a well-established methodology for gathering robust evidence of the safety and efficacy of medical interventions [2]. In RCTs, the investigators are able to reduce bias and confounding by utilizing randomization and strict patient inclusion and exclusion criteria. This internal validity is often achieved at the expense of external validity (generalizability), since the populations enrolled in RCTs may differ significantly from those found in everyday practice. Real-world evidence has emerged as an important means to understanding the utility of medical interventions in a broader, more representative patient population. The strict exclusion criteria for RCTs may exclude the majority of patients seen in routine care; therefore, real-world evidence can give vital insight into treatment effects in more diverse clinical settings, where many patients have multiple comorbidities [3, 4].

Data from real-world studies can provide evidence that informs payers, clinicians, and patients on how an intervention performs outside the narrow confines of the research setting, providing essential information on the long-term safety and effectiveness of a drug in large populations, its economic performance in a naturalistic setting, and its comparative effectiveness relative to other treatments. With improvements in the rigor of methodology being applied to real-world studies, along with the increasing availability of higher-quality, larger datasets, the importance of findings from these studies is growing. The value of real-world data has been recognized by regulatory bodies such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) [5, 6]. These bodies acknowledge the importance of real-world data in supporting marketed products and their potential role in supporting life cycle product development/monitoring and decision-making for regulation and assessment [5, 6]. A survey of the pharmaceutical and medical devices industry in the European Union and the USA determined that 27% of real-world studies are conducted by industry, performed “on request” by regulatory authorities [7]. Real-world data form a key component of healthcare technology assessments used by national and regional bodies, such as the National Institute for Health and Care Excellence (NICE) in the UK and Germany’s Institute for Quality and Efficiency in Health Care (IQWiG), to guide clinical decision-making [8]. The data from real-world studies are also increasingly utilized by payers. In a US survey, the majority of payers who responded reported using real-world data to guide decision-making, in particular on utilization management and formulary placement [9].
Such data usage may have profound effects; for example, the reversal of an IQWiG decision that analogue basal insulins showed no benefit over human insulin restored market access and premium pricing for insulin glargine in Germany [10]. The increase in the number of real-world studies has resulted in more clinical evidence being available to guide treatment decisions, and can allow assessment of the impact of off-label use. In this paper, we review the impact of real-world clinical data and how their interpretation can assist clinicians in assessing clinical evidence appropriately for their own decision-making.

The Association of the British Pharmaceutical Industry defines real-world data as “data that are collected outside the controlled constraints of conventional RCTs to evaluate what is happening in normal clinical practice” [11]. Real-world studies can be either retrospective or prospective, and when they include prospective randomization, they are called “pragmatic trial design” studies (Table 1) [12]. The clearest distinction between RCTs and real-world studies is based on (a) the setting in which the research is conducted and (b) where evidence is generated [2]. RCTs are typically conducted with precisely defined patient populations, and patient selection is often contingent on meeting extensive eligibility (i.e., inclusion and exclusion) criteria. Participants in such trials (and the data they provide) are subject to rigorous quality standards, with intensive monitoring, the use of detailed case-report forms (to capture additional information that may not be present in ordinary medical records), and carefully managed contact with research personnel (who are responsible for ensuring protocol adherence) being commonplace. Real-world evidence, in contrast, is often derived from multiple sources that lie outside of the typical clinical research setting: these can include offices that are not generally involved in research, electronic health records (EHRs), patient registries, and administrative claims databases (sometimes obtained from integrated healthcare delivery systems). Despite these differences, real-world evidence can also be used retrospectively as external control arms for RCTs, to provide comparative efficacy data [13]. This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors.

Table 1 Comparison of randomized controlled trials and real-world studies

Large “pragmatic trials” are an increasingly common real-world data source. Such trials are designed to show the real-world effectiveness of an intervention in a broad patient group [14]. They incorporate a prospective, randomized design and collect data on a wide range of health outcomes in a diverse and heterogeneous population (i.e., they are consistent with clinical practice) [15,16,17]. Pragmatic trials are conducted in routine practice settings [1], include a population that is relevant for the intervention and a control group treated with an acceptable standard of care (or placebo), and describe outcomes that are meaningful to the population in question [14]. Aspects of care other than the intervention being studied are intentionally not controlled, with clinicians applying clinical discretion in their choice of other medications [11]. Pragmatic trials may focus on a specific type of patient or treatment, and study coordinators may select patients, clinicians, and clinical practices and settings that will maximize external validity (i.e., the applicability of the results to usual practice) [16]. As such, pragmatic trials are able to provide data on a range of clinically relevant real-world considerations, including different treatments, patient- and clinician-friendly titration and treatment algorithms, and cost-effectiveness, which in turn may help address practice- and policy-relevant issues. These studies can focus specifically on the outcomes that are most important to patients, and can take into account the direct impact of real-world treatment adherence and compliance on the effectiveness of a medication or treatment regimen.

Understanding the Strengths and Weaknesses of Real-World Studies

Compared with RCT data, real-world evidence has the potential to more efficiently provide answers that inform outcomes research, quality improvement, pharmacovigilance, and patient care [2]. Because they are performed in settings and patient populations similar to those encountered in everyday clinical practice, real-world studies have broader generalizability. Specifically, RCTs provide evidence of efficacy, while real-world studies give evidence of effectiveness in real-world practice settings [1]. Additionally, observational, retrospective real-world studies are generally more economical and time efficient than RCTs [18], as they use existing data sources such as registries, claims data, and EHRs to identify study outcomes [16].

Key to the utility of real-world studies is their ability to complement data from RCTs in order to fill current gaps in clinical knowledge. Specific trial criteria may cause RCTs to exclude a particular group of patients commonly seen in clinical practice; for example, RCTs frequently exclude older adults. In the case of diabetes, while many RCTs focus primarily on the safety and glucose-lowering efficacy of antihyperglycemia drugs [19], it is desirable to have real-world effectiveness outcomes data in patients with type 2 diabetes (T2D) that take into account issues such as adherence [20, 21] and the frequency of side effects in less controlled settings (which may affect outcomes). Such studies suggest that the difference between glycated hemoglobin reduction in RCTs and in practice may be related to adherence, and point to the potential value of real-world studies assessing clinical-practice effectiveness. In addition, real-world evidence can address important issues such as the impact of treatment on microvascular disease and cardiovascular (CV) events [22] and enable the examination of outcomes that are difficult to assess in RCTs, such as the utilization of healthcare resources by patients receiving different therapies. In the DELIVER-3 study, for example, insulin glargine 300 U/ml (Gla-300) was associated with reduced resource utilization compared with other basal insulins [23]. The exploration of patient-driven insulin titration protocols demonstrates the utility of pragmatic trial design, highlighting the practical needs patients face in everyday life rather than the needs of a highly controlled, well-motivated RCT population [24,25,26].

Real-world studies have a number of limitations. Retrospective and non-randomized real-world studies are subject to bias and confounding factors, problems that are controlled for in randomized blinded trials [27]. Electronic data may be inconsistently collected, with missing data elements that can eventually result in reduced statistical validity and a decreased ability to answer the research question [16]. The types of bias seen in real-world trials include selection bias (e.g., therapies may be prescribed differently depending on disease severity or other patient characteristics), information bias (misclassification of data), recall bias (caused by selective recall of impactful events by patients/caregivers), and detection bias (where an event is more likely to be captured in one treatment group than another) [28]. While systematic reviews have found little evidence to suggest that treatment effects or adverse events in well-designed observational studies are either overestimated or qualitatively different from those obtained in RCTs, each real-world study must be examined individually for sources of bias and confounding [29,30,31]. Indeed, because of confounding and bias, caution should be exercised when using data from real-world studies (particularly retrospective studies) to influence change in clinical practice [18]. Techniques such as propensity score matching (PSM) can be used to reduce selection bias by matching the characteristics of patients entering different arms of studies (see below) [32].

Properly designed, prospective, interventional pragmatic trials have the potential to overcome many of the limitations of observational and retrospective real-world studies. However, a key limitation of pragmatic trials is that they often do not place constraints on patients and clinicians, which may result in inconsistent or missing data in source documents such as EHRs. This, together with heterogeneity in clinical practice and associated documentation, may reduce the ability of the study to answer the research question [16]. In addition, heterogeneity of clinical practice and patient populations reduces the translatability of pragmatic trial data to different settings and locations [33]. There are also numerous challenges inherent in pragmatic trial design, illustrated by the trade-off between blinding of results to reduce bias and the desire to create a fully pragmatic design in which the intervention is delivered as in normal practice [14]. Pragmatic trials, in producing evidence of effectiveness in real-world practice settings, may trade aspects of internal validity for higher external validity, ultimately making them more generalizable than RCTs [1].

Learning from Real-World Findings: Examples

Retrospective Observational Studies

A real-world study that had a definite effect on prescribing practice concerned a live attenuated nasal spray influenza vaccine in the USA. On the basis of results from a number of RCTs showing the superior efficacy of this vaccine over the inactivated influenza vaccine, the Advisory Committee on Immunization Practices (ACIP) issued guidance for its use in children [34]. However, because real-world observational data showed worse performance than in the RCTs and near-zero effectiveness against some pandemic influenza strains, the ACIP subsequently changed its guidance and recommended against the use of the live attenuated vaccine [34]. Retrospective, observational real-world data can confirm or refute the findings of RCTs. For example, the DELIVER-2 and DELIVER-3 studies were conducted in a broad population of patients with T2D on basal insulin, including at-risk older adults, and showed that those who switched to Gla-300 experienced significantly fewer hypoglycemia events—including events associated with hospitalization or emergency room visits—than those who switched to other basal insulins, without compromising blood glucose control [23, 35, 36], corroborating the results obtained in the EDITION RCTs [37,38,39].

Prospective Observational Studies

The importance of prospective observational studies is clearly illustrated by the Framingham Heart Study, initiated almost 70 years ago [40]. This study has provided substantial insight into the epidemiology of cardiovascular disease (CVD) and its risk factors, and has significantly influenced clinical thinking and practice. In the case of diabetes, prospective observational studies have provided key evidence that has guided the development of treatment guidelines worldwide. Ten years of long-term follow-up after the completion of the UK Prospective Diabetes Study (UKPDS) confirmed and extended data on the importance of glycemic control in preventing the development of the microvascular and macrovascular complications of T2D in a real-world population [41]. The Epidemiology of Diabetes Interventions and Complications (EDIC) prospective observational follow-up study of the Diabetes Control and Complications Trial (DCCT) has described the long-term effects of prior intensive therapy compared with conventional insulin therapy on the development and progression of microvascular complications and CVD in type 1 diabetes [42].

The prospective observational ReFLeCT study is examining rates of hypoglycemia, glycemic control, patient-reported outcomes, and quality of life under normal clinical practice conditions in approximately 1200 European patients with type 1 or type 2 diabetes who are prescribed insulin degludec. An analysis of data from the Cardiovascular Risk Evaluation in people with type 2 Diabetes on Insulin Therapy (CREDIT) study found that improved glycemic control in patients beginning insulin resulted in significant reductions in CV events such as stroke and CV death; no differences were observed between different insulin regimens, suggesting that good glycemic control was the most important factor [43].

Pragmatic Prospective Randomized Trials

A number of pragmatic randomized trials have been completed or are underway to investigate a range of real-world diabetes patient-care issues, including the long-term effectiveness of major antihyperglycemia medications [44], glucose monitoring [45, 46], insulin initiation [47], and support strategies [48]. Since 2008, the FDA and subsequently the EMA have required sponsors of new antihyperglycemia therapies to evaluate their CV safety. This has resulted in a number of large-scale CV outcome trials including pragmatic trials such as the Trial Evaluating Cardiovascular Outcomes with Sitagliptin (TECOS) [49] and the Exenatide Study of Cardiovascular Event Lowering (EXSCEL) trial [50].

Real-World Studies: Addressing Generalizability

RCT exclusion criteria may rule out a significant proportion of real-world patients. As previously mentioned, patients excluded from RCTs tend to be older, to have more medical comorbidities, and to have more challenging social and demographic circumstances than those included in these trials. Real-world studies have the potential to assess whether results seen in RCTs are generalizable to real-world patient populations. The EMPA-REG OUTCOME RCT enrolled patients with T2D and established CVD and, for those treated with the sodium-glucose co-transporter-2 (SGLT2) inhibitor empagliflozin vs placebo, reported a significant reduction in the primary composite endpoint of three-point major adverse cardiac events (MACE) (CV death, non-fatal myocardial infarction, and non-fatal stroke), as well as in the individual endpoints of CV death, all-cause death, and hospitalization for heart failure [51]. The CANVAS RCT investigating the SGLT2 inhibitor canagliflozin, which included a lower percentage of patients at high CV risk than EMPA-REG, also reported a significant reduction in the primary composite three-point MACE endpoint and the individual endpoint of hospitalization for heart failure, but did not show a significant benefit for CV mortality or all-cause mortality alone [52]. Evidence from a further real-world study may support and expand upon these RCT data. The CVD-REAL study in over 300,000 patients with T2D, both with (13% of the total) and without established CVD, showed a consistent reduction in hospitalization for heart failure, suggesting a real-world benefit of the SGLT2 inhibitor drug class as a whole in patients with T2D, irrespective of existing CV risk status or the SGLT2 inhibitor used [53].

Improving Quality of Evidence Generated from Real-World Studies

Criteria for the design of observational studies have been developed and, if followed, should result in higher-quality studies (Table 2) [28]. The STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) guidelines provide a reporting standard for observational studies [54]. An extension to the CONSORT guideline for RCTs gives specific guidance for pragmatic trials, including a reporting checklist that covers background, participants, interventions, outcomes, sample size, blinding, participant flow, and generalizability of findings [55]. Adherence to such criteria should improve not only the quality but also the validity of real-world study data in clinical practice.

Table 2 Quality criteria for comparative observational database studies

A number of methods have also been developed to reduce the effects of confounding in observational studies, including PSM. This method aims to make it possible to compare outcomes of two treatment or management options in similar patients [32]. It does this by reducing the effects of multiple covariates to a single score, the propensity score. Comparison of outcomes across treatment groups of pairs or pools of propensity-score-matched patients can reduce issues such as selection bias [32]. Although PSM is a powerful and widely used tool, there are limits to the degree to which propensity score adjustments can control for bias and confounding variables. An example of this can be seen in RCT versus real-world data for mortality in patients with severe heart failure treated with the aldosterone antagonist spironolactone [56]. While RCT data consistently showed a reduction in mortality, in a real-world study using PSM, spironolactone appeared to be associated with a substantially increased risk of death [57]. The authors of that study suggest that concluding that spironolactone is dangerous on the basis of the real-world data is not legitimate, because of unknown bias and confounding by indication (i.e., confounding due to factors not included in the propensity score, or not formally measured at all) [57]. This illustrates a major limitation of PSM: it can only include variables that are present in the available data [58]. A further major limitation is that the need to group or pair data in PSM narrows the patient population analyzed, limiting generalizability and thereby reducing one of the main values of real-world studies.
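To make the matching step concrete, the following sketch (a minimal hypothetical example, not drawn from any of the cited studies) performs greedy 1:1 nearest-neighbor matching without replacement on precomputed propensity scores; in practice, the scores themselves would first be estimated, typically by logistic regression of treatment assignment on the baseline covariates.

```python
import numpy as np

def nearest_neighbor_match(ps_treated, ps_control, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    Returns (treated_index, control_index) pairs whose score
    difference falls within the caliper; each control patient
    is matched at most once (matching without replacement).
    """
    available = set(range(len(ps_control)))
    pairs = []
    for i, p in enumerate(ps_treated):
        if not available:
            break
        # closest remaining control patient by propensity score
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        if abs(ps_control[j] - p) <= caliper:
            pairs.append((i, j))
            available.remove(j)
    return pairs

# Toy scores; in a real analysis these would come from a fitted
# model of treatment assignment on baseline covariates.
treated = np.array([0.30, 0.55, 0.80])
control = np.array([0.10, 0.32, 0.52, 0.78, 0.95])
print(nearest_neighbor_match(treated, control))
# → [(0, 1), (1, 2), (2, 3)]
```

Outcomes would then be compared only within the matched pairs; patients with no acceptable match (here, controls with scores 0.10 and 0.95) are discarded, which is exactly the narrowing of the analyzed population noted above. The caliper width is an analyst's choice and is one of several tuning decisions that affect the bias/generalizability trade-off.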

“Big data” have emerged as a cutting-edge discipline that uses the capture of data from EHRs and other high-volume data sources to efficiently generate hypotheses about the relationship between processes and outcomes. This demands an increased emphasis on the integrity of the data, with “high-quality” data defined in terms of their accuracy, availability and usability, integrity, consistency, standardization, generalizability, and timeliness [59, 60]. Missing data may represent a significant challenge in some datasets. For example, the US healthcare system (unlike those of many European countries) relies on a number of different laboratory companies to supply laboratory results data, which may result in inconsistencies in the recording of results in EHRs. The technical and methodological challenges presented by these new data sources are an active area of work for key stakeholders, who are moving towards harmonization of data collected from high-volume sources, with the aim of creating a unified monitoring system and implementing methods for incorporating such data into research [2]. Artificial intelligence (AI) is the natural partner of big data, and the increased availability of these data sources is already allowing AI to improve clinical decision-making. AI techniques have used raw data gleaned from radiographical images, genetic testing, electrophysiological studies, and EHRs to improve diagnoses [6].

As a final caveat, with the increasing availability of real-world data, there may be some discrepancies in information derived from different sources. As with all data, be it from RCTs or real-world practice, consideration should be given to the limitations and generalizability of results when interpreting individual study outcomes and applying them to everyday clinical practice.


Real-world studies provide important information that can complement and even expand upon the information obtained in RCTs. RCTs set the standard for eliminating bias in determining the efficacy and safety of medications, but have significant limitations with regard to generalizability to the broad population of patients with diabetes receiving health care in diverse clinical practice settings. Because real-world studies are performed in actual clinical practice settings, they are better able to assess the effectiveness and safety of medications as they are used in real life by patients and clinicians. With improving study designs, methodological advances, and data sources with more comprehensive data elements, the potential for real-world evidence continues to expand. Moreover, the limitations of real-world studies are better understood and can be better addressed. Real-world evidence can both generate hypotheses requiring further investigation in RCTs and provide answers to some research questions that may be impractical to address through RCTs.


  1. Luce BR, Drummond M, Jönsson B, et al. EBM, HTA, and CER: clearing the confusion. Milbank Q. 2010;88:256–76.

  2. Sherman RE, Anderson SA, Dal Pan GJ, et al. Real-world evidence—what is it and what can it tell us? N Engl J Med. 2016;375:2293–7.

  3. Barnish MS, Turner S. The value of pragmatic and observational studies in health care and public health. Pragmat Obs Res. 2017;8:49–55.

  4. Fortin M, Dionne J, Pinho G, Gignac J, Almirall J, Lapointe L. Randomized controlled trials: do they have external validity for patients with multiple comorbidities? Ann Fam Med. 2006;4(2):104–8.

  5. FDA. Developing a framework for regulatory use of real-world evidence; Public Workshop. Accessed 08 Sep 2017.

  6. EMA. Update on real world evidence data collection. 10 March 2016. stamp4/4_ real_world_evidence_ema_presentation.pdf. Accessed 08 Sep 2017.

  7. Batrouni M, Comet D, Meunier JP. Real world studies, challenges, needs and trends from the industry. Value Health. 2014;17:A587–8.

  8. Goodman CS. National Information Center on Health Services Research and Health Care Technology (NICHSR): HTA 101, 2017. Accessed Feb 2018.

  9. Malone DC, Brown M, Hurwitz JT, Peters L, Graff JS. Real-world evidence: useful in the real world of US payer decision making? How? When? And what studies? Value Health. 2018;21(3):326–33.

  10. Cattell J, Groves P, Hughes B, Savas S. How can pharmacos take advantage of the real-world data opportunity in healthcare? McKinsey and Company, 2011. Accessed Feb 2018.

  11. ABPI. The vision for real world data—harnessing the opportunities in the UK. Demonstrating value with real world data 2017. Accessed 22 Jan 2018.

  12. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Clin Epidemiol. 2009;62(5):499–505.

  13. Davies J, Martinex M, Martina R, et al. Retrospective indirect comparison of alectinib phase II data vs ceritinib real-world data in ALK+ NSCLC after progression on crizotinib. Ann Oncol. 2017;28(suppl_2):ii28–ii51.

  14. Ford I, Norrie J. Pragmatic trials. N Engl J Med. 2016;375:454–63.

  15. Dang A, Vallish BN. Real world evidence: an Indian perspective. Perspect Clin Res. 2016;7:156–60.

  16. Sox HC, Lewis RJ. Pragmatic trials: practical answers to “real world” questions. JAMA. 2016;316:1205–6.

  17. Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. 2003;290:1624–32.

  18. Dubois RW. Is the real-world evidence or hypothesis: a tale of two retrospective studies. J Comp Eff Res. 2015;4(3):199–201.

  19. Studies for “Diabetes Mellitus, Type 2”. Accessed 21 Aug 2018.

  20. Carls GS, Tuttle E, Tan RD, et al. Understanding the gap between efficacy in randomized controlled trials and effectiveness in real-world use of GLP-1 RA and DPP-4 therapies in patients with type 2 diabetes. Diabetes Care. 2017;40:1469–78.

  21. Edelman SV, Polonsky WH. Type 2 diabetes in the real world: the elusive nature of glycemic control. Diabetes Care. 2017;40:1425–32.

  22. McGovern A, Hinchliffe R, Munro N, de Lusignan S. Basing approval of drugs for type 2 diabetes on real world outcomes. BMJ. 2015;351:h5829.

  23. Zhou FL, Ye F, Gupta V, et al. Older adults with type 2 diabetes (T2D) experience less hypoglycemia when switching to insulin glargine 300 U/mL (Gla-300) vs other basal insulins (DELIVER 3 study). Poster 986-P, American Diabetes Association (ADA) 77th Scientific Sessions, San Diego, CA, US, June 10, 2017.

  24. Blonde L, Merilainen M, Karwe V, Raskin P. Patient-directed titration for achieving glycaemic goals using a once-daily basal insulin analogue: an assessment of two different fasting plasma glucose targets-the TITRATE™ study. Diabetes Obes Metab. 2009;11:623–31.

  25. Gerstein HC, Yale JF, Harris SB, et al. A randomized trial of adding insulin glargine vs. avoidance of insulin in people with type 2 diabetes on either no oral glucose-lowering agents or submaximal doses of metformin and/or sulphonylureas. The Canadian INSIGHT (Implementing New Strategies with Insulin Glargine for Hyperglycaemia Treatment) Study. Diabet Med. 2006;23:736–42.

  26. Meneghini L, Koenen C, Weng W, Selam JL. The usage of a simplified self-titration dosing guideline (303 Algorithm) for insulin detemir in patients with type 2 diabetes—results of the randomized, controlled PREDICTIVE™ 303 study. Diabetes Obes Metab. 2007;9:902–13.

  27. Garrison LP Jr, Neumann PJ, Erickson P, Marshall D, Mullins CD. Using real-world data for coverage and payment decisions: the ISPOR Real-World Data Task Force report. Value Health. 2007;10:326–35.

  28. Roche N, Reddel H, Martin R, et al. Quality standards for real-world research. Focus on observational database studies of comparative effectiveness. Ann Am Thorac Soc. 2014;11(Suppl 2):S99–104.

  29. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000;342:1878–86.

  30. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342:1887–92.

  31. Golder S, Loke YK, Bland M. Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview. PLoS Med. 2011;8:e1001026.

  32. McMurry TL, Hu Y, Blackstone EH, Kozower BD. Propensity scores: methods, considerations, and applications. J Thorac Cardiovasc Surg. 2015;150:14–9.

  33. Patsopoulos NA. A pragmatic view on pragmatic trials. Dialogues Clin Neurosci. 2011;13:217–24.

  34. Frieden TR. Evidence for health decision making—beyond randomized, controlled trials. N Engl J Med. 2017;377:465–75.

  35. Ye F, Agarwal R, Kaur A, et al. Real-world assessment of patient characteristics and clinical outcomes of early users of the new insulin glargine 300U/mL. Poster 943-P, American Diabetes Association (ADA) 76th Scientific Sessions, New Orleans, LA, US. June 11, 2016.

  36. Zhou FL, Ye F, Berhanu P, et al. Real-world evidence concerning clinical and economic outcomes of switching to insulin glargine 300 units/mL vs other basal insulins in patients with type 2 diabetes using basal insulin. Diabetes Obes Metab. 2018;20(5):1293–7.

  37. Bolli GB, Riddle MC, Bergenstal RM, et al. New insulin glargine 300 U/ml compared with glargine 100 U/ml in insulin-naïve people with type 2 diabetes on oral glucose-lowering drugs: a randomized controlled trial (EDITION 3). Diabetes Obes Metab. 2015;17:386–94.

    CAS  Article  Google Scholar 

  38. Riddle MC, Bolli GB, Ziemen M, et al. New insulin glargine 300 units/mL versus glargine 100 units/mL in people with type 2 diabetes using basal and mealtime insulin: glucose control and hypoglycemia in a 6-month randomized controlled trial (EDITION 1). Diabetes Care. 2014;37:2755–62.

    CAS  Article  Google Scholar 

  39. Yki-Järvinen H, Bergenstal R, Ziemen M, et al. New insulin glargine 300 units/mL versus glargine 100 units/mL in people with type 2 diabetes using oral agents and basal insulin: glucose control and hypoglycemia in a 6-month randomized controlled trial (EDITION 2). Diabetes Care. 2014;37:3235–43.

    Article  Google Scholar 

  40. Mahmood SS, Levy D, Vasan RS, Wang TJ. The Framingham Heart Study and the epidemiology of cardiovascular disease: a historical perspective. Lancet. 2014;383:999–1008.

    Article  Google Scholar 

  41. Stratton IM, Adler AI, Neil HA, et al. Association with glycaemia with macrovascular and microvascular complications of type 2 diabetes (UKPDS 35): prospective observational study. BMJ. 2000;321:405–12.

    CAS  Article  Google Scholar 

  42. Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications Research Group. Effect of intensive therapy on the microvascular complications of type 1 diabetes mellitus. JAMA. 2002;287:2563–9.

  43. Freemantle N, Danchin N, Calvi-Gries F, Vincent M, Home PD. Relationship of glycaemic control and hypoglycaemic episodes to 4-year cardiovascular outcomes in people with type 2 diabetes starting insulin. Diabetes Obes Metab. 2016;18:152–8.

    CAS  Article  Google Scholar 

  44. Nathan DM, Buse JB, Kahn SE, et al. Rationale and design of the glycemia reduction approaches in diabetes: a comparative effectiveness study (GRADE). Diabetes Care. 2013;36:2254–61.

    CAS  Article  Google Scholar 

  45. Wermeling PR, Gorter KJ, Stellato RK, et al. Effectiveness and cost-effectiveness of 3-monthly versus 6-monthly monitoring of well-controlled type 2 diabetes patients: a pragmatic randomised controlled patient-preference equivalence trial in primary care (EFFIMODI study). Diabetes Obes Metab. 2014;16:841–9.

    CAS  Article  PubMed  Google Scholar 

  46. Young LA, Buse JB, Weaver MA, et al. Three approaches to glucose monitoring in non-insulin treated diabetes: a pragmatic randomized clinical trial protocol. BMC Health Serv Res. 2017;17:369.

    Article  Google Scholar 

  47. Furler J, O’Neal D, Speight J, et al. Supporting insulin initiation in type 2 diabetes in primary care: results of the Stepping Up pragmatic cluster randomised controlled clinical trial. BMJ. 2017;356:j783.

    Article  Google Scholar 

  48. Choudhry NK, Isaac T, Lauffenburger JC, et al. Rationale and design of the Study of a Tele-pharmacy Intervention for Chronic diseases to Improve Treatment adherence (STIC2IT): a cluster-randomized pragmatic trial. Am Heart J. 2016;180:90–7.

    Article  Google Scholar 

  49. Green JB, Bethel MA, Armstrong PW, et al. Effect of sitagliptin on cardiovascular outcomes in type 2 diabetes. N Engl J Med. 2015;373:232–42.

    CAS  Article  Google Scholar 

  50. Holman RR, Bethel MA, Mentz RJ, et al. Effects of once-weekly exenatide on cardiovascular outcomes in type 2 diabetes. N Engl J Med. 2017;377:1228–39.

    CAS  Article  Google Scholar 

  51. Zinman B, Wanner C, Lachin JM, et al. Empagliflozin, cardiovascular outcomes, and mortality in type 2 diabetes. N Engl J Med. 2015;373:2117–28.

    CAS  Article  Google Scholar 

  52. Neal B, Perkovic V, Mahaffey KW, et al. Canagliflozin and cardiovascular and renal events in type 2 diabetes. N Engl J Med. 2017;377(7):644–57.

    CAS  Article  Google Scholar 

  53. Kosiborod M, Cavender MA, Fu AZ, et al. Lower risk of heart failure and death in patients initiated on sodium-glucose cotransporter-2 inhibitors versus other glucose-lowering drugs: the CVD-REAL study (comparative effectiveness of cardiovascular outcomes in new users of sodium-glucose cotransporter-2 inhibitors). Circulation. 2017;136(3):249–59.

  54. STROBE. STROBE Statement: Strengthening the reporting of observational studies in epidemiology. Accessed 26 Sep 2018.

  55. Zwarenstein M, Treweek S, Gagnier JJ, et al. Pragmatic Trials in Healthcare (Practihc) group. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390.

    Article  Google Scholar 

  56. Pitt B, Zannad F, Remme WJ, et al. The effect of spironolactone on morbidity and mortality in patients with severe heart failure. N Engl J Med. 1999;341(10):709–17.

    CAS  Article  Google Scholar 

  57. Freemantle N, Marston L, Walters K, et al. Making inferences on treatment effects from real world data: propensity scores, confounding by indication, and other perils for the unwary in observational research. BMJ. 2013;347:f6409.

    Article  Google Scholar 

  58. Penning de Vries BBL, Groenwold RHH. Cautionary note: propensity score matching does not account for bias due to censoring. Nephrol Dial Transplant. 2017;1–3.

  59. Zhang R, Wang Y, Liu B, et al. Clinical data quality problems and countermeasure for real world study. Front Med. 2014;8:352–57.

    Article  Google Scholar 

  60. Chen JH, Asch SM. Machine learning and prediction in medicine—beyond the peak of inflated expectations. N Engl J Med. 2017;376(26):2507.

    Article  Google Scholar 

Acknowledgements
KK acknowledges support from the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care-East Midlands (CLAHRC-EM) and the NIHR Leicester Biomedical Research Centre.


Funding
Funding, including article processing charges and Open Access fee, was provided by Sanofi US, Inc. All authors had full access to all of the data in this study and take complete responsibility for the integrity of the data and accuracy of the data analysis.

Medical Writing and Editorial Assistance

The authors received writing and editorial support in the preparation of this manuscript. This support was provided by Grace Richmond, PhD, of Excerpta Medica, funded by Sanofi US, Inc.


Authorship
All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.


Disclosures
Lawrence Blonde received grant/research support and honoraria from AstraZeneca, Intarcia Therapeutics, Janssen Pharmaceuticals, Lexicon Pharmaceuticals, Merck, Novo Nordisk, and Sanofi. Kamlesh Khunti received honoraria and research support from AstraZeneca, Boehringer Ingelheim, Eli Lilly, Janssen, Merck Sharp & Dohme, Novartis, Novo Nordisk, Roche, and Sanofi. Stewart Harris received honoraria and grants/research support from Abbott, AstraZeneca, Boehringer Ingelheim, Eli Lilly, Intarcia, Janssen, Merck, Novo Nordisk, and Sanofi, honoraria and consulting fees from Abbott, AstraZeneca, Boehringer Ingelheim/Lilly, Janssen, Novo Nordisk, and Sanofi, and honoraria from Medtronic and Merck. Casey Meizinger has nothing to disclose. Neil Skolnik served on advisory boards for AstraZeneca, Boehringer Ingelheim, Janssen Pharmaceuticals, Intarcia, Lilly, Sanofi, and Teva, has been a speaker for AstraZeneca and Boehringer Ingelheim, and received research support from AstraZeneca, Boehringer Ingelheim, and Sanofi.

Compliance with Ethics Guidelines

This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors.

Corresponding author

Correspondence to Lawrence Blonde.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License, which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Blonde, L., Khunti, K., Harris, S.B. et al. Interpretation and Impact of Real-World Clinical Data for the Practicing Clinician. Adv Ther 35, 1763–1774 (2018).


Keywords

  • Clinical practice
  • Real-world data
  • Real-world study