Introduction

The translation of new research evidence into clinical practice can take up to 17 years [1]. A commonly used strategy that aims to bridge this divide is clinical practice guidelines [2]. Guidelines are evidence synthesis-based recommendations developed to support beneficial clinical practices, reduce unwanted variations and improve patient care outcomes [3, 4]. They are an important source of information for clinicians, designed to help them assimilate, evaluate and adopt evidence into their clinical practice [5]. However, their uptake remains low and depends on a range of factors relating to the guideline itself (i.e. its complexity, applicability and clarity), healthcare professionals, patients and healthcare organisations [6,7,8]. Correspondingly, multifaceted interventions targeting these various factors have been shown to be most effective in promoting guideline uptake. While health professions education is an essential part of these multifaceted interventions, it is also still commonly employed as the sole guideline dissemination strategy [9]. Evidence to date has mostly focused on traditional learning, which was shown to lead to only small improvements in desired clinical practices, with more interactive and engaging interventions showing greater effectiveness [10, 11].

Traditional learning, especially the face-to-face type, can be time-consuming, costly and inaccessible [9, 12]. Printed self-learning resources, on the other hand, are easily overlooked by busy healthcare professionals [13]. With the relentless growth in research evidence and healthcare complexity, traditional education seems unsustainable in the context of guideline dissemination and training. Digital education, increasingly employed in continuing medical education and professional development, may offer a more flexible, affordable and accessible alternative to traditional learning as it transcends geographical and time constraints. The use of diverse media and delivery devices allows for engaging and interactive learning resources which can be easily updated in line with new evidence and customised to individual healthcare professionals’ learning needs [14, 15]. By freeing up educators’ and healthcare professionals’ time, digital education may prove to be more cost-effective than traditional learning [16, 17]. Past reviews on guideline dissemination and adoption evaluated the effectiveness of traditional education, decision support systems, multifaceted interventions or digital interventions as a whole [9, 11, 18,19,20,21,22]. However, it remains unclear how effective digital education is in promoting guideline adoption. To address this gap, we undertook a systematic review to investigate the effectiveness and economic impact of digital education for guideline adoption among healthcare professionals.

Methods

We followed Cochrane recommendations for the conduct of systematic reviews and reported our findings according to the PRISMA guidance [23, 24].

Study selection

We included RCTs and cluster RCTs that compared digital education to usual education or other forms of digital education to train pre- or post-registration healthcare professionals on clinical practice guidelines. We included healthcare professionals with qualifications found in the Health Field of Education and Training (091) of the International Standard Classification of Education (ISCED-F). We excluded studies of students and/or practitioners of traditional, alternative and complementary medicine. Digital education interventions could be delivered as the only mode of the education intervention or blended with traditional learning (i.e. blended learning). We included offline and online computer-based digital education, digital game-based learning (DGBL), massive open online courses (MOOCs), virtual reality environments (VRE), virtual patient simulations (VPS) and mobile learning (mLearning) [25]. In studies comparing diverse forms of digital education, we differentiated the interventions based on the level of interactivity; interventions giving learners greater control over the learning environment were considered more interactive. We excluded studies on psychomotor skills trainers (PST) as this form of digital education may not be readily available to healthcare professionals. We also excluded studies on interventions that lacked explicit reference to a clinical practice guideline, had an optional digital education component, or focused on digital tools for patient management or on computerised decision support systems. Computerised decision support systems are software that provide clinicians with decision support in the form of evidence-based, patient-specific recommendations at the point of care [26]. We excluded them because their underlying principle differs from that of digital education: they are available at the point of care, provide patient-specific recommendations and are integrated with patient data. No restrictions on outcomes were applied.

We extracted data on the following primary outcomes:

  • Learners’ knowledge post-intervention. Knowledge is defined as learners’ factual or conceptual understanding, measured as the change between pre- and post-test scores.

  • Learners’ skills post-intervention. Skills are defined as learners’ ability to demonstrate a procedure or technique in an educational setting.

  • Learners’ attitudes post-intervention towards new competencies, clinical practice or patients (e.g. recognition of moral and ethical responsibilities towards patients). Attitude is defined as the tendency to respond positively or negatively towards the intervention.

  • Learners’ satisfaction post-intervention with the learning intervention (e.g. retention rates, dropout rates, survey satisfaction scores). This can be defined as the level of approval when comparing the perceived performance of digital education with one’s expectations.

  • Change in healthcare professional’s practice or behaviour.

We also extracted data on the following secondary outcomes:

  • Cost and cost-effectiveness of the intervention

  • Patient-related outcomes (e.g. heaviness of smoking index, smoking cessation rates)

  • Adverse/unintended effects of the intervention

Data sources, collection, analysis and risk of bias assessment

This review is part of a global evidence synthesis initiative on digital health professions education for which a wider search strategy was developed (see Additional file 1). The following databases were searched from January 1990 to September 2018: MEDLINE (Ovid), Embase (Ovid), Cochrane Central Register of Controlled Trials (CENTRAL) (Cochrane Library), PsycINFO (EBSCO), Educational Resource Information Centre (ERIC) (EBSCO), CINAHL (EBSCO) and Web of Science Core Collection (Thomson Reuters). We used 1990 as the starting year because, before then, the use of computers was largely restricted to very basic functions. No language or publication restrictions were applied. We searched the reference lists of all included studies and relevant systematic reviews. We also searched the International Clinical Trials Registry Platform Search Portal and the Current Controlled Trials metaRegister of Controlled Trials to locate unpublished or ongoing trials. Search results from the different sources were combined in a single library, and duplicate records were removed. Two reviewers independently screened the titles and abstracts identified by the searches. Full texts of potentially relevant articles were obtained and assessed for inclusion, again independently by two reviewers. Where data was missing or incomplete, study authors were contacted for additional information. Any disagreements were settled through discussion between the two reviewers, with a third reviewer acting as an arbiter.

Two reviewers extracted the data independently using a standardised data extraction form which was piloted and amended based on feedback. Data was extracted on study design, participants’ demographics, type of digital education, intervention content and outcomes. We contacted study authors in the event of any ambiguous or missing information. Disagreements between reviewers were resolved by discussion. A third reviewer acted as an arbiter in cases where disagreements persisted.

The methodological quality of included RCTs was independently assessed by two reviewers using the Cochrane Risk of Bias Tool, which covers the following domains: (1) random sequence generation, (2) allocation concealment, (3) blinding of participants to the intervention, (4) blinding of outcome assessment, (5) attrition, (6) selective reporting and (7) other sources of bias (e.g. baseline imbalances) [23]. The following five additional criteria, recommended by the Cochrane Handbook for Systematic Reviews of Interventions, were applied to cluster RCTs: (1) recruitment bias, which can occur when individuals are recruited to the trial after the clusters have been randomised; (2) baseline imbalance; (3) loss of clusters; (4) incorrect analysis; and (5) comparability with individually randomised trials, to ensure that intervention effects are not overestimated (e.g. due to a ‘herd effect’) [23].
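For context, ‘incorrect analysis’ typically refers to cluster trials analysed as if participants had been individually randomised. The standard correction described in the Cochrane Handbook deflates the effective sample size by the design effect; a sketch of the usual formula is shown below, with symbols introduced here purely for illustration:

```latex
% Design effect for cluster randomisation, where m is the average cluster
% size and \rho the intracluster correlation coefficient; the effective
% sample size n_eff is the nominal sample size n divided by DE.
\mathrm{DE} = 1 + (m - 1)\rho, \qquad n_{\mathrm{eff}} = \frac{n}{\mathrm{DE}}
```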

Data synthesis and analysis

We included post-intervention outcome data in our review for the sake of consistency, as this was the most commonly reported form of findings in the included studies. We also reported separately the change score data from the included studies. For continuous outcomes, we reported standardised mean differences (SMDs) and associated 95% CIs across studies. The standardised mean difference was used as a summary statistic because the outcomes in the included studies were measured in different ways. We were unable to identify a clinically meaningful effect size from the literature specifically for digital education interventions. Therefore, in line with other evidence syntheses of educational research, we interpreted SMDs using Cohen’s rule of thumb: < 0.2 no effect, 0.2–0.5 small effect size, 0.5–0.8 medium effect size and > 0.8 large effect size [23, 27, 28]. For dichotomous outcomes, we summarised relative risks and associated 95% CIs across studies. Subgroup analyses were not feasible due to the limited number of studies within the respective comparisons and outcomes.

We employed a random-effects model in our meta-analysis. The I2 statistic was used to evaluate heterogeneity, with I2 < 25%, 25–75% and > 75% representing low, moderate and high degrees of inconsistency, respectively [23]. The meta-analysis was performed using Review Manager 5.3 (The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark) [23]. We reported the findings in line with the PRISMA reporting standards [24].

We assessed and reported the quality of the evidence for each outcome using the following GRADE criteria: risk of bias, inconsistency, imprecision, indirectness and publication bias. Two authors independently assessed the quality of the evidence, rating the body of evidence for each outcome as ‘high’, ‘moderate’ or ‘low’. We prepared ‘Summary of findings’ tables for each comparison to present the findings and the quality of the evidence (Additional file 1) [29].

For some outcomes (e.g. skills, behaviour), we were unable to pool the data statistically using meta-analysis due to high heterogeneity in the types of participants, interventions, comparisons, outcomes, outcome measures and measurement instruments. We presented those findings as a narrative synthesis, organising the studies by comparison and outcome. We transformed data expressed in different ways into a common statistical format and tabulated the results to identify patterns across the included studies, focusing on the direction as well as the size of effects where possible. In addition, we displayed all the available behaviour change outcome data in a forest plot without a meta-analysis as a visual summary (see Additional file 1). In some studies, behaviour was measured in the same participants using different approaches and tools. Instead of selecting one outcome or producing a single estimate per study, we present all behaviour change outcome data from the included studies, as these data focus on different aspects of clinicians’ behaviour and practice [23].
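For readers who want to see the mechanics of the pooling, the sketch below is an illustrative Python reconstruction of DerSimonian-Laird random-effects pooling and the I2 calculation, the standard method implemented in Review Manager. It is not the software used in the review; the function name is ours, and the example inputs are hypothetical values loosely modelled on the three knowledge studies in Fig. 3.

```python
import math

def random_effects_pool(effects, ses, z=1.96):
    """Pool per-study effect estimates (e.g. SMDs) and their standard errors
    using the DerSimonian-Laird random-effects method. Returns the pooled
    estimate, its ~95% CI and the I^2 heterogeneity statistic (in %)."""
    k = len(effects)
    w = [1.0 / se ** 2 for se in ses]                              # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)    # fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                             # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]                  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    return pooled, (pooled - z * se_pooled, pooled + z * se_pooled), i2

# Hypothetical per-study SMDs and standard errors (not the actual extracted data):
smd, ci, i2 = random_effects_pool([1.52, 0.55, 0.45], [0.23, 0.25, 0.28])
print(f"pooled SMD = {smd:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```

Because the random-effects weights include the between-study variance τ², heterogeneous sets of studies (high I2) yield wider confidence intervals than a fixed-effect analysis would, which is why pooled estimates from heterogeneous comparisons carry relatively wide CIs.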

Results

Our searches identified a total of 44,054 citations. After screening titles and abstracts, we retrieved the full text of 4072 studies focusing on different digital education interventions for health professions education. We identified 40 potentially eligible studies, of which 21 were excluded for not meeting our inclusion criteria. Seventeen studies from 19 reports, comprising 14 individually randomised studies and three cluster randomised studies with 2382 participants in total, were included (Fig. 1, Table 1) [16, 30,31,32,33,34,35,36,37,38,39,40,41,42,43, 46, 47]. One of the included cluster RCTs was described in three reports [43]. All 17 included studies were published in English. Three studies focused on students (nursing, medical and emergency medicine students), while the remaining studies targeted post-registration healthcare professionals, mostly primary care physicians [30, 41, 47]. Except for one study from an upper middle-income country [36], all studies were from high-income countries, with ten studies from the USA. Sample sizes ranged from 10 to 1054, with one third of studies having fewer than 50 participants. Ten studies reported that the intervention was delivered as part of a continuing medical education programme [16, 31, 33,34,35,36, 40, 42, 43, 46].

Fig. 1 Study flow diagram

Table 1 Characteristics of included studies

Eight studies compared digital education to traditional learning (i.e. lectures, paper-based self-study materials and small-group tutorials) [34,35,36,37,38,39,40,41], four studies compared digital education to no intervention [30,31,32,33] and five studies compared more interactive to less interactive forms of digital education [16, 42, 43, 46, 47]. Digital interventions mostly consisted of educational materials with low or no interactivity. Nine studies focused on online modules (i.e. sequenced collections of subject-related multimedia materials) with or without feedback and discussions [16, 31, 33,34,35, 40, 42, 43, 47]; two studies each on online simulation [30, 39], spaced education in the form of regular email-delivered surveys [32, 46] and PowerPoint presentations [36, 38]; and one study each on a computer-based offline video [41] and a CD-ROM-delivered intervention [37]. The educational content of the included studies spanned resuscitation, paediatric basic life support, diabetes, preventive care, labour management, myocardial infarction management, hypertension management, physical therapy, prostate cancer screening, dementia, urinary tract infection and heart failure. Four studies focused on more than one guideline (two, four, five and 1100 guidelines, respectively) [31, 33, 37, 42]. Included studies measured knowledge, skills, satisfaction, behaviour and patient outcomes; none reported attitudes, cost-related outcomes or adverse/unintended effects. All studies measured outcomes immediately after the intervention. In addition, six studies measured long-term knowledge retention 1 to 9 months post-intervention [32,33,34,35, 38, 47] and two studies measured long-term behaviour change [32, 33].

Half of the studies had an unclear or high risk of bias for random sequence generation, and more than half had an unclear risk for allocation concealment due to missing information, resulting in an overall unclear risk of selection bias (see Fig. 2). Outcome assessment was mostly performed by non-blinded assessors and without the use of validated instruments. For cluster RCTs, the risk of bias was overall low. The quality of evidence ranged from low to moderate and was downgraded because of risk of bias, imprecision and/or inconsistency (see Additional file 1).

Fig. 2 Risk of bias summary

Digital education vs no intervention

Four studies compared the effects of digital education for clinical practice guideline adoption to no intervention (Table 1). Three of these four studies evaluated participants’ knowledge [31,32,33]. The pooled analysis of these studies showed a large beneficial effect of digital education interventions for clinical practice guideline adoption on knowledge scores (SMD = 0.85, 95% CI 0.16, 1.54; I2 = 83%, moderate quality of evidence) (Fig. 3). The high observed heterogeneity was largely driven by a study on spaced education via emails, which showed a large improvement in the intervention group (SMD = 1.52, 95% CI 1.06, 1.97) with CIs that overlap poorly with those of the other two studies in this analysis [32]. The other two studies, which evaluated online modules and case-based discussions, reported mixed results [31, 33]. One study, measuring long-term knowledge retention at 6 months post-intervention, reported a moderate beneficial effect of digital education compared to no intervention (SMD = 0.73, 95% CI 0.09, 1.38) [33].

Fig. 3 Forest plot of knowledge outcome comparing digital education on clinical practice guidelines to no intervention or traditional learning

Only one study (n = 31), evaluating the use of a simulation-based module, measured participants’ skills post-intervention; it reported a large beneficial effect of digital education (SMD = 0.93, 95% CI 0.18, 1.68, low quality of evidence) [30]. The effect of digital education on healthcare professionals’ behaviour was reported in two studies with mixed findings [32, 33]. The study on spaced education via emails reported an improvement in healthcare professionals’ behaviour (RR = 0.75, 95% CI 0.69, 0.83) [32]. Conversely, the study on the use of an online module and discussions reported no difference in healthcare professionals’ behaviour [33]. The same two studies also reported long-term data for the behaviour change outcome, with findings consistent with those immediately post-intervention: the study evaluating an online module reported no difference between the groups at 6 months [33], while the study on spaced education still favoured the intervention group at 18 months post-intervention [32].

None of the studies reported attitudes, adverse/unintended effects, patient outcomes or cost outcomes.

Digital education vs traditional learning

Eight studies compared the effects of digital education for clinical practice guideline adoption to traditional learning (Table 1) [34,35,36,37,38,39,40,41]. Five of these eight studies (n = 405) measured knowledge [34,35,36, 38, 40]. The pooled estimate from the three studies reporting post-intervention data showed a small, statistically non-significant improvement in knowledge scores in the digital education group compared to traditional learning (SMD = 0.23, 95% CI − 0.12, 0.59; I2 = 34%, moderate quality of evidence) (Fig. 3). The moderate heterogeneity was due to a small pilot study with very imprecise findings, as shown by its wide CIs that overlap poorly with those of the other two studies in this analysis [38]. The remaining two studies, which did not report poolable post-intervention data, also found no difference between the groups immediately post-intervention, although one of them reported that the intervention group scored slightly higher than the control group when averaged across baseline, post-intervention and follow-up measurements [35]. Three studies also measured long-term knowledge retention 1 to 6 months post-intervention: two reported no difference between the groups [35, 38] and one reported a moderate improvement in the digital education group [34].

Of the four studies evaluating participants’ satisfaction with the intervention [34,35,36, 38], three reported a large beneficial effect of digital education compared to a lecture or printed resources [34, 36, 38]. One study, employing interactive small-group learning as a control, reported no difference [35].

Two studies (n = 133) reported post-intervention skills outcomes [39, 41]. One study (n = 45), evaluating the use of a simulation-based learning module, reported a large beneficial effect of digital education compared to printed guidelines (SMD = 1.13, 95% CI 0.50, 1.76, moderate quality of evidence) [39]. The other study assessed the effectiveness of a computer-based video demonstration compared to peer teaching and reported higher post-intervention skills scores in the control group (SMD = − 3.72, 95% CI − 4.42, − 3.02, low quality of evidence) [41]. Three studies analysed healthcare professionals’ behaviour change and reported no difference between the groups (Additional file 1) [35, 37, 40]. One study assessed patient outcomes and reported no difference between the groups [40]. None of the included studies reported attitudes, adverse effects or cost outcomes.

Digital education (more interactive) vs digital education (less interactive)

Five studies compared different configurations of digital education interventions (Table 1) [16, 42, 43, 46, 47]. Four studies evaluated online modules with performance-based or knowledge-based feedback [16, 42, 43, 47], and one study evaluated an email-delivered, spaced education game [46]. The control interventions were either a less interactive form of the digital education intervention or non-interactive online resources. Four studies measured behaviour and largely reported no difference between the groups (Fig. 4, Table 1) [16, 42, 43, 46]. Of the three studies measuring knowledge [16, 46, 47], only the study on the spaced education game favoured the intervention (SMD = 0.81, 95% CI 0.43, 1.20, moderate quality of evidence) [46]. This study also reported a modest improvement in patient outcomes. One study measured knowledge growth rate and reported no difference in mean change scores between the most interactive intervention groups and the less interactive control groups, nor any difference in satisfaction scores between the groups [16]. Another study reported a moderate improvement in knowledge retention at 30-day follow-up for the more interactive form of digital education compared to the less interactive one (SMD = 0.63, 95% CI 0.01, 1.24) [47]. The same study reported higher satisfaction in the more interactive group at follow-up. No studies reported attitudes, adverse effects or cost outcomes.

Discussion

We identified 17 studies evaluating the effectiveness of digital education for clinical practice guideline adoption among healthcare professionals. Studies mostly focused on primary care physicians, computer-based educational interventions with low interactivity, and the measurement of participants’ knowledge and behaviour. With regard to knowledge, studies comparing digital education with no intervention showed a large, statistically significant difference in favour of digital education, while studies comparing digital education with traditional learning showed a small, statistically non-significant difference in favour of digital education. The digital education group was more satisfied than the traditional learning group. Overall, there was little or no change in healthcare professionals’ behaviour, except in one study favouring the email-delivered, spaced education intervention group. Of the three studies measuring patient outcomes, only the study on the spaced education game intervention reported a modest improvement in the intervention group. The quality of evidence ranged from low to moderate across outcomes due to risk of bias, inconsistency in the findings and/or imprecision of the outcome data.

The evaluated digital education interventions had diverse formats. The existing literature on the effectiveness of traditional learning for clinical practice guideline adoption shows that interactive approaches may be more effective than passive guideline dissemination [10, 11]. In our review, most digital education interventions focused on the passive dissemination of resources with no or low interactivity, e.g. PowerPoint presentations or computer-based text [31, 34, 36,37,38]. Three studies compared more interactive digital education interventions, in the form of spaced education via email or online simulation, to no intervention or traditional learning and reported a large beneficial effect in the intervention group [30, 32, 39]. However, most studies comparing more interactive to less interactive forms of digital education reported no statistically significant difference between the groups; the interactive component of these interventions was mostly limited to performance feedback. The only study favouring the more interactive form of digital education for knowledge and patient outcomes compared spaced education via emails to passive online resources [46]. Based on these findings, future research should further explore the effectiveness of spaced digital education and simulation on clinical practice guidelines compared to other forms of education.

Included studies on interventions with limited interactivity reported various challenges relating to participant attrition and limited usage of the resources. Two studies evaluating the use of websites and online modules reported high attrition rates [16, 31]. In a study of a computer-based module with performance feedback, participants considered the intervention too time-demanding [42]. A study evaluating a digital education intervention in the form of non-interactive, digitally presented clinical practice guidelines reported that more than half of the participants accessed the provided resources either once or never [31]. Another study, which provided computer-based resources customised to participants via hyperlinks, reported that less than 60% of the provided educational material was accessed [34]. Conversely, more interactive interventions reported lower attrition rates and higher participant engagement [39, 44].

The effectiveness of digital education for guideline adoption may also depend on the type of content it covers. Learners may already have high baseline knowledge of some topics, such as diabetes, which may lead to a ceiling effect in knowledge scores [33]. In addition, educational interventions covering several guidelines concurrently may increase cognitive load and therefore be less effective than those focusing on a single guideline. Yet with the growing prevalence of chronic illness, an ageing population and multimorbidity, single-condition guidelines are increasingly perceived as impractical and unhelpful [48]. Instead of the passive dissemination of several distinct guidelines, digital education offers an opportunity for seamless, engaging education that blends diverse guidelines, using, for example, scenario-based learning and simulation. Notably, none of the studies included in our review evaluated novel educational modalities enabling simulated learning, such as virtual or augmented reality, and none used mobile devices to deliver the digital education intervention. Mobile-delivered education may be better suited to the needs of healthcare professionals by enabling easy, on-the-go access to training. These diverse digital education modalities may promote better participant engagement, may prove a more effective approach to training health professionals on guidelines and should be evaluated in future research.

Our review has several limitations. The randomised controlled trials included in this review mostly lacked information on the randomisation method, allocation concealment or blinding. Included studies also largely reported only post-intervention data, so we could not calculate pre-post intervention change scores or ascertain whether the intervention groups were matched at baseline on key characteristics and outcome measure scores. In studies reporting pre-post intervention change data, we extracted post-intervention data to ensure consistency in the presentation of findings across the included studies. Studies with pre-post intervention change data reported an improvement from baseline, and in all cases these findings were consistent with the post-intervention data. Furthermore, we focused only on studies of clinical practice guidelines and may have missed studies that used other forms of evidence-based recommendations or did not explicitly cite a clinical practice guideline. In line with other systematic reviews of digital education interventions, we encountered substantial heterogeneity in terms of interventions, participants, outcomes and comparisons [49]; given such heterogeneity, our findings have to be interpreted with caution. In addition, our review focuses only on digital education on guidelines, and its findings are not applicable to other digital interventions such as computerised decision support systems. Finally, some studies reported that the digital education interventions were delivered as part of a continuing medical education programme, which may have affected their findings. The strengths of our review include a comprehensive and sensitive search as well as clear inclusion and exclusion criteria encompassing a broad range of participants, outcomes and interventions. We also performed parallel, independent and reproducible screening and data extraction, and a rigorous risk of bias assessment.

Conclusion

Digital education on clinical practice guidelines appears to be more effective than no intervention and at least as effective as traditional learning in terms of participants’ knowledge. Participants reported higher satisfaction with digital education than with traditional learning. Yet digital education overall led to little or no difference in health professionals’ behaviour compared to control interventions. Higher interactivity and engagement in digital education interventions for clinical practice guideline adoption may lead to larger educational gains. Future research should aim to determine the effectiveness of novel modalities (e.g. mobile learning and virtual reality) as well as spaced and interactive formats of digital education, and should focus on outcomes such as skills, attitudes, cost, behaviour and patient outcomes. There is a need for high-quality, well-reported RCTs with a clear presentation of the random sequence generation and allocation concealment approaches as well as a detailed description of the intervention and the control. Future studies should report pre-post intervention change data, use validated outcome measurement tools and assess learners’ retention as well as long-term outcomes.