Introduction

Evidence-based guidelines are developed specifically to improve the effectiveness of healthcare providers' professional practice and reduce the risk of unintended adverse effects to the community, and they are fundamental tools for the translation of research into practice [1]. Research studies, however, consistently document poor implementation of evidence-based clinical guidelines [2]. Consequently, a large volume of research now exists examining how best to support healthcare providers to implement evidence-based clinical guidelines. The Cochrane Effective Practice and Organisation of Care (EPOC) group has published over 100 systematic reviews describing interventions designed to improve professional practice and the delivery of effective health services by implementing evidence-based clinical guidelines and practices [3]. A systematic review of reviews in primary care alone found 91 reviews evaluating strategies such as audit and feedback, educational meetings and other practitioner-targeted interventions to improve clinical guideline adherence [4]. The intervention approaches and interpretations synthesised within these reviews have largely assumed that practitioners behave in a rational manner. As such, these interventions have actively targeted rational constructs such as attitudes, intentions and self-efficacy in an attempt to change behaviour.

However, recent behavioural research suggests that, in many instances, an individual's behaviour and decision-making are not perfectly rational [5]. Dual process models propose that individual decision-making and behaviour result from the interaction between two cognitive processes operating in parallel, one reflective and the other impulsive or automatic [6]. In the provision of diabetes care, for example, studies have reported that 'automatic' decision-making processes operate alongside, and may mediate, rational processes in influencing clinician provision of evidence-based care [7]. Additionally, a recent meta-analysis of nine studies assessing the association between 'habit' (an indicator of the automatic decision-making process) and clinician behaviour reported a moderate correlation (r = 0.33) [5, 8]. Such research suggests that interventions to change clinician behaviour need to move beyond strategies that focus purely on rational cognitive pathways, towards considering the contexts within which individuals act, which are influenced by automatic pathways.

Nudge strategies have been suggested as one way to influence habitual or automatic behaviour, by targeting the subconscious routines and biases that are present in human decision-making and behaviour [9]. The term 'nudging' was coined in 2008 and is defined as a 'function of any attempt at influencing people's judgement, choice or behaviour in a predictable way, that is made possible because of cognitive boundaries, biases, routines and habits in individual and social decision-making' (p158) [9]. These boundaries, biases and routines act as barriers that prevent people from behaving rationally and consistently with their internal values. As such, nudge strategies target these boundaries, biases and habits by altering the underlying 'choice architecture', the social and physical environment in which the decision is made [9]. Specifically, 'choice architecture' refers to the design of the different ways in which choices can be presented to individuals. This can include influencing the range of choices (e.g. increasing the number or types of choices), considering the manner in which attributes are described (e.g. labelling, priming) and altering the way in which an object is presented (e.g. as a default, or through placement, presentation or sizing) [10]. This enhances the capacity for subconscious behaviours that align with the intrinsic values of an individual, without actively restricting options [11].

Nudge strategies are considered highly appealing from a policy and public health perspective as they are low cost and typically do not require ongoing resources to sustain. These interventions have been applied in public health policy to change behaviour and support healthier lifestyle choices. For example, many governments have applied nudges in the form of altered defaults, switching from opt-in to opt-out systems to increase organ donation rates. The United Kingdom (UK) Institute for Government and the Behavioural Insights Team developed the Mindspace framework to support the application of nudge strategies in public policy [12]. This framework describes nine types of interventions (Mindspace: Messenger, Incentives, Norms, Defaults, Salience, Priming, Affect, Commitments, Ego) that are considered to have the most robust effects on the automatic system. Mindspace is underpinned by core principles of behavioural economics and aligns closely with other seminal lists describing nudge and choice architecture techniques [12, 13]. As such, the framework provides a structured way to support administrators, policy makers and researchers in selecting and applying nudge interventions to influence behaviour [12].

The potential impact of nudge strategies on clinician implementation behaviour has only recently started to be formally evaluated. A randomised controlled trial by Meeker et al. found that a simple intervention encouraging practitioners to display a poster in waiting rooms publicly stating their commitment to reducing inappropriate antibiotic prescribing significantly reduced inappropriate prescribing rates relative to the control group (−19.7% [95% confidence interval: −33.4% to −5.8%]; p = 0.02) [14]. The UK Behavioural Insights Team undertook a randomised 2 × 2 factorial trial examining the impact of providing social norm feedback to GPs in high antibiotic-prescribing practices. The study found that providing information on a practice's prescribing rate compared with other local practices in the area (norm), delivered in a letter from England's Chief Medical Officer (messenger), significantly reduced the rate of antibiotic items dispensed per 1000 population (p < 0.0001) [15].

Given the potential of ‘nudge’ strategies to impact on clinicians’ behaviours, efforts to describe the application and potential effect of such strategies on implementation are warranted. An overview of the type of settings and behaviours that have been targeted can highlight opportunities for future empirical research, and inform the design of low-cost implementation strategies.

This review aims to describe the application and effects of nudge strategies on healthcare providers' and organisations' implementation of evidence-based guidelines, policies and practices. The review uses data from randomised controlled trials included within Cochrane systematic reviews.

Methods

This review is reported in accordance with the PRISMA guidelines [16]. The review was not registered, and a protocol has not been previously published.

Information sources and search strategy

The nudge terminology was developed in 2008 and has gained popularity over the last decade. Implementation trials testing the impact of nudge strategies, however, may have been performed over many decades, and many interventions that would now be classified as nudges were not published under this term. Given this, conventional electronic searches of bibliographic databases are unlikely to be sufficiently sensitive or specific, and are likely to return inflated numbers of records while still missing relevant studies. To avoid this, we conducted a targeted and systematic search of studies included within eligible Cochrane systematic reviews. The Cochrane Library was chosen as it is internationally recognised for publishing high-quality, up-to-date systematic reviews in healthcare settings, and has a review group dedicated to the publication of studies to improve healthcare professional practice [17]. We undertook a two-stage screening process: systematic reviews were first screened for eligibility, and trials within eligible reviews were then assessed for inclusion. Two authors (SLY, FS) screened the titles, abstracts and full texts of all reviews published by the Cochrane Library in the last 2 years (2016–2018) in December 2018. This timeframe was selected as Cochrane authors are encouraged to update their reviews every 2 years.

Study selection

Following this, the full texts of all studies included within eligible reviews were screened by at least two authors (SLY, FS, RS). Studies were included if they met all eligibility criteria described below. All disagreements were resolved via a consensus process between the two reviewers, involving a third reviewer (LW) where necessary.

Eligibility criteria

All studies that examined the impact of a nudge strategy targeting clinicians' and healthcare organisations' implementation of health-related guidelines, policies and practices were included if they met the following criteria.

Types of studies

Only randomised controlled trials (RCTs) with a parallel control group were included. Eligible trials compared (i) an intervention that included a nudge strategy to improve the implementation of a healthcare-related guideline, policy or practice in healthcare settings/organisations with a non-nudge intervention or usual practice; or (ii) two or more different strategies, at least one of which included a nudge strategy, to improve the implementation of a healthcare-related guideline, policy or practice. Studies also had to specify the implementation of a health-related policy, guideline or practice as an explicit aim, and as such were likely to be type 2 or type 3 hybrid trials [18].

Type of participants

Study participants were clinicians (medical doctors, allied health professionals) providing care in healthcare/clinical settings. Healthcare settings included acute care hospitals; long-term care facilities, such as nursing homes and skilled nursing facilities; physicians’ offices (i.e. primary care); urgent-care centres; outpatient clinics; home healthcare (i.e. care provided at home by professional healthcare providers) and emergency medical services [19].

Type of intervention

The intervention had to include at least one nudge strategy. The determination of whether a nudge strategy was present was undertaken by at least three reviewers (SY, FS, AA) for each study. Nudge strategies were defined as those that 'applied principles from behavioural economics and psychology to alter behaviour in a predictable way without restricting options or significantly changing economic incentives' (p6) [11]. These strategies target the automatic decision-making processes rather than the rational decision-making processes [20]. Strategies were classified using the Mindspace framework (see Table 1) [20]. The Mindspace framework was chosen as it provides a practical checklist for summarising the application of nudge strategies in public health practice. Similar to a previous review [21], strategies were classed as (i) priming, (ii) norms and messenger, (iii) salience and affect, (iv) default, (v) commitment and ego and (vi) incentives [22]. For studies with multiple intervention arms (e.g. multi-arm RCTs, factorial trials or comparative effectiveness trials), only the arm(s) that included an intervention with a nudge strategy were included. Multiple intervention arms with nudge strategies were combined, as in most instances these arms included the same type of nudge strategy. Where an intervention was multicomponent and included both nudge and non-nudge strategies, the trial was also included. Trials that included a nudge strategy in the control arm were excluded, so that the impact of nudge strategies could be assessed relative to no nudge strategy.

Table 1 Nudge categories and description applied in the review based on the Mindspace framework

Type of outcomes

We included any subjective or objective measure of implementation outcomes. Similar to previous reviews carried out by the team [23, 24], implementation outcomes were those that described the fidelity or execution of a guideline, policy or practice at an organisational or practitioner level. Such outcomes could be assessed via self-reported surveys, observations or from other routine data sources including electronic medical records. Examples of such outcomes include appropriate prescribing or test ordering in line with guideline recommendations.

Studies were excluded if they were not published in English. There were no other exclusions beyond those specified by the original reviews from which the studies were extracted.

Data extraction

Relevant information was extracted from the published Cochrane reviews, the primary trial and other associated papers referenced in the primary trial by at least two individuals (FS, RS, JJ, BM) using a standardised data extraction form. This included (1) study information—author name, study design, country, date of publication, type of healthcare provider/organisation, participant/service demographic/socioeconomic characteristics and number of experimental conditions; (2) characteristics of the overall implementation strategy, including the duration, number of contacts and approaches to implementation, and information to allow classification of the intervention strategy into nudge categories according to Table 1 (priming, norms and messenger, salience and affect, commitment and ego, incentives and default nudge); (3) trial primary or summary outcome measures and results, including the data collection method, validity of measures used, effect size (or raw data that allowed the calculation of an effect size) and measures of outcome variability; and (4) risk of bias assessment as published in the Cochrane reviews [25]. Where several implementation outcomes were reported, we extracted only the results and risk of bias assessment for those explicitly described as the primary outcome(s) of the trial, for all follow-up time periods. Where the primary outcome was not specified in the individual trial, we extracted the variable(s) described in the sample size calculation.

Data analysis

To describe the application of nudge strategies in practice, we calculated the number and percentage of trials using each nudge strategy, according to the Mindspace framework. We also described the application of nudge strategies according to setting and type of outcome assessed. Where the primary outcome was clearly identified in the aims or via the sample size calculation, we calculated the within-study effect for that outcome. Where there were several primary outcomes, we focused on all implementation outcomes and calculated a pooled effect size for that study if the outcomes were similar; where they differed, we calculated the within-study effect for each outcome separately. For outcomes reported at multiple time-points, we calculated within-study effects at each time-point.

Due to substantial clinical and methodological heterogeneity of the included trials, it was not appropriate to conduct a meta-analysis. Instead, we summarised the effect estimates and used vote-counting methods, as outlined in the Cochrane Handbook for situations where meta-analysis is not possible [26]. We calculated within-study point estimates and 95% confidence intervals (CIs). For all studies, we extracted the raw values (mean, standard deviation, median, interquartile range (IQR) and range for continuous outcomes; percentages and frequencies for dichotomous outcomes) and used these data to estimate within-study effects. For continuous outcomes, we calculated the effect size as the difference in follow-up scores between intervention and control, except for one study that reported only the change in outcomes for control and intervention; the difference in change scores was used for this study [27]. For dichotomous outcomes, we calculated the odds ratio (OR) as the measure of intervention effect. Odds ratios were chosen because they are a relative measure that is less sensitive to differences in baseline values than absolute measures such as risk differences, and they are less influenced by the underlying prevalence of the outcome than other relative measures [28]. We calculated within-study effects for all outcomes together with 95% CIs. The direction of a favourable intervention effect varied across studies, with some studies aiming to reduce and others aiming to increase a behaviour; where studies aimed for a reduction, we reverse-scored these values. To provide an overview of the overall impact, and of impact by nudge strategy, we report the number of studies and outcomes with an estimated effect in the beneficial direction as well as the percentage of effects favouring the intervention. These results were summarised in harvest plots, which visually display the directional effects of an intervention strategy and are recommended for summarising review results when meta-analysis is not appropriate [29]. Finally, we calculated the median standardised mean effect size (and IQR) for continuous outcomes and the median OR (and IQR) for dichotomous outcomes.
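
As a minimal illustration of the calculations described above, the sketch below computes a within-study odds ratio and 95% CI for a dichotomous implementation outcome and applies the reverse-scoring rule for trials aiming to reduce a behaviour. The function name (or_ci), arm labels and all counts are hypothetical and are not drawn from any included trial.

```r
or_ci <- function(events_trt, n_trt, events_ctl, n_ctl, reverse = FALSE) {
  # 2x2 cell counts: events and non-events in each arm
  a  <- events_trt; b  <- n_trt - events_trt
  cc <- events_ctl; dd <- n_ctl - events_ctl
  or <- (a * dd) / (b * cc)
  se <- sqrt(1 / a + 1 / b + 1 / cc + 1 / dd)       # standard error of log(OR)
  ci <- exp(log(or) + c(-1, 1) * qnorm(0.975) * se)
  if (reverse) {                                    # reverse-score outcomes where a
    or <- 1 / or                                    # reduction was the desired direction
    ci <- rev(1 / ci)
  }
  c(OR = or, lower = ci[1], upper = ci[2])
}

# Hypothetical example: 60/100 guideline-concordant episodes in the nudge arm
# versus 45/100 in the control arm
or_ci(events_trt = 60, n_trt = 100, events_ctl = 45, n_ctl = 100)
```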

Statistical analyses were performed using SAS v9.4 [30], Stata v13.0 and R [31, 32].

Clustered studies

All clustered trials were examined for unit-of-analysis errors when calculating within-study effects. For cluster randomised trials, the effective sample size was calculated and used for all estimates of effect size. This was undertaken so that, in the harvest plots, the potential impact of nudge strategies could be considered in light of the size of the studies. To calculate the effective sample size, the intracluster correlation coefficient (ICC) derived from the trial (if available) or from another source (for example, the ICC used in the sample size calculation, or the mean ICC calculated from the other included studies) was used, and the design effect was calculated using the formula provided in the Cochrane Handbook for Systematic Reviews of Interventions [33].
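
For illustration, the sketch below applies the design-effect adjustment described in the Cochrane Handbook, using hypothetical values for the average cluster size, ICC and arm totals; none of the numbers correspond to an included trial.

```r
# Design effect for a cluster randomised trial (Cochrane Handbook formula):
# DE = 1 + (average cluster size - 1) * ICC
design_effect <- function(avg_cluster_size, icc) 1 + (avg_cluster_size - 1) * icc

m   <- 15      # hypothetical average number of patients per cluster
icc <- 0.05    # hypothetical intracluster correlation coefficient
de  <- design_effect(m, icc)          # 1 + (15 - 1) * 0.05 = 1.7

n_arm      <- 300                     # patients randomised per arm
events_arm <- 180                     # e.g. guideline-concordant episodes

n_effective      <- n_arm / de        # ~176 "effective" patients per arm
events_effective <- events_arm / de   # events scaled down by the same factor
```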

Studies with more than two treatment groups

Procedures described in the Cochrane Handbook for Systematic Reviews of Interventions [33] were followed for trials with more than two intervention or comparison arms that included a nudge strategy. This involved combining multiple intervention arms following the recommended formula set out by the Cochrane Handbook. Only intervention arms that included relevant nudge strategies as part of the intervention package were combined and compared to the control group. Multiple comparison arms were combined, rather than described separately, to help focus this review.
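
As an illustration of this step, the sketch below combines the summary statistics of two hypothetical nudge arms into a single group using the Cochrane Handbook formulae for continuous outcomes; the function name (combine_arms) and all values are assumptions for demonstration only.

```r
combine_arms <- function(n1, m1, sd1, n2, m2, sd2) {
  n  <- n1 + n2
  m  <- (n1 * m1 + n2 * m2) / n                            # weighted mean of the two arms
  sd <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2 +
                (n1 * n2 / n) * (m1 - m2)^2) / (n - 1))    # pooled SD (Cochrane formula)
  c(n = n, mean = m, sd = sd)
}

# Hypothetical example: two nudge arms combined before comparison with control
combine_arms(n1 = 40, m1 = 5.2, sd1 = 1.1, n2 = 38, m2 = 4.8, sd2 = 1.3)

# For dichotomous outcomes, arms are combined by summing events and sample sizes.
```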

Results

Review characteristics

A total of 1730 systematic reviews published in the last 2 years (2016–May 2018) were obtained from the Cochrane Library database. Forty-three reviews were assessed at full-text screening; of those, 36 were excluded for the following reasons: did not examine implementation outcomes (n = 33) [34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66], not undertaken in healthcare settings (n = 1) [67], not quantitative (n = 1) [68], or was a review of previous systematic reviews (n = 1) [69].

Seven reviews were included; these examined a range of health-related practices, including antibiotic prescribing [70], hand hygiene [71], management of obesity [72], management of musculoskeletal conditions [73], uptake of clinical guidelines across health behaviours [74, 75] and provision of mental healthcare [76].

From these seven reviews, 55 RCTs met all inclusion criteria. Of these, 13 studies were excluded from the analysis: six did not report sufficient detail for calculation of a within-study effect [77,78,79,80,81,82], six included a nudge strategy in the control group [83,84,85,86,87,88], and one reported an outcome (time to event) that was inconsistent with the other included trials [89]. Thus, a total of 42 studies, reporting across 57 outcomes, were included in the final analyses. Figure 1 contains a PRISMA flowchart of the study selection process.

Fig. 1
figure 1

PRISMA flowchart of study selection process

Study characteristics

Table 2 provides an overview of the included studies. Most studies were conducted in the USA (40%; n = 17), Canada (21%; n = 9) and the UK (14%; n = 6). Over half included two experimental arms, including the control (62%; n = 26), 24% included three arms (n = 10) and 14% (n = 6) included four arms. Over half of the studies were cluster randomised trials (52%; n = 22), one was a cross-over RCT [102] and the remainder were individually randomised parallel RCTs. Over half of the studies employed a control group that consisted of usual care or no intervention strategy (62%; n = 26). Over a quarter used a minimal intervention control (26%; n = 11), which included strategies such as provision of guidelines, written recommendations, and introduction of facilities or equipment, and 7% (n = 3) used an active intervention as part of the control, including education sessions to provide information; reminders and feedback targeting adequate products and facilities; and provision of encouragement, monitoring, feedback and support. A further two studies [123, 125] did not provide sufficient detail of the control group.

Table 2 Study characteristics of included trials in the review by nudge strategy

Methodological quality of included studies

The risk of bias for each RCT, as reported in the Cochrane reviews, is presented in Table 3. Over half of the trials were judged as low risk for selection bias (random sequence generation, n = 33; allocation concealment, n = 28) and for attrition bias (incomplete outcome data, n = 32). A large number of trials were judged as unclear for selective reporting (n = 20) and protection against contamination (n = 14). For blinding of outcome assessment, 21 trials were judged as having a low risk of bias (see Table 3). Overall, 29 studies (69%) met at least half of the criteria against which they were assessed.

Table 3 Risk of bias assessments from individual trials extracted from published Cochrane reviews

Application of nudge strategies to improve implementation

All nudge strategies were used in at least one trial to influence healthcare provider adherence to guideline recommendations. Twenty-two studies, across 30 outcomes, incorporated nudge strategies as part of a multicomponent intervention, while the remainder (n = 20) examined the impact of nudge strategies in isolation. Ten studies applied two nudge strategies and three applied three or more. The frequency with which each strategy was used, and specific examples of their application, are shown in Table 4.

Table 4 Summary of application of nudge strategies in 42 randomised controlled trials included in this study

Priming nudge

The most commonly used nudge strategy was priming, which was included in 69% (n = 29) of trials, across 41 outcomes. The impact of this nudge strategy was assessed on a range of implementation outcomes, including adherence to antibiotic prescribing guidelines [104, 105, 123], medication prescribing and test ordering rates for various conditions [27, 92, 93, 98, 99, 103, 106, 118, 119, 121, 129, 131], vaccinations [94], provision of care according to guidelines [90, 91, 96, 100, 103, 124, 126] and adherence to hand hygiene guidelines [95, 97, 101, 102, 107, 122, 130]. Priming nudges were applied in various clinical settings including hospitals [95, 102, 104, 118, 122, 130], primary care practices [27, 91, 93, 94, 96, 99,100,101, 103, 119, 121, 124], mental health units [106] and community-based long-term care facilities [107].

Norms and messenger nudge

Norms and messenger nudges were the second most commonly used nudge strategy, included in 40% of studies (n = 17) across 19 outcomes. The impact of this strategy was assessed on a number of implementation outcomes, including appropriate prescribing of medication [108, 111, 113, 125, 127, 132], test ordering for various conditions [110, 119], antibiotic prescribing [123, 127], provision of preventive/lifestyle care according to guidelines [109, 124] and adherence to hand hygiene guidelines [112, 122, 130]. Interventions were undertaken in various clinical settings including hospitals [111, 112, 122, 127, 130], primary care practices [108,109,110, 113, 119, 121, 133] and pharmacies [131].

Salience and affect nudge

Salience and affect was the third most frequently used nudge strategy, utilised in 19% of the studies reviewed (n = 8) across 11 outcomes. These nudges were applied in hospitals [115, 127], primary care practices [114, 126] and community mental health teams [116] to improve test ordering (e.g. screening for bone mineral density) [114, 117], hand hygiene [115, 130], provision of care according to various guidelines (e.g. mental health) [116, 126, 128] and antibiotic prescribing [127].

Incentive, commitment/ego and default nudge

Fewer than 10% of studies incorporated incentive (n = 3), commitment/ego (n = 1) or default (n = 1) nudge strategies as part of their interventions. The one study that used a commitment/ego nudge was conducted in hospitals to increase adherence to hand hygiene, with clinicians publicly declaring their commitment to either reduce inappropriate behaviour or increase recommended behaviour [130]. A default nudge was used in one hospital-based study, in which test ordering options related to thyroid function that were not relevant to patients were shaded out on ordering forms [118]. Incentive strategies primarily involved the provision of certificates and professional development points [125, 128, 129] to primary care physicians to increase appropriate prescribing or test ordering.

Impact of nudge strategies on healthcare provider behaviour

Of the 57 outcomes assessed, 49 (86%) had an estimated effect on clinician behaviour in the hypothesised direction; of these, 30 (53% of all outcomes) had confidence intervals that did not contain the null value (see Table 5). Figure 2 shows the distribution of studies according to whether they had an estimated effect on the outcome. The median standardised mean difference across all continuous outcomes was 0.39 (IQ1 = 0.22, IQ3 = 0.45). For dichotomous outcomes, the median OR across all outcomes was 1.62 (IQ1 = 1.13, IQ3 = 2.76).

Table 5 A summary of the number and percentage of outcomes reporting an estimated effect in support of the intervention by number of nudge strategies and intervention type
Fig. 2
figure 2

Harvest plot of effect estimates for all included studies

When the nudge strategy was included as part of a multicomponent intervention, 24 out of 30 outcomes (80%) had a calculated effect estimate in the hypothesised direction, of which 15 (50%) did not contain the null. Comparatively, 20 studies across 27 outcomes tested a nudge-only intervention; 25 of these outcomes (93%) were in the hypothesised direction and 15 (56%) did not contain the null (see Table 5, Fig. 3).

Twenty-nine studies across 42 outcomes included only one type of nudge strategy as part of their intervention packages. Of these, 36 outcomes (86%) were in the hypothesised direction, of which 23 (55%) did not contain the null. Comparatively, 13 studies across 15 outcomes employed intervention packages that included more than one nudge strategy. Of these, 13 of the 15 outcomes (87%) were in the hypothesised direction and 7 (47%) did not contain the null. See Table 5 and Fig. 3 for the number and percentage of outcomes by nudge strategy. Interventions that included a priming nudge showed the most promise, with 37 of the 41 outcomes (90%) in the hypothesised direction.

Fig. 3
figure 3

Harvest plot of effect estimates by nudge classification and by whether studies were multicomponent or single-component

The most consistent evidence of effect was observed for behaviours such as handwashing (all outcomes in the hypothesised direction, with 70% containing the null) [95, 97, 101, 102, 107, 112, 115, 122, 130], and test ordering and prescribing for management of osteoporosis (all outcomes in the hypothesised direction, with 80% containing the null) [108, 110, 111, 117, 126]. The effects were least consistent for obesity management (two out of four in the hypothesised direction, with 25% containing the null) [90, 96, 124].

Discussion

This review of 42 RCTs included within eligible Cochrane systematic reviews found that a variety of nudge strategies have been employed in trials of clinical practice change interventions. For the majority of outcomes assessed (49/57), the effects were in the hypothesised beneficial direction. Additionally, the median standardised mean difference was 0.39 (IQ1 = 0.22, IQ3 = 0.45) for continuous outcomes and the median OR was 1.62 (IQ1 = 1.13, IQ3 = 2.76) for dichotomous outcomes. These effects are comparable to those reported in other systematic reviews that have assessed the impact of a diverse range of implementation strategies on provider adherence to clinical guidelines [134, 135] and support the continued application of nudge interventions in practice. These comparable effects highlight the need to better consider and account for the potential impact of nudge strategies in the evaluation of complex implementation interventions, as these strategies are often overlooked.

Additionally, using vote-counting approaches, our study found that the effects of interventions were not related to the number of nudge strategies employed, or to whether nudge strategies were part of a broader package of implementation support. While the evidence on this question is mixed, these findings are consistent with a previous review of 25 reviews of implementation strategies, which found no compelling evidence that multicomponent strategies are more effective than single interventions [136]. As our findings relied on indirect comparisons, future studies testing the effects of nudge strategies using staggered or factorial trial designs are needed to better quantify the impact of individual nudge strategies on their own and in combination with other strategies.

Priming nudges were the most frequently evaluated, while few studies assessed the impact of incentive, commitment and ego nudges. While effects varied depending on the outcome, the most consistent effects were observed for handwashing, and for test ordering and prescribing for osteoporosis management. It is possible that behaviours such as handwashing are more likely to be habitual, where practices are standardised and relatively simple to implement [137], and thus may respond to lower intensity stimuli. The outcomes measured for these behaviours are also likely to be more proximal and directly related to the intervention. These findings highlight the need to carefully consider the target behaviour and the outcomes assessed when developing nudge interventions. Further, grounding the design and evaluation of nudge interventions in theory is needed to increase understanding of the contexts, types of behaviours and populations for which nudge interventions may be effective. Critically, future examination of the impact of nudge interventions according to different sociocultural characteristics (i.e. ethnicity, age, socioeconomic status) is needed to ensure such interventions do not exacerbate health inequalities.

Collectively, findings from this review highlight the positive effects of nudge interventions more broadly and support their application in clinical practice for some behaviours. The review also identifies potential areas for development and testing of nudge strategies that have not been well studied, such as commitment nudges, for which promising intervention effects have been reported [14]. For healthcare administrators, there is significant opportunity to embed priming, salience and default nudges within existing electronic systems or existing quality improvement tools such as reminders and audit and feedback. For example, scans or test results can be programmed to automatically appear on electronic health records when a high-risk patient presents for care, to facilitate follow-up. Additionally, electronic systems can be programmed to default to a more efficient treatment option where available (i.e. prescribing generic over brand-name medications).

Where infrastructure is available, embedding 'nudge units', similar to those described by Patel et al., within health services provides an innovative way of improving quality of care while systematically developing and testing the impact of nudge interventions [138]. These units involve collaboration between health system administrators and leaders, front-line staff and researchers. Such collaborations enable the co-development of nudge solutions to address identified areas of suboptimal care. The unit is also then involved in the subsequent implementation, evaluation and translation of the strategy, if shown to be effective [138]. The embedding of such units within clinical health services is similar to approaches previously described in public health practice [139] and provides significant opportunity to understand how nudge strategies can best be applied to improve clinical practice. Further, participation in innovative platforms such as the audit and feedback meta-lab [140], which involves collaboration between healthcare organisations and researchers, can provide an opportunity to undertake head-to-head comparisons of nudge strategies applied within audit and feedback interventions.

Despite their promise, the application of nudge strategies in practice will first need to carefully consider issues of ethics and personal choice. As nudge interventions are inherently designed to influence automatic systems, attempts to change behaviour in this way may be seen as challenging the role responsibility of clinicians and may be perceived as manipulative. Engaging clinicians early in the design and implementation of interventions is crucial to ensure that issues surrounding consent and freedom of choice are given due attention [12, 141].

Findings from this review should be considered in the context of a number of limitations. As there is no general consensus on the kinds of behavioural interventions that are classified as nudges, we used a pragmatic and systematic approach in which we searched trials included in recently published Cochrane reviews. This approach was selected as we sought to identify and characterise nudge implementation strategies across a range of medical disciplines and health conditions. As such, our approach undoubtedly failed to capture all eligible published trials, and the findings described here reflect the RCTs included in the published reviews. Future attempts to develop a more targeted electronic search strategy are likely to result in a more comprehensive review. The heterogeneity of outcomes, strategies and targeted guidelines precluded the use of meta-analysis to describe effects. We therefore used non-meta-analytic methods of synthesis, including a summary of overall effect estimates and vote-counting approaches. Vote-counting approaches are limited in their ability to examine intervention effectiveness, as they cannot weight studies differentially by sample size. Further, seven studies had more than one primary outcome and were included more than once in the vote-counting procedure, which may have given more weight to these studies. While we attempted to extract risk of bias assessments for implementation outcomes, not all reviews specified this; as such, it is possible that the risk of bias assessments for these studies relate to other, non-implementation outcomes.

While such limitations exist, this review included 42 RCTs, which provides a broad understanding of how nudge strategies have been applied to improve clinical practice and of opportunities to further develop this important area of research. We used the Mindspace framework to provide a broad overview of the types of nudge intervention. To provide more insight into how nudge interventions can change behaviour, future reviews should consider using taxonomies such as the behaviour change techniques taxonomy to classify these interventions by their psychological targets [142], or mapping to existing implementation taxonomies such as the Expert Recommendations for Implementing Change (ERIC) or the EPOC taxonomy [143]. Our review also primarily focused on assessing the impact of the interventions on fidelity outcomes. Future reviews should consider assessing a broader range of implementation outcomes, as specified by Proctor et al. [144].

Conclusions

Main conclusion

This study is the first of its kind and offers new information for policy makers, practitioners and quality improvement agencies to support the application of nudge strategies to improve the provision of clinical care. The review provides an overview of how such strategies have been applied and some evidence demonstrating the positive effects of nudge strategies. While more definitive research is needed, the results of this review suggest that nudges could be an effective tool to improve implementation of some clinical guidelines.

Future research

In addition to primary research exploring the effects of nudge interventions on clinical practice, there is considerable need to develop standard terminology that is applied consistently to describe these strategies, as well as detailed guidance on reporting such interventions. Currently, the diversity and inconsistency of the terminology represent a real barrier to synthesising, applying and advancing the field. Such work could be considered a priority in the field of implementation science, given the considerable research investment and the abundance of randomised trials assessing the effects of strategies targeting rational clinical decision-making.