Background

Audit and feedback is a complex intervention that involves the delivery of feedback on performance over a specific period [1]. Health professionals may not have the knowledge and skills to engage with and respond to feedback, and this may create variation in the effectiveness of audit and feedback [2, 3]. Health systems are investing in quality improvement support for feedback recipients [4, 5].

Brown and colleagues [2] describe quality improvement co-interventions as supporting feedback recipients “to identify the reasons for and develop solutions to sub-optimal performance” (p16). Quality improvement support is a form of feedback facilitation that might help recipients to identify “barriers and enablers for making change” [6; p3]. The identification of influences and the selection of actions to address these is known as ‘tailoring’ [7]. In addition to tailoring, authors have described the need to develop commitment, the shared resolve to implement a change [8]; for example, through describing the implications of current audit performance [9].

There is a lack of clarity about the content and delivery of feedback facilitation. Facilitation is associated with enabling and making a target behaviour easier. In the context of audit and feedback, facilitation might include how to use feedback, undertake quality improvement or set goals and plans [10]. Beyond feedback-specific facilitation, Ritchie and colleagues describe 22 implementation facilitation skills, including engaging stakeholders, problem-identification/solving and education skills. The template for intervention description and replication (TIDieR) [11] provides a guide to describing the content of interventions. TIDieR highlights the importance of describing what is delivered and why; who delivers the intervention; how, where, when and how much; whether there is tailoring or modification; and whether fidelity is assessed and achieved. In relation to what is delivered, the Expert Recommendations for Implementing Change (ERIC) compilation [12] describes 73 discrete implementation strategies. In relation to why a particular intervention is delivered, Colquhoun and colleagues [13] described gaps in the use of theory within audit and feedback interventions. Feedback facilitation could be considered an implementation strategy. Within the current manuscript we will refer to feedback facilitation as an intervention (Table 1), to be consistent with the description of multi-faceted interventions [1, 2], co-interventions [2] and complex interventions [14], and to reflect that feedback facilitation may be composed of multiple implementation strategies.

Table 1 Definitions for key terms

Multiple authors (e.g. [18, 19, 20]) recommend using logic models to describe the programme theory for an intervention. Describing the intervention components enables replication across contexts with fidelity to identified core components [14]. Lewis and colleagues [17] describe potential components in the causal pathway of interventions: intervention mechanism(s); context; pre-conditions and/or moderators; and proximal and distal outcomes. Such frameworks provide a further lens through which to describe the content and delivery of feedback facilitation interventions.

The effectiveness of audit and feedback with or without a feedback facilitation co-intervention is being examined in the update of the Cochrane review of randomised controlled trials. Understanding the content and delivery of an intervention, as well as its effectiveness, supports the interpretation and use of trial findings. The aim of the current study is to describe the content and delivery of feedback facilitation co-interventions used in trials of audit and feedback.

Method

We explored the content of feedback facilitation co-interventions reported in randomised controlled trials of audit and feedback (A&F). Feedback facilitation trials were identified from the latest update of the Cochrane review of audit and feedback. Within the Cochrane review, co-interventions were described as a form of feedback facilitation, which “could be training about how to use feedback, or to do quality improvement in the practice, or set goals and plans, etc.” [10].

The search criteria and identification of studies are reported by the Cochrane review team [21], who provided the papers identified as containing feedback facilitation. We reviewed these papers and their citations for further details describing the intervention content.

Inclusion criteria: Papers describing interventions delivered in randomised controlled trials of audit and feedback with an additional feedback facilitation co-intervention delivered to health care workers. There were no exclusion criteria.

Participants

Audit and feedback and/or feedback facilitation developers and/or deliverers.

Intervention

Feedback facilitation co-interventions delivered alongside audit and feedback.

Quality assessment

Quality was assessed as part of the Cochrane review.

Data collection and management

We extracted data from papers describing the trial, from publicly available protocols and from companion papers. Eight reviewers extracted data from the included studies using a specifically designed and piloted proforma adapted from the TIDieR framework [11]. The adapted proforma extended TIDieR to capture the form of implementation strategy [12], theory and logic model, the identification of influences upon performance and work to align improvement actions to influences, whether information to describe the implications of performance was reported and the level of change sought.

The adapted proforma also enabled us to explore whether and how the feedback facilitation co-interventions supported teams to tailor their response to feedback. In identifying whether the facilitation explored influences upon performance, we looked for whether influences or causes were given, sought by data recipients or not recorded. The extraction guide (Appendix A) provided the example of using a framework to identify determinants, or another list of potential influences from which recipients selected. The data extractors described the procedure to explore influences using language similar to that in the text and categorised this as ‘sought by data recipients’, ‘given by study team’, ‘co-produced’ or ‘not recorded’. The extractors then described the presence or absence of a process by which implementation strategies were determined; for example, whether they were given by the study team or selected by data recipients.
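
For illustration only, this coding scheme could be represented as below; the names are hypothetical and not taken from the extraction proforma, and the sketch simply shows how each study's approach to exploring influences and determining strategies was assigned one of the four codes.

```python
# Hypothetical representation of the extraction coding scheme
# described above; names and fields are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    SOUGHT_BY_RECIPIENTS = "sought by data recipients"
    GIVEN_BY_STUDY_TEAM = "given by study team"
    CO_PRODUCED = "co-produced"
    NOT_RECORDED = "not recorded"

@dataclass
class ExtractionRecord:
    study_id: str
    influences: Source       # how influences upon performance were explored
    strategies: Source       # how implementation strategies were determined
    procedure_notes: str     # free text, kept close to the paper's wording

record = ExtractionRecord(
    study_id="example-2020",
    influences=Source.SOUGHT_BY_RECIPIENTS,
    strategies=Source.GIVEN_BY_STUDY_TEAM,
    procedure_notes="Facilitated small-group discussion of barriers.",
)
print(record.influences.value)  # "sought by data recipients"
```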

Data was recorded and managed in Excel.

Duration of feedback facilitation was calculated in minutes; where duration was specified in days, it was converted at 450 min per day. The maximum duration was used unless an average time was given. The deliverer of facilitation was classified as an expert, a peer or an improvement specialist [22].
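
As an illustration of this conversion rule, the following minimal sketch (with hypothetical field names, not taken from the review's codebook) converts reported durations at 450 min per day and prefers a reported average over the maximum.

```python
# Illustrative sketch of the duration normalisation rule described
# above; the data structures are assumptions, not the review's own.

MINUTES_PER_DAY = 450  # conversion factor used in the review

def duration_minutes(value: float, unit: str) -> float:
    """Convert a reported facilitation duration to minutes."""
    if unit == "days":
        return value * MINUTES_PER_DAY
    if unit == "minutes":
        return value
    raise ValueError(f"unexpected unit: {unit}")

def study_duration(reported: dict) -> float:
    """Prefer a reported average; otherwise take the maximum duration."""
    if "average" in reported:
        value, unit = reported["average"]
        return duration_minutes(value, unit)
    return max(duration_minutes(v, u) for v, u in reported["values"])

# Example: a study reporting facilitation lasting 1-2 days
print(study_duration({"values": [(1, "days"), (2, "days")]}))  # 900.0
```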

We developed and piloted reviewer guidance notes to accompany the proforma (Appendix A). Each paper was reviewed by two reviewers. The reviewers were health service researchers, five of whom were also clinicians. Six reviewers were involved in the development of the codebook through iterative discussion, design and testing. Two further reviewers received training and supervision in the use of the codebook. The reviewers extracted separately, and disagreements were resolved through consensus between the two reviewers.

Data analysis and synthesis

Two members of the team (MS and SA, both experienced implementation scientists) cleaned the data set and used the extracted data to codify the ERIC strategies, referring to source papers where necessary. MS and SA analysed the data narratively, graphically and statistically using Excel and StataMP 17. Our analysis drew upon the full data set, with the exception of the narrative analysis of the use of theory, which focussed on the period since a review of the use of theory in audit and feedback [13]. Statistical analysis involved a linear regression to determine whether the number of TIDieR framework items not reported changed with publication year. We examined plots of residuals from the regression analyses and performed a Breusch-Pagan test for heteroskedasticity. The synthesis was presented to the research team for challenge. We summarised the content of feedback facilitation interventions and drew upon guidance and the wider literature to consider implications for research and practice. To support feedback facilitation providers, we made explicit the different forms of content and delivery that we identified, as a series of design choices.
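
The analysis itself was run in Excel and StataMP 17; the sketch below is a minimal re-expression in Python with illustrative (not actual) data, assuming statsmodels for the OLS fit and the Breusch-Pagan test, to show the form of the regression and heteroskedasticity check described above.

```python
# Illustrative re-expression of the reporting-completeness analysis:
# OLS of unreported TIDieR items on publication year, with a
# Breusch-Pagan test on the residuals. Data values are made up.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

pub_year = np.array([1985, 1992, 2001, 2008, 2012, 2016, 2019])  # illustrative
n_missing = np.array([12, 11, 9, 8, 6, 5, 4])                    # illustrative

X = sm.add_constant(pub_year)        # intercept + publication year
model = sm.OLS(n_missing, X).fit()
# The slope tests whether non-reporting changes with publication year
print(f"slope = {model.params[1]:.3f}, p = {model.pvalues[1]:.3f}")

# Breusch-Pagan regresses squared residuals on the predictors;
# a small p-value would indicate heteroskedasticity.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(model.resid, X)
print(f"Breusch-Pagan p = {lm_pvalue:.2f}")
```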

The protocol for this review has been published separately [23]. We report upon variations from the protocol in the discussion.

Results

The Cochrane review identified 104 randomised controlled trials that delivered feedback facilitation alongside audit and feedback. We included 146 papers describing these trials, as detailed in the flowchart below (Fig. 1).

Fig. 1 The PRISMA flowchart

Table 2 summarises the content and delivery of feedback facilitation in 104 trials. Additional data is provided in the Supplementary Materials. Table 3 presents a cumulative summary of the content across included studies.

Table 2 A summary of the content and delivery of the included feedback facilitation interventions
Table 3 A cumulative summary of the description, content and delivery of included feedback facilitation interventions

Date and setting

Included trials dated from 1982 to 2020 (Fig. 2). The included studies took place in primary care (n = 54; 52%), secondary care (n = 43; 41%), both primary and secondary care (n = 2), nursing homes (n = 3), an antenatal clinic (n = 1; unclear whether primary or secondary care) and dental practice (n = 1).

Fig. 2 A graph describing the date and frequency of included studies of feedback facilitation and the mean number of strategies used per feedback facilitation intervention

Expert Recommendations for Implementing Change (ERIC) strategies

We identified 26 different implementation strategies within feedback facilitation (Fig. 3). The median number of strategies per trial was 3 (IQR 2–4 strategies). Figure 2 shows that the number of strategies used within feedback facilitation interventions has increased over time. There were no apparent differences in the number of strategies used depending on whether the feedback facilitation intervention was undertaken in primary or secondary care (Supplementary materials 6 & 7).

Fig. 3 A graph of the frequency of implementation strategy use within included studies [*Added to ERIC coding]

Use of theory and logic models

We found 35 studies (34%) that described using theory. A total of 31 theories were referenced within the included papers. The most frequent were adult learning theory (n = 5; 5%) (e.g. [171]), Rogers’ diffusion of innovation theory (n = 4; 4%) [172], Bandura’s self-efficacy theory (n = 4; 4%) [173] and Bandura’s social learning theory (n = 4; 4%) [174]. We found that theory was most frequently used in intervention design. Data from papers published since Colquhoun and colleagues’ exploration of the use of theory in studies of audit and feedback [13] are presented in Supplementary Materials 2. As illustrated by the quotes, we found that it was often difficult to understand how the authors applied theory; for example, “(we) combined strategies shown to change providers’ behaviour with those based on the diffusion of innovation theory” [24] and “technology-assisted learning resources were also developed using motivational systems and instructional design theory” [45].

We found that 10 studies provided a logic model to describe the intervention. Table 4 summarises the content of these logic models.

Table 4 Components of logic models describing the feedback facilitation interventions

Materials used in feedback facilitation

Feedback facilitation interventions used a range of materials (Supplementary Materials 1). We grouped these into the following categories:

  • Materials to support clinician behaviour change by addressing capability; for example, evidence-based guidelines (e.g. [28, 34, 99]), reminder stickers and cards (e.g. [100, 116, 143]) and written educational materials (e.g. [111, 118, 149]). We identified a subset of these materials that was administrative equipment, such as a patient care record [84], x-ray ordering stamps [93] and ordering sets [99].

  • Materials to address clinician motivation; for example, information about reimbursement [132]. It is possible that some of the other materials described above as addressing capability may have addressed motivation (e.g. relating to patient outcome), although this was not clear from the description.

  • Materials to support patient behaviour change by addressing capability; for example, patient information leaflets (e.g. [155]) and self-help materials (e.g. [121]).

  • Clinical equipment to support clinician behaviour change by addressing ‘opportunity’; for example, testing kits (e.g. [24]) and clinical assessment tools (e.g. [114]).

  • Materials to support the improvement work: help to analyse influences (e.g. a critical event analysis form [58]; a description of ways to use the audit results, including discussions with colleagues, detailed follow-up surveys among patients and the establishment of a patient panel [159]); and help both to select strategies (e.g. written recommendations [27]) and to enact strategies (e.g. an action plan [58]; an amendable template to give information to stakeholders [164]).

Identification of priorities

We explored whether and by whom priorities were identified from within the performance feedback during feedback facilitation. In 43 studies (43%), priorities for improvement were identified by the feedback facilitators; for example, Hendryx and colleagues’ educational outreach included that “the (study) team member reviewed the findings, and offered concrete, practical suggestions for improvement” ([82] p420). In 19 studies (18%), priorities were identified by feedback recipients; for example, Ivers and colleagues provided a worksheet “to facilitate goal-setting” ([91] p3). In 4 studies (4%), there was evidence that priorities were co-designed between the study team and the feedback recipients [60, 61, 84, 85]. For example, Frijling and colleagues provided feedback facilitation where “the facilitator and the GPs discussed the content of the feedback reports, prioritized specific aspects of decision making to be improved and made change plans” ([60] p837). In 39 studies (38%), it was not possible to determine whether or by whom priorities were identified within the performance feedback.

Exploration of influences upon performance

We explored whether and how influences upon performance were investigated within feedback facilitation: In 12 studies (12%), influences upon performance were given by the feedback facilitators; for example, “data presented included hospital-specific baseline performance data and information on knowledge and organizational barriers to stroke care identified by the surveys... (including) organisational barriers such as lack of order sets and pathways” ([99] p1635). In 32 studies (31%), influences upon performance were explored by feedback recipients; for example, “a 90-min standardized small group quality improvement meeting, supervised by the medical coordinator of the diagnostic center… (including) a thorough discussion of the difficulties of achieving changes at the individual primary care physician level, the practice level, or at the patient level” ([157] p2408). In 9 studies (9%), we identified that a description of influences upon performance was co-produced by the study team and the feedback recipients. However, there were blurred boundaries between co-produced identification of influences and influences sought by feedback recipients; for example, where a focus group had a facilitator, it was unclear to what extent the facilitator provided structure or was more directional. In Kennedy et al.’s study, co-ordinators facilitated interdisciplinary care teams to identify “barriers and facilitators to implementing evidence-based strategies, particularly changes that could be made at an organizational level” (p4). In 51 studies (49%), it was not possible to determine whether or how influences upon performance were explored.

Where influences upon performance were given, these were sometimes based upon previous research, including research undertaken as part of intervention development. Influences upon performance were sought by recipients both in discussion and using proformas. Some focussed on specific barriers (e.g. confidence [78]) whilst others used a broader lens; for example, Chaillet et al. [43] described that “the training program also sensitized participants to social, economic, organizational, cultural and legal factors”. Proformas were used to support recipients to explore influences (e.g. [58]). The depth of exploration varied (e.g. a 3-h training session [155] or a 20-min exercise [111]) and could be a collective (e.g. focus group [27]) or an individual exercise (e.g. [116]). Co-production included national analysis followed by local tailoring, information gathering from patients followed by healthcare worker selection, and the sharing of learning between sites.

There were no apparent differences in whether the influences were sought by recipients, given or co-produced depending on whether the feedback facilitation intervention was undertaken in primary or secondary care (Supplementary materials 7).

Determining implementation strategies

We explored how strategies were selected: In 33 studies (32%), improvement strategies were given by the feedback facilitators; in 27 studies (26%), they were determined by the feedback recipients. Improvement strategies were co-designed in 20 studies (19%). In 24 studies (23%), it was not reported who determined the improvement strategies.

The suggested strategies given by the study team were sometimes generic suggestions to all teams (e.g. [52]) and sometimes site specific (e.g. [34]). Where the strategies were determined by recipients, this included doing so with the support of learning from other sites [126] and using a plan-do-study-act template [93]. Co-produced strategy selection included selection from a list of strategies provided by the study team and adaptation of proposed strategies (e.g. [68]). Proposed strategies could be presented in a list by peers (e.g. [47]) and/or described in meetings, webinars or calls (e.g. [146]).

The Sankey chart (Supplementary Materials 3) illustrates the lack of relationship between who identified influences and who identified strategies: In 10 trials (10%), both the identification of influences and the identification of strategies were undertaken by recipients; in 4 trials (4%), both were given by the study team.

There were no apparent differences in whether the actions were determined by recipients, given or co-produced depending on whether the feedback facilitation intervention was undertaken in primary or secondary care (Supplementary materials 7).

Identification of implications of performance

We explored whether feedback facilitation involved identifying implications of performance. We found that implications were given as part of feedback facilitation in 36 studies (35%) and identified by feedback recipients in 7 studies (7%). In 61 studies (59%), consideration of implications was not reported. There were no apparent differences depending on whether the feedback facilitation intervention was undertaken in primary or secondary care (Supplementary materials 7).

Other intervention components

We looked for additional components to the intervention not described above. We found additional components that sought to address capability and motivation: Components to address capability targeted both capability to improve (e.g. [68]) and capability to deliver care (e.g. [89]). Interventions to increase motivation included motivational text messages (e.g. [47]), celebrating good practice (e.g. [82]) and positional leader prioritisation (e.g. [82]). These may have had some impact upon social opportunity by changing the social environment (e.g. giving permission). We did not identify additional components that specifically targeted ‘opportunity’ for the target behaviours, defined as factors that lie outside the individual that make the care or improvement behaviour possible.

Delivery of feedback facilitation

A variety of modes were used to deliver facilitation, the most common being face-to-face (n = 86; 83%) and educational materials (n = 52; 50%). Virtual delivery by telephone (n = 16; 15%) and online (n = 12; 12%) was less common, which is likely to be due in part to the age of the literature. Most studies used one (n = 45; 43%) or two (n = 50; 48%) methods of delivery, with fewer using three (n = 7; 7%) [38, 45, 47, 52, 82, 84, 136].

Frequency of feedback facilitation

Most studies delivered feedback facilitation between 1 and 3 times (median = 3, interquartile range 1–5). Six studies (6%) delivered facilitation 15 times or more [24, 60, 61, 100, 114, 115]. The maximum number of times feedback facilitation was delivered was 42 [115]. Data was not available for 25 studies (24%).

Duration of feedback facilitation

Feedback facilitation delivery took between 15 and 1800 min, with a median of 120 min and an IQR of 75–420 min. For studies with over 420 min of facilitation, delivery occurred over several consecutive days and/or as follow-up calls after initial delivery. Forty-five studies (43%) did not record the delivery time.

Timing of feedback facilitation

Most facilitation was delivered with (n = 37; 36%) or after (n = 32; 31%) feedback delivery, so that the feedback could be reviewed with the participants. Some studies delivered facilitation before feedback (n = 14; 13%), although ten of these (10%) also included facilitation during and/or after feedback. The five studies (5%) that only delivered facilitation pre-feedback all included educational materials. Three of these studies (3%) reported local identification of priorities [39, 42, 58], whilst this was not reported in the other two [43, 76].

Who delivered feedback facilitation

Most facilitation was delivered by experts (n = 50; 48%; e.g. specialist physicians with expertise in osteoporosis or geriatrics [93]), followed by peers (n = 31; 30%; e.g. local co-ordinators [24]) and then quality improvement specialists (n = 21; 20%) (Supplementary Materials 4). Facilitation was delivered virtually through a computer programme in two studies (2%) [39, 151] (Supplementary Materials 1). We discuss challenges with coding this data below.

Who received feedback facilitation

The majority of facilitation was delivered to clinicians (n = 86; 83%), with a smaller number delivered to both clinicians and non-clinical/managerial staff (n = 10; 10%). There were no instances of facilitation being delivered to managers only. In eight studies (8%), it was unclear who the recipients were.

Number of recipients receiving feedback facilitation per site

It was difficult to determine the number of recipients of facilitation per site, with 68 studies (65%) either not reporting this or providing unclear descriptions. The number of recipients per site ranged from 1 to 135. Some studies described both the minimum and maximum recipients per site, whilst others gave averages but no range. Of the 36 studies (35%) reporting recipients per site, most had small groups of 10 or fewer recipients (n = 28; 28%).

Number of intervention sites receiving feedback facilitation

The number of intervention sites ranged from 1 to 811, with a median of 19 and an IQR of 12–38. Data was skewed to the right by 18 studies with more than 50 intervention sites. Two studies (2%) did not report the number of intervention sites [123, 127].

Comparison of recipients per site with number of intervention arm sites

Where both the number of recipients per site and the number of intervention sites were recorded (n = 33; 32%), the trend was for the number of recipients per site to decrease as the number of intervention sites increased; however, this was not statistically significant on linear regression (p = 0.86; confidence interval −0.55 to 0.21) (Supplementary Materials 5).

Number of people receiving the intervention at one time

Most studies (n = 71; 70%) did not record the number of people receiving the intervention at each time. Of the 33 studies (32%) that did, the intervention was delivered most frequently to an individual (n = 9; 9%), and most deliveries were to 10 or fewer individuals (n = 28; 27%). Two studies (2%) [98, 102] delivered to 11–20 people and three (3%) to 21 or more [70, 147, 162]. The maximum number of people the intervention was delivered to at one time was 45 [70].

Comparison of number of people receiving the intervention at one time by setting

Interventions delivered in secondary care were often delivered to a larger number of people than those delivered in primary care (secondary care median = 8, IQR = 2.5–16; primary care median = 3.5, IQR = 1–7.5); however, this was not statistically significant on a Mann–Whitney U test (p = 0.16) (Supplementary materials 6). This is likely due to primary care studies involving feedback to individual practitioners and smaller team sizes compared to secondary care. The lack of studies describing how many people received the intervention at one time makes drawing conclusions difficult.
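
For illustration, a minimal sketch of this setting comparison with made-up (not actual) per-session counts is shown below; the analysis in the paper was run in StataMP 17, and SciPy's mannwhitneyu is assumed here as a re-expression.

```python
# Illustrative Mann-Whitney U comparison of the number of people
# receiving the intervention at one time, by setting. Data are
# invented for the sketch and do not reproduce the review's values.
from scipy.stats import mannwhitneyu

secondary = [2, 3, 8, 12, 16, 20, 45]  # illustrative counts per session
primary = [1, 1, 2, 3, 4, 7, 8]        # illustrative counts per session

stat, p = mannwhitneyu(secondary, primary, alternative="two-sided")
print(f"U = {stat}, p = {p:.2f}")  # p > 0.05 would mirror the reported null result
```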

Level of change sought

Most facilitation sought change at the team level (n = 74; 71%), with fewer studies seeking change at the multi-team organisation level (n = 23; 22%) or across the wider system (n = 5; 5%); two studies directly targeted patient-level change [45, 150]. For example, Clarke and colleagues provided evidence-based education for women through two antenatal classes as part of an intervention to increase the rate of vaginal birth after caesarean section.

Tailoring of feedback facilitation

Only 19 studies (19%) reported tailoring of the intervention delivery. Types of tailoring included tailoring of the content to identified needs, barriers and local context (e.g. [34, 73, 122, 161, 166]) and additional episodes of facilitation in response to need [40, 131]. For example, Quinley and colleagues focussed facilitation on physicians with poorer performance where a practice contained multiple physicians [132]. Brown and colleagues [42] tailored content to the existing level of knowledge and tailored delivery through “the use of a variety of media including individualised tuition and feedback” (p443).

Assessment of fidelity

Assessment of fidelity of facilitation was reported in 41 studies (39%). Where assessed, 27 of the 41 studies reported the fidelity achieved, given either as a range or as mean adherence. Fidelity ranged from 29 to 100%.

Modification

Most studies (n = 60, 58%) did not report whether any modifications to the intervention took place. Of those that did, 18% (n = 8) reported making modifications whereas 82% (n = 36) did not. Examples of modifications included additional re-training sessions [24], modifications due to online system malfunctions [38], changes to the number of facilitation sessions offered [74] and changes to delivery mode; for example, where the source was unable to continue to deliver feedback facilitation in person, later delivery changed to teleconference [93]. Reporting of the presence or absence of modifications to facilitation interventions is improving over time: of the studies that reported on modifications, 53% were published in 2010 or later and 88% since 2000.

Reporting of TIDieR intervention content items

The number of TIDieR items not reported within each study was determined to give a score out of 18. The results are presented in Supplementary Table 1b. The number of unreported items ranged from 2 to 14, with a median of six content items not recorded (IQR 4.75–8). The number of items not reported reduced over time (p < 0.05) on linear regression; however, publication year explained only 5% of the variation. Heteroskedasticity was not present on testing (p = 0.72).

Discussion

We describe the content and delivery of feedback facilitation to support designers of future feedback facilitation interventions. Our systematic review of 146 papers describes feedback facilitation delivered alongside audit and feedback in 104 randomised controlled trials. The papers were identified during the Cochrane review of audit and feedback [21]. The Cochrane review includes an assessment of the effectiveness of feedback facilitation.

We found feedback facilitation to be a heterogeneous intervention containing at least one of 26 different implementation strategies and drawing upon each of the 9 implementation strategy groupings [175]. We found evidence that the number of strategies used per intervention is increasing over time (Fig. 2). To support future delivery of feedback facilitation, we have used this heterogeneity to illuminate previous intervention design choices (Table 5). This is not intended to represent an exhaustive list of choices. In making these choices, guidance (e.g. [14]) recommends that intervention developers draw upon evidence, theory and stakeholder views about patient outcomes, proximal outcomes, mechanisms, context, pre-conditions and/or moderators and the intervention content. Articulating these may support both consideration of the coherence of the intervention and evaluation of whether it was provided as planned. Detailed description of planned and actual content also supports learning and replication of delivery. We propose both future work with stakeholders to evolve the design options and further studies evaluating the impact of these choices upon effectiveness and upon implementation outcomes such as feasibility, appropriateness and acceptability [176].

Table 5 Design options within feedback facilitation v1 (Non-exhaustive)

These design choices have important implications, including those related to tailoring and dose.

In relation to tailoring, the source of both the influences and the selection of improvement actions may impact upon the effectiveness of the intervention; for example, strategies selected by the study team may have a more explicit link to theory and evidence, and may include external stakeholders able to challenge existing mental models. Conversely, the study team’s interpretation of the influences upon performance and the alignment between influences and strategies may differ from those involved in change-making, which might undermine buy-in and create barriers to specification of the change. Future research that investigates the impact of different sources and of co-produced tailoring would support providers of feedback facilitation.

We measured the ‘dose’ of the facilitation and found wide variation, including in the duration (15 to 1800 min), the frequency of facilitation (1 to 42 episodes) and the number of recipients per site (1 to 135). There was also wide variation in the number of people receiving the intervention at once (1 to 45) and in the modes used (e.g. through materials, face-to-face or virtual approaches). Future work could investigate the most (cost-) effective way to deliver feedback facilitation; for example, through the use of SMART optimisation designs [177] with economic evaluations. Such studies should assess the costs of both delivery and receipt. Consideration of real-life scalability would be valuable, given that only one study delivered to more than 150 sites. All studies delivered facilitation to an intervention group. Questions remain about whether an adaptive intervention delivering sequences of feedback facilitation strategies as a co-intervention to audit and feedback, where the type, intensity or modality of the co-intervention evolves according to changing recipient responsiveness to feedback, might be more (cost-) effective.

Implementation strategies may contain different behaviour change techniques and act upon different mechanisms [17, 178]. The heterogeneity of feedback facilitation undermines the ability to draw conclusions about its effectiveness. We found that the ERIC compilation provided a valuable tool for identifying component strategies. However, given more recent work describing potential behaviour change techniques within strategies [178], it would support replication and learning if future papers described the active ingredients (such as instruction on how to perform a behaviour, information about health consequences or social support) within strategies. We identified overlap in the content of ERIC strategies; for example, learning collaboratives often contained educational meetings, re-examination of implementation and small tests of change, whilst other studies that also delivered these elements to multiple sites at once might not be described as a learning collaborative. Where an intervention fell in the overlap between ERIC definitions, we used the terms used by the authors to categorise the intervention components. We were unable to code motivational text messages [47] using ERIC and included them as an additional strategy. Similarly, we determined that ‘clinical decision support systems’ incorporated both ‘change the clinical record system’ and ‘remind clinicians’ as the closest match. We found that 47 ERIC strategies were not incorporated into feedback facilitation (Supplementary materials 8); these may provide alternative content for future feedback facilitation providers, for example, to promote adaptability.

We explored whether reporting was improving over time. We found that later studies had fewer unreported TIDieR items, as expected given changes in publishing requirements, but publication year explained only 5% of the variance. Further action to improve reporting may be needed to support interpretation of results, replication of interventions and the advancement of implementation science. We draw particular attention to the omission of the rationale and of the proximal target of the intended change.

As recommended in TIDieR [11], we sought the underlying rationale for the use of feedback facilitation: 35 studies referenced the use of theory and 10 studies provided a logic model describing their programme theory. Understanding the underlying rationale for an intervention supports replication, as adaptation around core components increases fit to the new context [14]. Describing the programme theory of an intervention also supports interpretation of results; for example, consideration of the coherence of the intervention, the proposed mechanism of effect, the context, the work being done by the intervention recipient and the assessed outcomes. Detailing causal pathways helps advance implementation science [17]. Within the 10 trials that had a logic model, there were gaps in the reporting of mechanisms (reported in 6 studies) and of contextual, predisposing or moderating factors (reported in 5 studies); studies reporting this detail dated from 2011 onwards.

We found that feedback facilitation interventions sought to address motivation and capability. This was evidenced within the proximal and distal outcomes where logic models were provided; the intervention materials (e.g. providing guidelines, detailing impacts upon outcomes, providing information about reimbursement, and providing patient information and self-help materials); and the additional components. In relation to capabilities, the interventions sought to target both capabilities to improve (e.g. support to analyse influences upon care using a critical event analysis form or an action plan template) and capabilities to deliver care (e.g. reminder cards or guideline documents). However, the target of the intended proximal change was often unclear; for example, whether education targeted improvement capabilities or knowledge about clinical care. The behaviour change literature (e.g. [179]) recommends specifying the target behaviour prior to the development of interventions. Omitting this information again hampers replication and the advancement of knowledge about what influences different behaviours. We found few examples of interventions addressing opportunity. Interventions may be enhanced by supporting the opportunity to undertake the improvement work; for example, by explicitly bringing that work into a workshop [9].

Strengths and limitations

There were minor variations from the protocol: We had planned to exclude papers that provided training in the target care practice, rather than in the use of feedback, but found that it was not possible to identify the target behaviour of such training. We also planned to explore the extent to which the co-intervention was solely feedback facilitation, but heterogeneity within feedback facilitation undermined our ability to assess this.

We sought the presence of a logic model, as recommended by guidance [19]. More recent guidance [14] recommends that a logic model is accompanied by a more detailed description of the programme theory; some studies (e.g. [123]) provided a narrative summary of the programme theory without a logic model. We included 146 papers describing 104 trials; however, as with all reviews, there is a risk that we missed papers. We focussed on feedback facilitation within trials, which may differ from feedback facilitation undertaken outside of clinical trials. We included one paper [156] which described feedback facilitation alongside audit but was subsequently excluded from the Cochrane review due to the nature of the outcomes measured.

Our data extraction template was adapted from the TIDieR framework, with the addition of prompts to seek the strategy type, whether/how priorities for improvement were identified, whether/how influences upon performance were sought, whether/how strategies were selected and whether/how implications of performance were identified. Whilst we also sought other components to the intervention, it is possible that different prompts may have identified alternative factors important to the design and delivery of feedback facilitation. It is possible that increased granularity through categorising at the level of behaviour change technique (BCT) rather than ERIC strategy may have been useful; however, gaps in recording would have been amplified at the active ingredient level. It is also possible that future feedback facilitation reviewers will seek information about the mode of delivery, which is found in ERIC strategies but missing from BCTs.

We resolved disagreements through discussion but did not keep a record of the content of the discussion. The intervention deliverer (e.g. expert, peer) was difficult to assess from the information provided. It is possible that what is key is whether deliverers are perceived as ‘experts’ or ‘peers’ (for example, whether they are viewed as a ‘credible source’ [180]), an assessment which might be made by each participant rather than on the basis of a job title. In piloting, the reviewers found it difficult to agree on whether strategies addressed capability, opportunity or motivation. As a result, this was assessed by two reviewers (MS and SA) with training and experience in using COM-B [15] as part of a focussed assessment of the target of specific strategies. We focussed on the content and delivery of facilitation, the characteristics of the feedback recipients, the setting and the level of change sought. We did not collect information about the target behaviours upon which feedback was given. Further work to explore the relationship between characteristics of the target behaviour(s) and the content and delivery of feedback facilitation may identify additional design choices.

Conclusion

Feedback facilitation is a much-used intervention delivered alongside large-scale audit and feedback to increase its effectiveness. Health system policy and theory-informed hypotheses advocate for the delivery of feedback facilitation, often referred to as support for quality improvement. We describe heterogeneity in the design of feedback facilitation, highlighting some of the design choices for future providers (Table 5). We were able to describe the components within feedback facilitation using ERIC, though we identified opportunities for minor clarification of terms and for intervention providers to give greater specificity. Whilst reporting demonstrated extensive gaps, hindering replication and learning, there was some evidence that reporting is improving over time. We recommend future work to consider the role of ‘opportunity’ within intervention designs and the use of evaluation techniques to maximise intervention efficiency.