Background

Audit and feedback (AF) is a widely published method of providing performance data to physicians to help them translate knowledge into practice [1]. It has been shown to be more effective in helping physicians change their behavior than many traditional models of professional development [2]. However, the effectiveness of published AF interventions is variable [1]. Several authors have called attention to this issue, citing the need for further study so that the reasons for varied effectiveness of AF can be more fully understood and addressed [3,4,5,6,7].

Here, we extend our previous work, which examined physician responses to a novel type of audit and group feedback (AGF) and presented a conceptual model of physician responses to AGF sessions (AGFS) [8]. We observed that physicians engaged in planning for change more robustly in some AGFS than in others. The present study explores factors that influenced the social interactions in those AGFS.

Our understanding of what influences AF effectiveness is informed by three main areas of literature: implementation science, motivational and behavior change theory, and the educational feedback literature [5]. Colquhoun et al. have emphasized the need to draw from these different domains in order to optimize AF design and implementation [5].

Frameworks and theories from these fields can help us to understand the factors that influence implementation [9,10,11,12,13,14,15,16,17,18,19]. One widely cited, evidence-based framework is iPARIHS [17], which identifies four key domains that can be used to determine why an intervention may or may not be successful: the innovation, the recipient, the context, and the facilitation [17].

The authors of iPARIHS described facilitation as the “active ingredient” for implementation [17]. It was defined as “the construct that activates implementation through assessing and responding to characteristics of the innovation and the recipients” in context ([17], p 8). iPARIHS attributes the success of implementation to whether the facilitator can enable the recipients to make the desired change [17].

Many published AF interventions describe passive, non-facilitated feedback in the form of physician ‘report cards’; this lack of attention to the facilitation of feedback is a potential criticism of AF and may explain some of the variation in the effectiveness of AF interventions [1, 4, 5, 7].

Brehaut et al. published 15 recommendations to enhance effectiveness of “practice feedback” [7]. Those relevant to the current study include choosing the right items on which to provide feedback (aligned with local priorities, actionable, specific), providing individualized data with relevant comparators, integrating summary messages and data visualization, using social interaction to construct feedback, managing cognitive load, and addressing barriers to change [7].

The use of social interaction to construct feedback underpins the design of AGFS described in the present study and stems from Social Learning Theory [20] and the work of Vygotsky and Ajzen, who emphasized that efficient learning can occur through observation of others’ behaviors, the rewards or consequences of others’ behaviors, and the social norms that develop in group settings [7, 20,21,22].

Similarly, the Theoretical Domains Framework (TDF) explores factors that impact behaviors [23]. The TDF identifies 14 domains to consider in implementation [23]. These include knowledge and skills, beliefs about capability for change, goal setting, environmental context, and social influences on implementation [23].

The medical education literature about feedback uptake highlights several aspects of feedback delivery. Ideally, feedback occurs in an in-person, facilitated, coaching-oriented manner within the setting of a trusting, respectful relationship between the provider and recipient of feedback [24,25,26,27].

In the R2C2 model, Sargeant et al. emphasize the primacy of the relationship between feedback providers and recipients [25]. Likewise, the R2C2 model focuses on understanding and accepting feedback prior to coaching for change [25, 28, 29]. Watling et al. highlight the value of the credibility of the feedback provider as perceived by the recipient [24, 26]. In an overview of the educational feedback literature, Telio et al. proposed a construct of “educational alliance,” which may influence feedback uptake [27].

The Calgary office of the Alberta Physician Learning Program (CPLP) delivers AF to groups of physicians. This team has delivered over 30 AF projects on various clinical topics, locally and provincially, addressing practice variation and appropriateness. Some projects engaged physicians more than others, and we wished to explore why.

In a previous study, we described our approach to AF: audit and group feedback sessions (AGFS) [8]. These AGFS were designed based upon the principles of social learning theory and best practices from the education feedback and implementation science literature [7, 17, 23,24,25,26,27]. In the AGFS, physician groups participated in face-to-face [25, 27], facilitated group feedback sessions with peers, during which they reviewed reports containing their own individualized performance data (along with anonymized peer comparators) in order to identify opportunities, barriers, and enablers for making change [7, 17, 23, 25,26,27].

We investigated the behaviors of physicians in AGFS in order to capture their reactions and engagement with the data and how the presence of peers influenced the direction of group discussions [8]. Through a qualitative analysis, we developed a conceptual model of how physicians react and interact in AGFS. Physicians expressed initial reactions to the data (skepticism, interest) and then transitioned through several discrete behaviors: understanding and questioning, justifying and contextualizing the data, and reflection and sharing of practices, before beginning to raise ‘change cues’ and make change plans. Change cues were defined as “turning points in the group discussion, initiated by a brief comment highlighting the importance of a performance gap revealed by the data reports,” and were usually raised by a group participant rather than the facilitator, who was not a group member [8].

Qualitative analysis of the AGFS transcripts showed that the degree of interaction and engagement of the physicians varied between AGFS, as did the groups’ tendency to raise change cues and plan for change [8]. This follow-up study sought to explore the factors mediating social interaction during the six AGFS.

We present the results of a comparative case study of the AGFS described in the previous study [8]. The aim was to understand what factors contribute to the successful engagement of physicians in change planning and to develop a practical, evidence-informed tool to guide AGF design and implementation.

Methods

Ethics approval for this work was received for each case from the Conjoint Health Research Ethics Board: REB13-0075 (case 1); REB14-0484 (cases 2a, 2b, 2c, 2d); REB13-0459 (case 3).

Setting

This analysis of the work of the CPLP between 2014 and 2016 took place at the Cumming School of Medicine at the University of Calgary. The CPLP is funded by the provincial medical association to deliver audit and feedback reports about individual practice performance to Alberta physicians. Most projects are initiated when a physician group (such as a department or clinic) approaches the program with a clinical question. Program staff clarify the question; if it is amenable to AF, an algorithm for data extraction is developed and data is accessed from relevant provincial repositories to create the AF report. Each project has a unique, multi-page audit and feedback report reflecting the amount of data and information needed to answer the physician group’s questions. Confidential reports for participating physicians contain individual data, anonymous group comparators, and relevant references reflecting best practices.

The project culminates in a facilitated AGFS in which consenting physicians receive their AF reports at least 1 week before the session, work as a group with a CPLP facilitator to review them, and identify opportunities, barriers, and enablers for change [10, 23]. In this study, across all AGFS, the facilitator was the CPLP medical director, who was not a member of any of the physician groups. The sessions were attended by CPLP staff who observed the process and served as project managers for each case from conceptualization to delivery.

The process is depicted in Fig. 1.

Fig. 1
figure 1

The CPLP process from clinical question to AGFS. Physician groups bring clinical questions of interest for review by the CPLP. The CPLP team reviews the questions for appropriateness for audit and feedback. Consideration is given to impact, reach, actionability, and accessibility of the data. The CPLP collaborates with data custodians to create individualized AF reports for consenting physicians. The confidential reports include individual data with anonymous peer comparators and relevant best practice information. Consenting physicians then participate in a facilitated group feedback session with their peers, led by a CPLP facilitator and/or a participant facilitator. As a group, the physician peers review each aggregate data point, along with their own performance reports, and seek opportunities for practice improvement

Study design

We wished to understand why social interaction between physicians in AGFS varied across cases [8]. Comparative case analysis is an appropriate approach when context, culture, and system factors may influence a program [30, 31]. We used framework analysis to build the individual cases for this research [32,33,34,35]. The overall design of the project was as follows: (1) identification of data sources, (2) familiarization with data sources, (3) development of a program model or “change theory,” (4) creation of a framework table from the program model, (5) framework analysis to extract and index data to build individual case studies, (6) comparisons across cases to identify similarities and differences believed to influence social interaction, (7) thematic organization of similar findings into key “mediating factors,” (8) creation of an organizing framework linking the mediating implementation and design factors for social interaction in AGFS to the conceptual model of physician behaviors [8].

Participants

Participants included the physicians who participated in the AGFS and the CPLP staff who created the AF reports and delivered the AGFS through collaborative relationships with the physicians. AF participants provided written informed consent to allow access to their administrative health data for purposes of creating AF reports and to record, evaluate, and study the AF sessions in which they participated.

Characteristics of staff who contributed are described in Table 1. The staff worked closely with the physician groups to develop the AGFS and observed the sessions. A typical project would take 1 year to complete. The CPLP staff had extensive longitudinal contact with members of the physician groups and familiarity with processes and contextual and cultural elements that were observed over the course of the projects.

Table 1 Descriptions of training and roles of CPLP staff and the research team

Descriptions of the research team are also included in Table 1 in order to acknowledge their orientation, positions, and perspectives at the outset of this research, as these perspectives likely inform our analysis.

It is important to note that while CS, LR, DD, and LC participated in the development and evolution of the CPLP processes over time, HA and SD were not involved in the building of the original projects or evolution of the processes described in this study, but rather joined the research team afterwards. Their perspectives were routinely sought in an effort to balance and mitigate any biases or pre-conceived ideas of other team members more intimately involved with the cases.

Case definition

For purposes of this study, a “case” was defined as an AGFS with an individual physician group, including the processes of working with the group to refine the question, prepare the report, and coordinate and facilitate the AGFS.

Cases selected for this study included all AGFS that took place through the CPLP between January 2015 and January 2016. These cases were selected because they were offered in short succession, such that researchers could begin to understand the respective cultures, patterns, and influences within each group.

Data collection

The research team began by identifying possible data sources from which to derive the six case studies. Data sources are listed and described in detail in Table 2. These included sample AF reports from each case, the transcripts and qualitative analysis from the prior study [8], basic data about the projects from a CPLP tracking database (number of reports, dates, ethics approvals, etc.), a process evaluation document that was written for case 1, field notes that were collected directly into our framework table to capture the observations of the research team as they explored the cases, and notes from structured interviews with CPLP staff who were present at the AGFS to corroborate and validate the findings of the research team (detailed later in the methods section).

Table 2 Description of data sources for the framework analysis and how they were used by the research team

Next, the research team built a program model comprised of possible elements influencing the AGF projects [30, 31].

Building the program model

Developing a program model is an important early step in a comparative case analysis [30, 31]. The model is comprised of elements that are expected to influence the cases [30, 31]. The research team worked collaboratively over several meetings to diagram the program model based on literature describing factors that influence implementation success and acceptance of feedback [7, 17, 23,24,25,26,27]. Based on the team’s tacit knowledge, derived from their collective experience in developing and delivering audit and feedback and educational feedback as well as their familiarity with the literature, additional elements not specified in published frameworks were added to the program model. The team met, drafted, and re-drafted the model iteratively until there was consensus on the likely factors influencing social interaction in AGFS. This program model, described in the “Results” section, was used as the a priori framework for collecting and organizing information from all of the data sources (Table 3).

Table 3 Program model for the comparative case analysis

The program model became the a priori framework into which all data was later indexed in order to build each case study. The framework was put into a table (the “framework table”) with the elements of the framework on the vertical axis and the cases on the horizontal axis.

Three 60-min interviews structured upon the a priori framework comprised one source of data about the cases. The interview participants (the CPLP medical director and two CPLP project managers, detailed in Table 1) were asked to describe their observations of the cases with respect to each of the framework components. Their responses were recorded directly into the framework table and were used to corroborate and/or add to the observations and findings of the research team. The notes from the interviews were reviewed, condensed, and reviewed again by the research team and summarized in the case analysis table for purposes of the publication.

Data analysis

Framework analysis was the primary mode of data analysis in this project [32,33,34,35]. This is a method of organizing textual data according to an a priori “coding” framework and is commonly used in social sciences research [32,33,34,35]. The first step involves familiarization with the data through iterative reading and discussion. Familiarization with the AF reports and the qualitative data from the prior study took place during project development within the CPLP and in conducting the prior study [8]. Next was developing the program model and a priori framework as outlined in the previous section. Data were indexed in the framework table by searching the data sources for evidence for each element of the a priori framework in each case and populating the table with findings. If observations from the data sources did not align with the framework, there was opportunity to add additional elements.

The research team then met to review the case studies and develop consensus on the case descriptions. Corroborating evidence for the observations was sought by triangulating the evidence from different data sources.

The next step in data analysis was the case comparison. Once consensus was reached on the case descriptions, the research team worked collaboratively to identify similarities and differences across the cases. The team then considered whether each of these similarities and differences influenced the social interactions in the respective AGFS.

The final step in the analysis was to create an organizing framework. The research team iteratively diagrammed a framework that captured and organized factors that were identified as likely influences on physician engagement in the AGFS. Similar elements were grouped thematically under a single “mediating factor” for AGFS success. Each mediating factor was then ordered to reflect the CPLP processes and the role these processes might play in the physician behaviors identified in the previous study [8].

Results

Participants

A total of 99 AF reports were distributed and 50 physicians participated across the six AGFS. Two CPLP project managers and the CPLP Medical Director were interviewed for data collection.

Program model

The program model, shown in Table 3, is comprised of eight elements. These elements were derived from two widely cited frameworks that describe influencing factors in implementation [17, 23], Brehaut’s recommendations for enhancing “practice feedback,” and elements from several models of educational feedback [7, 24,25,26,27]. In addition to the factors derived from the literature, the team added the following elements: AGFS interactivity and change talk, and group dynamic.

Case descriptions

Case 1 was a project exploring practice variation with specialists involved in a specific surgical procedure. The group demonstrated a collegial relationship during the AGFS, with a high frequency of sharing and reflecting on common practices and raising change cues, which led to change talk between the participants during the AGF session. Representative examples are listed in Table 4.

Table 4 Representative examples of data that were used in building the case studies. This table shows how raw data were indexed to the components of the program model and, where relevant, how these data were linked to the key elements of the CAFF model, in order to demonstrate how inferences were drawn between the original raw data, the program model, and, ultimately, the framework

Participants in case 1 were familiar with the CPLP approach and staff from a prior project with the program. Their AF reports were co-created with input from the physician lead and three other group members. The physician lead was a highly respected leader in the group. These “group champions” were involved with all aspects of project design, from the clinical question to the AGFS. The project lead shared their individual data with the group while co-facilitating the AGFS. This group’s clinical question related to treatment decisions that were largely under their direct control.

Cases 2A–D addressed a de-prescribing question derived from Choosing Wisely Canada recommendations. These cases took place at four adult hospitals, each with separate groups of physicians, leaders, and institutional cultures, and each serving a unique patient population in a different part of the city. For these reasons, these AGFS are treated as separate cases. These physician groups were reviewing baseline data. While there were varying degrees of change talk and interactivity across these four AGFS, it was uncommon across all four AGFS for physicians to compare and share their practices with one another.

Case 2A occurred at a hospital serving an area with a large immigrant population. This physician group also provided palliative care. The AGFS focused heavily on discussion about the uniqueness of the patient population. There was skepticism about the data, with participants attributing the variations in prescribing to the uniqueness of their patient population. The project lead was from this group but was unable to attend this session. The group discussion was interactive and some change cues arose (see Table 4).

Case 2B took place in a very large tertiary academic health center serving a complex, high-acuity population. The group comprised physicians who had worked together for many years. The session was interactive and led to change planning. A nurse with a formal quality improvement role attended this session. A physician participant prompted several interactions around change planning with the allied health staff who shared in-patient care.

Case 2C took place in a smaller, community-based hospital serving a predominantly elderly population. One of the physicians involved in the project was based within this group, attended the session, helped facilitate, and shared their data with the group. This session was less dynamic than cases 2A and 2B. During the AGFS, there were few instances of comparing and sharing practices; however, two change cues were raised and led to change talk around improving documentation and discharge summaries as a way to improve community prescribing.

Case 2D took place in a newer, smaller hospital on the outskirts of the city. Because of its short history, this hospital has a distinct physician and medical culture, with a focus on patient- and community-centered care. Participants expressed the belief that the make-up of their group and their consultant colleagues influenced their prescribing behaviors (Table 4).

This session was the least interactive of our six cases; physicians did not raise change cues, share their practices, or discuss making changes. The facilitator posed few questions to the group in this session, and few questions were raised beyond trying to understand the data that was presented. Participants pointed out that the AF report was quite difficult to interpret, something that was reflected across cases 2A–D in the amount of time spent questioning the facilitator to understand the data during these AGFS (see Table 4). The facilitator expressed that this group did not seem “interested” in the AGFS.

Case 3 was a project with a group of specialists addressing broad questions about practice variation in relation to anesthesia for various surgical procedures. In this AGFS, there was a high degree of interactivity around change cues, which led to change planning, as well as comparing and sharing of practices. Participants raised ideas about additional rounds and education, improvements in charting, and working with other system “actors” to foster improvements in care. The participants were young and very collegial with one another. Several of them had reportedly trained together. The medical lead for the project co-created the questions with the CPLP medical director, helped with report design, pre-circulated relevant journal articles to the participant group, and shared their data with the group during the presentation.

Comparative case analysis

The results of the comparative case analysis are summarized in Table 5. While the authors recognize that physician behavior change was the goal of undertaking the AGF projects, the focus of this study was the factors influencing whether physicians engaged with the data and change planning during the AGFS. Thus, the “success” of the individual sessions was gauged based on the following three criteria: (1) the physicians engaged in the group discussion about the data, (2) change cues that arose were followed by change talk, and (3) further action was taken based on the AGFS to address identified gaps. There were numerous additional outputs that arose following the AGFS studied, and these are captured in Table 3 but not elaborated here as they are outside the scope of this paper.

Table 5 Summary of results of comparative case analysis for six audit and feedback sessions with different physician groups. The program model elements, which formed the framework table used for data analysis and indexing, are on the vertical axis of this table, and the cases are listed across the horizontal axis. The results presented below are summarized, for purposes of the publication, from the detailed notations captured in the original framework tables. The data sources from which the data were extracted are described in Table 2

In comparing across cases, the most change talk and planning occurred during cases 1, 2A, and 3. Comparative case analysis highlighted several key elements fostering physician engagement with the data in these groups. These included the pre-existing relationship between the CPLP and the physician group (case 1), the active involvement of a physician leader from within each AGF group (cases 1, 2A, and 3), co-creation (between group members and CPLP) of the clinical questions and AF report designs (cases 1 and 3), perceived control over the behaviors being measured in the AF report (cases 1 and 3), intrinsic group dynamic or cohesiveness (cases 1, 2A, 2B, and 3), and the approach of the facilitator leading the sessions (co-facilitation in cases 1 and 3 and coaching-oriented facilitation with prompts and questions in cases 1, 2A, 2B, and 3).

The physicians’ tendency towards sharing and comparing practices during AGFS was greater in AGFS 1 and 3, which were based on clinical questions about practice variation, than it was in cases 2A–D. The data visualization for the AF reports in cases 2A–D was overly complex. During AGFS 2A–D, a disproportionate amount of time was spent by the participants on understanding the reports. Participants in AGFS 2D commented specifically on this finding. In addition, the extent of interactivity in AGFS 2A–D diminished with each successive case. In reviewing the transcripts, and in interviewing the facilitator and CPLP staff who observed the sessions, it became apparent that the facilitation style changed across those four cases. In cases 2A and 2B, the facilitator asked many questions of the participants, prompting them to consider their data. In contrast, in case 2D, the facilitator asked very few questions and the participants made few inquiries about the data. Indeed, the transcriptionist described AGFS 2D as a “lecture rather than a focus group” after transcribing the session.

Developing an organizing framework

In reviewing the similarities and differences across the cases, and grouping similar items, four overarching themes emerged: relationship building, question choice, data visualization, and facilitation. In organizing these in a way that reflected both the processes used by the CPLP and the observed physician behaviors in AGFS, it was possible to link the design/implementation factors with the cycle of predictable physician behaviors [8] to create a practical framework for AGF project planning and design. The Calgary Audit and Feedback Framework (CAFF) is shown in Fig. 2.

Fig. 2
figure 2

The Calgary Audit and Feedback Framework (CAFF) for the design and delivery of AGF. This model organizes the key findings from our case analysis that were identified as important drivers in moving physicians through the cycle of discrete behaviors that occur in AGFS towards the end goal of planning for change. Under each “mediating factor” are listed the distinguishing elements between the cases with more or less social interaction that emerged from the comparative case analysis. The framework is linked to the conceptual model of physician behaviors in AGFS to show how the different mediating factors drive the behaviors towards change

Discussion

The audit and feedback literature consists of many published interventions demonstrating that AF can be a useful means to change physician behavior [1]. However, it has been pointed out that there are gaps in the design and implementation of AF and that these may account for some of the variability in the effectiveness of published interventions [1, 3,4,5,6,7].

The CPLP developed a novel way to deliver AF that aimed to address some of the design and implementation elements emerging from the AF and educational feedback literature: AGFS [4,5,6,7, 9, 24,25,26,27]. Recognizing that social learning was a key ingredient in pivoting AGFS towards change planning, we wished to explore why some AGFS were more socially interactive than others [8, 20].

In this study, we present the results from a comparative case analysis of six AGFS and, based on these findings, we propose a practical, evidence-informed framework for the development and implementation of AGFS. We have termed the product of this research a “framework” based on Nilsen’s definition:

“Frameworks in implementation science often have a descriptive purpose by pointing to factors believed or found to influence implementation outcomes.” ([36], p 2).

In the Calgary Audit and Feedback Framework (CAFF), the key findings from the case analyses were grouped into four “factors” and aligned with the conceptual model of physician behaviors to demonstrate linkages between the design and implementation elements and the desired progression of the physicians towards change planning in AGFS (Fig. 2).

The Calgary Audit and Feedback Framework

The intent in developing the CAFF was to provide a concise way to organize our findings and to delineate how the various design elements could drive physicians towards planning for change in a socially constructed learning environment (Fig. 2). The first two factors of the framework, relationship building and question choice, help physicians in AGFS overcome potential barriers to acceptability of the feedback (for example, skepticism, mistrust, or non-actionable feedback). Quality data representation is needed to facilitate understanding and interpretation of the data. Facilitation as a mediating factor depends on the trusting, collaborative relationships between the feedback providers and recipients and helps participants move from understanding the data to change planning. Each of the identified mediating factors is supported by existing literature and can be used to help us understand how social interaction drives physicians towards change planning.

Building relationships

Bing-You et al. identified that feedback recipients incorporate feedback into their learning inconsistently [37]. This may be mediated by failure to create an “educational alliance” between the providers and recipients of feedback, the credibility and constructiveness of the feedback, and/or the nature of the relationship and trust between the providers and the feedback recipients [25,26,27,28]. To mitigate skepticism and enhance feedback acceptability, the primacy of establishing respect and trust between the “provider” and “recipient” of in-person feedback cannot be over-emphasized [25, 27,28,29].

We found that when present, skepticism about the data was a barrier to moving physicians in AGFS towards change talk. Groups who had prior experience with the CPLP program had a working relationship with our team and appeared to engage with their data and the facilitator readily.

Key elements of relationship building in our program include empowering a group member to co-facilitate the AGFS and clarifying the use and confidential nature of the AF reports as well as the limitations of the data. These elements necessitate advance planning and direct contact between the individuals developing the AGF project and the prospective AGF recipients in order to forge an “educational alliance” [27]. We suggest the successful creation of this alliance is reflected in the subsequent engagement of the participating groups in additional projects with the CPLP. Co-creation also appeared to be a critical component in the development of the educational alliance: cases where co-creation was emphasized in project design had the most interactive AGFS.

Question choice

A key contributing factor to intervention effectiveness is ensuring that the intervention selected is appropriate to the desired behavioral change [10, 11, 19]. The Knowledge-to-Action Cycle and the “Behavioural Change Wheel” provide guidance for the choice of intervention for fostering change depending on the nature of the desired behavior change [18, 19].

It follows that not all questions about physician performance are appropriately dealt with by using AF interventions [7, 10, 11, 19, 23]. Choosing metrics over which physicians have little control or for which there is no gold standard is unlikely to result in feedback acceptance [7].

Brehaut and others emphasize the importance of actionability of the feedback provided in AF [7, 10, 19, 26]. Watling et al. found that perceived “constructiveness” of feedback mediates its acceptance by learners to some extent, and we suggest that the perceived “actionability” described in the AF literature [7] and perceived constructiveness of educational feedback are similar constructs [7, 26]. Likewise, the availability of best practice evidence or gold-standards in addition to anonymized peer data with which to compare an individual’s performance may be an important component of the perceived “constructiveness” of the feedback [7, 24, 26].

In AGFS (cases 1 and 3) in which the physicians had direct control over the outcome in question, participants focused readily on the evidence, reflection, and change planning. In contrast, in cases 2A–D, physicians expressed reservations about their ability to make change because of other “actors” in the system who could influence the outcome (e.g., allied health). We observed that this perceived “actionability” seemed to be mediated through the proximity between the physicians’ behavior or decision and the outcome being measured.

Representation of the data

How the data is presented in an AF report affects how easily the participants can understand the report and move on to making meaning from the findings and looking for change opportunities. Our analysis revealed that the data reports for cases 1 and 3 were simpler in their design than the others (Figs. 3 and 4); this may be because the project leads were more intimately involved in co-creation of the questions and the reports. This highlights the importance of managing cognitive load, building “end-user” testing of the AF reports into the design of an AGF project, and the value of co-creation of the learning [7, 38].

Fig. 3
figure 3

Representative example of a single data table from a generic report from the case 1 project. Physicians appeared to be readily able to interpret these graphs during the AGFS, allowing more time for reflection and planning for change

Fig. 4
figure 4

This graph is an aggregate exemplar of how the data for cases 2A–D were initially presented. The goal was to show physicians whether they were discontinuing sedatives and anti-psychotics in patients who were admitted with a pre-existing prescription, whether the patients were started on these medications in hospital, and whether they remained on them after discharge. A desirable outcome for a patient on sedatives or anti-psychotics either before or during their hospital stay was considered to be discharge from hospital without a prescription for these medications. Physicians found these graphs challenging to interpret, resulting in disproportionate AGFS time being spent on clarifying and questioning

Facilitation of the AGF session

The approach to facilitating the AGFS is pivotal to ensure that participants in the session move from interpreting the data to planning for change. The iPARIHS framework identifies the facilitator as playing the central role of fostering interaction between the participant, their data, and their context [17]. The iPARIHS authors emphasize that experienced facilitators can grasp the nuances of system and organizational factors (context) that may influence implementation of a change intervention [17]. The need for an understanding of context lends further support for the role of co-facilitating feedback with a group member. A non-group member facilitator, while expert at interpreting the data, risked missing relevant change cues [8]. Physician engagement was highest when a respected group member co-facilitated AGFS, and we speculate that this resulted in positive credibility judgements by the other participants [10, 11, 19, 23, 26]. We suggest that training a member of the recipient group to co-facilitate or lead AGFS will enhance acceptability of the feedback and that, with their privileged knowledge (as a group member) of the context and culture of the group, they will be better able to capture change cues and incorporate elements such as barriers and enablers of change to support action planning [17].

Sargeant et al. highlight that the facilitator’s role in a feedback session should be coaching-oriented [25]. The coach-facilitator helps participants to navigate through their reactions to data, to understand their data, and then to plan for change [25]. The “coaching-oriented approach,” with prompts and questioning, is essential. In our study, when prompts and questioning were not used by the facilitator (perhaps because of facilitator fatigue over the course of four AGFS), there was very limited social interaction in the AGFS [25].

Limitations and future research

There are several limitations of this study. Our findings of physician behaviors in AGFS represent the collected observations over the course of six AGFS, five hospitals, and three specialties, but included only consenting physicians. The authors acknowledge the risk of selection bias in our participants. Another potential limitation is the perspectives of some of the research team members, who were intimately involved with the leadership of the CPLP when this work was carried out. The authors attempted to balance important tacit knowledge that informed the program development against the risk of bias from pre-conceived ideas by acknowledging their perspectives, bringing on experienced research team members who were not intimately involved with the CPLP at the time of the analyses, and triangulating across data sources to corroborate our observations. Finally, it may be argued that cases 2A–D should be treated as one case because they all addressed the same clinical question. However, the authors were interested in exploring the contextual and cultural factors that influenced social interaction in AGFS, and because cases 2A–D occurred in different settings with different participants, it was felt that these cases should be treated separately [17, 23].

The framework we present is a synthesis of our findings from analysis of AF projects designed and delivered in a novel way to address recommendations for enhancing AF in the literature [7]. While it appears that design and implementation elements used by the CPLP to deliver AGFS promote social interaction, prospective evaluation and refinement of the CAFF will be important next steps.

Conclusion

Based on the findings of our study, we present a practical, evidence-informed approach for the design and delivery of AGFS in a way that links design and implementation elements (relationship building, choice of question, quality of data representation, and facilitation style) to the anticipated behaviors of physicians participating in AGFS in order to promote social learning and behavior change.