Background

Feedback plays an important part in medical education and residency training. It has been studied for decades, and many authors have presented their own suggestions on how to deliver effective feedback, with multiple definitions developed in the literature [1, 2, 3, 4]. An early, widely referenced paper by Dr. Jack Ende described feedback as “an informed, nonevaluative, objective appraisal of performance intended to improve clinical skills”, while a more recent literature review proposed a more detailed description: “a supportive conversation that clarifies the trainee’s awareness of their developing competencies, enhances their self-efficacy for making progress, challenges them to set objectives for improvement, and facilitates their development of strategies to enable that improvement to occur” [1, 5].

As the concept has proven difficult to define, it is not surprising that no consensus has been reached on an ideal model for the delivery of feedback. Nonetheless, the importance of feedback remains unquestioned, as it can prompt trainees to evaluate their performance more critically, learn from previous mistakes, and improve their clinical skills in future practice. The Accreditation Council for Graduate Medical Education (ACGME) has placed an emphasis on the importance of feedback by annually surveying residency programs and inquiring about the depth of feedback that residents feel they receive within their program.

Various impediments to feedback are frequently cited. Lack of time for feedback delivery is often reported anecdotally, though direct evidence for this is difficult to find [3]. Though there are many styles and methods of providing feedback (e.g., written comments, formal in-person feedback sessions, and informal feedback delivered in real time [6]), feedback delivery can still be quite challenging for the person tasked with providing it. Some have suggested that the process may be inhibited by fears of retaliation or a desire to preserve a good working relationship [7, 8]. However, recent evidence indicates that residents do want to receive feedback, particularly on higher-risk procedural skills [9].

One of the less-studied challenges related to clinical feedback involves identifying which forms or aspects of feedback residents believe are effective. For example, faculty may provide feedback to residents during brief bedside encounters, yet, given the informal nature of these interactions, residents and other learners may not fully recognize them as feedback. Yarris and colleagues identified differences in resident versus attending perceptions of specific aspects of clinical feedback. Of note, the attendings in that study tended to report higher satisfaction with the quality of the feedback they gave than the residents did with the feedback they received [10]. Along the same lines, it has also been observed that medical students and their teachers disagree on the frequency of feedback received or given, respectively [11]. This suggests there may be a disconnect in teacher-learner perceptions of feedback within medical education.

Van de Ridder et al. reviewed the literature and identified 33 factors that affected feedback, yet very little was found that addressed the feedback provider’s and recipient’s perceptions of the quality of that feedback [12]. Liberman et al. found significant differences in resident versus attending perceptions of the frequency and quality of feedback [13]. The reasons for these differences in perception are unclear. This study begins to address this gap in knowledge by characterizing which encounters emergency medicine attendings and residents identify as feedback and which forms they believe are the most clinically effective.

Methods

Prior to initiating this study, ethics approval was sought from the local institutional review board (IRB). The study was deemed exempt from IRB approval, as it had no direct connection to patients or protected health information and, therefore, did not constitute human subjects research. No funding was utilized for this project.

An electronic survey was created using REDCap [14] and circulated via email to all emergency medicine and general surgery residents and attending physicians (both full-time and part-time) at Mayo Clinic in Rochester, MN. Attendings in both the adult and pediatric emergency departments were included, regardless of the type of residency completed, and all responses were anonymous. Residents rotating in the emergency department from other services (including pediatrics residents) and medical students were excluded. The sample size was estimated a priori to be adequate to detect a 20% difference in perception between the two groups with a power of approximately 0.8. The survey was administered in September so that residents who began in July would have time to gain experience receiving feedback before completing the survey.
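As a rough illustration of how an a priori estimate of this kind can be obtained (the study does not state how the original calculation was performed, and the 50% baseline proportion below is purely an assumption), a two-proportion power calculation might look like the following sketch in Python using statsmodels:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed baseline: 50% of one group rates a given scenario as effective
# feedback, so a 20-percentage-point difference puts the other group at 70%.
# Both proportions are illustrative assumptions, not values from the study.
effect_size = proportion_effectsize(0.70, 0.50)  # Cohen's h for two proportions

# Solve for the per-group sample size at alpha = 0.05 and power = 0.80,
# assuming equally sized groups and a two-sided test.
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(f"Approximate sample size needed per group: {n_per_group:.0f}")
```

Under these assumptions the estimate comes to roughly 46 respondents per group; the figure is sensitive to the assumed baseline proportion, shrinking considerably as that proportion approaches 0 or 1.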

Within the survey, subjects were presented with five clinical scenarios and asked to determine whether they believed each constituted effective feedback. Each scenario was followed by the question, “Which of the following is the best description of this interaction?”, and responses were limited to “this is effective feedback” or “this is not effective feedback”. These survey scenarios can be found in Appendix A. The scenarios were created by the authors as representations of typical resident/faculty interactions at their institution, and no attempt was made to determine a priori which scenarios did and did not represent effective feedback. Prior to starting data collection, the surveys were reviewed by three attendings and five residents for clarity and to determine whether they were representative of a typical interaction.

Continuous features were summarized with medians, interquartile ranges (IQRs), and ranges; categorical features were summarized with frequency counts and percentages. Comparisons of features between residents and attendings were evaluated using Wilcoxon rank sum, chi-square, and Fisher exact tests. Statistical analyses were performed using version 9.4 of the SAS software package (SAS Institute, Inc.; Cary, NC). All tests were two-sided and p-values < 0.05 were considered statistically significant.
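For readers who want a concrete picture of these comparisons, the following minimal sketch applies the same three tests in Python with scipy; it is not the study’s SAS code, and all counts and values are invented solely for illustration:

```python
from scipy.stats import chi2_contingency, fisher_exact, mannwhitneyu

# Hypothetical 2x2 table for one scenario:
# rows = residents vs. attendings,
# columns = "effective feedback" vs. "not effective feedback".
table = [[30, 5],
         [28, 9]]

chi2_stat, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # preferred when expected cell counts are small

# Wilcoxon rank sum (Mann-Whitney U) test for a continuous feature;
# the values below are invented to make the example runnable.
group_a = [2, 3, 3, 4, 5, 6]
group_b = [4, 5, 7, 8, 9, 12]
u_stat, p_wilcoxon = mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}, "
      f"Wilcoxon rank sum p = {p_wilcoxon:.3f}")
```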

Results

Surveys were sent to 110 individuals. Of those, 72 (65.5%) responded, including 35 (49%) residents and 37 (51%) attendings. Of the 35 residents, 31 indicated their level of training: 13 (42%) PGY-1, 9 (29%) PGY-2, 6 (19%) PGY-3, 3 (10%) PGY-4, and none PGY-5 or above. Resident responses were split equally between emergency medicine and general surgery. Attending physician responses came primarily from emergency medicine. Of the 37 attendings, 34 indicated the number of years since completing residency or their last fellowship, with a median of 9 years (IQR 4–14; range 1–31). One value for years since completion of training was set to missing because the entered value was implausible and likely represented the respondent’s age.

A comparison of the remaining survey responses between residents and attendings is shown in Table 1. Sample sizes for responses with missing data are indicated in italics in parentheses. In each of the five clinical scenarios, there was no statistically significant difference between what residents and attending physicians perceived as effective feedback, with p-values ranging from 0.084 to 1.0.

Table 1 Results of Feedback Survey

Discussion

Feedback continues to play an important part in professional development. Yet, while the practice of medicine strives to be evidence-based in the twenty-first century, the practice of delivering feedback to medical trainees lags behind in this regard. The optimal format and delivery of feedback remain unknown, though recommendations can be found in the literature. Some have suggested that training improves objective measures of clinical supervisors’ proficiency in providing feedback to residents [15, 16]. Others have suggested that additional resources are needed to train clinicians to provide meaningful and formative feedback [7]. It has also been demonstrated that residents can provide valuable feedback on the quality of feedback their educators provide, suggesting a role for bidirectional feedback [17, 18]. These interventions presume a shared perception between teacher and learner of how feedback should be provided and received, yet in many instances such compatibility has yet to be clearly established.

This study investigated one of the foundational concepts of feedback: namely, whether those giving and receiving feedback agree on what feedback is. In this limited, single-institution study, we found evidence that they do agree on this aspect of feedback. This suggests that when feedback in medical training is thought to be suboptimal or ineffective, it may not be because the involved parties fundamentally disagree on what feedback is. Rather, consideration should be given to the delivery and content of the feedback.

This study was not designed to investigate these aspects of feedback, but the results suggest a trend. For three of the five scenarios, the responses in both the resident and attending groups overwhelmingly favored one answer over the other: more than 85% of individuals thought effective feedback had been given in scenarios 1, 2, and 4. Opinions were split over the other two scenarios (scenarios 3 and 5) within both groups, though smaller majorities thought scenarios 3 and 5 did not include any effective feedback. The three scenarios that large majorities of both residents and attendings said did include effective feedback (scenarios 1, 2, and 4) were similar in that a specific recommendation was made for future clinical practice (e.g., “On your next needle decompression make sure to…”). Scenarios 3 and 5 did not include such a recommendation but, instead, included only a more generalized statement that lacked actionable information (i.e., “try to place it faster” and “nice job today”). This suggests that both attendings and residents believe feedback should include actionable information, whether a specific recommendation for future improvement or a recap of specific behaviors that were effective and warrant repeating, and is consistent with prior recommendations in the literature [1]. Future research may help identify what content is necessary for feedback to be interpreted as useful by residents and attendings.

This study is limited by the fact that participants were aware of its purpose and may have had a heightened perception of feedback that would not have been present in a real-life encounter (i.e., the Hawthorne effect). Additionally, the study utilized a relatively small sample size. Consequently, it was not possible to control for all variables that have been shown to affect feedback. The sample was, however, large enough to permit the detection of clinically meaningful differences. The response rate differed by department of origin, with emergency medicine residents responding at a much higher rate than general surgery residents. Further, all respondents worked at a single institution, limiting the diversity of the sample. There was not sufficient power to break results down by year of residency training. A prior study found that medical students’ perceptions of feedback changed as they progressed in their training [19]. It is possible that perceptions of feedback are similarly affected by level of residency training, although data on this are limited.

Conclusions

This was a small, single-center study comparing resident and attending physician perceptions of feedback. Despite its size, there was fairly good agreement between the two groups as to which scenarios did and did not represent effective feedback. This suggests that residents and attending physicians tend to agree on what constitutes effective feedback. Thus, when there is a perceived deficit in quality feedback, focus should be directed to other factors, such as feedback content. Further research is necessary to identify an optimal model or models for the delivery of effective clinical feedback.