
Using conversation analysis to explore feedback on resident performance

  • Marrigje E. Duitsman
  • Marije van Braak
  • Wyke Stommel
  • Marianne ten Kate-Booij
  • Jacqueline de Graaf
  • Cornelia R. M. G. Fluit
  • Debbie A. D. C. Jaarsma
Open Access Article

Abstract

Feedback on clinical performance of residents is seen as a fundamental element in postgraduate medical education. Although literature on feedback in medical education is abundant, many supervisors struggle with providing this feedback and residents experience it as insufficiently constructive. With a detailed analysis of real-world feedback conversations, this study aims to contribute to the current literature by deepening the understanding of how feedback on residents’ performance is provided, and to formulate recommendations for improvement of feedback practice. Eight evaluation meetings between program directors and residents were recorded in 2015–2016. These meetings were analyzed using conversation analysis, an ethnomethodological approach that uses a data-driven, iterative procedure to uncover interactional patterns that structure naturally occurring, spoken interaction. Feedback in our data took two forms: feedback as a unidirectional activity and feedback as a dialogic activity. The unidirectional feedback activities prevailed over the dialogic activities. The two formats elicit different types of resident responses and have different implications for the progress of the interaction. Both feedback formats concerned positive as well as negative feedback, and both were often mitigated by the participants. Unidirectional feedback and mitigating or downplaying feedback are at odds with the aim of feedback in medical education. Dialogic feedback avoids the pitfall of a program director-dominated conversation and gives residents the opportunity to take ownership of their strengths and weaknesses, which increases the likelihood of behavior change. On the basis of a linguistic analysis of our real-life data we suggest implications for feedback conversations.

Keywords

Postgraduate medical education · Feedback on residents’ performance · Conversation analysis · Program director · Resident

Introduction

In the context of training residents to become medical specialists, feedback on clinical performance is fundamental to enhance their learning and to confirm, restructure, tune, expand and overwrite their clinical performance (Hattie and Timperley 2007; Archer 2010; Ridder et al. 2015; Iobst et al. 2010). The literature on feedback in medical education abounds with recommendations for giving effective feedback (Lefroy et al. 2015; Cantillon and Sargeant 2008; Sargeant and Mann 2010; Ramani and Krackov 2012; Bienstock et al. 2007; Eichna 1983; Archer 2010; Kornegay et al. 2017). In medical practice, however, providing effective feedback still seems difficult (Dudek et al. 2005; Kluger and DeNisi 1996; Archer 2010; Bing-You and Trowbridge 2009) and residents continue to be dissatisfied with the feedback they receive (Al-Mously et al. 2014; De et al. 2004; Sender Liberman et al. 2005). Apparently, there is an undesirable gap between feedback theory and practice when it comes to resident training.

A reason for this undesirable mismatch between theory and practice could be that, despite the large amount of literature on feedback in medical education, recommendations for effective feedback are scarcely supported by analyses of real-world feedback conversations. We aim to contribute to the current literature by deepening the understanding of how feedback on residents’ performance is provided and by formulating recommendations for improving feedback practice. A summary of current insights from the literature on effective feedback is given below.

Effective feedback requires consideration of both feedback content and the feedback process (Cantillon and Sargeant 2008). Feedback is more effective when it is given frequently, is positively framed and encouraging, is accompanied by an explanation that increases understanding, does not threaten the receiver’s self-esteem, recognizes the recipient’s perspectives and self-assessments, and focuses on setting goals for the future and developing an action plan (Ridder et al. 2015; Van de Ridder et al. 2015; Bing-You et al. 2017, 2018; Ramani et al. 2018; Boehler et al. 2006; Hattie and Timperley 2007).

Furthermore, verbally provided feedback should ideally be a ‘conversation about performance’ rather than a one-way transmission of information (Sargeant et al. 2009; Cantillon and Sargeant 2008; Archer 2010; Bing-You et al. 2018; Fluit et al. 2013; Boud and Molloy 2013). Passively received feedback does not lead to performance improvement, because receivers need the opportunity to analyze the feedback, ask questions about it and connect it with prior understandings to change their behavior and performance (Nicol 2010; Yang and Carless 2013).

In addition, several studies have investigated audio or video recordings of feedback conversations between clinical supervisors and medical students (Blatt et al. 2008; Ferguson 2010; Hasley and Arnold 2009; Holmboe et al. 2004; Spanager et al. 2015). They found that the conversations tended to be teacher-dominated and mostly contained general, positive statements about student behavior. A study on verbal feedback in mini clinical evaluation exercises found that, in order to be effective, feedback needs to be interactive so trainees can take ownership of their strengths and weaknesses (Holmboe et al. 2004). Despite this body of knowledge on effective feedback, neither feedback givers nor receivers in residency training are satisfied.

To be able to provide meaningful additions to current feedback theories and improve feedback practice in resident training, we need to take a closer look at real-world feedback practice. This approach has shown promising results in the context of secondary education, where analysis of feedback in real-life teacher–student encounters yielded several practice-derived recommendations for teachers (Skovholt 2018). To the best of our knowledge, there are no reports of such a detailed analysis of feedback situations in the context of resident training.

To shed more light on how feedback is constructed in real-world practice and to provide practical advice on how effective feedback dialogues are created in the real-world, we chose to examine the interactional process in detail as it evolves from moment to moment by performing a fine-grained analysis of feedback conversations between program directors and residents.

Methods

Background

In the Netherlands, the program director (PD) is responsible for the residency program and assessment of residents’ performance. Residents must meet with their PD at least semi-annually. The purpose of these meetings is to provide feedback on residents’ overall clinical performance and set goals to bridge the gap between current and desired performance. All PDs receive a 2-day training course on how to supervise and assess residents, in which they also practice providing feedback that meets the latest recommendations from the feedback literature.

Participants and ethical approval

Participants were invited by email. To ensure heterogeneity in the sample, we recruited participants from different medical disciplines, hospitals, years of PD experience and years of resident training (Table 1). Original data were treated confidentially and analyses were performed anonymously. All participants provided written informed consent. The study was approved by the Netherlands Association for Medical Education (file number 566).
Table 1

Participants

| # | Medical specialty | Hospital | PD sex | Years as PD | Resident sex | Year of training | Duration (min:s) |
|---|-------------------|----------|--------|-------------|--------------|------------------|------------------|
| 1 | Internal medicine | General hospital | Male | 10 | Male | 3rd | 33:19 |
| 2 | Radiology | University medical center | Female | 3 | Male | 2nd | 40:01 |
| 3 | Radiology | University medical center | Male | 1 | Male | 1st | 41:21 |
| 4 | Internal medicine | General hospital | Female | 2 | Female | 3rd | 30:05 |
| 5 | Internal medicine | University medical center | Female | 1 | Female | 2nd | 28:28 |
| 6 | Radiology | General hospital | Female | 5 | Male | 3rd | 23:52 |
| 7 | Surgery | General hospital | Male | 8 | Female | 1st | 12:32 |
| 8 | Surgery | University medical center | Male | 5 | Male | 4th | 53:53 |

Analytic procedure

We used Conversation Analysis (CA) to analyze the data of the meetings. CA is an ethnomethodological approach to interaction aimed at uncovering interactional patterns that structure naturally occurring, spoken interaction (Mazeland 2006; Maynard and Heritage 2005; Sidnell and Stivers 2012). It is a data-driven and iterative procedure. CA provides tools for detailed, practice-based analysis of interaction and has proved to be very valuable in (medical) educational research (Sidnell and Stivers 2012). Therefore, this methodology is particularly suited to the aim of our research.

The recorded interactions were transcribed using Jeffersonian transcription conventions, which use a set of symbols to capture not only what was said but also how it was said (Jefferson 1986) (see “Appendix 1”). Since CA is an inductive qualitative linguistic method, it seeks to describe and explain the structures of social interaction “through a reliance on case-by-case analysis leading to generalizations across cases but without allowing them to congeal into an aggregate” (Sidnell and Stivers 2012, p. 2). Our analysis, therefore, started with an explorative stage of making observations of details of the interaction in individual fragments. This led to working hypotheses on aspects of the interaction (e.g., how actions, like challenging an opinion or closing the interaction, are performed), which were then checked against the entire dataset. By going back and forth between individual fragments, we developed a description of the interaction that was grounded in individual fragments and simultaneously applicable to the entire dataset (Sidnell and Stivers 2012). Accordingly, the analyses of extracts presented in the results section are illustrative of thorough microanalyses of feedback sequences throughout the entire dataset.
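As an aside for readers unfamiliar with the notation: Jeffersonian symbols are regular enough that simple pattern matching can extract interactional features such as pauses and overlap markers. The following Python sketch is purely illustrative and is not part of the authors' procedure; the transcript lines are invented for the example, not taken from the study data.

```python
import re

# Invented example lines using common Jeffersonian conventions:
# (.) micro-pause, (0.4) timed pause in seconds, [ ] overlapping talk,
# ° ° softly spoken speech, :: sound stretch.
transcript = [
    "PD:  communication (.) there was no comment on that",
    "R:   °that goes well°",
    "PD:  so u::h [in that respect]",
    "R:          [yes           ] (0.4) yes",
]

joined = " ".join(transcript)
timed_pauses = re.findall(r"\((\d+(?:\.\d+)?)\)", joined)  # e.g. ["0.4"]
micro_pauses = re.findall(r"\(\.\)", joined)               # each "(.)"
overlaps = sum(line.count("[") for line in transcript)     # overlap onsets

print(len(micro_pauses), len(timed_pauses), overlaps)  # prints: 1 1 2
```

This only illustrates why such a notation supports fine-grained analysis: pauses, overlaps and prosodic details become countable, comparable features of the interaction.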

Two researchers (MD and MB) identified 62 fragments in which resident performance was discussed in the recordings of our study. The rest of the conversations concerned practical issues like planning, logistics and niceties. Two data sessions with a group of six experienced CA researchers from all over the country were held in which specific fragments were analyzed, a procedure typical for CA methodology (Ten Have 2007). After these two data sessions, MD and MB analyzed the rest of the data and periodically reflected on their findings with WS in three separate meetings. Finally, the findings were discussed in the entire research team.

The primary researchers (MD, MB) had no direct connection with the participants. MD is a medical doctor with experience as a resident in a general teaching hospital. This study is part of her Ph.D. research in medical education. MB is an educationalist and researcher in medical education with a background in linguistics. WS is an assistant professor in language and communication, working as a researcher using Conversation Analysis in medical settings. The results were discussed in the research team (MKB, JG, CF, DJ). MKB and JG are program directors in a university medical center, CF is a medical doctor and an educationalist working as post-doctoral educational researcher with expertise on feedback in residency programs and DJ is a professor in medical education. MD and MB built an audit trail by documenting the analytical decisions and summaries of the discussions with the research team.

Results

Eight meetings between PDs and residents were recorded in 2015–2016. They lasted between 12:32 and 53:53 min (average 32:56), resulting in a total of 262:42 min of audio recordings.
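The reported average can be checked against the per-meeting durations listed in Table 1; a minimal Python sketch of the min:sec arithmetic (illustrative only):

```python
# Per-meeting durations from Table 1, in min:sec notation.
durations = ["33:19", "40:01", "41:21", "30:05",
             "28:28", "23:52", "12:32", "53:53"]

def to_seconds(mmss: str) -> int:
    """Convert a 'MM:SS' duration string to total seconds."""
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

total = sum(to_seconds(d) for d in durations)
avg = total / len(durations)
print(f"average: {int(avg // 60)}:{int(avg % 60):02d}")  # prints: average: 32:56
```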

The analysis revealed that the two feedback formats described in the literature, a one-way transmission of information versus a conversation about performance, consistently appeared throughout the dataset. In our study, we refer to these as unidirectional and dialogic feedback activities. Unidirectional feedback activities are characterized by an evaluation of past performance delivered by the program director. Dialogic activities are characterized by an initial question, followed by in-depth follow-up questions in which the program director asks the resident about past performance.

Feedback occurred as a unidirectional activity in 41 of the 62 fragments and as a dialogic activity in 21. Below, we first present examples of unidirectional feedback (positive and negative) and then two examples of feedback as a dialogic activity. To present the transcribed data we use a two-line layout: the original Dutch first, followed by a literal English translation. The focus of our analysis is on how the two feedback formats are interactionally constructed by the PD and resident. As is common in CA research, we include CA literature in this section to validate the analyses of the presented fragments (Veen and de la Croix 2016; Sidnell and Stivers 2012).

Unidirectional feedback activities

Unidirectional feedback activities have three interactional features: (1) the PD introduces an aspect of performance for feedback, (2) this aspect is then evaluated by the PD, and (3) the resident’s responses are minimal.

Positive evaluation of performance

A typical example of a positive performance evaluation is presented in Extract 1. The feedback sequence is initiated by the PD after a short silence following the closing of the preceding topic.

The PD initiates this sequence by mentioning its topic: “communication” (line 1). Before providing his evaluation, “that goes well” (line 2), he states that there was no comment on that aspect of performance (lines 1–2)—most likely from supervisors of the resident. In overlap with a possible addition from the PD to the evaluation (line 3), the resident responds with a minimal “no” (line 4)—probably in response to the turn part “no comment”—and after a brief silence (line 5) the resident continues with a softly spoken repetition of the PD’s evaluation “that goes well” (line 6). Once the resident has demonstrated his reception of the evaluation, the PD proceeds with providing several pieces of detailed evidence that support his initial evaluation (lines 7–10). Only once the PD has begun uttering the conclusion, launched with a conclusive “so” (line 11), does the resident confirm the PD’s previous contribution (line 12). The conclusion “so eh in that respect that is eh no problem” is again responded to with a minimal “no” (line 13), upon which the PD launches the next feedback sequence (not shown).

Resembling Extract 1, positive performance evaluations are generally done in brief units of discourse. They are invariably initiated by the PD; the resident’s contribution to the interaction is limited. The performance evaluation itself remains rather superficial (e.g., “it goes well”), although it is sometimes supported by more specific examples of good conduct. In some positive unidirectional feedback sequences, the evaluation is slightly attenuated after it has been posed. In Extract 2, for example, the resident downplays the PD’s positive evaluation of the competency ‘Scholar’:

After a short gap following the PD’s positive evaluation “you are increasingly more active- i- in the (.) supervision of u::h interns discussion and that sort of things” (lines 4–5), the resident downplays the evaluation by saying that he thinks that it is still the same as it was before. This downplaying is accepted by the PD with a minimal “yes” in line 9, which is elaborated in line 11.

Negative evaluation of performance

The interactional structure of negative evaluations in unidirectional feedback activities differs from that of positive evaluation interactions. An example of a negative performance evaluation is presented in Extract 3. In this extract, the PD introduces the topic by stating that there have been strong critiques as well as positive comments on the resident’s performance in general:
After the problem-initiation sequence, in which the PD topicalizes the rather strong criticisms of faculty members (lines 1–2), he comments on other feedback that has been very positive (lines 3–4). The latter might be understood as mitigating negative feedback, but in this extract it seems to initiate a collaborative exploration of potential causes for the divergent feedback (not presented). This discussion culminates in a hesitantly produced conclusion by the resident that he blames his own behavior (lines 72–73). This self-deprecating evaluation (Pomerantz and Heritage 2012) gets a minimal response from the PD (line 75) following a short silence (line 74), after which the resident pursues his turn (line 76). This time, the resident does not wait for the PD to respond, but again resumes the talk once the PD’s silence emerges.

Following Pomerantz’ analysis (Pomerantz and Heritage 2012), the resident might interpret the absence of an overt disagreement with the self-deprecation in lines 74–75 and 77 as an implicit confirmation of the self-deprecation. As agreement with self-deprecation is not preferred (Pomerantz and Heritage 2012), the unfolding of this sequence so far is potentially problematic. In the following turns, however, the sequence gets a more favorable continuation: after claiming to have understood (line 80), the PD proceeds with an explicit statement of disagreement (lines 82–86). Without completely freeing the resident of his ‘guilt’ (line 83), the PD reduces the ‘burden’ by pointing at faculty members as part of the problem. He accounts for this relocation of responsibility by referring to his experience with these issues (line 87). In the following interaction (not shown), the PD slightly elaborates his account—resident turns being limited to minimal responses and backchanneling (as in line 90)—and ends the interaction with an advice-giving sequence.

This excerpt is exemplary of the general tendency of PDs to mitigate negative other-evaluations and downplay critical self-evaluations. Both practices seem to result from an orientation toward negative assessments as socially problematic activities (Skovholt 2018; Asmuß 2008).

Dialogic feedback activities

In contrast to unidirectional feedback fragments, dialogic feedback fragments are more interactive. These fragments are initiated by the PD asking a question to invite the resident to introduce a topic for discussion. The PD leaves room for the resident’s narration and elicits further elaboration on the topic. An example of dialogic feedback in a conversation between two PDs and one resident is presented in Extract 4.

The interaction in this fragment occurs at the end of a range of feedback sequences. The PD’s question “Do you have things for us?” opens up the floor to yet unmentioned topics (Schegloff and Sacks 1973). The resident’s response is delayed and hesitant (lines 2–7). In partial overlap with the PD’s “go ahead” (line 7), the resident proposes an issue for discussion (“yes I am very insecure myself”, line 6) and elaborates on that in lines 11–13. In the following interaction both PDs challenge the validity of the problem, first directly (line 15) and then with an outright invalidation of the problem: “that happens to all of us” (line 22). PD1’s response in line 23 contains another mitigation of the issue: “you don’t show it”. These challenges appear interactionally problematic: they are followed by a silence (line 24), received with “oh” (line 25), and countered with “in my head it is that way”. Despite PD1’s closure-implicative “okay” (Stokoe and Edwards 2008), the resident again takes the turn to explain his issue—which gets mitigated, is raised anew, and gets mitigated once more before one of the PDs offers advice and concludes the sequence with “that insecurity will decrease by itself” (not shown). When taking the floor is met with repeated mitigations by the PD(s), the result is a pseudo-dialogue (Skovholt 2018). Although the PDs’ questions “made the conversation appear ‘dialogic’” (Skovholt 2018, p. 152), their disagreements with the topicalized problem quickly subverted the dialogue into a conversation in which the PDs gradually took over the agenda.

Unlike Extract 4, Extract 5 is a dialogic feedback sequence that does not incorporate mitigations (in this case, of a positive situation) and remains dialogic. The feedback sequence in Extract 5 has been initiated by the PD’s question “What is going well, according to you?” The resident has proposed a specific topic for discussion (“performing surgery”) and has explained why he thinks it goes well. Note that the open character of the PD’s question is fundamentally different from the assertive topicalizations and evaluations with which unidirectional feedback is started off. The consequences of this become clear as the interaction unfolds:

The PD’s next question is about self-development (lines 50–51), which is prefaced by and builds on the resident’s explanation. Again, this question elicits an elaborate response from the resident with minimal PD contributions (lines 59–88). The sequence is closed by a PD-initiated summary.

Discussion

In this study, we used conversation analysis to explore how feedback is constructed in semi-annual evaluation meetings between residents and PDs in postgraduate medical education.

Although the literature highlights the importance of feedback as an interactive dialogue and states that unidirectional feedback is unhelpful and should be avoided (Pelgrim et al. 2013), we found that unidirectional feedback still prevails over dialogic feedback in real-world practice. Our fine-grained analysis shows how both feedback formats are interactionally constructed.

Unidirectional feedback is dominated by the PD, who initiates the feedback sequence by introducing a topic for discussion and then provides an evaluation on the clinical performance of the resident. The PD subsequently dominates the conversational floor by barely leaving any opportunity for the resident to respond. Dialogic feedback is also initiated by the PD, but here the PD asks a question to invite the resident to introduce a topic for discussion. Subsequently, the PD leaves room for the resident’s narration and elicits further elaboration on the topic by asking more questions.

The question-initiated interaction of dialogic feedback allows the resident to take regular turns at talk (Frank 1982). This enhances the dialogic nature of the interaction. Also, as shown in Extracts 4 and 5, providing conversational space gives the resident the opportunity to introduce “hither-to unmentioned mentionables” (Schegloff and Sacks 1973, p. 245). Providing this opportunity is a way of acknowledging the resident’s right “to introduce topics at certain junctures in the conversation” (Frank 1982, p. 359).

Dialogic feedback interactions provide the PD with ample information about residents’ performance, while avoiding the pitfall of a PD-dominated conversation. It gives residents the opportunity to take ownership of their strengths and weaknesses, which, in turn, increases the likelihood that they will apply the feedback in order to improve their future performance (Holmboe et al. 2004).

However, our data show that even when residents are granted the conversational floor, actually taking it seems difficult. This corroborates the importance of keeping the floor open by asking open questions to elicit residents’ responses, and of avoiding the pitfall of re-occupying the floor that has just been opened to the resident.

Another important finding of our study is the pervasiveness of mitigation in feedback conversations. PDs and residents frequently downplay negative as well as positive feedback. This feature resembles a common linguistic strategy used in various types of communication: hedging. Hedging can be described as mitigating claims by introducing elements of doubt (e.g. “I think”, “probably”) or signaling a lack of commitment to what is said (e.g. “The management team said that…”) (Fraser 2010). Hedging has been reported to be prevalent in written narrative feedback on residents’ performance (Ginsburg et al. 2016). Ginsburg et al. (2016) argue for the functionality of hedging in assessment contexts in medical education for the benefit of harmonious relationships. Yet, hedging as a way to downplay feedback can also create confusion or misunderstanding of the message (Ginsburg et al. 2016; Bonnefon et al. 2011). Used this way, hedging can do more harm than good (Ramani et al. 2017).

Implications for practice

Based on our findings from a fine-grained conversation analysis, we suggest three implications for effective feedback practice. First, feedback providers should open up the floor to residents: they should ask questions to elicit information about the resident’s performance, and pose explicit evaluations of performance only after having enquired about the resident’s own perception of their performance. Opening with performance evaluations limits residents’ conversational space and bars opportunities to build on residents’ evaluations of their own performance. Second, feedback providers should leave the floor open to the resident to further enhance the dialogic conversation. Again, they should ask questions to facilitate a two-way dialogue. Additionally, feedback providers should explore issues brought up by the resident before dismissing them as irrelevant or unimportant. Third, feedback providers should be careful in mitigating feedback. Hedging might serve as a way to be polite and maintain relationships, but it can also distort the message.

Strengths, limitations and future research

A strength of our study is that we studied real-life feedback conversations, which allowed us to ground our recommendations in actual practice. The analysis could have been strengthened by using video recordings instead of audio recordings, as video recordings allow non-verbal interactional features to be included in the analysis of feedback construction. However, participants were very hesitant to allow themselves to be video-recorded.

Furthermore, we included participants from different hospitals and medical specialties with different experience levels in training and teaching, which resulted in participant diversity. Although we deliberately chose to select different types of medical disciplines (surgical and non-surgical), we could have maximized heterogeneity of the sample by including a wider range of medical specialties. We stress the need for further studies of feedback conversations between residents and their supervisors to find out whether the conversational structures can be generalized to similar feedback settings and to different cultures. In future analyses it would also be interesting to analyze where in the overall structure of the meeting the feedback sequences tend to occur, how the feedback sessions start and end and how this changes the nature of the feedback.

Conclusion

This study explores how feedback is constructed in semi-annual evaluation meetings between residents and PDs in postgraduate medical education. Unidirectional feedback prevails over dialogic feedback in real-world practice. Dialogic feedback avoids the pitfall of a program director-dominated conversation and gives residents the opportunity to take ownership of their strengths and weaknesses, which increases the likelihood of behavior change. On the basis of a linguistic analysis of our real-life data we suggest implications for feedback conversations: feedback providers should open up the floor to residents, they should leave the floor open to the resident to further enhance the dialogic conversation, and they should be careful in mitigating feedback.


Acknowledgements

The authors wish to thank the participants for their cooperation, Lex Crijns and Lynn de Rijk for transcribing the evaluation meetings, and the participants of the data sessions for their enriching input. Furthermore, the authors would like to thank Lokke Genissen for her help in the early stages of the data analysis. We would like to thank Tineke Bouwkamp for editing the English language.

Funding

Dutch Ministry of Health, Welfare and Sports; Project RIO. The funding body had no influence on the design of the study, data collection, analysis, interpretation of the data and writing of the manuscript.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

References

  1. Al-Mously, N., Nabil, N. M., Al-Babtain, S. A., & Fouad Abbas, M. A. (2014). Undergraduate medical students’ perceptions on the quality of feedback received during clinical rotations. Medical Teacher, 36(sup1), S17–S23.
  2. Archer, J. C. (2010). State of the science in health professional education: Effective feedback. Medical Education, 44(1), 101–108.
  3. Asmuß, B. (2008). Performance appraisal interviews: Preference organization in assessment sequences. The Journal of Business Communication (1973), 45(4), 408–429.
  4. Bienstock, J. L., Katz, N. T., Cox, S. M., Hueppchen, N., Erickson, S., et al. (2007). To the point: Medical education reviews—Providing feedback. American Journal of Obstetrics and Gynecology, 196(6), 508–513.
  5. Bing-You, R., Hayes, V., Varaklis, K., Trowbridge, R., Kemp, H., et al. (2017). Feedback for learners in medical education: What is known? A scoping review. Academic Medicine, 92(9), 1346–1354.
  6. Bing-You, R. G., & Trowbridge, R. L. (2009). Why medical educators may be failing at feedback. JAMA, 302(12), 1330–1331.
  7. Bing-You, R., Varaklis, K., Hayes, V., Trowbridge, R., Kemp, H., et al. (2018). The feedback tango: An integrative review and analysis of the content of the teacher–learner feedback exchange. Academic Medicine, 93(4), 657–663.
  8. Blatt, B., Confessore, S., Kallenberg, G., & Greenberg, L. (2008). Verbal interaction analysis: Viewing feedback through a different lens. Teaching and Learning in Medicine, 20(4), 329–333.
  9. Boehler, M. L., Rogers, D. A., Schwind, C. J., Mayforth, R., Quin, J., et al. (2006). An investigation of medical student reactions to feedback: A randomised controlled trial. Medical Education, 40(8), 746–749.
  10. Bonnefon, J.-F., Feeney, A., & De Neys, W. (2011). The risk of polite misunderstandings. Current Directions in Psychological Science, 20(5), 321–324.
  11. Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712.
  12. Cantillon, P., & Sargeant, J. (2008). Giving feedback in clinical settings. BMJ, 337, a1961.
  13. De, S. K., Henke, P. K., Ailawadi, G., Dimick, J. B., & Colletti, L. M. (2004). Attending, house officer, and medical student perceptions about teaching in the third-year medical school general surgery clerkship. Journal of the American College of Surgeons, 199(6), 932–942.
  14. Dudek, N. L., Marks, M. B., & Regehr, G. (2005). Failure to fail: The perspectives of clinical supervisors. Academic Medicine, 80(10), S84–S87.
  15. Eichna, L. (1983). Feedback in clinical medical education. JAMA, 250, 777–781.
  16. Ferguson, A. (2010). Appraisal in student–supervisor conferencing: A linguistic analysis. International Journal of Language & Communication Disorders, 45(2), 215–229.
  17. Fluit, C. V., Bolhuis, S., Klaassen, T., de Visser, M., Grol, R., et al. (2013). Residents provide feedback to their clinical teachers: Reflection through dialogue. Medical Teacher, 35(9), e1485–e1492.
  18. Frank, A. W. (1982). Improper closings: The art of conversational repudiation. Human Studies, 5(1), 357–370.
  19. Fraser, B. (2010). Pragmatic competence: The case of hedging. New Approaches to Hedging, 1, 15–34.
  20. Ginsburg, S., van der Vleuten, C., Eva, K. W., & Lingard, L. (2016). Hedging to save face: A linguistic analysis of written comments on in-training evaluation reports. Advances in Health Sciences Education, 21(1), 175–188.
  21. Hasley, P. B., & Arnold, R. M. (2009). Summative evaluation on the hospital wards. What do faculty say to learners? Advances in Health Sciences Education, 14(3), 431–439.
  22. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
  23. Holmboe, E. S., Yepes, M., Williams, F., & Huot, S. J. (2004). Feedback and the mini clinical evaluation exercise. Journal of General Internal Medicine, 19(5p2), 558–561.
  24. Iobst, W. F., Sherbino, J., Cate, O. T., Richardson, D. L., Dath, D., et al. (2010). Competency-based medical education in postgraduate medical education. Medical Teacher, 32(8), 651–656. https://doi.org/10.3109/0142159X.2010.500709.
  25. Jefferson, G. (1986). Notes on ‘latency’ in overlap onset. Human Studies, 9(2–3), 153–183.
  26. Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254.
  27. Kornegay, J. G., Kraut, A., Manthey, D., Omron, R., Caretta-Weyer, H., et al. (2017). Feedback in medical education: A critical appraisal. AEM Education and Training, 1(2), 98–109.
  28. Lefroy, J., Watling, C., Teunissen, P. W., & Brand, P. (2015). Guidelines: The do’s, don’ts and don’t knows of feedback for clinical education. Perspectives on Medical Education, 4(6), 284–299.Google Scholar
  29. Maynard, D. W., & Heritage, J. (2005). Conversation analysis, doctor–patient interaction and medical communication. Medical Education, 39(4), 428–435.Google Scholar
  30. Mazeland, H. (2006). Conversation analysis. Encyclopedia of Language and Linguistics, 3, 153–162.Google Scholar
  31. Nicol, D. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501–517.Google Scholar
  32. Pelgrim, E., Kramer, A., Mokkink, H., & Van der Vleuten, C. (2013). Reflection as a component of formative assessment appears to be instrumental in promoting the use of feedback; An observational study. Medical Teacher, 35(9), 772–778.Google Scholar
  33. Pomerantz, A., & Heritage, J. (2012). Preference. In J. Sidnell & T. Stivers (Eds.), The Handbook of Conversation Analysis (pp. 210–228). Hoboken, NJ: Wiley-Blackwell Publishing.Google Scholar
  34. Ramani, S., Könings, K. D., Ginsburg, S., & van der Vleuten, C. P. (2018). Twelve tips to promote a feedback culture with a growth mind-set: Swinging the feedback pendulum from recipes to relationships. Medical Teacher.  https://doi.org/10.1080/0142159X.2018.1432850.Google Scholar
  35. Ramani, S., & Krackov, S. K. (2012). Twelve tips for giving feedback effectively in the clinical environment. Medical Teacher, 34(10), 787–791.Google Scholar
  36. Ramani, S., Post, S. E., Könings, K., Mann, K., Katz, J. T., et al. (2017). “It’s just not the culture”: A qualitative study exploring residents’ perceptions of the impact of institutional culture on feedback. Teaching and Learning in Medicine, 29(2), 153–161.Google Scholar
  37. Ridder, J., McGaghie, W. C., Stokking, K. M., & Cate, O. T. (2015). Variables that affect the process and outcome of feedback, relevant for medical training: A meta-review. Medical Education, 49(7), 658–673.Google Scholar
  38. Sargeant, J., & Mann, K. (2010). Feedback in medical education: Skills for improving learner performance. ABC of Learning and Teaching in Medicine, 2, 29–32.Google Scholar
  39. Sargeant, J. M., Mann, K. V., Van der Vleuten, C. P., & Metsemakers, J. F. (2009). Reflection: A link between receiving and using assessment feedback. Advances in Health Sciences Education, 14(3), 399–410.Google Scholar
  40. Schegloff, E. A., & Sacks, H. (1973). Opening up closings. Semiotica, 8(4), 289–327.Google Scholar
  41. Sender Liberman, A., Liberman, M., Steinert, Y., McLeod, P., & Meterissian, S. (2005). Surgery residents and attending surgeons have different perceptions of feedback. Medical Teacher, 27(5), 470–472.Google Scholar
  42. Sidnell, J., & Stivers, T. (2012). The handbook of conversation analysis. New York: Wiley.Google Scholar
  43. Skovholt, K. (2018). Anatomy of a teacher–student feedback encounter. Teaching and Teacher Education, 69, 142–153.Google Scholar
  44. Spanager, L., Dieckmann, P., Beier-Holgersen, R., Rosenberg, J., & Oestergaard, D. (2015). Comprehensive feedback on trainee surgeons’ non-technical skills. International Journal of Medical Education, 6, 4.Google Scholar
  45. Stokoe, E., & Edwards, D. (2008). Did you have permission to smash your neighbour’s door?’Silly questions and their answers in police—Suspect interrogations. Discourse Studies, 10(1), 89–111.Google Scholar
  46. Ten Have, P. (2007). Doing conversation analysis. ‎Thousand Oaks: Sage.Google Scholar
  47. Van de Ridder, J. M., Peters, C. M., Stokking, K. M., de Ru, J. A., & ten Cate, O. T. J. (2015). Framing of feedback impacts student’s satisfaction, self-efficacy and performance. Advances in Health Sciences Education, 20(3), 803–816.Google Scholar
  48. Veen, M., & de la Croix, A. (2016). Collaborative reflection under the microscope: Using conversation analysis to study the transition from case presentation to discussion in GP residents’ experience sharing sessions. Teaching and Learning in Medicine, 28(1), 3–14.Google Scholar
  49. Yang, M., & Carless, D. (2013). The feedback triangle and the enhancement of dialogic feedback processes. Teaching in Higher Education, 18(3), 285–297.Google Scholar

Copyright information

© The Author(s) 2019

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Marrigje E. Duitsman (1), Email author
  • Marije van Braak (2)
  • Wyke Stommel (3)
  • Marianne ten Kate-Booij (4)
  • Jacqueline de Graaf (5)
  • Cornelia R. M. G. Fluit (6)
  • Debbie A. D. C. Jaarsma (7)

  1. Department of Internal Medicine, Radboudumc Health Academy, Radboud University Medical Center, Nijmegen, The Netherlands
  2. Department of General Practice Training, Erasmus Medical Center, Rotterdam, The Netherlands
  3. Center for Language Studies, Radboud University, Nijmegen, The Netherlands
  4. Department of Gynaecologic Oncology, Erasmus Medical Center, Rotterdam, The Netherlands
  5. Department of Internal Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
  6. Department for Research in Learning and Education, Radboudumc Health Academy, Radboud University Medical Center, Nijmegen, The Netherlands
  7. Centre for Education Development and Research in Health Professions, University Medical Center Groningen, Groningen, The Netherlands