Background

Pre-recorded video has become a standard approach for teaching clinical communication skills (CCS) [1]. Video feedback (VF) has shown positive effects in skills training and formative assessment [2, 3]. However, what do we know about video-based feedback in CCS using videos pre-recorded in real-life settings? Although the closer an assessment is to reality, the more valid it is likely to be [4], studies of VF using videos pre-recorded in real-life settings have been less frequent than studies of VF with simulated patients, and studies on the effects of different VF methodologies using real medical consultations remain scarce [5].

Feedback based on consultations recorded on video has many advantages over feedback given directly after observation. The video format allows for multiple reviews of the consultation as well as a more careful analysis of nonverbal communication. Consequently, VF may facilitate reflection, self-assessment, and more active engagement of the participants in solving the observed problems [6,7,8]. Using real consultations allows for analysis in a real-life setting and, thus, a better formative assessment [4, 5]. VF also allows a better exploration of misunderstandings and sources of disagreement, as well as of the patient’s responsiveness to specific doctor behaviors [9]. Video is the only method that enables learners to reflectively “look at themselves from a distance”, offering a realistic portrait of their skills [2, 10]. While it may seem threatening to learners at first, it can ultimately be stimulating and rewarding [1, 2].

Video review in small groups with a facilitator and peer feedback is more beneficial than traditional feedback on students’ communication skills, as it enables a more detailed analysis of the learner’s behavior [11,12,13]. Moreover, the self-reflection process during the video review seems to be a practical approach to learning communication and professional behaviors [5, 14]. In addition, feedback becomes more useful for optimizing performance when combined with self-assessment, external feedback, and peer feedback [15].

The purpose of this study is to explore: 1) perceptions of the potential benefits and challenges of VF; and 2) differences in the CCS scores of first-year medical residents in primary care before and after a communication program using VF in a curricular formative assessment. The VF methodology used videos pre-recorded in real-life settings, problem-based interviewing (PBI), and agenda-led outcome-based analysis (ALOBA) feedback in small groups with peers.

Methods

Design, setting and participants

We conducted a pre/post study with a control group to evaluate how an educational program based on video feedback influenced the medical residents’ communication skills. The intervention group took part in the VF sessions, and all of its residents belonged to the same residency program. The VF intervention and the activity with simulated patients were curricular activities in the program, and the entire group was invited to participate in this study. The control group was similar to the intervention group: residents in both groups had the same supervisors and the same theoretical classes, differing only in a few local supervisors.

All first-year medical residents (N = 61) in an integrated primary care program in Brazil were invited to participate, and 54 completed all phases of the study (17 male and 37 female). The residents were randomly divided into small groups of 12 to 15 participants for the communication program with video feedback. Simulated patients were used to evaluate differences in the residents’ performance before and after the educational intervention: video feedback of consultations pre-recorded in a real-life setting.

All medical residents performed two simulated patient (SP) consultations in a video-recorded clinical performance examination before the VF sessions and two after. The SPs were trained to role-play two clinical consultations in primary care, each lasting 7 min: breaking bad news (HIV result and gastric cancer result) and a common clinical situation (migraine, hypertension, and back pain). Two blinded raters assessed the videotapes, scoring performance items related to communication skills in the 224 videos (four videos per resident). Additionally, participants answered quantitative questionnaires (on the perception of patient-centeredness and empathy) before and after the intervention.

The control group also took part in the VF intervention sessions after all the assessments were completed, to avoid any potential educational disadvantage from not having the intervention. At the end of these sessions, both groups answered qualitative questions about their perceptions of the method; therefore, the control group was also able to answer the qualitative questionnaire (Fig. 1).

Fig. 1 Diagram summarizing the study design

The intervention

The VF methodology was based on Lesser’s PBI model [16] and agenda-led outcome-based analysis (ALOBA) feedback [17, 18]. Each medical resident presented an interview pre-recorded in a real-life setting to a group of peers, subsequently receiving feedback from colleagues and two facilitators. Group composition, including the facilitators, did not change, and the facilitators were the same in both groups.

The communication program with VF typically lasted about 10 weeks, with each video feedback session lasting around 90 min. There was no limit on the length of the recorded consultation, and each recording was about 20 min long. To facilitate self-assessment and reflective practice, all videos were taped as close as possible to the session. The participants did not record any physical examination.

In the VF session, each medical resident presented a video of a real consultation involving some difficulty in medical communication. The facilitators and the resident interviewer then agreed on an agenda addressing the topics for the video session [18]. The facilitators coordinated the VF session and supported the residents in perceiving and understanding their self-image, performing a self-assessment, and finding new strategies by themselves. In this discussion, previous professional experiences from other participants, including the supervisor, illustrated different ways to address the difficulties encountered [19,20,21].

The group watched the consultation as if they were conducting it themselves, often pausing the video when someone raised an issue. When the video stopped, the resident interviewer was invited to verbalize their self-image and what they observed in the interaction, and to perform a microanalysis of communication micro-skills and micro-behaviors, paying close attention to the exact words spoken as well as to non-verbal communication [11, 22]. The group then assisted the interviewer in finding alternatives to the less effective behaviors identified [23]. The inputs from the supervisor and the group arguably also played a role in reinforcing positive behaviors.

Assessment instruments

We used seven instruments designed to assess the effects on communication skills, in checklist format with a Likert scale, completed after each clinical performance examination, before and after the intervention:

Questionnaires completed by the SPs:

  1. Consultation and Relational Empathy (CARE) [24, 25]

  2. Perception of Patient-Centeredness (PPC) [26]

Questionnaires completed by the medical residents:

  3. Jefferson Scale of Physician Empathy (JSPE) [27, 28]

  4. Perception of Patient-Centeredness (PPC) [26]

  5. Qualitative questionnaire created by the authors, with three questions:

    • What are your perceptions about the VF sessions?

    • Were there any changes in your clinical practice after you began attending VF sessions? If so, please specify.

    • Give examples of case situations presented and discussed during the sessions that led to changes in your daily practice.

Questionnaires completed by raters watching, in random order, videos of the clinical skills practical exam:

  6. Questionnaire based on the Calgary-Cambridge Observation Guide (CCOG) [1, 18], with 17 items

  7. Questionnaire based on the SPIKES protocol [29], with 15 items

Data analysis

The sum scores of the questionnaires pre- and post-intervention were compared between the control and intervention groups using mixed-design ANOVA. The qualitative data were analyzed using the Braun and Clarke framework for thematic analysis [30]; themes were constructed from the reviewed data rather than from a preconceived theoretical stance. For the thematic analysis, the authors read, double-checked, and coded the responses, and recurring ideas were categorized into themes and sub-themes.
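As an illustration of this type of analysis, the minimal sketch below runs a mixed-design ANOVA on questionnaire sum scores in Python. It assumes hypothetical long-format data (one row per resident per assessment) and illustrative column and file names; it is not the analysis script used in the study.

```python
# Minimal sketch of a mixed-design ANOVA on questionnaire sum scores.
# Assumes a long-format table with one row per resident per assessment;
# column names and the input file are hypothetical.
import pandas as pd
import pingouin as pg

# Expected columns: resident (ID), group (control/intervention),
# time (pre/post), score (e.g., the JSPE sum score for that assessment).
scores = pd.read_csv("jspe_sum_scores.csv")

# 'time' is the within-subject factor, 'group' the between-subjects factor.
aov = pg.mixed_anova(
    data=scores,
    dv="score",
    within="time",
    between="group",
    subject="resident",
    effsize="np2",  # partial eta squared as the effect-size measure
)
print(aov[["Source", "F", "p-unc", "np2"]])
```

The interaction row of the output corresponds to the group-by-time effects summarized in Table 1.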

Results

Quantitative results

The following table summarizes the main results of the repeated measures ANOVA for between- and within-measures effects for the sum scores of the quantitative instruments used in the study (Table 1).

Table 1 Main results of the repeated measures ANOVA for between- and within-measures effects for the sum scores of the quantitative instruments used in the study.

The quantitative results did not reveal significant differences in most questionnaires:

  • CARE [24, 25]: One item (How do you evaluate the doctor’s performance in making a plan of action?) presented a significant difference within assessments, with a moderate effect size. The remaining items did not show any significant differences. For the total scores, none of the mixed ANOVA results was significant, with small effect sizes for the within, between, and interaction effects.

  • PPC [26]: One item (Regarding today’s problem, to what extent did you discuss personal issues with the doctor that might be affecting your health?) showed a significant difference within assessments. The remaining items did not show any significant differences. In the comparison between answers by SPs and residents, the medical residents gave themselves significantly lower scores, with a large effect size. When the SPs’ and residents’ data were analyzed separately, the differences that could be attributed to the intervention were not significant. The interaction between time and group had a small effect size only when data from SPs and residents were considered together.

  • Jefferson Scale of Empathy (JSPE) [27, 28]: none of the differences between groups before and after the intervention was significant for individual items, with effect sizes close to zero. For the total scores, the control group had a lower mean total score in the second assessment (from 82.33 to 80.94), whereas the intervention group had a higher mean total score after the intervention (from 80.26 to 83.63). As a result, there was a significant interaction between group and time of application, with a moderate effect size.

  • Checklist based on the SPIKES protocol [29]: only one item (warning the patient that bad news is coming) showed a significant effect within assessments, with a small effect size. For total scores, there was a significant increase with a moderate effect size; however, this difference was not significant between the control and intervention groups, with a negligible effect size.

  • Checklist based on the CCOG [1, 18]: no significant differences were found.

Qualitative results

The following table shows the main themes and sub-themes, with supporting quotations, generated during the qualitative analysis (Table 2).

Table 2 Generated themes and sub-themes with supporting quotations from perceptions about the intervention

All of the residents considered the educational intervention helpful for improving their communication skills. Some realized that the VF sessions were the only moment in their training in which they could look at themselves and observe from an outside perspective. Some residents found the experience motivating and helpful for more challenging consultations. The situations described included breaking bad news, dealing with a verbose patient or one with multiple demands, and denying a patient’s requests.

The primary potential benefit identified in the VF sessions was the possibility of perceiving their own communicative limitations while reviewing their videos as well as their peers’ videos. The residents stated that they were able to observe communication aspects in need of improvement and to make changes in their medical practice through more reflective practice.

“I realized in my video that I was authoritarian, and now I think I am better at sharing decisions with the patients” (male, 3.16); “I started to pay more attention to my difficulties during the consultations” (female, 1.3).

Other positive perceptions were related to peer feedback on communication skills. Many participants observed that they learned new communication strategies from their colleagues’ feedback for a better patient-centered approach. Furthermore, participants described the experience of providing feedback in the group as useful for improving their own feedback skills. The residents also mentioned having greater control over their emotional reactions and feeling more self-confident and calmer in interactions with patients after the VF sessions. They also reported improvements in organizing the consultation.

There were also some challenges related to the intervention: two residents reported that the experience of being videotaped and later watching themselves with the group was uncomfortable. Even so, they enjoyed the group discussion and watching their colleagues’ videos.

“I don’t like to see myself on video” (male, 1.5); “The idea of recording myself on video and showing it to the group was stressful at first, but afterwards I relaxed” (female, 2.9).

Some residents suggested holding more VF sessions and making more explicit connections with theoretical references.

Discussion

The results suggest that the intervention had a positive effect on self-reported empathy as measured by the Jefferson scale. The influence of preceptors and other residents during supervision in primary care might also have played a role; therefore, we cannot attribute the observed differences exclusively to the VF intervention. Perhaps the intervention was not sufficiently long and intense to produce measurable differences. Furthermore, the scholarly literature lacks a precise quantification of the effects of VF, since most studies have been narrative reviews [2].

The small sample size, a limitation imposed by the study setting, likely left this study underpowered, so small to moderate effect sizes might not emerge as significant in the mixed-design ANOVA. Other factors that may explain the paucity of quantitative results are cognitive biases known to affect raters’ behavior, such as the halo and ceiling effects.
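To make the power limitation concrete, the sketch below approximates the between-groups comparison as a two-group one-way ANOVA on the sum scores; Cohen’s f = 0.25 (a conventionally “moderate” effect) and alpha = 0.05 are illustrative assumptions, not values estimated from our data.

```python
# Rough power illustration, approximating the between-groups comparison as a
# two-group one-way ANOVA on the sum scores. Cohen's f = 0.25 ("moderate")
# and alpha = 0.05 are conventional, illustrative values.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()

# Achieved power with the 54 residents who completed the study, split into two groups.
power = analysis.power(effect_size=0.25, nobs=54, alpha=0.05, k_groups=2)
print(f"Approximate power with 54 residents: {power:.2f}")

# Total sample size required to detect the same effect with 80% power.
n_needed = analysis.solve_power(effect_size=0.25, alpha=0.05,
                                power=0.80, k_groups=2)
print(f"Total sample needed for 80% power: {n_needed:.0f}")
```

Under these illustrative assumptions, the achieved power falls well below the conventional 80% threshold, which is consistent with the risk of false negatives discussed in the Limitations.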

Some of the residents’ self-reported perceptions of actual changes in their clinical practice are hard to verify, particularly when the changes relate to professional attitudes and non-verbal communication. Moreover, professionalism varies with language and cultural context [31, 32]. A meta-analysis found more significant differences related to the influence of video feedback on verbal behavior than on non-verbal behavior, on reception skills than on relational skills, and on molar skills than on micro-skills [2]. It is therefore advisable to combine checklists with narratives and global ratings, together with effective standardization of evaluators: evidence shows that in OSCE-type assessments, reliability seems to depend more on the assessors than on objectivity [33,34,35].

As for the qualitative evaluation, our study confirmed that the intervention is a well-accepted method for the formative evaluation of communication skills [2, 36]. Video feedback of consultations recorded in a real-life setting allowed residents to revisit particular points in the real interview and gain a deeper understanding of specific phrasing or behaviors. Some residents also reported improvements in their self-confidence as well as behavioral changes, as seen in other studies [13, 37].

The research findings point to some essential elements to consider when preparing a video feedback session to enhance learning, as well as to a better understanding of its objectives, advantages, and challenges. The participants confirmed that emphasis on self-assessment and peer feedback are positive dimensions of formative assessment in a communication program [38]. When learners receive thoughtful comments from peers in a timely and confidential manner, supported by reflection, they find the process compelling, insightful, and instructive [39]. Moreover, as reported by the participants, when judging the work of others, learners can gain insight into their own performance [40].

The participants also agreed that this VF methodology has the potential to improve students’ feedback skills and foster greater acceptance of receiving feedback [41, 42]. Providing high-quality feedback is a challenge, and it is an essential skill for developing collaborative behavior when working in teams. Peer feedback from colleagues is an important element of multi-source feedback, which is key to programmatic assessment, and reflective practice is an essential skill for effective learning [43].

On the other hand, residents reported critical challenges: the demands on facilitator skills, particularly the need to quickly connect the feedback given with pertinent theoretical frameworks, and the discomfort of watching themselves with the group. This experience was also perceived as stressful and unpleasant in other studies [5, 35]. However, evidence suggests that the first video-recording experience tends to be the most stressful, with learners’ stress gradually decreasing over time [35].

Equally significant, the role of the facilitator is essential for preserving a pleasant and trustful atmosphere in VF [17, 22]. It is therefore crucial to have a mindful facilitator, attentive to the students’ psychological needs and able to connect the feedback with previously addressed communication theories. Furthermore, the local supervisors should be able to give continuous constructive feedback on communication skills during follow-up in real settings. Teaching and evaluating communication cannot be wholly technical, objective, and numerical, as there is a significant subjective, individual, and intuitive dimension. For this reason, we also welcome further studies using qualitative methods [33, 44].

We also suggest complementing the evaluation of the effects of VF sessions with other viewpoints, such as those of preceptors, staff, and real patients [45, 46]. Furthermore, we recommend further research on assessing other skills such as clinical record-keeping and time management, investment in multicentric clinical trials on communication programs and their impact, and the development of additional evaluation tools and teaching methodologies for video feedback.

Limitations

Although researchers remain unsure how each methodological variable relates to the effectiveness of video feedback [2, 20, 47], a limitation of this study was the inability to causally link the intervention with any effects; we did not focus on the effectiveness of the methodological variables of the communication program. Another limitation was the small sample size, leaving us with low power in all analyses; in other words, there is a high likelihood of “false negatives”, with our results indicating no difference when, in fact, one exists. We also had limitations regarding the lack of standardization of raters and SPs, variations in subjective judgments, and variation in local supervisors.

The assessment instruments used in Portuguese did not go through a rigorous cross-cultural adaptation, but only simple translation. In addition, the small sample hampered the researchers’ ability to obtain validity evidence based on the internal structure of the translated instruments. Finally, a more extended follow-up period might have been necessary to detect a significant improvement in communication attributable to the intervention [43].

Conclusions

VF of consultations from real-life settings seems to be associated with a significant increase in self-perceived empathy, and the absence of additional measurable differences may be related to the small sample size and insufficient follow-up time. The main self-reported perceptions of the medical residents suggested that this VF educational intervention has the potential to promote beneficial changes in clinical practice. The main changes reported were a better patient-centered approach, improvements in non-verbal communication, self-confidence, emotional control, and behavioral reactions, and better organization of the consultation. The results also suggested that participants may retain such positive changes in their professional practice by incorporating reflective practices.

This study points to some critical elements to consider when preparing a communication program with VF sessions using real consultations. The potential benefits mentioned included the focus on self-perception, the identification of learning goals in CCS, and the possibility of looking at oneself from a distance while interacting with a real patient in a real-life setting, and of revisiting that interaction. VF appears to be an opportunity for participants to experience a deeper level of self-assessment and peer feedback, as well as reflective practice.

Moreover, VF seems to benefit from facilitators who are attentive to the learners’ psychological needs and skilled in relating feedback to communication theory. Further studies on VF using real-life consultations could make use of inter-institutional collaborations to help circumvent the limitations related to sample size. Given that the complex skills targeted by VF take a long time to develop, future studies on VF would likely benefit from a more extended period of longitudinal follow-up.