Background

Modern clinical training, aligned with competency-based education and programmatic assessment, focuses on assessment of, and feedback on, routine tasks in the workplace, targeting the highest level of Miller's framework for competency assessment [1,2,3]. Feedback is one of the most powerful influences on learning and performance [4,5,6,7,8]. It offers a learner the opportunity to benefit from another practitioner's critique, reasoning, advice and support. Through this collaboration, the learner can develop a clearer understanding of the performance targets and of how to reach those standards [9, 10]. 'On the run' or informal feedback refers to brief fragments of feedback delivered in the midst of patient care. A formal feedback session typically involves a senior clinician (educator) and a student or junior clinician (learner) discussing the learner's performance in a more comprehensive fashion, often as a mid- or end-of-attachment appraisal or as part of a workplace-based assessment. However, the success of this model relies on everyday clinicians providing effective feedback. It is not clear which components of effective feedback have been successfully translated from the literature into supervisory practice in the workplace, and which have not. Information on these gaps in translation could be used to better target professional development training, or to design strategies to overcome impediments to implementing quality feedback behaviours.

Studies involving direct observation of authentic feedback in hospitals are rare. Observational studies are highly valuable, as they provide primary evidence of what actually happens in everyday clinical education. Direct observation can be achieved either by researchers observing the activity or via video-observation. We identified only a few previous direct observation studies: these involved junior learners (medical students or junior residents) in a small number of specialties (internal or family medicine) receiving formal or informal feedback (in outpatient clinics, on a ward, or following summative simulated clinical scenarios) [11,12,13,14,15,16,17,18]. A single additional study involved physiotherapy students during formal mid- or end-of-attachment feedback [19]. The scarcity of observational studies may reflect their time-consuming nature, the difficulty of arranging observers or video recording to coincide with feedback meetings slotted into busy schedules, or the reticence of participants to be observed or recorded. These studies reported that educators typically comment on specific aspects of performance, teach important concepts, and describe or demonstrate how the learner can improve. However, educators tend to speak most of the time, ask the learner for a self-assessment but then not respond to it, avoid corrective comments and rarely create action plans. These findings may no longer reflect current practice. In addition, no study captured the diversity of clinical educators and learners that work in a hospital environment.

Therefore, we set out to directly observe authentic formal feedback episodes in hospital training, via self-recorded videos, to review contemporary educators' feedback practice in workplace-based learning environments. This could clarify opportunities and inform the design of professional development training. In Australia, health professions training is concentrated in hospitals, integrating both inpatient wards and outpatient clinics; major dedicated specialist outpatient centres are rare and family medicine clinics are relatively small. We recruited a range of participants, characteristic of the diversity present in hospitals, as desirable feedback elements are not profession-specific. We targeted formal feedback sessions to capture complete feedback interactions. We then analysed the composition of educators' feedback practice using a comprehensive set of observable educator behaviours recommended for high quality feedback (see Table 1) [20]. This framework outlines 25 discrete observable educator behaviours considered to enhance learner outcomes by engaging, motivating and assisting a learner to improve. To our knowledge, this is the first dataset analysed against such a comprehensive set of behavioural indicators, in contrast to the less structured, more exploratory approaches used in previous studies. The earlier publication by our team [20] described how these items were developed: an extensive literature review identified distinct elements of an educator's role substantiated by empirical evidence of enhanced learner outcomes, which were then operationalised into observable behaviours and refined through a Delphi process with experts.

Table 1 Set of 25 educator behaviours that demonstrate high quality feedback in clinical practice

While we strongly endorse a learner-centred paradigm, we have chosen to focus on the educator's role in feedback because educators are in a position of influence to create conditions that encourage learners to feel safe, participate and work out how to successfully improve their skills. We agree that specific feedback episodes are shaped by the individuals involved, the context and the culture; however, strategies to promote a learner's motivation and capability to enhance their performance remain relevant. Recommended feedback behaviours are not intended to be implemented in a robotic fashion but tailored to a particular situation by prioritising the most useful aspects throughout the interaction. The core segments of quality feedback include clarifying the target performance, analysing the learner's performance against this target, outlining practical steps to improve and planning how to review progress [4, 9, 21]. Overarching themes include promoting motivation [22,23,24,25], active learning [26,27,28] and collaboration [29,30,31,32] within a safe learning environment [10, 33, 34].

Research question

The research questions addressed in this study were:

  1. What behaviours are exhibited by clinical educators in formal feedback sessions in hospital practice settings?

  2. How closely do these behaviours align with published recommendations for feedback?

Methods

Research overview

In this observational study, senior clinicians (educators) observed junior clinicians or students (learners) performing routine clinical tasks in a hospital setting and then videoed themselves during the subsequent formal feedback session. We analysed each video using a checklist based on the set of educator behaviours recommended for high quality feedback (see Table 1) [20].

The feedback videos were captured at multiple hospitals within one of Australia's largest metropolitan teaching hospital networks between August 2015 and December 2016. Ethics approval was obtained from the health service (Reference 15233L) and university human research ethics committees (Reference 2015001338).

Recruitment

Educators (senior clinicians) across medicine, nursing, physiotherapy, occupational therapy, speech therapy and social work, together with the learners working with them (either qualified health professionals undertaking further training, or students), were invited to participate. A broad range of educators was sought via widespread advertising of the study, using flyers, emails circulated by unit administration assistants, short presentations at unit meetings and face-to-face meetings with staff across the health service. To be considered for participation, an educator had to contact the primary researcher (CJ) in response to the advertisement. Once an educator consented, they distributed flyers to any learners working with them, with instructions to contact the primary researcher (CJ) if the learner was interested in participating. Diversity was sought through rolling advertising, with consideration of key factors including health profession and specialty, gender, and supervisor experience (educators) or training level (learners). Once an educator and a learner had both consented, the pair was advised and made arrangements to video a routine feedback session. They were asked to record an entire feedback encounter, aiming for a duration of approximately 10 minutes, but were given no additional instructions on how to conduct the session. Participants were not shown the set of 25 educator behaviours recommended for high quality feedback used to analyse the videos, nor given any other education on feedback by the research team, as the aim was to study the nature of current feedback practice.

Consenting participants used a smartphone or computer to video-record themselves at their next scheduled formal feedback session, related to either a workplace-based assessment or an end-of-attachment performance appraisal. Each video was subsequently uploaded to a password-protected online drive and participants were instructed to delete their copy. The videos were numbered using a random number generator and, apart from the footage itself, contained no personally identifying information.

Video analysis

The raters were all health professionals (two medical, four physiotherapy) in senior education or educational research roles with extensive experience in supervision and feedback. Each rater analysed each video independently, comparing their observations with the set of 25 educator behaviours recommended for high quality feedback (see Table 1) [20]. Each educator behaviour was rated 0 = not seen, 1 = done somewhat or done only sometimes, 2 = consistently done.

In a preparatory pilot study, we rated three videos using the instrument. We then met to discuss ratings, to identify differences in interpretation of items and in the use of the rating scale, and to develop strategies to encourage concordance and clarify item meaning. In particular, we identified that Behaviour 2 (Timely feedback: the educator offered to discuss the performance as soon as practicable) was not observable, so it was excluded. For Behaviour 10 (Acknowledge learner's emotional response: the educator acknowledged and responded appropriately to emotions expressed by the learner), we decided that a rating of '2' (consistently done) would be given in the following situations: (i) if implicit or explicit indicators of learner emotion (such as anxiety or defensiveness) were detected, and the educator acknowledged and attended to this; or (ii) if emotional equilibrium was observed throughout the encounter, on the assumption that this emotional balance between educator and learner required the educator to be reading cues and acting accordingly. With 24 remaining behaviours each rated from 0 to 2, the total score could therefore range from 0 to 48.
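As a simple restatement of this scoring arithmetic in LaTeX notation (not an additional analysis): with Behaviour 2 excluded, a single rater's total score for one video is

\[ T = \sum_{i=1}^{24} r_i, \qquad r_i \in \{0, 1, 2\}, \qquad 0 \le T \le 24 \times 2 = 48. \]

For an individual educator, each item score was first averaged across raters (see Data analysis below), so fractional totals are possible.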

Data analysis

The data provided two perspectives: (i) on an individual educator's practice, how many of the behaviours recommended in high quality feedback were observed in each video; and (ii) across the whole group of educators, which behaviours were commonly performed. To characterise each individual educator's practice in a video, the scores for each item were averaged across assessors and then summed to give a total score. To describe how commonly specific educator behaviours were observed amongst the whole group of educators, the mean score and standard deviation for each item were calculated across all the videos [35]. To assess inter-rater reliability, total scores for each video were assessed for concordance between examiner pairs using Spearman's rho.
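To illustrate, here is a minimal sketch of these calculations in Python with pandas and SciPy. The paper does not specify its analysis software; the file name, column names and data layout below are hypothetical.

```python
import pandas as pd
from itertools import combinations
from scipy.stats import spearmanr

# Hypothetical layout: one row per (video, rater), one column per behaviour.
# Ratings: 0 = not seen, 1 = done somewhat/sometimes, 2 = consistently done.
# Behaviour 2 was excluded, leaving 24 rated behaviours (maximum total 48).
behaviours = [f"b{i}" for i in range(1, 26) if i != 2]
ratings = pd.read_csv("ratings.csv")  # columns: video, rater, b1, b3, ..., b25

# (i) Individual educator's practice: average each item across raters,
# then sum the 24 item means to give a total score (0-48) per video.
item_means = ratings.groupby("video")[behaviours].mean()
total_scores = item_means.sum(axis=1)

# (ii) Whole group: mean and SD of each behaviour across all videos,
# ranked from most to least commonly observed (as in Table 3).
behaviour_summary = item_means.agg(["mean", "std"]).T.sort_values("mean", ascending=False)

# Inter-rater reliability: Spearman's rho between each pair of raters'
# per-video total scores, restricted to raters who analysed every video.
totals_by_rater = ratings.set_index(["video", "rater"])[behaviours].sum(axis=1).unstack("rater")
complete = totals_by_rater.columns[totals_by_rater.notna().all()]
for r1, r2 in combinations(complete, 2):
    rho, _ = spearmanr(totals_by_rater[r1], totals_by_rater[r2])
    print(f"{r1} vs {r2}: rho = {rho:.2f}")
```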

Results

Thirty-six feedback videos were available for analysis after five were excluded: two because they were incomplete (insufficient smartphone memory) and three because of technical errors with recording (audio unclear, time-lapse format used, participants not visible).

Video participants

Thirty-four educators participated, with diversity across key characteristics (health profession and specialty, length of supervisor experience and gender). There were four nurses, four physiotherapists and 26 senior medical staff (three anaesthetists, three emergency physicians, two radiologists, one paediatrician, six physicians, three psychiatrists, three obstetrician-gynaecologists, one ophthalmologist and four surgeons). There were 18 (52.9%) female and 16 (47.1%) male educators. Fourteen (41.2%) educators had 5 years or less of supervisor experience, 11 (32.3%) had 6 to 10 years and 9 (26.5%) had more than 10 years.

Thirty-five learners participated, with diversity across key characteristics (health profession and specialty, training level and gender). There were 9 (25.7%) students, 9 (25.7%) clinicians five years or less post-qualification, 15 (42.9%) clinicians six years or more post-qualification and 2 (5.7%) senior clinicians. Twenty-three (65.7%) learners were female and 12 (34.3%) were male. All learners were from the same health profession and specialty as their respective educators.

The feedback session related to a mid- or end-of-attachment assessment in 11 (30.6%) videos and to a specific task (such as a procedural skill, clinical assessment, case discussion or presentation) in 25 (69.4%) videos. An official feedback form from an institution such as a university or specialist medical college was used in 11 (30.6%) of the feedback sessions, most of which were mid- or end-of-attachment assessments. Most of the assessments were formative, but some were summative components of longitudinal training programs aligned with programmatic assessment principles [3].

Analysis of educator behaviours during feedback

Each video was analysed by four to six raters, providing a total of 174 sets of ratings (unexpected project time constraints prevented two raters from analysing all the videos). Missing data were uncommon (0.2% of ratings missing).

Inter-rater reliability

To maximise the data available for comparison, inter-rater reliability for total scores was calculated between the four (of six) raters who analysed all the videos: Spearman's rho between rater pairs ranged from 0.62 to 0.73. The other two raters analysed 10 (28%) and 21 (58%) of the 36 videos respectively and were not included in the inter-rater reliability analysis.

Individual educator’s feedback practice

To learn more about individual educators' practice and how many of the recommended educator behaviours were observed in each video, we calculated a total score for each video (the sum of the ratings for each observed behaviour, averaged across all assessors). Total scores ranged from a minimum of 5.7 (11.9%) to a maximum of 34.2 (71.3%), with a mean score across educators of 22.5 (46.9%, SD 6.6), out of a maximum possible score of 48. More detailed analysis (see Table 2) revealed that most educators (88%) had a total score between 10 and 30. Although it was not our intention to compare performance across different characteristics (which would require sufficient sample sizes in each group to enable comparisons), there seemed to be a fairly even spread of health professions, experience and gender across the score ranges.
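The percentage figures above simply re-express each total against the 48-point maximum; as a worked check of the reported values (not new data):

\[ \frac{22.5}{48} \times 100\% \approx 46.9\%, \qquad \frac{5.7}{48} \times 100\% \approx 11.9\%, \qquad \frac{34.2}{48} \times 100\% \approx 71.3\%. \]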

Table 2 Range of total scores for individual educators (34 educators in 36 videos)

Frequency of specific educator behaviours across the whole group of educators

To explore how often specific feedback behaviours were observed amongst all participants, we calculated the mean rating score for each behaviour across all the videos. Table 3 displays the rating mean (SD) for each behaviour, ranked from most to least often observed. Some behaviours were seen in almost every video (highest mean rating 1.75, Behaviour 10), while others were very infrequently observed (lowest mean rating 0.05, Behaviour 25).

Table 3 Observed educator behaviours ranked in order of rating, with the highest at the top

Amongst those educator behaviours most commonly observed (top third: mean rating score 1.41–2.0), most related to the educator’s assessment of the learner’s performance. Educators commonly linked comments regarding learner performance to the learner’s actions (Behaviours 1, 17, 20), focused on important aspects for improvement (Behaviour 16), described similarities and differences between the learner’s performance and the target performance (Behaviour 15), and clarified what should be done and why (Behaviour 14). The other two behaviours commonly seen related to creating a safe learning environment. These included showing respect and support (Behaviour 11) and responding appropriately to emotions expressed by the learner (Behaviour 10).

The middle band of educator behaviours (mean rating score 0.71–1.40) was seen intermittently and related to educators encouraging learners to contribute their thoughts, opinions and ideas, and to reveal their uncertainties. These included encouraging the learner to participate in interactive discussions (Behaviour 6), try to work things out for themselves (Behaviour 8), analyse their own performance (Behaviour 13), reveal the reasoning behind their actions (Behaviour 19), raise difficulties and ask questions (Behaviour 9), and participate in choosing the most important aspects to improve (Behaviour 21) and practical ways to do this through an action plan (Behaviour 22).

The lowest band of educator behaviours (mean rating score 0–0.7) was rarely seen and primarily related to the set-up and conclusion of a feedback session. At the start of the session, as part of creating a safe learning environment, the recommended educator behaviours included explicitly explaining that the purpose of the feedback was to help the learner improve (Behaviour 3), describing the proposed outline for the session (Behaviour 5), and stating acceptance that mistakes are an inevitable part of the learning process (Behaviour 4). As part of the session conclusion or wrap-up, the recommended behaviours included checking a learner's understanding of the learning goals and action plan (Behaviours 23, 24), and discussing future opportunities to review progress, to promote ongoing learning (Behaviour 25). Other rarely seen behaviours included the educator incorporating the learner's learning priorities (Behaviour 7) and promoting the learner's understanding of the value of their self-assessment (Behaviour 12).

Discussion

In this study of educators' feedback practice, we found considerable variation both in individual educators' practice and in how frequently specific recommended behaviours were observed across the group. This provides valuable insight into 'what currently happens' during formal feedback episodes in hospital-based training, and clarifies opportunities for future research into educator development with the potential for substantial impact. Furthermore, the recommended behaviours offer a repertoire of specific strategies that may assist educators to understand and enact these quality standards.

Frequency of specific recommended behaviours observed across the group of educators

We found that educators routinely gave their assessment of the learner’s performance and described what the task should look like, but only intermittently asked learners for self-assessment or development of an action plan. This seems to reflect a culture in which the educator’s analysis of the learner’s performance predominates [36]. These findings echo those from earlier observational studies and feedback forms [11, 12, 17, 19, 37,38,39,40]. This suggests that typical feedback practice in the clinical setting has remained much the same since these omissions were last reported years ago.

Self-assessment is a key component of self-regulated learning and evaluative judgement, which promote reflection, independent learning and achievement [28,29,30]. Invitations for learner self-assessment give learners the opportunity to judge their work first and indicate what they most want help with [33, 41, 42]. Self-assessments can also alert the educator to the potential for a negative emotional reaction and rejection of the educator's opinion if the learner rates their performance much higher than the educator does [43]. Self-assessment offers opportunities for learners to enhance their evaluative judgement by calibrating their understanding against an expert's understanding of the observed performance and the desired performance standards [4, 44]. Recent work on student feedback literacy has highlighted the importance of strategically designing opportunities for learners to make judgements and discuss characteristics of quality work, to assist them to appreciate, interpret and utilise feedback [45].

The fact that an action plan continues to be frequently neglected similarly warrants serious attention. If educators do not support and guide learners to create an action plan, learners are left with the difficult task of working out by themselves how to transform feedback information into performance improvement [21]. Furthermore, when learners hear about performance gaps, their distress may be exacerbated if they do not know how to close them [46].

Our study also identified a number of missing feedback features, which have not been previously documented. One involves positioning the development of a learner’s motivation, understanding and skills as the focal point for feedback. The literature suggests that a learner is only likely to successfully implement changes when they ‘wish to’ (motivation) and ‘know how to’ (clear understanding) [9, 29, 47, 48].

Self-determination theory argues that intrinsic motivation, which is associated with both higher performance and increased well-being, is promoted when a learner decides what to do, in line with their personal values and aspirations [23,24,25]. This is captured by recommended educator behaviours that position the learner as decision maker and the educator as guide (see Table 1: Behaviours 7, 21, 22). A learner must themselves be convinced that the feedback is credible and valuable (Behaviours 1, 6, 7, 9, 20, 24) [8, 49, 50]. The free flow of information, opinion and ideas between the educator and learner creates a shared understanding, as a foundation for tailored advice and good decision making [51]. In addition, Goal Setting Theory asserts that a learner's motivation is stimulated by a clear view of the performance gap, performance goals that are specific, achievable and valuable to the learner, and an action plan that is practical and tailored to suit their needs (Behaviours 14, 15, 21, 22) [22].

Recent advances in feedback have focused on the need to assist learners to process and utilise feedback information, so they 'know how to' enhance their performance. This is exemplified in the R2C2 feedback model, which includes assisting a learner to explore the information, their reactions to it and to design effective strategies for skill development [30, 32, 51]. Social constructivist learning theory describes how a learner makes meaning of new information through interactions with others [52]. To promote this active learning, recommended educator behaviours include encouraging the learner to analyse their own performance and 'work things out for themselves' (Behaviours 8, 12, 13), enquiring about the learner's difficulties or questions (Behaviour 9) and checking the learner's understanding of the action plan before concluding the session (Behaviours 23, 24) [53].

Another feature of effective feedback rarely seen in our study was educators deliberately setting up a safe learning environment at the start of the session, although they showed respect and support for learners in general. Recent literature has reinforced the importance of promoting a safe learning environment and establishing an educational alliance [34]. This may be a particularly important strategy when the educator and learner do not have an established relationship, which seems to be increasingly commonplace in modern workplace training with short placements and multiple supervisors attending to learners [54]. Excessive anxiety negatively impacts on thinking, learning and memory [53, 55, 56]. Feedback is inherently psychologically risky; if a learner's limitations are exposed, this can result in a lower grade or critical remarks from the educator, or threaten a learner's sense of self [5, 33, 46]. Carless [10] highlighted the important role of trust in view of the strong relational, emotional and motivational influences of feedback. In an attempt to counter this natural anxiety, educators could be explicit that "mistakes are part of the skill-acquisition process" and that they desire to help, not to be critical [53]. In addition, if an educator negotiated the process and expectations for the feedback session, this could reduce the anxiety caused when the learner does not know, or have any control over, what is going to happen [30].

One final important feature was the isolation of the learning activity. In our study, no educator discussed when or how the learner might review the extent to which they had successfully developed the targeted skills (Behaviour 25); this was the lowest ranked behaviour of all. Molloy and Boud [9] have emphasised the importance of promoting performance development by linking learning activities, so that feedback plans can be implemented and progress evaluated in subsequent tasks. As supervision is increasingly short-term and fragmented, collaborating with the learner to deliberately plan another opportunity to be assessed performing a similar task seems an important objective.

Individual educator’s practice

The range in individual educators' scores found in our study suggests the educators had variable expertise in feedback. Educators were not shown the checklist of recommended behaviours used in the video analysis. Although not formally tested, there was no indication in the data that more experience conferred greater expertise, based on the spread of supervisor experience across the score ranges (Table 2). We did not ask about our educators' professional development training; although potentially interesting, this information was tangential to our primary goal of assessing current workplace practice against recommended behaviours. Given that education paradigms have changed considerably over time, and that educator behaviour may partly reflect the methods used when educators were themselves learners, the observed variability in feedback approaches highlights the need for continuing professional development that focuses on recent advances. The lack of striking differences in scores between professions suggests that feedback skills within formal encounters may be more similar than different. Hence feedback literacy training could, at least in part, be designed for educators across the health professions, allowing significant efficiencies. Nevertheless, the extent to which these skills vary within informal feedback encounters and across different contexts requires further study. Practising clinicians are responsible for the majority of health professions training (of both senior students and junior clinicians), yet specified standards for their education and training role are rare. In contrast, health professionals spend many years training, and being carefully assessed, on their clinical skills.

The aim of our research is to assist educators to generate high quality learner-centred feedback, by developing descriptions of educator behaviours that could engage, motivate and enable learners to improve. It may well be that once clinicians have the opportunity to consider the recommended behaviours, it would be relatively easy for them to introduce missing elements into their practice. One valuable strategy would be for educators to video their feedback with a learner and then use the list to systematically analyse their own behaviours; this would also enable educators to engage in reflective learning and goal setting [57, 58]. In addition, exemplars of supervisors' phrases, or videos re-enacting quality feedback practices, may help educators to translate the principles of high quality feedback into new rituals. The set of behaviours is comprehensive; however, it could be useful to prioritise or summarise the items, as 25 recommended behaviours may seem overwhelming, especially to new educators.

Study strengths and limitations

Strengths of our study include self-recorded video-observations of authentic feedback episodes in routine clinical practice, revealing 'what actually happens' and targeting the top level of Miller's framework for competency assessment. Participants comprised a diverse group of clinical educators, characteristic of hospital practice. The educators' feedback practices were systematically analysed using an empirically derived, comprehensive set of 25 observable educator behaviours.

There are a number of limitations to our study. The sample was small (36 videos) and drawn from a single health service, although it is one of the largest in Australia, with multiple hospitals. Participants volunteered (which may have yielded a subset of educators and learners with stronger skills than those who did not volunteer) and recorded their own performances, potentially making our data overly optimistic. These factors limit the generalisability of our findings. There was some variation in rater consistency when applying the educator behaviour descriptions to the assessment of feedback; one reason could be differing interpretations of the descriptions. In future research, attention will be directed to refining the descriptions of observable behaviours and supporting information, accompanied by additional practice and discussion to optimise consensus amongst raters. Video raters represented only two health professions (two physicians and four physiotherapists), which raises the possibility that this influenced their analysis of educators' behaviours beyond their own professions, although we cannot see a plausible argument to support this. A number of educators used official feedback forms (from a university, hospital or specialty college); trying to complete these forms in accordance with their instructions may have influenced educators' conduct or distracted their attention, as such forms can be quite cognitively demanding. However, there is no compelling reason why best practice in feedback could not occur in parallel with any learner assessment rubric. In addition, educator-learner pairs could have had earlier feedback conversations, during which some of the quality feedback behaviours may have occurred, particularly those relating to setting expectations and establishing trust, but these were not captured on video.

Conclusions

Our study showed that during formal feedback sessions, educators routinely provided their analysis of the learner's performance, described how the task should be performed, and were respectful and supportive within the conversation. These are all valuable and recommended components of quality feedback. Nevertheless, other desirable behaviours were rarely observed. Commonly omitted elements included deliberately instigating a safe learning environment at the start of the feedback session (by explicitly articulating the purpose, expectations and likely structure of the session), encouraging self-assessment, activating the learner's motivation and understanding, creating an action plan and planning a subsequent performance review. This suggests that many advances in feedback research, regarding the importance of assisting learners to understand, incorporate and act on performance information, have not yet influenced routine clinical education. Our research clarifies valuable targets for educator feedback skill development across the health professions education community. However, further research is required to investigate whether implementing these recommended educator behaviours enhances learner outcomes, as intended.