Brief Training of Student Clinicians in Shared Decision Making: A Single-Blind Randomized Controlled Trial
Background
Shared decision making is a crucial component of evidence-based practice, but a lack of training in the “how to” of it is a major barrier to its uptake.
Objective
To evaluate the effectiveness of a brief intervention for facilitating shared decision making skills in clinicians and student clinicians.
Design
Multi-centre randomized controlled trial.
Participants
One hundred and seven medical, physiotherapy, or occupational therapy students undertaking a compulsory course in evidence-based practice as part of their undergraduate or postgraduate degree at two Australian universities.
Intervention
The 1-h small-group intervention consisted of a facilitated critique of a five-step framework, strategies for implementing each step, and a pre-recorded modelled role-play. Both groups were provided with a chapter about shared decision making skills.
Main Measures
The primary outcome was skills in shared decision making and communicating evidence [Observing Patient Involvement (OPTION) scale and items from the Assessing Communication about Evidence and Patient Preferences (ACEPP) Tool], rated by a blinded assessor from video-recorded role-plays. Secondary outcomes were confidence in these skills and attitudes towards patient-centred communication [Patient Practitioner Orientation Scale (PPOS)].
Key Results
Of the 107 participants, 95 % (102) completed the primary outcome measures. Two weeks post-intervention, intervention group participants scored significantly higher on the OPTION scale (adjusted group difference = 18.9, 95 % CI 12.4 to 25.4), ACEPP items (difference = 0.9, 95 % CI 0.5 to 1.3), confidence measure (difference = 13.1, 95 % CI 8.5 to 17.7), and the PPOS Sharing subscale (difference = 0.2, 95 % CI 0.1 to 0.5). There was no significant difference for the PPOS Caring subscale.
Conclusions
This brief intervention was effective in improving student clinicians’ skills in, confidence in, and attitudes towards facilitating shared decision making. Following further testing of its longer-term effects, incorporation of this brief intervention into evidence-based practice courses and workshops should be considered, so that student clinicians graduate with these important skills, which are typically neglected in clinician training.
KEY WORDS: shared decision making; evidence-based practice; continuing medical education
While various definitions abound, the core tenets of shared decision making include: a two-way exchange of information between the patient and clinician; discussion and deliberation of the possible options and their outcomes; and the clinician and patient both participating in the decision making process and arriving at a decision.1 Successful evidence-based practice requires clinicians to incorporate patient values and preferences and to utilise shared decision making.2 Indeed, shared decision making can be viewed as a prerequisite to evidence-based practice.3 However, shared decision making has not been routinely adopted by health professionals, and barriers to its implementation remain.4 One barrier is simply that clinicians are not taught the “how to”. Recent commentaries have proposed that teaching shared decision making skills to health professionals is required as part of efforts to advance its implementation.5–7 Clinicians can be taught shared decision making skills,8,9 but this teaching needs to become routine for all clinician training. While a small number of continuing professional development events that teach shared decision making now exist, they are of variable quality and, in most cases, not widely available.10
It is not clear when teaching of these skills should start to achieve maximum effect. It has been suggested that shared decision making training may be more effective if incorporated into undergraduate educational curricula, so that clinicians graduate not just aware of shared decision making but with some experience of it, before establishing clinical habits and patient interaction patterns.11 But we know little about whether and when students are taught these skills. For example, a survey of evidence-based medicine curricula in UK medical schools asked about just one element of shared decision making (risk communication), which was taught and practiced in tutorials in only one-quarter of respondent schools.12
Despite the overlap and relevance of shared decision making to two areas (evidence-based practice and patient communication), teaching of these skills seems to fall between the two, with neither routinely covering the topic. Much of the skill focus of evidence-based practice teaching is on finding and appraising evidence,13–15 and shared decision making is not commonly covered under the umbrella of communication skills. This is demonstrated by its omission as an essential skill from a UK national consensus statement on communication curricula for medical programs16 and from the accreditation standards for USA medical schools.17 Consequently, clinicians are graduating without these skills, and student competency in communication skills does not extend to shared decision making competency.18 We aimed to evaluate the effectiveness of a brief intervention designed to teach student clinicians, at both the undergraduate and postgraduate level, skills in facilitating shared decision making.
This was a wait-listed, multi-centre, single-blind randomized controlled trial.
Participants and Setting
Participants were either third year medical undergraduate students, final year occupational therapy honours students, or postgraduate physiotherapy students enrolled in a compulsory course on evidence-based practice knowledge and skills at one of two universities in Queensland, Australia. Students had previously received teaching on communication skills as part of their regular curricula, but no previous shared decision making training. The medical students were enrolled at one university (with data collection between September and November 2011) and the allied health students were at another university (with data collection in September and October 2010). This study was approved by the ethics committees of both universities.
All students were invited to participate in the study during the first class of the course, and were assured of no academic consequences for not participating. After providing informed consent, participants were paired during the first week of the course and randomly assigned to one of two groups (intervention or control) by an independent administrative assistant using a computer-generated random numbers table. Each pair was provided with two clinical scenarios (focused on an intervention question) specific to their discipline, with brief details of a patient and their context, and an appropriate randomized controlled trial to address each scenario. The scenarios and citations of the trials are available from the corresponding author, with an example of one provided in the online supplementary materials. Participants were instructed to prepare two role-play consultations of approximately 5 min duration, with one participant playing the role of the clinician and the other playing the patient for one scenario. Participants swapped roles for the second scenario. Both intervention and control group participants were also provided with a copy of a book chapter that discussed the principles of and strategies for shared decision making and communicating evidence to patients, and instructed to read it.19
Two weeks later, participants completed a short questionnaire that contained baseline measures (see outcome measures section) and performed both of their role-plays (with an interval of approximately 30 min between each) in a small consulting room with only one academic staff member also present. All role-plays were videorecorded to enable baseline measurement of shared decision making skills, and no feedback was given. After performance of the second role-play, each pair of participants was provided with two new clinical scenarios and instructed to prepare new role-plays. Intervention group participants then attended the 1-hour intervention session immediately after the baseline role-plays were completed.
Two weeks later, all participants performed their two new role-plays. The same procedure that was used at baseline was followed: role-plays were videorecorded and participants completed a questionnaire containing the confidence and attitude outcome measures. After data collection was complete, control group participants received the intervention.
Description of the Intervention
The intervention session was a 1-h tutorial facilitated by one of the authors (TH). It commenced with a presentation of a five-step framework for communicating evidence for participatory decision making (see online appendix) and some strategies for implementing each step.20 Participants were then shown a DVD of a pre-recorded modelled role-play of a 12-min consultation between a clinician and a standardised patient that demonstrated some of the skills being taught. The tutorial concluded with a facilitated critique of this role-play and group discussion about strategies that can be used to facilitate shared decision making and when various strategies may be most appropriate. The session was repeated a number of times so that each session contained no more than 18 participants. The intervention slides and the script of the modelled role-play are available from the corresponding author.
Background Participant Characteristics
Participants were asked questions about previous evidence-based practice training, and if they had previously worked clinically, for how many years.
Outcome Measures
The primary outcome was shared decision making skills. To measure this, the recorded role-plays were rated using the revised Observing Patient Involvement (OPTION) scale, which contains 12 items, each scored on a five-point scale: 0) the behaviour was not observed; 1) a minimal attempt was made to exhibit the behaviour; 2) the behaviour was demonstrated; 3) the behaviour was demonstrated to a good standard; and 4) the behaviour was executed to a high standard.19 Item scores are summed and transformed to give a score out of 100. The OPTION scale has demonstrated good construct validity,21 content validity,21 and concurrent validity.22 Reliability has also been demonstrated, with internal consistency ranging from 0.68 to 0.79 and inter-rater reliability ranging from 0.62 to 0.77.21,23 As the OPTION scale does not specifically evaluate skills in communicating evidence, the role-plays were also rated using five items from the Assessing Communication about Evidence and Patient Preferences (ACEPP) Tool, which has shown good reliability.24 These items rate clinicians on their ability to describe the benefits and harms of a treatment and the likelihood of these occurring, provide information individualised to the patient, and describe the source of the research evidence. Each item was scored on the occurrence and quality of the behaviour as: not observed (0), observed to a basic level (0.5), or observed to an extended level (1). All video-recorded role-plays were scored by one of the authors (CT), who was blind to group allocation and had received training in using the OPTION and ACEPP scales. A sample of recordings was rated by three other authors (TH, SB, CDM), responses were discussed, and rating of role-plays continued until high agreement was reached.
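The scoring arithmetic just described can be sketched as follows (a minimal illustration only: the 0–100 conversion assumes the standard linear rescaling of the 12-item sum, and summing the five ACEPP items into a total is our simplification, not a documented scoring rule):

```python
def option_score(item_scores):
    """Transform 12 OPTION item ratings (each 0-4) to a 0-100 score,
    assuming the standard linear rescaling: sum / 48 * 100."""
    assert len(item_scores) == 12 and all(0 <= s <= 4 for s in item_scores)
    return sum(item_scores) / 48 * 100

def acepp_items_score(item_scores):
    """Score 5 ACEPP items, each rated 0 (behaviour not observed),
    0.5 (basic level) or 1 (extended level); returned here as a 0-5
    sum (summing is our simplification for illustration)."""
    assert len(item_scores) == 5 and all(s in (0, 0.5, 1) for s in item_scores)
    return sum(item_scores)

print(option_score([2] * 12))                # → 50.0
print(acepp_items_score([1, 0.5, 0, 1, 1]))  # → 3.5
```

A rater who scores every OPTION item as “demonstrated” (2) thus produces a transformed score of 50 out of 100.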
Secondary outcomes were attitudes towards patient and clinician involvement in consultations and confidence in communicating with patients about evidence. Attitude was measured using the Patient Practitioner Orientation Scale (PPOS).25 The PPOS has demonstrated good construct validity;26,27 for example, it demonstrates criterion validity by correlating with an objective measure of frequency counts of verbal exchange between doctor and patient,26 as well as face validity and predictive and discriminant validity.27 It contains 18 items, each scored on a six-point Likert scale (1 = strongly disagree; 6 = strongly agree). Items 1 to 9 form a Sharing subscale, which measures the extent to which a clinician believes that patients should be given information and included in the decision making process. Items 10 to 18 form a Caring subscale, which measures the extent to which a clinician sees a patient’s context, expectations, and concerns as important elements of the decision making process.
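The subscale split just described can be sketched as follows (a minimal illustration; negatively worded PPOS items are typically reverse-scored, so this sketch assumes item scores have already been oriented so that higher values are more patient-centred, and it reports subscales as item means, which is our assumption for illustration):

```python
def ppos_subscales(item_scores):
    """Split 18 PPOS item ratings (1-6 Likert) into the Sharing
    (items 1-9) and Caring (items 10-18) subscales, reported here as
    item means. Assumes any reverse-scoring has already been applied."""
    assert len(item_scores) == 18, "PPOS has 18 items"
    sharing = sum(item_scores[:9]) / 9   # giving information, involving patients
    caring = sum(item_scores[9:]) / 9    # patient context, expectations, concerns
    return sharing, caring

print(ppos_subscales([4] * 9 + [5] * 9))  # → (4.0, 5.0)
```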
As no suitable measure for evaluating confidence existed, an 11-item questionnaire evaluating participants’ confidence in facilitating shared decision making and communicating evidence to patients was developed for this study (copy available on request from the authors). Each item was scored on a 10-point visual analogue scale, where 1 = not at all confident and 10 = very confident.
Descriptive statistics were calculated for each outcome measure for each group at baseline and post-intervention, and one-way analysis of covariance, adjusting for baseline outcomes, was used to compare between-group differences of post-intervention means. Data were analysed on an intention-to-treat basis using SPSS (version 19). We did not perform an a priori sample size calculation, as our maximum sample size possible was constrained by student enrolment numbers in the courses during the study period.
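The baseline-adjusted comparison described above can be sketched in Python (a minimal least-squares illustration of one-way ANCOVA, not the authors’ SPSS analysis; all variable and function names are ours):

```python
import numpy as np

def ancova_adjusted_difference(baseline, post, group):
    """One-way ANCOVA via ordinary least squares: regress
    post-intervention scores on baseline scores plus a 0/1 group
    indicator. The indicator's coefficient is the between-group
    difference in post-intervention means, adjusted for baseline."""
    X = np.column_stack([np.ones(len(baseline)), baseline, group])
    coef, *_ = np.linalg.lstsq(X, np.asarray(post, dtype=float), rcond=None)
    return coef[2]  # adjusted group difference

# Synthetic check: a true group effect of 5 on top of baseline-driven scores
baseline = [1, 2, 3, 4, 5, 6]
group = [0, 0, 0, 1, 1, 1]
post = [10 + 2 * b + 5 * g for b, g in zip(baseline, group)]
print(round(ancova_adjusted_difference(baseline, post, group), 2))  # → 5.0
```

Adjusting for baseline in this way removes any chance imbalance in starting scores between the randomized groups before comparing post-intervention means.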
[Table 1: participant characteristics — previous evidence-based practice training; clinical experience (prior to commencing this program); and, of those with clinical experience, its length, median (IQR) years — for the intervention (n = 54) and control (n = 53) groups.]
Table 2. Mean Baseline and Post-Intervention Scores for Shared Decision Making (SDM) Training and Control Groups and Post-Intervention Between-Group Differences for All Outcome Measures

Outcome measure                        Post-intervention between-group mean difference, adjusted for baseline (95 % CI)
OPTION scale                           18.9 (12.4 to 25.4)
ACEPP items                            0.9 (0.5 to 1.3)
PPOS Sharing subscale (items 1–9)      0.2 (0.1 to 0.5)
PPOS Caring subscale (items 10–18)     0.08 (-0.09 to 0.3)
Confidence scale                       13.1 (8.5 to 17.7)
Table 2 shows the mean baseline and post-intervention scores for all measures and the between-group mean differences in post-intervention scores, adjusted for baseline. There were statistically significant between-group differences at post-intervention for most of the outcome measures, with intervention group participants showing improvements in: skills in shared decision making, by 19 points on the 100-point OPTION scale (a 19 % improvement); skills in communicating about evidence and patient preferences (ACEPP items; 18 % improvement); confidence in facilitating shared decision making (confidence scale; 13 % improvement); and one component of attitude towards patient and clinician involvement in consultations (Sharing component of the PPOS; 3 % improvement). The increase in scores for the Caring component of the PPOS (1.3 %) was not significantly greater than in the control group.
Our evaluation found that a brief intervention, designed to provide student clinicians with the knowledge and skills to facilitate shared decision making, was effective in four of the five outcomes measured: skills in shared decision making and communicating with patients about evidence, confidence in these skills, and attitudes towards providing information to patients and involving them in decision making. Attitudes towards viewing a patient’s context, expectations and concerns as important elements of the decision making process did not improve in the intervention group more than the control group. This may reflect the use of a reasonably straightforward clinical scenario in the intervention, which did not allow for in-depth exploration of the complexities that patients bring to real clinical consultations.
Strengths of this study include the rigour of the evaluation method: a randomized trial, use of a validated and standardised rating scale as the primary outcome measure, and blinded objective assessment of shared decision making skills. Few randomized trials of teaching evidence-based practice skills, particularly with blinded assessors and objective measurement of skills, have been conducted, although knowledge gained from such trials is important for enhancing the quality of training in this area.13 A further strength is the recruitment of clinical students from two universities and three disciplines (medicine, occupational therapy, and physiotherapy). Although shared decision making is a skill needed by all clinicians and a component of care that may benefit from an interprofessional approach, nearly all existing research in the area has focused on medical practitioners.4 Limitations include not measuring decay of the intervention effect over time and the artificial nature of the simulated patient encounter. Results may have differed if students had been interacting with real patients and without scrutiny; a trial measuring this, as well as the intervention effect over time, would be valuable.
Participants’ high baseline confidence in their ability to facilitate shared decision making with patients is probably misplaced when their low baseline scores on the OPTION scale, which measures actual skill level, are considered. Clinicians are typically overconfident about communication and often perceive it to be a natural, easy skill that does not require specific training.28 However, when their actual skills in shared decision making are measured, they are often poor.29 Clinician communication skills do not necessarily improve with time or experience,30 further highlighting the need for specific training in these skills.
There have been some randomized trials of training clinicians to improve their skills in patient-centred communication, with one showing a significant effect of training that persisted 2 years afterwards.31 Shared decision making can be considered a component of patient-centred communication, and while there is some overlap between the skills taught and evaluated in trials of patient-centred communication training and in our study, there are also some differences. Most notably, our intervention also emphasised the incorporation of evidence into the decision making process, including presentation of the probability (or likely size) of the benefits and harms of each option being discussed and the associated strategies for presenting this numerical information in a manner that patients can understand. In the Cochrane review of interventions to improve health professionals’ adoption of shared decision making, there were only three trials in which the intervention involved training clinicians in shared decision making (as part of a multi-faceted intervention), with the remainder of the included studies evaluating patient-mediated interventions.9 However, in all of the trials that involved clinician skill training, the intervention was delivered to practicing clinicians. To our knowledge, this is the first study, and the first randomized trial, to evaluate a method of teaching shared decision making skills to student clinicians.
Most patients want their clinician to engage them in shared decision making, yet many are not provided with the opportunity or are not satisfied with the attempt.7 For too long, there have been calls for shared decision making to be routinely implemented into patient care, and more recently, calls for clinicians to receive training in these skills.5–7,9,32 However, rhetoric must be replaced by practical steps to ensure this happens. Previous attempts at skill training in this area have focused on practicing clinicians. Why not treat shared decision making skills as essential skills that are routinely taught as part of student clinician training, in the same way that more generic communication skills and basic sciences are taught almost universally? The sooner these skills are acquired, the sooner shared decision making becomes part of patient-communication habit.
A recommendation from an overview of systematic reviews of training strategies for teaching communication skills is that training should use active, practice-orientated strategies, such as small group discussions and feedback, supplemented by modelling and presentations.33 The intervention in the current study incorporated a number of these strategies, although the opportunity to practice the strategies and receive feedback about them was not included, primarily because of the brief (1 h) nature of the intervention. However, the intervention was intentionally designed to be brief so that it could be integrated into already full curricula. Increasing the intensity and/or complexity of the training may pose pragmatic difficulties, as recently discovered by Han and colleagues,34 and threaten inclusion of this content in an increasingly competitive curriculum environment.
A brief intervention such as the one evaluated in the current study should not be the only exposure to or training in these skills that clinicians receive. Research currently underway to develop a core set of shared decision making competencies for continuing professional development programs10 will provide useful guidance on the shared decision making skills clinicians need. Ensuring that clinicians receive training in shared decision making skills, ideally commencing during their journey to becoming a clinician, is an important way of bridging the evidence-practice gap, and one that has been largely ignored to date.
The authors thank the students who participated in the study, staff at both universities (Sandy Brauer, Robert Nee, Chrissy Erueti, Charles Leduc, Carina Doyle) who assisted with conducting the trial, and Prof Paul Glasziou (Professor of Evidence-Based Medicine, Centre for Research in Evidence-Based Practice) for his helpful comments on the manuscript.
TH is supported by a National Health and Medical Research Council of Australia (NHMRC)/Primary Health Care Research Evaluation and Development Career Development Fellowship (number: 1033038) with funding provided by the Australian Department of Health and Ageing. The funders had no role in design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. No specific funding was received to conduct this trial.
This paper was presented at the Cochrane Colloquium in Auckland, New Zealand in October 2012 and at the inaugural International Evidence-Based Health Care Conference in New Delhi, India in October 2012.
Conflict of Interest
The authors declare that they do not have a conflict of interest.
- 1. Charles C, Gafni A, Whelan T. Decision-making in the physician–patient encounter: revisiting the shared treatment decision-making model. Soc Sci Med. 1999;49:651–661.
- 7. Alston C, Paget L, Halvorson G, et al. Communicating with patients on health care evidence. Discussion paper. Washington, DC; 2012.
- 9. Légaré F, Ratté S, Stacey D, et al. Interventions for improving the adoption of shared decision making by healthcare professionals. Cochrane Database Syst Rev. 2010;(5):CD006732. doi:10.1002/14651858.CD006732.
- 17. Liaison Committee on Medical Education. Functions and structure of a medical school: standards for accreditation of medical education programs leading to the M.D. degree. June 2013. Available at: http://www.lcme.org/publications/functions.pdf. Accessed Dec 27, 2013.
- 19. Hoffmann T, Tooth L. Talking with clients about evidence. In: Hoffmann T, Bennett S, Del Mar C, eds. Evidence-based practice across the health professions. Sydney: Elsevier; 2010:276–299.
- 28. McKenna K, Tooth L. Client education: a partnership approach for health practitioners. Sydney: UNSW Press; 2006.
- 32. Salzburg statement on shared decision making. BMJ. 2011;342:d1745.