Background

Effective interprofessional communication (IPC) between healthcare professionals promotes teamwork, improves patient care and boosts cost efficiency [1, 2]. IPC also encourages open, honest and frank discussions, facilitates negotiations and resolution of conflicts, and promotes shared decision making [3]. These features foster coordinated medical, nursing, social, psychological and financial support by different members of the interprofessional team and contribute to holistic and longitudinal patient-centric care [4].

Yet, whilst medical schools have been quick to recognise the importance of IPC training and to equip their students to meet the IPC competencies set out by the Accreditation Council for Graduate Medical Education (ACGME) and the World Health Organisation’s Framework for Action on Interprofessional Education & Collaborative Practice, significant diversity in the approaches to, and structuring of, current IPC training in medical schools has been observed [5]. These variations raise concern about the ability of medical students to function effectively in interprofessional teams upon graduation [6,7,8].

The need for this review

To advance a consistent approach to IPC skills training in medical schools, a scoping review of current practice is proposed [9]. Because most programs appear to be designed around the different levels of Miller’s Pyramid, this scoping review will frame its approach accordingly [6,7,8]. In addition, this scoping review will adopt a constructivist perspective and a relativist lens to capture IPC’s socioculturally sensitive, linguistically dependent, context- and user-specific nature [10, 11] across different education and healthcare systems [12,13,14,15,16].

Methods

A scoping review allows for the summarizing [17] of current approaches, pedagogies, assessments, and practice settings employed [18,19,20] in peer-reviewed and grey literature [12,13,14,15,16], and for the navigation of inevitable differences in practice, healthcare, education and healthcare financing across the different programs.

To guide this scoping review, we adopt Krishna’s Systematic Evidence-Based Approach (henceforth SEBA) [21, 22] to enhance the transparency and reproducibility of the scoping review (Fig. 1). To begin, SEBA employs an expert team comprising local clinicians, educators, researchers, and a medical librarian to determine the research question and guide the scope of the review. SEBA structures its search process by adopting the approach used in systematic reviews. To enhance the transparency of the review process, SEBA uses trained researchers to carry out independent searches for data across the selected databases, including grey literature. These researchers use consensus-based decisions to determine the final list of included articles. Independent reviews and consensus-based determinations are also part of SEBA’s ‘split approach’, which sees the concurrent use of thematic and content analysis of the data. The research team, guided by the expert team, reviews the findings and compares them with currently available data as part of the reiterative process and the synthesis of the scoping review. SEBA also employs the PICOS search strategy protocol and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) checklist [23], and incorporates Levac et al. (2015) [24]’s methodology for scoping reviews.

Fig. 1
figure 1

The SEBA Process

Stage 1. Defining the research question and scope

Guided by the expert team, the research team identified the primary research question to be: “What are the characteristics of prevailing IPC programs?” The secondary research question is: “What are the indications, training and evaluation methods, content, challenges and outcomes of these IPC programs?”

These questions were designed based on the population and concept elements of the inclusion and exclusion criteria, which are presented in a PICOS format in Table 1 [25]. For practical reasons, the ‘members’ of the IPC are drawn from a ‘small Multi-Disciplinary Team’ (MDT), which includes members from the faculties of medicine, nursing, physiotherapy, occupational therapy and social work [26]. Articles involving IPC training programs for medical students together with healthcare professionals and/or students from nursing, physiotherapy, occupational therapy and social work were reviewed.

Table 1 PICOS, inclusion criteria and exclusion criteria applied to database search

Stage 2. Independent searches

Under the guidance of the expert team, search strategies (Supplementary File 1) were formulated with the following keywords: ‘medical students’, ‘nursing students’, ‘allied health students’, ‘interprofessional’, ‘communication’ and ‘education’. In keeping with Pham, Rajić [27]’s approach to ensuring a viable and sustainable research process, the research team confined the searches to articles published between 1 January 2000 and 31 December 2018.
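The exact strategies are given in Supplementary File 1; as an illustrative sketch only (not the syntax actually used in any database), these keywords might be combined in a PubMed-style Boolean query along the following lines:

```text
("medical students" OR "nursing students" OR "allied health students")
AND interprofessional
AND (communication)
AND (education OR training)
AND ("2000/01/01"[dp] : "2018/12/31"[dp])
```

Equivalent field-tagged strings would be adapted for the controlled vocabularies of Embase, CINAHL and the other databases searched.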

Seven trained researchers carried out independent searches of the PubMed, Embase, CINAHL, Scopus, PsycINFO, ERIC, JSTOR, and Google Scholar databases and created independent lists of titles and abstracts to be scrutinized further based on the screening criteria detailed in Table 1. The researchers discussed their findings at online meetings and determined the final list of full-text articles to be reviewed using Sandelowski M [28]’s ‘negotiated consensual validation’ approach.

Selection of studies for review

The final list of full-text articles was independently scrutinised by members of the research team, who discussed their findings at online meetings. The research team determined the final list of full-text articles to be analysed using Sandelowski M [28]’s ‘negotiated consensual validation’ approach. Figure 2 shows a summary of the PRISMA process.

Fig. 2
figure 2

PRISMA Flowchart

Stage 3. Data characterization and Split approach [21, 22, 29]

Inspired by the notion that communication skills training is a longitudinal process that develops in competency-based stages, Hsieh and Shannon’s directed content analysis was adopted [30]. The codes and categories for this content analysis were drawn from the various stages of Miller’s Pyramid [6,7,8]. Miller’s Pyramid serves as an influential conceptual framework for the development and assessment of clinical competence, one which sees learners move from cognitive acquisition of knowledge to applied behaviour in the clinical settings where beneficiaries reside. Critically, an initial review of prevailing programs suggests that many IPC programs appear to be fashioned around the four levels of Miller’s Pyramid [6,7,8]: ‘Knows’, which requires the learner to be aware of knowledge and skills; ‘Knows How’, which sees the learner apply this knowledge and these skills in theory; ‘Shows How’, where knowledge and skills are applied in practice; and ‘Does’, where the learner is shown to be able to function independently in the clinical setting [31].

The decision to adopt content analysis was not unanimous, prompting the use of the ‘split approach’. The decision to adopt Braun and Clarke’s approach to thematic analysis [26] gained traction following the findings of the deductive category application. As part of the directed content analysis, the deductive category application suggested the presence of a number of other categories not related to the four levels of Miller’s Pyramid. These include the indications, structure, content, assessments and obstacles to IPC programs [14]. The omission of these critical categories, and the belief that the adoption of predetermined categories based on Miller’s Pyramid required further evidencing, underpinned the decision to adopt Krishna’s ‘Split Approach’ [23,24,25,26].

The ‘Split Approach’ [29] sees two independent teams carry out concurrent reviews of the data. Here, two members of the research team carried out concurrent and independent analyses of the data using Hsieh and Shannon’s directed content analysis [30], whilst three other members did so using Braun and Clarke’s approach to thematic analysis [26]. The findings were discussed within each sub-team at online and face-to-face meetings, where “negotiated consensual validation” was employed to determine the final list of themes and categories [32,33,34]. The themes from the thematic analysis [26] and the categories from the directed content analysis [30] were then compared [29].

Stage 4. Review of results and comparing them with current data

Using PRISMA guidelines (Fig. 2), an initial search in eight databases revealed 17,809 titles and abstracts after removal of duplicates. Two hundred and fifty full-text articles were reviewed and a total of 73 articles were included for analysis. The narratives were written according to the Best Evidence Medical Education (BEME) Collaboration guide [35] and the STORIES (STructured apprOach to the Reporting In healthcare education of Evidence Synthesis) statement [36].

Scrutiny of the themes identified through Braun and Clarke’s approach to thematic analysis [26] and the categories identified through Hsieh and Shannon’s directed content analysis [30] found them to overlap in some areas [29]. In addition, the five themes identified using thematic analysis [26], namely the indications, stages of training and evaluations, content, challenges and outcomes of IPC training, were similar to the categories identified using directed content analysis [30]. This allowed the themes and categories to be presented together.

a. Indications for IPC programs

The indications for the development of IPC programs are outlined in Table 2. Most accounts sought to assess perspectives towards interprofessional work and communication, introduce the use of IPC amongst medical students, assess the nature of these interactions, determine the roles and responsibilities of tutors and students in IPC, better understand the process of problem solving and teamwork, scrutinize the decision-making processes that occurred in collaborations, and evaluate the impact of debriefs and feedback sessions following IPC sessions. Many of these interactions took place in case discussions, simulations and/or clinical practice and involved medical students in pre-clinical and clinical postings. Other accounts focused upon training faculty in teaching and facilitating IPC, setting and evaluating clinical competencies and debriefs, and upon reports of IPC programs.

b. Stages of IPC training

Table 2 Indications for IPC Program

Whilst there were accounts that assessed a specific aspect of the IPC process or involved ‘snapshots’ of IPC interactions, accounts that took a longitudinal perspective did consider the development of IPC along the four levels of Miller’s Pyramid (Fig. 3) [6, 8, 38, 39]. As a result, we present the themes/categories related to each level of Miller’s Pyramid.

Fig. 3
figure 3

Miller’s Pyramid of Included Articles

Level 1: knows

Training

Forming the base of the pyramid, the “Knows” level of Miller’s Pyramid focuses on the acquisition of theoretical concepts and skills. IPC training at this level of Miller’s Pyramid [6, 8, 38, 39] was part of formal programs. This includes the provision of videos, lectures and briefings [40,41,42,43,44,45,46], online courses, didactic lectures and workshops [40, 47,48,49,50,51,52,53], seminars and conferences [44, 54,55,56] and even a ‘Healthcare Interprofessional Education Day’ where there were opportunities to clarify interprofessional roles and markers of proficiency [57]. IPC training at this level also took place through observations of interactions between members of the healthcare team, role-modelled in multidisciplinary settings [58,59,60,61,62].

Evaluation

Evaluations at this level of Miller’s Pyramid include self-reported surveys incorporating checklists, open-ended questions and Likert scales that assessed students’ perception of their own knowledge [40, 43, 47, 50, 54, 55, 57, 59, 63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86]. Focus group discussions [59, 87, 88] and semi-structured interviews were also carried out by faculty members to grade students on their ability to demonstrate their knowledge [52, 64, 65, 89,90,91,92]. Only the Mayo High Performance Teamwork Scale [49], the Scope of Practice Checklist [64], the Readiness for Interprofessional Learning Scale [93], the Conceptions of Learning and Knowledge Questionnaire [94] and a purpose-designed questionnaire in Jakobsen, Gran [95]’s study were validated.

Level 2: knows how

Training

To achieve the “Knows How” level of Miller’s Pyramid, emphasis was placed on problem-based discussions [49, 53, 56, 65,66,67,68,69, 96,97,98,99,100,101,102].

Evaluation

Students were asked to reflect on their IPC experiences [45, 56, 62, 72, 103]. In Robertson, Kaplan [104]’s program, students were asked to point out positive IPC skills demonstrated in a video and suggest areas for improvement.

Level 3: shows how

Training

The third level of Miller’s Pyramid, “Shows How”, requires students to demonstrate the application of knowledge in their clinical performance. Clinical scenarios included cardiac resuscitations [52, 65, 81, 102]; handoffs [105]; mock pages [41, 106]; communication with a senior clinician [107]; interactions with simulated patients [40, 44, 50, 92, 105] and manikins [46, 49, 73, 74, 76, 95, 108, 109]; simulated ward rounds [43, 45, 48, 51, 71, 110]; family meetings [47]; roleplay [100, 104]; paediatric clinical simulations [75]; Objective Structured Clinical Exam simulations [111]; and laboratory sessions [101]. Non-clinical scenarios incorporated the handling of difficult family conflicts [92] and sensitive cultural issues [92].

Evaluation

Student evaluations were carried out by faculty members [65, 69, 77, 99, 105, 108], and supplemented by feedback from simulated patients [49, 97, 99] and team building exercises [97]. A post-training analysis of verbal units of exchange during handoffs also served to quantify improvements in communication skills [105]. Once again, only the Mayo High Performance Teamwork Scale [49], the University of West England Interprofessional Questionnaire [89], the Readiness for Interprofessional Learning Scale [65], the Interprofessional Collaborative Competency Attainment Survey [57] and the I-PASS Medical Student Workshop [105] were validated.

Level 4: does

Training

At the apex of the pyramid, the “Does” level focuses on the students’ independent performance in real clinical settings. IPC training was facilitated via experiential learning in clinics [53, 70, 80, 83, 93, 103, 112] and wards [80, 83, 85, 93], student-led clinics [82, 88,89,90], motivational interviews with certified health educationalists [99] and home visits [78, 91, 113]. These interactions provide opportunities for students to share disciplinary insights and expertise, conduct collaborative medical interviews, explore complex patient cases and manage challenging situations as a unified group [78, 91].

Evaluation

Whilst students provided self-reports of their competency levels in clinical settings via questionnaires [79, 89, 96], faculty members [99] and patients [60, 70] were also involved in the observation and identification of good communication skills. Few tools were validated [89, 99]; exceptions include the University of West England Interprofessional Questionnaire [89], and the Indiana University Individual Communication Rubric and Indiana University Team Communication Rubric, which were modifications of the validated Indiana University Simulation Integration Rubric [99].

Attitudes

Acknowledging that IPC experiences and professional and personal development shape individual conceptions of IPC [40, 42, 49, 64, 69, 71, 106, 108], a combination of validated and unvalidated questionnaires, checklists, interviews and reflective pieces was employed to determine prevailing attitudes towards IPC [44, 47, 50, 51, 53, 55, 58, 59, 61, 66,67,68, 73,74,75,76,77, 80, 82,83,84, 86,87,88,89, 91, 93, 95, 98, 101, 102, 104, 109, 112].

Suitability of teaching and evaluation methods

It is of note that, across the 73 included studies, only 14 studies [46, 52, 62, 72, 78, 80, 85, 89, 97, 99, 101, 102, 108, 109, 111, 114] offered assessment methods that appropriately evaluated learning outcomes in a stepwise manner, as delineated by the stage(s) of Miller’s Pyramid.

Thirty-four studies assessed improvements in attitudes towards IPC or satisfaction with training in lieu of assessing any stage of Miller’s pyramid of competency, or conducted no assessment at all [42, 44, 47, 48, 50, 53,54,55, 58, 59, 61, 67, 68, 71, 73,74,75,76, 81,82,83, 86,87,88, 90,91,92, 98, 100, 101, 106, 110, 112, 113].

Content of IPC programs

Table 3 describes the list of topics covered in IPC programs. Most interventions were centred around clinical scenarios in various settings, deliberation of ethical issues and care determinations.

Table 3 List of topics for IPC Programmes and methods of training

Challenges to IPC training

Challenges to IPC training include scheduling conflicts, difficulties in preparing effective and appropriate programs, and obstacles in recruiting [108] and training [72] teachers [56, 80] and students [46, 54, 76, 78]. A further issue is the failure to vertically integrate IPC training, which has been found to reduce teamwork and collaboration and stunt professional identity [93].

Yet, perhaps less evident but nonetheless as concerning, is the lack of longitudinal assessment of IPC interactions [48, 57, 65]. Only one account amongst the 73 included studies employed a longitudinal assessment approach [90].

Outcomes of IPC training

The lack of longitudinal assessments limits outcome measures to self-reported increases in understanding and appreciation of IPC [40, 55, 63, 64, 70, 72, 73, 84,85,86, 91, 107, 112], self-perceived improvements in teamwork [49, 97, 99], communication techniques [107] and clinical communication [49, 97, 99], self-reported improvements in IPC competency [87, 107], and the belief that students would be better able to adapt to future practice [40, 47, 51, 57, 59, 62, 63, 66, 70,71,72, 74, 76, 78, 98, 102, 103, 107, 108].

Critically, M. Amerongen, Legros [64], Berger, Mahler [54], Bradley, Cooper [65], Erickson, Blackhall [47] and Robertson, Kaplan [104] found that efforts to instil IPC did not result in statistically significant improvements in IPC competencies and attitudes [47, 54, 64, 65, 104]. Bradley, Cooper [65] reported that scores for collaboration decreased three to four months after IPC training. Some have sought to attribute these poor results to cognitive overload [42], a ceiling effect [47] and the need for more training [47]. Meanwhile, initial discomfort [50] with this communication approach could be countered by continued collaborative work [52, 76, 93] with other healthcare professionals [58, 66, 70, 91, 108].

Stage 5. Consultations with key stakeholders and synthesis of discussion

Consultations with the expert team and local educators, clinicians and researchers well versed in IPC training proved particularly insightful. To begin, these discussions reviewed the omitted data identified through deductive category application [14]; this, together with the belief that adopting categories based on Miller’s Pyramid required further evidencing, underpinned the decision to adopt Krishna’s ‘Split Approach’ [23,24,25,26]. This led to a shift from the use of Levac et al. (2015) [24]’s methodology for scoping reviews to the adoption first of the split approach, and then to the integration of a more structured methodology in the form of a SEBA-guided approach to SRs, following comments by the journal’s anonymous reviewers.

Discussions with the expert teams and local educators, clinicians and researchers also revealed general consensus that the results of this review aligned with prevailing understandings of IPC programs. It was also agreed upon that there is an urgent need for further research on the impact of IPC training on interprofessional collaborations and in the design of comprehensive and longitudinal training and evaluation programs for medical students.

Discussion

In addressing its research questions, this scoping review revealed that diverse approaches, learning objectives, and methods of assessing IPC in medical schools contribute to poor alignment between training goals and the desired step-wise competency framework [6, 8, 39]. Forty-five of the included accounts focused on just one level of Miller’s Pyramid, 23 studies focused on two levels, whilst five studies considered three levels. Critically, 59 studies employed assessment methods inappropriate to the level of Miller’s Pyramid employed in their program [115].

Whilst we acknowledge that Miller’s Pyramid is by no means the definitive framework to be used in IPC training, it provides a sound, foundational, learner-centric, progressive scaffolding for the effective acquisition and assimilation of IPC knowledge and skills. There is sufficient data to suggest that IPC programs are best ‘spiralled’, bearing both vertical and horizontal integration within the curriculum. Whilst each stage builds upon prior core topics, knowledge and skills in a vertical manner, stages must also work in tandem horizontally with the wider medical school curricula to ensure that students are equipped with the other imperative skills that would adequately prepare them for simulations and clinical placements within their IPC training [116, 117]. This would enable students to see the interwoven nature of specific cognitive and procedural knowledge and skills across settings, allowing for more judicious decision-making and cohesive interprofessional collaborations.

Likewise, training and evaluation methods must be strategically curated and complementary to this stage-wise curriculum. Evaluations must be longitudinal, holistic and multi-sourced, and must allow faculty members to quickly identify areas for remediation. Thus, competencies must have both fixed elements and personalised components to contend with individual needs, abilities and contextual considerations. To this end, portfolios are recommended as a suitable learning and evaluation tool to accompany students as they hone their IPC skills [118,119,120]. Extensive follow-ups assessing attitudinal and behavioural change [121, 122] should also be conducted following graduation to determine the overall impact of the curriculum on IPC skills in the clinical setting [107].

Limitations

While it is reassuring that Miller’s Pyramid may be used to address present gaps in IPC training, there are a number of limitations to be broached.

First, drawing from a small pool of papers limited to articles published in or translated into English can be problematic, particularly when most are North American and European-centric. This may limit the applicability of the findings to wider healthcare settings.

Second, there is much to be clarified about the IPC training and assessment processes. This endeavour is set back, however, by a lack of holistic and longitudinal assessments and a continued reliance upon assessment tools still rooted in “Cartesian reductionism and Newtonian principles of linearity” [123], which fail to consider the evolving nature of the IPC training process and training environment [49, 57, 65].

Third, despite our independent searches and our independent efforts to verify and consolidate our findings, important articles may still have been omitted.

Conclusion

This scoping review finds that despite efforts to design IPC programs around competency-based stages, most programs lack a longitudinal perspective and effective means of appraising competency. Yet it is still possible to forward a basic framework for the design of IPC programs.

Acknowledging the need for a longitudinal perspective, IPC training should be structured around a ‘spiralled’ curriculum. This facilitates both vertical and horizontal integration within the formal medical training curriculum. Being part of the formal curriculum will also cement IPC as part of the core training processes in medical school and facilitate the recruitment and training of trainers, the establishment of purpose-built training slots over the course of the medical training program, financial support, and effective oversight of the program and the training environment. With more medical schools adopting a portfolio-based assessment process, IPC would be furnished with a clear means of longitudinal assessment of IPC competencies over the course of each competency-based stage. Portfolios also allow effective follow-up of graduates and a link with postgraduate training processes and portfolios.

The program itself must involve all 4 levels of Miller’s Pyramid [6, 8, 38, 39]. For Level 1 of Miller’s Pyramid, a combination of interactive workshops and role modelling of effective IPC in the clinical setting will help medical students appreciate the role of IPC.

Level 2 should involve case-based discussions on ethical and care issues in the interprofessional setting, whilst Levels 3 and 4 may be demonstrated in simulated clinics and ward rounds. Perhaps just as critical, IPC practice should be regularly assessed in all clinical postings to ensure that remediation can be carried out early.

Being part of the formal curriculum will also ensure quality appraisals of the IPC program and the policing of codes of conduct and practice standards. It will also facilitate research into better assessment measures and tools, communication dynamics and professional identity formation. Finally, it will facilitate evaluation of the translatability of these findings beyond medical schools and of their links to postgraduate practice.