Introduction

Despite improved treatment and care, children and adolescents diagnosed with cancer in the Western world continue to die, while many of those cured are burdened by treatment-related sequelae [1, 2]. The best clinical management of children and adolescents with cancer depends on healthcare professionals with various skills and expertise [3], as the treatment, care and rehabilitation of children and adolescents with cancer are so complex that they surpass the responsibilities and abilities of any single profession. To provide the best treatment and care, healthcare professionals are thus required to collaborate [4, 5], and interprofessional teams appear to be a vital component of the quality of care for children and adolescents with cancer and their families [6, 7]; however, the evidence supporting this remains limited.

In the process of designing an interprofessional education programme at the paediatric cancer department at Rigshospitalet, the largest paediatric cancer department in Copenhagen, Denmark, this research group found it relevant to explore whether any interprofessional education in paediatric cancer already existed. A steering group was established comprising two oncological consultants (MHH, BL), a professor (KS), the head nurse of the Children and Adolescents Unit (MMA), the head nurse of the paediatric cancer department (PR), the leader of psychosocial research in the Laboratory of Paediatric Oncology (HBL), the head of education and associate professor (JLS) and a PhD student (MKTOP).

Interprofessional education should be strategically planned based on a curriculum to continuously ensure and strengthen high-quality care for children and adolescents with cancer and their families. In medical education, various frameworks exist [8,9,10], such as the six-step approach to curriculum development [11].

A curriculum can be defined as “a planned educational experience” [11] that includes short- and long-term learning experiences. Curriculum development comprises problem identification, needs assessment, aims and objectives, educational strategies, implementation, and assessment, evaluation and feedback [11].

Interprofessional education can be defined as “occasions when two or more professionals learn with, from and about each other to improve collaboration and the quality of care” [12]. A systematic review of the effects of interprofessional education identified empirical research supporting the underlying assumption that interprofessional education enhances the delivery of safe, high-quality care for patients [13]. The review further found that learners react positively to interprofessional education, improve their collaborative attitudes and perceptions, and report gains in knowledge and skills across a variety of outcomes [13].

The underlying assumption is that an educational intervention improves how healthcare professionals work together, which in turn may lead to improved patient outcomes [13]. Interprofessional education is well established and has, in some settings, been shown to have a positive impact on the knowledge, attitudes and behaviours of healthcare professionals [14]. To derive the most benefit from educational interventions, medical education can be viewed as a health technology, applying evidence-based practice and evaluation for clinical practice [15]. However, interprofessional outcomes are not easily monitored, and research addressing interprofessional education is inherently complex [13, 16].

Curriculum outcomes typically cover cognitive (knowledge), psychomotor (skills) and affective (attitude) objectives, as defined by Bloom’s taxonomy [11]. A robust evaluation design is essential for reporting changes in the knowledge, skills and attitudes of healthcare professionals [14, 17, 18]. According to Kirkpatrick’s outcome evaluation model, which dates from the 1950s [10, 19], learning takes place when a change is registered in knowledge, skills or attitudes. The model pragmatically assists in framing potential areas and purposes of evaluation and has been widely applied in the assessment of interprofessional education [20]. Barr and colleagues extended the model to capture more detailed outcomes relevant to interprofessional education and incorporated a level for benefits to patients, as shown in Table 1 [14, 20].

Table 1 Classification of Kirkpatrick’s interprofessional education outcomes model modified by Barr et al. 2005

Health education research has widely applied scoping reviews [21,22,23,24,25] to identify key concepts in specific research areas, especially complex ones that have not been reviewed earlier [26]. According to Arksey and O’Malley, a scoping review can examine the extent, range and nature of research activity; determine the value of undertaking a full systematic review; summarise and disseminate research findings; and identify research gaps in the existing literature [26]. The scoping review methodology differs from other review methods, such as systematic reviews following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), in several ways [27]. Most notably, the research questions of scoping reviews are more broadly defined than those of systematic reviews, which leads to the inclusion of studies with all types of methods, as opposed to the specific methods required in systematic reviews [27]. Scoping reviews can contribute to generating hypotheses and chart the data according to key issues rather than synthesising and aggregating findings, as a systematic review would [26, 28].

The purpose of this scoping review is to identify and evaluate existing interprofessional education in paediatric cancer.

Methods

We applied Arksey and O’Malley’s scoping review stages 1–6 [26]. Table 2 provides an overview of how we applied the scoping review stages in this study.

Table 2 Application in this study of scoping review methodology based on Arksey and O’Malley and inspired by Reeves et al. 2017

Stage 1: Identifying the research question

In the process of designing an interprofessional education programme in paediatric cancer, a literature search was needed to identify existing educational activities. Our research question was formulated to encompass the broad aspects of education planning and evaluation in paediatric cancer.

Research question:

  • What does the literature reveal about interprofessional education in paediatric cancer?

With this broad research question, we wished to examine the extent, range and nature of educational activities in paediatric cancer, specifically exploring whether and how education programmes are evaluated and determining the nature of the reported outcomes.

Stage 2: Identifying relevant studies

An information specialist assisted in generating a search strategy based on keywords derived from the research question: (Oncology OR Hematology) AND (“Pediatric medicine” OR Pediatrics OR “Adolescents medicine”) AND (Curriculum OR “Education programme” OR “Educational programme” OR “Interprofessional education” OR “Interdisciplinary education” OR Program Development OR Postgraduate).
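For illustration, the Boolean string above can also be submitted programmatically to PubMed through the NCBI E-utilities interface. The following Python sketch is not part of our search method (the searches were run through the ordinary database interfaces); it simply shows how the string could be reused:

```python
import requests

# Boolean search string from the strategy above, reproduced verbatim
query = (
    '(Oncology OR Hematology) AND '
    '("Pediatric medicine" OR Pediatrics OR "Adolescents medicine") AND '
    '(Curriculum OR "Education programme" OR "Educational programme" OR '
    '"Interprofessional education" OR "Interdisciplinary education" OR '
    'Program Development OR Postgraduate)'
)

# Query the PubMed esearch endpoint; no date or language limits are applied,
# mirroring the unrestricted searches described below
response = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmax": 500, "retmode": "json"},
)
result = response.json()["esearchresult"]

print("Records found:", result["count"])
print("First PubMed IDs:", result["idlist"][:10])
```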

We searched the following databases for educational interventions: PubMed, Scopus and the Education Resources Information Center (ERIC). The searches were not limited by date, country of origin or language of publication. Figure 1 provides a flowchart of the studies identified and how they were selected.

Fig. 1 Search strategy and selection of studies

The scoping review methodology allows for inclusion of grey literature [26], which can be defined as “anything that has not been published in traditional format, or in library parlance, lacks bibliographic control […] this includes […] conference proceedings, conference posters […]” [29].

We searched online for interprofessional education offered by organisations such as the American Society of Pediatric Hematology/Oncology (ASPHO) [30], the Nordic Society of Paediatric Haematology and Oncology (NOPHO) [31] and the Nordic Society of Pediatric Oncology Nurses (NOBOS) [32], and on hospital websites such as those of the MD Anderson Cancer Center [33] and St. Jude Children’s Research Hospital [34].

Stage 3: Study selection

To answer the research question, we applied the following four inclusion criteria: (1) postgraduate education interventions, (2) in the field of paediatric cancer, (3) targeting more than one profession and (4) including an evaluation of the education intervention.

The exclusion criteria were monoprofessional education, education in other medical fields and interventions regarding patient treatment, care and rehabilitation or patient education. Figure 1 illustrates the study selection process.
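These criteria can be expressed as a simple screening predicate. The following Python sketch is purely illustrative; the field names are hypothetical, and the actual screening was performed manually, as described next:

```python
from dataclasses import dataclass

@dataclass
class Record:
    """Hypothetical screening fields noted for each record."""
    postgraduate_intervention: bool  # (1) a postgraduate education intervention
    paediatric_cancer: bool          # (2) in the field of paediatric cancer
    multiple_professions: bool       # (3) targets more than one profession
    evaluation_included: bool        # (4) the intervention is evaluated

def include(record: Record) -> bool:
    # A record is included only if all four criteria are met; monoprofessional
    # education, other medical fields and patient-directed interventions each
    # fail at least one criterion and are thereby excluded
    return (record.postgraduate_intervention
            and record.paediatric_cancer
            and record.multiple_professions
            and record.evaluation_included)
```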

We (MKT, LIR and MH) independently screened titles and abstracts. If an abstract met the inclusion criteria, the full-text paper was obtained and assessed individually by two authors (MKT and MH). The full-text articles are listed in Supplementary 1. In the event of disagreement, each author justified their decision until consensus was reached. When educational or methodological assessments diverged, JLS was consulted, while a second senior consultant (BL) was consulted on divergences involving paediatric cancer.

Articles in languages other than English, as elaborated in Supplementary 1, were screened based on their English-language abstracts and then filtered according to the inclusion and exclusion criteria.

EndNote X8 was used to store all articles, while the web-based application Rayyan [35] supported a semi-automatic process of screening and sorting studies based on titles and abstracts.

Stage 4: Charting the data

We extracted general information from the articles, such as country of origin, as well as specific information about the healthcare professionals involved and the aims, strategies and outcomes of the educational activities.

Stage 5: Collating, summarising and reporting the results

We reported data from the identified articles in accordance with two theories relevant to medical education: Kern’s six-step approach to curriculum development [11], to assess the curriculum and educational content, and the modified Kirkpatrick outcomes model [20], to evaluate the outcomes.

Stage 6: Consultation (optional)

The scoping review methodology as formulated by Arksey and O’Malley consists of five mandatory stages. However, Arksey and O’Malley suggest that including the opinions of stakeholders, such as practitioners and consumers, can contribute to the applicability of the results. This sixth stage is optional, and there is no description of when and how to apply it [36].

We applied the sixth stage throughout the iterative process of the scoping review by presenting the findings to the steering group. The steering group, which comprised all authors of this scoping review, discussed the findings and their implications on an ongoing basis.

Results

The database searches resulted in 418 records, two of which were removed as duplicates. Of the 416 remaining records, a further 385 were excluded, leaving 31 full-text articles. Reading these, we identified two additional studies from their reference lists, giving 33 full-text articles in total. Of these, 24 were excluded: 10 were reviews and 14 did not include an education intervention. This process led to the final inclusion of nine studies for analysis, as shown in Fig. 1.
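As a simple arithmetic check, the record flow reported above and in Fig. 1 can be reproduced as follows; the sketch uses only the counts already given:

```python
# Record flow as reported above and in Fig. 1
identified = 418
after_deduplication = identified - 2               # 416 records remain
full_text_from_search = after_deduplication - 385  # 31 full-text articles
full_text_total = full_text_from_search + 2        # 33 incl. reference-list finds
excluded_full_text = 10 + 14                       # reviews + no intervention
included = full_text_total - excluded_full_text

assert included == 9  # nine studies included for analysis
```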

Supplementary 1 provides an overview of the educational activities and methodological information of the nine included references, an overview of the 33 full-text articles, the articles in languages other than English and the number of citations in Scopus.

No relevant interprofessional education was identified in the grey literature [29] searched. The included studies were published between 1967 and 2017.

MH and MKT consulted with JLS on four occasions to resolve whether full-text articles should be included or excluded.

Results were sub-classified into existing interprofessional education, as shown in Table 3, and evaluation of interprofessional education, as shown in Table 4.

Table 3 Study characteristics of the nine included studies
Table 4 Application of Kirkpatrick’s modified interprofessional education outcomes model by Barr et al. 2005

Existing interprofessional education

The number of participants in each study varied from 19 [37] to 229 [38]. The healthcare professionals represented in the studies were predominantly nurses, physicians and psychosocial staff [37,38,39]. These groups were supplemented in one study by a child-life specialist [40], in another by a pharmacist [41] and in a third by a music therapist [42]. Two studies targeted only physicians and nurses [43, 44], and one study supplemented these two groups with paramedics and patient care technicians [45].

The topics covered by the interprofessional education interventions included pain management and assessment [38, 39, 41], team training to prevent burnout [37, 40], collaboration among healthcare professionals [43], training on healthcare professionals’ attitudes toward death [42], apheresis training [44] and improving the initiation of antibiotics for febrile patients [45].

The learning activities and educational strategies in the included studies comprised seminars [43], educational sessions [38, 41], lectures [42], staff meetings [45], slide presentations [44] and activities such as role play [39], reflections [37] and formal meditations [40].

Evaluation of interprofessional education

Five studies were pre-post intervention studies that compared baseline measurements with outcomes following an intervention [37, 39, 40, 42, 44]. Three studies had control groups [38, 40, 42], one of which randomised participants to either the control or intervention group [40]. Data collected included questionnaires on knowledge [38, 44] and attitudes [38, 42] and information gathered in focus groups [37, 41] and structured interviews [39]. One study collected data from medical records [45], and one training programme offered certification of the skills acquired; however, there was no validation of the certification [44].

None of the identified articles applied a medical education or curriculum model, such as the six-step approach to curriculum development [11] or Harden’s “ten questions to ask when planning a course or curriculum” [46].

None of the nine studies applied systematic evaluation theory to participant assessments in terms of knowledge, skills, attitudes or the effects on patient outcomes, such as quality of care. However, six studies reported statistically significant findings concerning knowledge [38, 44], behaviour change [39] and attitudes [37, 40, 42].

We applied Kirkpatrick’s model [19] as modified by Barr et al. [20] to systematise outcomes across the identified interventions, as shown in Table 4.

One study reported on the reaction of participants to being part of the intervention [43] (level 1) [20]. Three studies reported on the acquisition of knowledge [38, 41, 44] and four studies [37, 40, 42, 43] evaluated the modification of attitudes among healthcare professionals (level 2) [20].

Four studies measured behaviour change outcomes (level 3) [20], including increased compliance with guidelines [38, 39, 45] and increased self-awareness [40].

Three studies [38, 39, 45] reported on level 4b [20], which covers improvements in the health of patients.
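To summarise, the mapping of the included studies onto the modified outcomes model (Table 4) can be written out explicitly. The following sketch uses the reference numbers cited above; Barr et al. subdivide level 2 into 2a (attitudes and perceptions) and 2b (knowledge and skills), and the split below follows the citations in the text:

```python
# Outcome levels of the modified Kirkpatrick model (Barr et al. 2005 [20]),
# mapped to the reference numbers of the included studies as reported above
outcome_levels = {
    "1 (reaction)": [43],
    "2a (modification of attitudes/perceptions)": [37, 40, 42, 43],
    "2b (acquisition of knowledge/skills)": [38, 41, 44],
    "3 (behavioural change)": [38, 39, 40, 45],
    "4b (benefits to patients)": [38, 39, 45],
}

for level, studies in outcome_levels.items():
    print(f"Level {level}: studies {studies}")
```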

Discussion

There is a lack of well-structured interprofessional education in paediatric cancer that has undergone evaluation. We found few studies that assessed the needs of the learners or defined the healthcare needs of the patients. Most studies instead planned the educational activities according to available standards, competency frameworks and organisational demands.

In the definition of interprofessional education, “occasions when two or more professionals learn with, from and about each other to improve collaboration and the quality of care” [12], the focus is on improving collaboration and the quality of care. We identified only one study [37] with an explicit interprofessional aim. However, there are many definitions of interprofessional collaboration and interprofessional practice, which are also sometimes referred to as teamwork [47]. We adhere to the contingency approach to interprofessional practice as formulated by Reeves et al., namely that the “design of the team need to be matched to its clinical purpose(s) in order to serve the local needs of patients” [48]. This implies that interprofessional practice depends on two aspects, the clinical purpose and the patients’ needs, and that the choice of which healthcare professionals should collaborate follows from these two aspects.

In designing interprofessional education, the focus should be on improving collaboration and heightening the quality of care, relating to, for example, Kirkpatrick’s outcome level 3, which “measures the transfer of interprofessional skills and learning to workplace” [13]. This could take the form of support for behaviour change in the department or the willingness of healthcare professionals to apply new knowledge and skills about collaborative work in their practice.

In medical education, it is fundamental to link curricula to healthcare needs and define aims [11]. Meeting healthcare needs requires an interprofessional approach in many specialties, including paediatric cancer. We can potentially ensure and strengthen treatment and care for children and adolescents with cancer and their families by linking interprofessional education to the healthcare needs of the patients because the best clinical management of children and adolescents with cancer depends on healthcare professionals with various skills and expertise.

Educational strategies were only superficially described across the studies, and none compared the effects of different educational methods or teaching strategies. A transparent presentation of educational methods can inspire other healthcare professionals to develop curricula and evaluate their education programmes [11, 49, 50]. Furthermore, applying a medical education framework to structure an educational intervention would allow hospital management and department managers to hold medical educators accountable [11].

The identified interventions did not follow any specific evaluation framework, making it difficult to compare them in this scoping review. Incorporating an interprofessional evaluation framework in interventions can aid the systematic evaluation of the usefulness of education programmes [49]. Even though the outcome models of Kirkpatrick and Barr et al. have been criticised for their apparent simplicity [51,52,53,54], both models are helpful when planning the evaluation of medical education.

Limitations of the review

The primary limitation of this scoping review is the low number of included studies, which limits the generalisability of the results. The heterogeneity of the findings further challenges the interpretation of the extracted results. To counteract this, we presented our results transparently to increase credibility.

In the nine articles reviewed, self-reported measures were used to evaluate outcomes related to the knowledge, skills and attitudes of healthcare professionals. An inherent weakness of self-reported outcome measures is that individuals often over- or underestimate their knowledge, skills and behaviours [55, 56]. In this scoping review, three studies reported on the acquisition of skills [39, 41, 45]; however, only two studies documented this [39, 45]. Instead, surrogate outcomes, such as 24-h chart audits [41] or tests of knowledge of what to do (a skill) in case of machine breakdown [44], were used to indicate that an increase in the knowledge of the healthcare professionals was associated with behaviour change.

According to Arksey and O’Malley, a purpose of a scoping review is to aid in determining the value of undertaking a full systematic review [26]. We suggest that applying a systematic review methodology, such as the PRISMA statement [28], would not currently be feasible owing to the heterogeneity and limited number of relevant studies.

Even though the scoping review methodology allows for the inclusion of grey literature [29], we did not systematically include it in the findings. It is possible, however, that organisations such as NOPHO, NOBOS and ASPHO, and hospitals such as the MD Anderson Cancer Center or St. Jude Children’s Research Hospital, have developed and implemented interprofessional education without publishing it or posting it online.

Conclusion

In conclusion, medical education should be viewed like any other health technology, which is why evidence-based practice and evaluation for clinical practice in paediatric cancer are necessary to derive the most benefit from educational interventions [15]. This scoping review illustrates the lack of well-structured, evaluated interprofessional education in paediatric cancer.

Perspectives

Based on education theory and the literature, we recommend that future interprofessional education programmes apply a medical education framework [11, 46] when designing interventions; select aims and objectives based on a needs assessment [11]; define outcomes before designing the intervention, including patient outcomes when possible [57]; select topics relevant for an interprofessional education intervention, recognising that some interventions are more suited to monoprofessional education [58]; and, finally, use a systematic approach to evaluation [19, 20] with the allocation of relevant resources [11].