Background

Evidence-based practice (EBP) is essential for the delivery of quality healthcare [1]. It is a process that allows patients, health professionals, researchers and/or policy makers to make informed health decisions in a given context, based on an integration of the best available evidence with clinical expertise and patient values and preferences [2, 3]. Most commonly this involves five steps: Ask, Acquire, Appraise, Apply and Assess [1, 2, 4]. Competency in EBP is expected by many accreditation bodies, requiring health practitioners to demonstrate skills across the five domains, including asking a defined question, literature searching, critical appraisal, integrating evidence into clinical practice and self-reflection [2]. These domains intersect with the key EBP competencies of knowledge, skills, attitudes and behaviours, each being critical to the successful implementation of the five steps in clinical practice [2]. However, a gap remains between desired and actual practice [5].

Many of the identified barriers to the use and implementation of EBP could be overcome by education [6]. These include inadequate skills and a lack of knowledge, particularly pertaining to the research skills required to acquire and appraise studies. In addition to education around these core skills, greater practice of and exposure to EBP could also overcome barriers, particularly those relating to lack of awareness and negative attitudes [6]. Many practitioners misunderstand EBP as simply the ability to keep up to date with research. With an average of over 2,000 citations added to MEDLINE each day [7], the ability to effectively and efficiently search for and identify relevant, high-quality evidence is a critical skill.

A study of undergraduate health students' perceptions of the meaning of EBP revealed a very limited understanding of EBP processes and principles [4]. This is despite the fact that, over the last two decades, EBP has been integrated into core health curricula [2]. The most common teaching methods in undergraduate programs include research courses and workshops, collaboration with clinical practice, information technology, assignments, participation in research projects, journal clubs and embedded librarians [8]. A number of systematic reviews have been published evaluating the effectiveness of interventions focused on teaching EBP. Some have addressed only one aspect of EBP, such as literature searching [9], whilst others have focused on specific cohorts such as medical trainees [10], medical students [11], undergraduate health students [12], postgraduate teaching [13], or nursing programs [14].

More recently, Bala et al. [15] performed an overview of systematic reviews examining the effects of different teaching approaches for EBP at undergraduate and postgraduate levels. The overview identified 22 systematic reviews, the most recent published in 2019. It found that knowledge improved when interventions were compared to no intervention or to pre-test scores [15], across a diverse range of teaching modalities and populations. Similarly, there were positive changes in behaviour, with EBP skills also improving in certain populations. However, of the 22 systematic reviews included, only three were judged to be of high quality, one of moderate quality, one of low quality, and the remaining 17 of critically low quality. Bala et al. [15] reported that the reasons for categorisation as low quality were most often the lack of a comprehensive search strategy and/or of an adequate risk of bias assessment tool.

As the principles of EBP remain the same irrespective of the health profession, the aim of this systematic review was to identify the current evidence base on the effectiveness of different teaching modalities on undergraduate and postgraduate learner competency in EBP across all fields of medicine, allied health and health sciences. This review also aims to provide a high-quality update of our 2014 systematic review, which focussed on EBP training in medical trainees. It expands the population of interest to all health professions trainees at both undergraduate and postgraduate levels, encompassing all areas of EBP competency, including knowledge, skills, attitudes and behaviours, as well as self-efficacy.

Methods

Cochrane methodology was used to ensure the robustness of this review's systematic approach, following the Cochrane Handbook [16] and reporting in accordance with the PRISMA 2020 statement [17], as outlined in the steps below.

Step 1: Defining the research question and inclusion criteria

A conceptual framework for data synthesis [18, 19], which shares similarities with the EBP PICOTS framework [20], was utilised to determine the inclusion criteria and eligibility of studies to be included in this systematic review (Table 1).

Table 1 Conceptual framework for data synthesis

Step 2: Searching the literature and the identification of studies

Electronic searches were conducted across the following databases: MEDLINE, Cochrane Central Register of Controlled Trials, PsycINFO, CINAHL, ERIC, A + Education and AEI (Informit). No language or date restrictions were imposed. The search was last updated in November 2021. The full search strategy is available in Supplementary file 1. Citations of all articles returned from the searches of the respective electronic databases were uploaded for review using Covidence [21]. All citations were reviewed independently by two authors (BH and DI) for possible inclusion in the systematic review based on the title and abstract. Full-text review of these articles, as well as of those where inclusion/exclusion could not be determined from the title and/or abstract alone, was conducted by the same two authors (BH and DI). Articles that met the selection criteria after final review of the full text were included in this systematic review. Any discrepancies in author judgement regarding article selection were resolved by the third author (BD).
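
As an illustration of how such database searches can be made reproducible, the short Python sketch below queries the PubMed (MEDLINE) esearch endpoint of the NCBI E-utilities with a Boolean search string. The query shown is hypothetical; the review's actual strategy is the one in Supplementary file 1.

```python
# Illustrative only: the review's full strategy is in Supplementary file 1.
# This sketch counts PubMed/MEDLINE hits for a hypothetical EBP-teaching
# query via the NCBI E-utilities esearch endpoint.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Hypothetical Boolean query combining topic, intervention and population terms.
query = (
    '("evidence-based practice"[Title/Abstract] OR "Evidence-Based Medicine"[MeSH Terms]) '
    "AND (teaching[Title/Abstract] OR education[Title/Abstract] OR curriculum[Title/Abstract]) "
    "AND (student*[Title/Abstract] OR trainee*[Title/Abstract])"
)

response = requests.get(
    ESEARCH,
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 0},
    timeout=30,
)
response.raise_for_status()
count = response.json()["esearchresult"]["count"]
print(f"PubMed records matching the illustrative query: {count}")
```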

Step 3: Data collection, extraction and management

A data extraction form was piloted before the commencement of data collection and extraction. Two authors (BH and BD) independently extracted data from each included study. Information on the following domains was recorded: study citation, country, setting, study design, study period, inclusion/exclusion criteria, number and type of participants, methodology (including teaching intervention and comparison), outcome measures and time points, study funding source and conflicts of interest. Any discrepancies in author judgement during data extraction were resolved by the third author (DI) before a single consolidated data extraction form was created.
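
To make this workflow concrete, here is a minimal Python sketch of what the piloted extraction form could look like as a structured record, with a helper that flags fields on which the two independent extractions disagree. The field names are illustrative, mirroring the domains listed above rather than the authors' actual form.

```python
# A minimal sketch of a data extraction record; field names are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    citation: str
    country: str
    setting: str
    study_design: str            # e.g. "RCT", "quasi-RCT"
    study_period: str
    inclusion_criteria: str
    exclusion_criteria: str
    n_participants: int
    participant_type: str        # e.g. "undergraduate medical students"
    intervention: str            # teaching intervention delivered
    comparison: str              # comparator arm
    outcome_measures: List[str] = field(default_factory=list)
    outcome_timepoint: Optional[str] = None
    funding_source: Optional[str] = None
    conflicts_of_interest: Optional[str] = None

def reconcile(a: ExtractionRecord, b: ExtractionRecord) -> List[str]:
    """Return the fields where two independent extractions disagree,
    flagging items for resolution by the third author."""
    return [f for f in a.__dataclass_fields__ if getattr(a, f) != getattr(b, f)]
```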

Step 4: Assessment of risk of bias in included studies

The methodological quality of included studies was assessed using the Cochrane risk of bias tool [22]. Two authors (BH and BD) independently assessed each included study across four domains: 1) random sequence generation (selection bias), 2) allocation concealment (selection bias), 3) blinding of outcome assessment (detection bias) and 4) incomplete outcome data (attrition bias). Each domain was rated as 'high', 'unclear' or 'low' risk of bias, and the overall risk of bias for the evidence base was assessed in the same manner. Any discrepancies in author judgement were resolved by the third author (DI).
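
The sketch below shows, under hypothetical data, how per-study judgements across these four domains can be tallied into the domain-level percentage summary of the kind reported in Fig. 3. Study names and ratings are placeholders, not the review's actual assessments.

```python
# Tally hypothetical per-study risk-of-bias judgements into domain percentages.
from collections import Counter

DOMAINS = [
    "random sequence generation",
    "allocation concealment",
    "blinding of outcome assessment",
    "incomplete outcome data",
]
JUDGEMENTS = ("low", "unclear", "high")

# judgements[study][domain] -> "low" | "unclear" | "high" (placeholder data)
judgements = {
    "Study A": dict.fromkeys(DOMAINS, "low"),
    "Study B": dict.fromkeys(DOMAINS, "unclear"),
    "Study C": dict.fromkeys(DOMAINS, "high"),
}

for domain in DOMAINS:
    counts = Counter(study[domain] for study in judgements.values())
    total = sum(counts.values())
    shares = ", ".join(f"{j}: {100 * counts[j] / total:.0f}%" for j in JUDGEMENTS)
    print(f"{domain}: {shares}")
```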

Step 5: Data synthesis and analysis

Due to the heterogeneity of the studies included in this review, a formal meta-analysis was not deemed appropriate. Studies varied in their interventions, comparisons, outcomes measured (and the tools used to measure them), as well as the timing of outcome measurement. Studies also differed in the type of EBP content delivered, with some focussing on single aspects of EBP, whilst others taught all EBP steps as part of the educational intervention. A descriptive analysis was performed on all included studies, with a focus on differences in knowledge, skills, behaviour and attitudes between educational interventions, and on the methodological quality of the evidence.
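
As a sketch of how such a descriptive synthesis can be tabulated, the following Python snippet groups study findings by competency domain and by whether a significant difference between interventions was reported. The rows are placeholders, not the review's extracted data.

```python
# Group hypothetical study findings by EBP domain and direction of result.
import pandas as pd

findings = pd.DataFrame(
    [
        {"study": "Study A", "domain": "knowledge", "significant_difference": True},
        {"study": "Study B", "domain": "knowledge", "significant_difference": False},
        {"study": "Study C", "domain": "skills", "significant_difference": False},
        {"study": "Study D", "domain": "attitudes", "significant_difference": False},
        {"study": "Study E", "domain": "behaviour", "significant_difference": True},
    ]
)

summary = (
    findings.groupby(["domain", "significant_difference"])
    .size()
    .unstack(fill_value=0)
    .rename(columns={True: "difference", False: "no difference"})
)
print(summary)
```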

Results

Description of studies

A total of 1,355 citations were identified from the search, of which 71 were examined at full text. Twenty-one studies met the inclusion criteria and were included in the review, as shown in the PRISMA flowchart (Fig. 1) [17]. Of the 21 included studies, 14 were conducted with undergraduate medical students, one with undergraduate osteopathic medical students, one with graduate physician assistant students, one with undergraduate nursing students and one with graduate family nurse practitioner students. Three studies implemented an interdisciplinary approach to teaching, with a combination of medical, occupational therapy, physiotherapy, nutrition, pharmacy and dental students. The majority of studies were conducted in the USA, with the remaining studies conducted across a variety of countries including Australia, Canada, Hong Kong, Indonesia, Japan, Lebanon, Malaysia, Mexico, Norway, Portugal, Taiwan and the United Kingdom. The characteristics of the included studies (including information on methodology, participants, interventions, outcomes and findings for each study) are detailed in Table 2.
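
The screening attrition implied by these counts can be reconstructed arithmetically, as in the sketch below (which assumes no separate duplicate-removal stage; Fig. 1 may report the flow in more detail).

```python
# Screening flow reconstructed from the counts reported above (see Fig. 1).
identified = 1355        # citations retrieved by the searches
fulltext_assessed = 71   # articles examined at full text
included = 21            # studies meeting the inclusion criteria

excluded_at_title_abstract = identified - fulltext_assessed  # 1284
excluded_at_fulltext = fulltext_assessed - included          # 50
print(excluded_at_title_abstract, excluded_at_fulltext)
```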

Fig. 1
figure 1

PRISMA flowchart [17]

Table 2 Characteristics of included studies

Methodological quality

The risk of bias for each included study is illustrated in Fig. 2. Four studies had a low risk of bias [24, 26, 39, 46], ten studies [23, 29, 31, 34, 35, 37, 44, 47, 51, 53] had a high risk of bias and seven studies [41, 42, 43, 45, 49, 50, 52] had an unclear risk of bias, as there was insufficient detail in the study methodology to adequately judge study quality. Table 3 provides further details of the evidence supporting the risk of bias judgements for all included studies. Across methodological domains, studies varied in their risk of bias, with the overall judgement for the body of evidence being 'unclear' (Fig. 3).

Fig. 2
figure 2

Risk of bias summary. Review authors' judgements about each risk of bias item for each included study

Table 3 Risk of bias overview
Fig. 3
figure 3

Risk of bias graph. Review authors' judgements about each risk of bias item presented as percentages across all included studies

Study population and EBP interventions

Studies differed in the EBP competencies delivered as part of their respective training programs. Leung et al. [46] delivered an introduction to the principles of EBP to undergraduate medical students, and Cardoso et al. [26] delivered an EBP education program to undergraduate nursing students. The study by Krueger [45] focussed on teaching critical appraisal skills to undergraduate osteopathic medical students.

Four studies focussed on teaching searching skills (including constructing a clinical question). Of these, Eldredge et al. [34] and Ilic et al. [37] trained undergraduate medical students, whilst Johnson et al. [41] trained graduate family nursing students and Kloda et al. [43] trained undergraduate occupational therapy and physiotherapy students.

Two studies focused on teaching both the searching and the appraisal of clinical evidence. The study by Badgett et al. [23] consisted of two quasi-RCTs with undergraduate medical students, whilst the study by Long et al. [47] involved pharmacy and nutrition students.

A total of 12 studies delivered teaching programs covering the four steps of EBP (asking a clinical question, searching the literature, critically appraising the evidence, and integrating evidence into the clinical setting). Bradley et al. [24], Davis et al. [31], Hadvani et al. [35], Ilic et al. [39], Johnston et al. [42], Sanchez-Mendiola et al. [50] and Widyahening et al. [53] delivered their programs to undergraduate medical students. Cheng et al. [29] and Schilling et al. [51] delivered their programs to undergraduate medical students as an integrated clinical rotation. Stack et al. [52] delivered their program to graduate physician assistant students, whilst Nango and Tanaka [49] and Koufogiannakis et al. [44] delivered their programs as part of an interdisciplinary program.

Notably, none of the studies included long-term follow-up of EBP competencies, with assessment in all cases delivered immediately post-intervention.

EBP competency

Knowledge

Twelve studies in total examined the impact of teaching modes on learner knowledge [24, 26, 29, 31, 35, 42, 44, 45, 49, 50, 51, 53]. Five studies assessed learner knowledge post-intervention via a non-validated tool or survey [24, 44, 45, 49, 51], three utilised the Berlin and/or Fresno tools, in isolation [26] or in combination [31, 53], and another two utilised the Knowledge, Attitude and Behaviours (KAB) questionnaire [29, 42]. Other methods used to determine impacts on learner knowledge included the Knowledge, Attitudes, Access, and Confidence Evaluation (KACE) [35] and the Taylor et al. [25] questionnaire [50]. Six of the included studies identified no statistically significant difference in learner knowledge between teaching interventions [24, 31, 35, 42, 49, 53]. Teaching modalities investigated across these six studies included comparisons between directed and self-directed learning; computer-based versus lecture-based teaching; self-directed multimedia versus didactic teaching; PBL versus non-PBL structure; multidisciplinary versus homogenous disciplines; and near-peer tutored versus staff tutored sessions. Two of the included studies identified differences in knowledge scores between teaching interventions. Cheng et al. [29] compared structured case conferencing to lecture-based teaching; learners in the structured case conferences had significantly higher knowledge scores at follow-up (MD = 2.2, 95% CI 0.52 to 3.87). Koufogiannakis et al. [44] identified significantly higher learner knowledge in those who attended EBP-related PBL sessions with a librarian compared to sessions without a librarian. Four studies compared the teaching of an EBM course to no teaching [26, 45, 50, 51]. Unsurprisingly, learners who attended the EBM course had significantly higher knowledge scores than those allocated to the control group.

Skills

Twelve studies in total examined the impact of teaching modes on learner EBP skills [23, 24, 26, 34, 35, 37, 39, 43, 47, 51, 52, 53]. Impacts on learner EBP skills were assessed after the intervention via the Fresno tool in isolation [26, 35, 43, 52] or in combination with the EBP questionnaire [37], non-validated methods [23, 24, 34, 51], the Berlin tool [39], the Research Readiness Self-Assessment (RRSA) [47] or the EBP confidence (EPIC) scale [53]. Eight of the studies found no statistically significant difference in learner EBP skills between teaching interventions [23, 24, 34, 35, 37, 39, 43, 53]. Teaching modalities investigated across these eight studies included directed versus self-directed learning; blended versus didactic learning; self-directed multimedia versus didactic teaching; a specific workshop on searching versus no workshop; near-peer tutored versus staff tutored sessions; and EBP training with and without peer assessment. The studies by Stack et al. [52], Schilling et al. [51], Cardoso et al. [26] and Long et al. [47] compared the effectiveness of an EBP teaching intervention, or curriculum, with usual practice. Stack et al. [52] compared students undertaking a curriculum with EBM teaching embedded against students who undertook a curriculum without EBM content; students undertaking the EBM-based curriculum demonstrated higher EBM-related skills and also recorded higher self-efficacy with respect to those skills. The study by Long et al. [47] examined the use of a web-based tool to support student EBM searching and critical appraisal skills; students reported significantly higher self-efficacy scores when using the EBM-related technology. The study by Schilling et al. [51] reported higher EBP skills in students attending an online clerkship in EBP compared to students who did not. Cardoso et al. [26] reported greater improvements in EBP skills for those who participated in the EBP education program.

Attitudes

Ten studies in total examined the impact of teaching modes on learner EBP attitudes [24, 31, 35, 39, 41, 42, 44, 50, 53, 55]. The main method for determining the impact on attitudes post-intervention was the Taylor et al. [25] questionnaire. Other methods included the KAB questionnaire [29, 42], the KACE [35], the Assessing Competency in Evidence-based Medicine (ACE) tool [39], or non-validated means. Eight of the included studies identified no significant differences in learner EBP attitudes between teaching interventions [24, 31, 35, 41, 42, 44, 53, 55]. Teaching modalities investigated across these eight studies included directed versus self-directed learning; structured case conferences versus lectures; computer-based sessions versus lectures; self-directed multimedia versus didactic teaching; near-peer tutoring versus staff tutoring; PBL versus usual teaching; librarian-assisted PBL versus non-librarian-assisted PBL; and web-based teaching versus usual teaching. The study by Ilic et al. [39] examined blended learning versus didactic teaching of EBM; no overall difference in learner attitudes was identified, although several significant differences were observed on sub-questions of the tool used to assess attitudes. Unsurprisingly, the study by Sanchez-Mendiola et al. [50] observed significantly more positive learner attitudes towards EBP when comparing the implementation of an EBP course to no EBP teaching.

Behaviour

Five studies in total examined the impact of teaching modes on learner EBP behaviour [29, 39, 42, 46, 52]. Three studies determined the impact on behaviours post-intervention via the KAB questionnaire [29, 42, 46], one via the ACE tool [39] and one via the Patient Encounter Clinical Application (PECA) scale [52]. Two of the included studies identified no impact of teaching modes on EBP behaviour (PBL versus usual teaching, and an EBM curriculum versus a curriculum without EBM integration) [42, 52]. The study by Ilic et al. [39] identified increases in EBP behaviour sub-scores in learners who received blended learning versus those who received a didactic approach. Cheng et al. [29] reported increases in EBP behaviour in learners exposed to case conference style teaching of EBP, compared to those who received lecture-based sessions. Unsurprisingly, students who received any form of EBP teaching reported higher EBP behavioural scores than students who were not exposed to any form of EBP teaching [46].

Discussion

The findings of this systematic review build on the emerging evidence base exploring the effectiveness of different teaching strategies on the competency of EBP learners [9,10,11,12,13,14,15]. Results from the current review update and extend the findings of our 2014 systematic review, which identified a small evidence base of moderate quality on the effectiveness of different training modalities in medical trainees [10]. Although our current study expanded the search to include allied health and health sciences trainees, very few additional studies across these health professions have been performed. As in our 2014 review, our current findings highlight the variability in methodological quality and in the use of psychometrically validated tools to assess learner competency in EBP [10]. These results align with the most recent overview of systematic reviews, which concluded that the current evidence on the topic is limited by poor quality and by heterogeneity of interventions and outcome measures [15].

In a bid for EBP education to be more 'evidence-based', a 2018 consensus statement was developed detailing the minimum core competencies in EBP that health professionals should meet, in order to improve and standardise education in the discipline [2]. For such competencies to be translated into practice, a variety of robust teaching, implementation and assessment strategies and tools must be available. A 2006 systematic review of evaluation tools identified 104 assessment instruments, of which only two were psychometrically evaluated [56]. The CREATE framework was developed in 2011 with the objective of creating a common taxonomy for assessment tools covering all steps of competency in EBP (asking, acquiring, appraising, applying and assessing) [57]. A 2020 extension of the earlier 2006 systematic review identified six tools of reasonable validity evaluating some, but not all, aspects of EBP [58].

Whilst our systematic review included 21 studies, the heterogeneity between them in how outcomes were measured precluded any meaningful meta-analysis. Chalmers and Glasziou [59] have highlighted the impact of research waste in the medical literature. Future research in the EBP education field must therefore avoid waste by assessing agreed-upon outcome measures with robust, psychometrically validated tools. Such assessment tools should be competency focussed, rather than discipline focussed, with the emphasis on evaluating specific EBP domains as recommended by the CREATE framework [40, 57]. Use of anything else only adds to the growing pile of research waste.

The findings from this systematic review suggest that there is insufficient evidence to promote one teaching modality over another in terms of its effectiveness on EBP learner outcomes. What is common across the current evidence is the need for multi-faceted, interactive, authentic learning experiences and assessment. Teaching utilities such as journal clubs have the potential to incorporate strong andragogic principles, for example through the use of PBL, coupled with a teaching modality that is commonly used in practice as a method of professional education. Further research is required to better understand how such authentic teaching and learning utilities are best deployed along the novice-to-expert continuum. For example, should novice EBP competencies be scaffolded through structured teaching modalities such as PBL, whilst those with a higher level of competency engage in more 'authentic' utilities such as journal clubs?

An important aspect of education not included in any study to date is the impact of cost and value on the teaching and learning experience [60]. The Prato Statement, published in 2017, highlights the goal of incorporating economic analyses into health professions education research in order to create an evidence base that maximises value, both from an educational and an economic perspective [60]. Whilst findings from our review demonstrate relative equivalence between teaching modalities with respect to measured EBP competencies, the availability of economic data could well provide evidence of the 'value' of one teaching modality over another.

Van der Vleuten's assessment utility formula incorporates cost as a key quality characteristic of assessment [61]. Cost and value are important ingredients that educators should consider when evaluating the effectiveness of teaching strategies, from both pragmatic and academic perspectives. The number of studies incorporating some form of economic evaluation is growing within health professions education; however, the quality of their reporting is poor [62]. The use of reporting guidelines to incorporate well-constructed economic evaluations as part of the assessment outcomes, particularly in the growing evidence base on EBP teaching, would provide sufficient evidence to conduct sensitivity analyses and offer a different lens through which to interpret evidence, particularly where it appears inconclusive at face value.

Many of the studies included in this systematic review were assessed as having either a high or unclear risk of bias. This potential for methodological bias introduces a degree of uncertainty into the evidence base with respect to interpreting the overall results of the review. A further limitation was the heterogeneity between studies with respect to the outcomes measured and the tools used to measure these end-points. This variance in outcome measures prevented the conduct of a meta-analysis, which would have brought some level of quantifiable assessment of study outcomes. Consequently, it was not possible to conduct a funnel plot analysis of studies and assess the potential impact of publication bias on this systematic review. Similarly, the included studies provided little information about the educational interventions that would allow reproducibility of the educational method, thereby adding to the 'educational' heterogeneity of the studies. All included studies assessed EBP competency immediately after the intervention. This lack of long-term follow-up is a significant evidence gap, as critical questions regarding the need for, and frequency of, continued professional development in EBP remain unanswered, particularly the impact that time, resources and environment may have upon self-efficacy, behaviours and attitudes towards implementing EBP principles in practice.

The majority of studies to date have focussed on medical trainees. Further research is required, particularly in the development of high-quality methodological studies to explore the impact of different teaching modalities across the broad spectrum of health professions disciplines. Such studies should focus on assessing key EBP competencies across the domains of knowledge, skills, attitudes and behaviours using robust, psychometrically validated outcome assessment tools.

Conclusions

The current evidence suggests limited differences in learner competency in EBP across different teaching modalities. Future studies should focus on conducting methodologically rigorous research, with a specific focus on measuring core EBP competencies using validated tools across disciplines within the health professions. Similarly, future studies should explore the use of emerging teaching strategies and their effectiveness in teaching EBP across different stages of training. The COVID-19 pandemic forced many educational programs to pivot to online delivery, with many adopting a hybrid online/face-to-face approach as pandemic restrictions ease. Future work is needed to identify how successfully the teaching of EBP can be translated into these emerging modalities. There is also a need for long-term follow-up of learner competency in EBP as learners move along the novice-to-expert continuum from students to clinicians practising EBP in the clinical environment.