Background

The Coronavirus Disease 2019 (COVID-19) pandemic has affected human health to an unprecedented degree: more than 569 million cases had been reported by July 2022, and an estimated 14.9 million excess deaths were reported in May 2022 [1]. This has been accompanied by profound disruption to health worker education, due to physical distancing, restrictions on access to learning facilities and clinical sites, and learner and faculty infection or illness [2, 3]. In response, many institutions rapidly embraced digital innovation and other policy responses to support continued learning [4].

Building on an earlier review by the same authors [5], this paper seeks to quantify the educational innovations implemented since the start of the pandemic and their outcomes, as documented in published studies [6, 7], capturing different regions, levels of training, and occupations [8]. The pertinent challenge is how to translate this evidence into enduring policies, strategies and regulations on the instruction, assessment and well-being of health worker learners [9], in accordance with the WHO Global Strategy on Human Resources for Health: Workforce 2030 [10].

The aim of this systematic review and meta-analysis is to identify and quantify the impact of COVID-19 on the education of health workers worldwide, the resulting policy responses, and their outcomes, providing evidence on emerging good practices to inform policy change.

A graphical abstract summarizing our systematic review and meta-analysis is presented in Fig. 1.

Fig. 1

Graphical abstract of the systematic review and meta-analysis

Methods

Study design

We conducted a systematic review and meta-analysis in accordance with the AMSTAR-2 (A MeaSurement Tool to Assess Systematic Reviews, version 2) checklist [11] and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement [12], based on a predesigned protocol registered with PROSPERO (CRD42021256629) [13].

Search strategy

We searched the MEDLINE (via PubMed), EMBASE, Web of Science, and CENTRAL databases, as well as ClinicalTrials.gov and Google Scholar (first 300 records), for randomized controlled trials (RCTs) or observational studies published from 1 January 2020 to 31 July 2022 in English, French or German (full search strategy available in Additional file 1). A snowball approach was also employed.

Eligibility criteria and outcomes

Our eligible population included health worker (HW) learners or faculty, as defined by the International Standard Classification of Occupations (ISCO-08) [14] group of health professionals, excluding veterinarians. Health care settings per the Classification of Health Care Providers (International Classification for Health Accounts, ICHA-HP) [15] and relevant educational settings (e.g., universities, colleges) were considered eligible. The included population was divided into undergraduate learners, postgraduate learners (e.g., residents or fellows) and continuing (in-service) education [16]. Any change or innovation implemented in health worker education in response to the COVID-19 pandemic (not before the pandemic or amidst other pandemics) was considered eligible. Online training methods were subdivided into predominantly theoretical courses, courses with a practical component (e.g., practical skills, simulation-based training), congresses/meetings, interviews, and clinical experience with patients (e.g., clinical rotations/electives, telehealth-based training). Comparators included conventional/traditional practices existing prior to the pandemic.

The study outcomes are organized according to (1) the impact of the COVID-19 pandemic on the educational process and mental health of learners; (2) policy responses (not included in the meta-analysis); and (3) outcomes of those policy responses (Table 1). Specific meta-analysis outcomes in the categories shown in Table 1 included: for axis 1, clinical training, mental health (i.e., anxiety, depression, insomnia and burnout), and learner career plan disruptions (e.g., redeployment); and for axis 3, satisfaction, preference and performance with new training and assessment modalities, and volunteerism, including any social/community/institutional work. Regarding anxiety and depression, individuals whose symptom severity was moderate or higher according to validated measurement scales were considered affected. For the Generalized Anxiety Disorder-7 (GAD-7) and Patient Health Questionnaire-9 (PHQ-9) screening tools, this corresponded to a cut-off score of 10.

Table 1 Outcomes framework for the systematic review

Literature search and data extraction

All retrieved records underwent semi-automatic deduplication in EndNote 20 (Clarivate Analytics) [17] and were then transferred to a Covidence library (Veritas Health Innovation, Melbourne, Australia) for title and abstract screening. Pairs of authors performed a blind screen of a random 15% sample of records. After achieving an absolute agreement rate > 95% (Fleiss’ kappa, 1st phase: 0.872, 99% confidence interval (CI) [0.846–0.898]; 2nd phase: 0.840, 99% CI [0.814–0.866]), single-reviewer screening was performed for the remainder of the studies, as per the AMSTAR-2 criteria [11]. Subsequently, pairs of independent reviewers screened the full texts of the selected studies for eligibility and, if eligible, extracted the required data into a predetermined Excel spreadsheet. Screening and data extraction were carried out in two phases: the initial phase (1 January 2020 to 31 August 2021, by AD, ANP, M. Papapanou and MGS) and the updated living phase (1 September 2021 to 31 July 2022, by NRK, AA, DM, MN, CK and M. Papageorgakopoulou). After discussion with the WHO technical partner, we amended the extraction spreadsheet in the updated living phase to further include descriptions of policies. Satisfaction was extracted either from direct mentions of participants’ satisfaction by the authors or from questions surveying participants’ perceptions of their satisfaction or of the success, usefulness or effectiveness of the learning activity. Conflicts were resolved by team consensus. For missing data, study investigators were contacted. Studies for which the full text or missing data could not be retrieved were categorized as “reports not retrieved”. Studies on overlapping populations were also considered duplicates and were removed if they related to the same study period and institution(s) and involved similar populations and author lists; the study with the most comprehensive report was retained.
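
For illustration, inter-rater agreement of the kind reported above can be computed as in the following minimal sketch, which uses hypothetical include/exclude decisions (not our data) and the Fleiss' kappa implementation in statsmodels:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical include (1) / exclude (0) decisions by two reviewers;
# the review itself piloted a random 15% sample of records.
decisions = np.array([
    [1, 1], [0, 0], [1, 1], [0, 1], [0, 0],
    [1, 1], [0, 0], [0, 0], [1, 0], [1, 1],
])  # rows = records, columns = reviewers

# Convert rater-by-record decisions into a record-by-category count table
table, _ = aggregate_raters(decisions)

kappa = fleiss_kappa(table)                        # chance-corrected agreement
raw = np.mean(decisions[:, 0] == decisions[:, 1])  # absolute agreement rate
print(f"Fleiss' kappa = {kappa:.3f}, raw agreement = {raw:.0%}")
```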

Risk of bias, publication bias and certainty of evidence

Pairs of the aforementioned authors performed the risk of bias assessment, and any conflicts were resolved by team consensus. Quality assessment was performed using an adapted version of the Newcastle–Ottawa Scale (NOS) for cross-sectional studies (Additional file 1), the original NOS for cohort and case–control studies, and the Cochrane risk-of-bias tool, version 2 (RoB 2), for RCTs. Publication bias was explored with funnel plots and Egger’s test [18]. Certainty of evidence was assessed using the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) approach [19].
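
As an illustration of the publication bias assessment, the sketch below implements Egger's regression test under standard assumptions (per-study effect sizes with their standard errors; the numbers are illustrative, not study data):

```python
import numpy as np
import statsmodels.api as sm

def egger_test(effects, ses):
    """Egger's regression asymmetry test: regress the standardized effect
    (effect / SE) on precision (1 / SE). An intercept significantly
    different from zero suggests small-study effects / funnel asymmetry."""
    z = np.asarray(effects) / np.asarray(ses)
    precision = 1.0 / np.asarray(ses)
    fit = sm.OLS(z, sm.add_constant(precision)).fit()
    return fit.params[0], fit.pvalues[0]  # intercept and its p-value

# Illustrative effect sizes and standard errors
intercept, p = egger_test(
    effects=[0.42, 0.55, 0.31, 0.60, 0.48, 0.70, 0.35, 0.52],
    ses=[0.10, 0.15, 0.08, 0.20, 0.12, 0.25, 0.09, 0.14],
)
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```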

Data synthesis

Categorical variables were presented as frequencies (%) and continuous variables as mean (standard deviation [SD]). To dichotomize ordinal data (e.g., Likert-type scales), we used the author-provided cut-offs for the respective scales or, if not provided, the 60th percentile (40th if the scale was reversed). Regarding mental health outcomes, we derived scale-specific cut-offs from the literature.
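
As a sketch of this dichotomization rule, and under one plausible reading of "the 60th percentile" as a cut-off placed at 60% of the scale range, the logic could look as follows; the hypothetical author_cutoff argument covers scales with published cut-offs (e.g., 10 for GAD-7/PHQ-9):

```python
import numpy as np

def dichotomize_likert(scores, scale_min=1, scale_max=5,
                       author_cutoff=None, reversed_scale=False):
    """Classify ordinal responses as positive vs negative.

    Uses an author-provided cut-off when reported; otherwise a cut-off at
    the 60th percentile of the scale range (40th if the scale is reversed).
    Returns the proportion classified as positive.
    """
    scores = np.asarray(scores, dtype=float)
    if author_cutoff is not None:
        cutoff = author_cutoff
    else:
        q = 0.40 if reversed_scale else 0.60
        cutoff = scale_min + q * (scale_max - scale_min)
    positive = scores <= cutoff if reversed_scale else scores >= cutoff
    return positive.mean()

# Example: on a 1-5 satisfaction scale the cut-off falls at 3.4,
# so responses of 4 or 5 count as "satisfied" -> 4/8 = 0.5
print(dichotomize_likert([5, 4, 3, 2, 4, 5, 1, 3]))
```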

Analyses were carried out separately on learner and faculty population subsets. We carried out meta-analyses of Freeman–Tukey (FT) double-arcsine transformed estimates using the DerSimonian and Laird (DL) random-effects model [20,21,22]. We used the harmonic mean in the back-transformation formula from FT estimates to proportions [23]. For each meta-analyzed outcome, we reported the raw proportion (%), the pooled proportion (%) along with its 95% CI, the number of studies (n) and the number of included individuals (N). When applicable, we pooled standardized mean differences (SMDs) with the method of Cohen [24]. Statistical heterogeneity was quantified by the I² statistic [25] and was classified as substantial (I² = 50–90%) or considerable (I² > 90%) [26].
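
To make the pooling pipeline concrete, the following minimal sketch reimplements the core steps in Python (the review itself used Stata): the FT double-arcsine transform, DL random-effects pooling with I², and Miller's back-transformation evaluated at the harmonic mean of the study sizes. The counts are illustrative, not study data.

```python
import numpy as np

def ft_transform(x, n):
    """Freeman–Tukey double-arcsine transform of x events out of n."""
    return np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1)))

def dl_pool(y, v):
    """DerSimonian–Laird random-effects pooling of effects y with variances v."""
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)          # fixed-effect mean
    Q = np.sum(w * (y - mu_fe) ** 2)           # Cochran's Q
    df = len(y) - 1
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)              # between-study variance
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return mu, se, i2

def ft_back(t, n_harm):
    """Miller's inverse of the double-arcsine transform at harmonic mean n."""
    s = np.sin(t)
    return 0.5 * (1 - np.sign(np.cos(t)) *
                  np.sqrt(1 - (s + (s - 1 / s) / n_harm) ** 2))

# Illustrative counts: x events out of n participants in three studies
x = np.array([45.0, 120.0, 60.0])
n = np.array([100.0, 200.0, 150.0])
mu, se, i2 = dl_pool(ft_transform(x, n), 1.0 / (n + 0.5))  # var(FT) ~ 1/(n + 0.5)
n_harm = len(n) / np.sum(1.0 / n)
pooled = ft_back(mu, n_harm)
lo, hi = (ft_back(mu + z * se, n_harm) for z in (-1.96, 1.96))
print(f"pooled = {pooled:.3f}, 95% CI = ({lo:.3f}, {hi:.3f}), I2 = {i2:.0f}%")
```

For the SMD analyses, Cohen's d for each study would first be computed from the group means and a pooled SD, and the resulting effects combined with the same DL weighting on the d scale.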

Subgroup and sensitivity analyses

We performed subgroup analyses stratified by gender, continent, WHO geographical region, ISCO-08 occupational group, stage of training, and year of undergraduate studies, and computed p-values for subgroup differences (p-subgroup < 0.10 indicating statistically significant between-subgroup differences) [26]. The potential effect of time on outcomes likely to exhibit dynamic changes during the evolution of the pandemic, such as satisfaction and preference regarding learning formats as well as mental health outcomes, was explored via additional subgroup analyses by the year in which data collection was completed (2020 vs 2021 vs 2022). Only subgroups involving 3 or more studies are presented and taken into account in the p-subgroup calculation; hence, no subgroup analysis is presented for the 2022 study end year (a sketch of the subgroup test follows).
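
A minimal sketch of the test for subgroup differences, assuming each subgroup has already been pooled by the random-effects model above (the estimates are illustrative, on the transformed scale):

```python
import numpy as np
from scipy import stats

def p_subgroup(estimates, ses):
    """Q-test for between-subgroup differences: compare each subgroup's
    pooled estimate against the weighted grand mean; under the null,
    Q_between follows a chi-square with (number of subgroups - 1) df."""
    y, se = np.asarray(estimates), np.asarray(ses)
    w = 1.0 / se**2
    grand = np.sum(w * y) / np.sum(w)
    q_between = np.sum(w * (y - grand) ** 2)
    return stats.chi2.sf(q_between, len(y) - 1)

# Illustrative: three subgroup estimates (each pooled from >= 3 studies)
p = p_subgroup([0.95, 1.10, 1.02], [0.04, 0.06, 0.05])
print(f"p_subgroup = {p:.3f}")  # < 0.10 would flag a subgroup difference
```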

Sensitivity analyses excluding studies with N > 25 000 were performed to minimize the risk of duplicate populations that may be introduced by large-scale nationwide studies. For anxiety, depression and burnout, we carried out sensitivity analyses restricted to studies employing the GAD-7, the PHQ-9, and the Maslach Burnout Inventory (MBI, including its variants), respectively, and, further, to their low-risk-of-bias subsets.

To better account for the anticipated substantial heterogeneity, two additional meta-analytical approaches were used: (i) the Paule–Mandel estimator to calculate the between-study variance [27]; and (ii) the Hartung–Knapp method for the CI calculation [28].
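
A minimal sketch of these two alternatives, using the same notation as the earlier sketch (y: transformed effects, v: within-study variances, e.g., y = ft_transform(x, n) and v = 1/(n + 0.5)); the Paule–Mandel τ² is found by root-finding on the generalized Q statistic, and Hartung–Knapp replaces the normal-based CI with a t-based one:

```python
import numpy as np
from scipy import optimize, stats

def paule_mandel_tau2(y, v):
    """Paule–Mandel estimator: choose tau2 so that the generalized
    Q statistic equals its expectation, k - 1."""
    y, v = np.asarray(y), np.asarray(v)
    k = len(y)
    def excess_q(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2) - (k - 1)
    if excess_q(0.0) <= 0:              # no excess heterogeneity
        return 0.0
    return optimize.brentq(excess_q, 0.0, 1e3)

def hartung_knapp_ci(y, v, tau2, alpha=0.05):
    """Hartung–Knapp CI: weighted variance estimator with a t-quantile."""
    y, v = np.asarray(y), np.asarray(v)
    k = len(y)
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    var_hk = np.sum(w * (y - mu) ** 2) / ((k - 1) * np.sum(w))
    half = stats.t.ppf(1 - alpha / 2, k - 1) * np.sqrt(var_hk)
    return mu, (mu - half, mu + half)
```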

Statistical significance for all analyses was set at a two-sided p < 0.05. All analyses were conducted on aggregate data using Stata software, version 16.1 (StataCorp, College Station, TX, USA). Further explanation of the adopted statistical approaches is provided in Additional file 1.

Results

The literature search yielded a total of 171 489 publications (168 102 from databases and 3 387 from snowballing and Google Scholar). Following deduplication and title-abstract screening, a total of 10 525 publications (7 214 from the database/register search and 3 311 from snowballing/Google Scholar) were assessed for eligibility, of which 2 249 were included in the systematic review. Of these, 2 212 were observational studies (2 079 cross-sectional) and 37 were RCTs. The PRISMA 2020 flow diagram is available in Fig. 2. All included studies are cited in Additional file 2.

Fig. 2

PRISMA 2020 flow diagram

Overall, 1 149 073 individuals (1 109 818 learners [96.6%], 22 204 faculty [1.9%], 12 544 combined learner and faculty participants [1.1%], and 4 507 education leaders representing institutions [0.4%]) across 109 countries from 6 continents/WHO regions were included. Women totaled 468 966 (63.4%) of the 739 127 participants whose gender was reported. Of the studies included in the meta-analysis, regarding the impact of the pandemic, 314 focused on training disruption, 193 on career plan disruption, and 287 on the mental health of learners; regarding the outcomes of policy responses, 1 013 focused on innovations in learning, 121 on online assessment methods and 48 on volunteerism.

Characteristics of included individuals and settings per outcome are available in Table 2A, B, Additional file 3 and Additional file 4. The sample mostly represented undergraduate learners (81.4%), within the field of medicine (86.5%), in studies originating from institutions in Asia (59.9%) and the Western Pacific WHO Region (WPR, 40.7%).

Table 2 Characteristics of included individuals and settings

Thirty-seven RCTs were included: 20 were assessed as at high risk of bias, 12 at low risk of bias, and 5 as having some concerns. They mostly compared newly developed virtual, gamified or in-person learning for medical or nursing students during the COVID-19 pandemic against previously established teaching methods. Most showed better learning outcomes with the innovative modalities, with some studies showing no significant difference. More details are available in Additional file 5. Based on the NOS and adapted NOS scales, the median (Q1–Q3) quality score of all observational studies was 6 (4–7) [5 (4–7) for cross-sectional studies; 6 (5–7) for retrospective and 5 (4–7) for prospective cohorts; and 7 (6–7) for case–control studies] (Additional file 3).

The main results of our systematic review and meta-analysis are presented below, along with the most noteworthy subgroup results. Figures 3 and 4 depict the main meta-analysis outcomes from axes 1 and 3 (i.e., the impact of the pandemic on health worker education and the outcomes of policy responses, Table 1). All results from subgroup analyses based on gender, ISCO-08 group, continent, WHO region, training level and undergraduate year of studies are detailed in Tables 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15. The full spectrum of analyses is available in more detail in Additional file 6.

Fig. 3

Meta-analysis of the impact of COVID-19 on health worker education. Random-effects meta-analyses of proportions reflecting the impact of the pandemic on health worker education. A Disruption of learning, redeployment, changes of career plans and potential prolongation of studies. B Mental health effects of the pandemic on learners. Each analysis is depicted as a circular data marker; the horizontal lines indicate the 95% confidence intervals (CI). The “raw proportion (%)” is derived from simple weighted division. I² quantifies heterogeneity, which is statistically significant (p < 0.01) in all cases (metric omitted)

Fig. 4

Meta-analysis of outcomes of policy responses. Random-effects meta-analyses of proportions reflecting the outcomes of policy and management responses to the pandemic. A Learner and faculty perceptions of online and blended forms of learning. B Satisfaction with online assessments and volunteerism initiatives. Each analysis is depicted as a circular data marker; the horizontal lines indicate the 95% confidence intervals (CI). The “raw proportion (%)” is derived from simple weighted division. I² quantifies heterogeneity, which is statistically significant (p < 0.01) in all cases (metric omitted)

Table 3 Learners perceiving disruption of their clinical training amidst the COVID-19 pandemic by subgroups
Table 4 Learner redeployment rates due to the COVID-19 pandemic by subgroups
Table 5 Learners’ scaled anxiety, depression, burnout and insomnia during the COVID-19 pandemic by subgroups
Table 6 Institutions enacting the responses implemented during the pandemic to preserve the education of health workers
Table 7 Satisfaction of health worker learners with educational methods implemented during the COVID-19 pandemic by subgroups
Table 8 Preference of health worker learners for the virtual-only educational format by subgroups
Table 9 Preference of health worker learners for the purely in-person educational format by subgroups
Table 10 Preference of health worker learners for the blended educational format by subgroups
Table 11 Learners supporting the adoption of a blended format in the post-pandemic future of health worker education by subgroups
Table 12 Learners supporting the adoption of a virtual-only format in the post-pandemic future of health worker education by subgroups
Table 13 Satisfaction of learners with virtual assessment methods during the COVID-19 pandemic by subgroups
Table 14 Learners’ willingness to volunteer and actual participation in pandemic-related-volunteering activities due to the COVID-19 pandemic by subgroups
Table 15 Summary and interpretation of main results

Impact of the pandemic on health worker education

The widespread disruption of undergraduate, graduate and continuing education of health workers due to closures and physical distancing has been clearly reported since the start of the pandemic [5]. Studies described the complete or temporary cessation of in-person educational activities, both pre-clinical and clinical, including classes and patient contact [29, 30]. For undergraduate learners in particular, bedside education was initially halted to protect them [31]. During residency training, the main disruptions identified were reduced case volumes [32, 33], especially in surgical training [34, 35], less time available for learners to spend in the hospital [36] or, conversely, increased workload, especially in COVID-related specialties. Other activities, including in-person scientific conferences, were discontinued [37]. Timely graduation was jeopardized [38], required examinations were canceled [39] and graduates were unable to apply for their next career steps [40].

Disruption to clinical training

Most studies surveying training disruption focused on learners in a clinical setting. Overall, self-perceived disruption of training during the pandemic was estimated at 71.1% (95% confidence interval: 67.9–74.2) and varied according to WHO region, with the highest disruption observed in the Southeast Asia Region (SEAR) (Table 3). When surveyed, 75.8% (71.4–79.9) of learners noted decreased exposure to invasive procedures, such as surgeries or endoscopies, whereas somewhat lower disruption was observed for outpatient or inpatient clinical activity and performance of non-invasive procedures (69.7%, 64.4–74.9). Due to the disruption, 44.7% (39.2–50.2) of learners reported wanting to prolong their training, presumably to cover educational gaps.

Disruption of career plans

Learners were sometimes redeployed from their training programs to support the COVID-19 response [41,42,43]. An estimated 29.2% (25.3–33.2) of clinical learners were redeployed during the pandemic to fulfill new roles, either caring for COVID-19 patients or accommodating other clinical needs associated with the response to the pandemic (e.g., covering a non-COVID-19 unit because of health worker shortages). This was more evident for learners in the WHO European Region (EUR) (35.2%, 28.8–41.8) than for those in the Region of the Americas (AMR) (24.7%, 19.5–30.3) (Table 4). In addition, 21.5% (16.9–26.1) of learners reported reevaluating their future career plans because of the pandemic.

Mental health of learners: anxiety, depression, burnout, and insomnia

At least moderate anxiety, measured by validated scales, was estimated at 32.3% (28.4–36.1). Notably, pharmacy learners reported higher anxiety than any other occupation, undergraduate learners scored higher than graduate learners, female learners scored higher than males, and learners in the WPR scored lower than those in any other WHO region. Learners surveyed in 2021 also showed higher anxiety rates than those surveyed in 2020 (Table 5).

Based on validated instruments, at least moderate depression was prevalent in 32.0% (27.8–36.2) of learners, with undergraduates showing higher rates than graduate learners, learners in South America and Africa showing higher rates than those in other continents, and learners in the WPR showing lower rates than those in any other WHO region (Table 5). Sensitivity analyses of studies using the GAD-7 or PHQ-9 revealed similar findings (32.1% for anxiety, 32.8% for depression). Pooled mean GAD-7 and PHQ-9 learner scores were 7.00 (6.22–7.79) and 6.83 (5.72–7.95), respectively.

Burnout was prevalent in 38.8% (33.4–44.2) of learners, with a sensitivity analysis restricted to the MBI scale showing 46.8% (28.5–55.0). Finally, insomnia was estimated at 30.9% (20.3–41.5), with significantly higher rates reported in 2021 than in 2020 (Table 5).

Policy and management responses to those impacts

Several policy and management responses by governments, regulatory and accreditation bodies, schools, hospitals, clinical departments, health systems and student organizations were identified. A commonly cited response was the transition of face-to-face learning to online formats [44], including online videos [45], game-based learning [46], virtual clinical placements [34, 47,48,49], virtual simulations [50], remote teaching of practical skills, and augmented reality [51, 52]. Interviews also transitioned to a virtual format following guidance by accreditation bodies [53, 54], and face-to-face conferences were replaced with large-scale virtual conferences [55]. There were also responses relating to online assessment [56].

COVID-19-specific learning was introduced, particularly for in-service and postgraduate learners [57], such as workshops on the use of personal protective equipment (PPE) [58,59,60] and simulations of COVID-specific protocols [61, 62]. Institutions published regulations and recommendations safeguarding learners’ health and continued learning [57, 63, 64], and some implemented interventions specifically supporting learners’ mental health [65, 66]. Undergraduate learners were also involved in volunteering to support the COVID-19 response [67, 68]. Another policy response was the early graduation of final-year students so they could work in a clinical capacity [69]. An overview of the institutions enacting these responses and policies, as identified in the second phase of our systematic review, is summarized in Table 6.

Outcomes of policy responses

Online and blended learning approaches

Overall, 75.9% (74.2–77.7) of learners were satisfied with online learning. Learners appeared more satisfied with online clinical exposure, such as fully virtual clinical rotations and real patient encounters (86.9%, 79.5–93.1), or online practical courses (85.4%, 82.3–88.2) than with predominantly theoretical courses (67.5%, 64.7–70.3). Satisfaction with virtual congresses was also high (84.1%, 71.0–94.0). Learner satisfaction rates with virtual methods were lower in the Eastern Mediterranean Region (EMR) and SEAR (Table 7).

Overall, 32.0% (29.3–34.8) of learners preferred fully online learning, lower than the preference for fully in-person learning (48.8%, 45.4–52.1) or blended learning (56.0%, 51.2–60.7). Lastly, when asked whether they would maintain an online-only format or a blended online and in-person format, 34.7% (30.7–38.8) and 68.1% (64.6–71.5) of learners, respectively, replied positively.

With increasing training level (undergraduate vs graduate vs continuing education), preference for online learning rose gradually (29.5% vs 39.7% vs 39.9%), while preference for in-person learning declined (50.9% vs 47.6% vs 30.7%). Learners in the AMR and EUR also expressed greater willingness to keep blended learning after the pandemic (Tables 8, 9, 10, 11, and 12).

Assessing the same outcomes for faculty, 71.8% (66.8–76.8) expressed satisfaction with online methods. Preferences for online-only, in-person and blended training methods were, respectively, 25.5% (15.5–35.5), 58.7% (51.6–65.8), and 64.5% (47.8–81.2). Willingness to maintain online-only or blended online and in-person teaching post-pandemic was 36.7% (22.3–51.2) and 65.6% (57.1–74.0), respectively.

Responses were overall effective, significantly increasing learners’ skills scores compared with scores before the response or scores achieved with pre-pandemic comparators (Table 15).

Assessment

Satisfaction of learners with online assessments was 68.8% (60.7–76.3). Postgraduate learners were significantly more inclined towards the use of online assessments than undergraduates (86.6% vs 62.5%), and female learners were less satisfied than males (38.7% vs 58.1%). Learners in the EMR and SEAR were less satisfied with online assessment than their colleagues in the EUR and AMR (Table 13). Candidates also achieved significantly higher mean scores in online assessments than in previous in-person assessments, and with innovative rather than traditional assessment formats [pre vs post: SMD = − 0.68 (95% CI − 0.96 to − 0.40)].

Volunteerism

Studies investigating the willingness of learners to volunteer in the COVID-19 response were also included. Although 62.2% (49.6–74.8) of learners expressed an intention to volunteer, only 27.7% (18.6–36.8) reported engaging in volunteer activity, with undergraduate learners volunteering much more (pooled estimate 32.4%) than their graduate colleagues (pooled estimate 9.1%) (Table 14, Fig. 4).

A full list of all outcomes, forest plots (in which the extent of the variation in the pooled estimates is more visible) and funnel plots are available in Additional files 6 and 7. Publication bias was evident in about one-fourth of the analyses. The GRADE certainty of evidence was assessed as “very low” for all outcomes of the meta-analysis. Finally, the alternative meta-analytical approaches additionally undertaken for our main analyses did not materially change our findings (Additional file 8).

A summary of our main findings can be found in Table 15, with additional interpretation in “Discussion”.

Discussion

A summary and interpretation of our main findings can be found in Table 15.

Impacts of the pandemic on health worker education

Our meta-analysis showed that 71% of learners reported their clinical training was adversely impacted by the pandemic. In a large study surveying medical students from South America, Japan and Europe, 93% of students reported a suspension of bedside teaching [70]. Trainees in surgical and procedural fields were severely affected, with 96% of surgery residents and early-career surgeons in the US reporting a disruption in their clinical experience and an overall 84% reduction in operative volume in the early phases of the pandemic [71]. Most included studies did not provide separate data by type of surgery. In similar large-scale disruptions, achieving the difficult but crucial balance between patient and trainee safety and the necessary clinical training of health workers should be a priority for policymaking.

The extent of the impact on the mental health of learners is concerning and highlights the need for sufficient resources to support learners and faculty. Our meta-analysis revealed that about one in three learners suffered from at least moderate anxiety, depression, insomnia, or burnout. These rates appear higher than those reported for health workers, and similar to those of the general population during the pandemic and to pre-pandemic estimates for health learners. In an umbrella review of depression and anxiety among health workers (not learners) during the pandemic, anxiety and depression were estimated at 24.9% and 24.8%, respectively [72], although most included meta-analyses also counted mild forms of anxiety and depression. A different subgroup analysis estimated moderate or higher anxiety and depression in health workers at 6.88% (4.4–9.9) and 16.2% (12.8–19.9) [73], much lower than our results. In the general population, one meta-analysis estimated anxiety and depression at 31.9% (27.5–36.7) and 33.7% (27.5–40.6) [74], similar to our estimates for health worker learners. Lastly, compared with a 2018 meta-analysis, the prevalence of anxiety (33.7%, 10.1–58.9) and depression (39.2%, 29.0–49.5) may be similar among health learners before and during the COVID-19 pandemic [75]; this warrants further study and policy interventions. Anxiety was significantly higher in studies from 2021 than from 2020, indicating a notable effect of persisting stressors on mental health and emphasizing the need for early intervention to prevent anxiety. Pharmacy learners were significantly more anxious, which may reflect different backgrounds and levels of familiarity with the intense clinical environment at times of strained capacity, compared with their medical and nursing colleagues.

Multiple studies showed that female gender was a risk factor for increased anxiety and depression among health learners [71, 76,77,78,79]. In studies that investigated underlying stressors, learners reported high levels of anxiety about their relatives’ health [41, 80,81,82,83], getting infected with COVID-19 themselves [41, 80, 84, 85], lack of PPE [86, 87], failing their clinical obligations [88], the disruption of educational activities [89, 90], and financial concerns [88, 91, 92]. A UK study on the psychological well-being of health worker learners during the pandemic associated the educational disruption with a negative impact on mental health, estimating low well-being at 61.9%, moderate to high perceived stressfulness of training at 83.3% and high presenteeism at 50%, despite high satisfaction with training (90%) [93]. Learners in some disciplines perceived a lack of mental health resources and support [93]. A US study found that the lack of a wellness framework and the lack of personal protective equipment were predictors of increased depression and burnout in surgery residents and early-career surgeons, highlighting the importance of well-designed wellness initiatives and appropriate protection for learners [71]. A summary of protective and exacerbating factors identified from included studies is available in Table 16. An international study of medical students identified high rates of insomnia (57%) and depressed mood (40%), as well as multiple physical symptoms, including headache (36%), eye fatigue (57%) and back pain (49%) [70]. These important physical complaints were not included in our systematic review. Interestingly, time spent in front of a screen daily correlated positively with depression, insomnia and headache. Alcohol consumption declined during the pandemic, whereas cigarette and marijuana use was unchanged. Taken together, these findings suggest that trainees’ mental and physical health is associated with multiple factors that should be targeted by policy interventions: gender disparities, lack of well-designed wellness frameworks, stressful training, lack of protective equipment and the potential implications of increased screen time. It should be noted that variants of the MBI scale tend to overestimate burnout rates [94], so actual rates may be lower than we report.

Table 16 Risks and protective factors for anxiety and depression among health worker learners

Outcomes of policy responses

Learners’ satisfaction with the rapidly implemented policy of online learning was relatively high (76%), especially when it included patient contact or practical training rather than a purely theoretical approach. However, although learners were relatively satisfied when the alternative was no education, their opinions seemed to change when presented with options for the future. Learners preferred face-to-face (49%) and blended (56%) over fully online education (32%). In addition, only a small percentage of students were willing to pursue an exclusively online learning format (35%) in the post-pandemic era, with their preference trending towards a blended model (68%). The “Best Evidence in Medical Education” series and other systematic reviews, including only studies published in 2020, showed that the rapid shift to online learning proved an easily accessible tool that was able to minimize the impact of early lockdowns, in both undergraduate and graduate education [105,106,107]. Adaptations included telesimulations, live-streaming of surgical procedures and the integration of students to support clinical services remotely. Challenges included the lack of personal interaction and of standardized curricula. All studies showed high risk of bias and poor reporting of the educational setting and theory [105]. Our meta-analysis of all relevant studies spanning 2020 to mid-2022 showed that the integration of practical skill training into online courses led to higher satisfaction rates, solidifying a well-known preference for active learning among health workers. Satisfaction with and preference for online learning were significantly higher in postgraduate and continuing learners than in undergraduates, indicating that it may be better suited to advanced learners with busy schedules. Greater convenience and the ability to manage one’s time more flexibly and efficiently were frequently reported reasons for satisfaction with, and preference for, online education [108,109,110,111]. In synchronous learning, interaction through interactive lectures or courses, quizzes, case-based discussions, social media, breakout rooms or journal clubs was associated with increased satisfaction [112,113,114,115,116]. Conversely, in asynchronous learning, the opportunity for self-paced study and more detailed review of study material increased satisfaction [117,118,119]. Limitations of online education included challenges in comprehending material in courses such as anatomy [120, 121], as well as lack of motivation among learners [122,123,124,125]. A different systematic review found that medical students appreciated the ability to interact with patients from home, easier remote access to experts and peer mentoring, whereas technical issues, reduced engagement and worldwide inequality were viewed as negative attributes of online learning [126]. Interestingly, one study comparing medical and nursing student satisfaction across India found high dissatisfaction (42%, versus 37% satisfaction), which did not differ significantly between the two fields and was higher among first-year students. Supportive faculty were important in increasing satisfaction [121].

We found that learners performed better in online assessments than in prior in-person ones. It is unknown whether this reflects lower demands, inadequate supervision, or changes in the constructive alignment between learning outcomes (e.g., theoretical knowledge) and assessment modality (e.g., multiple-choice questions) [127]. However, online assessment has significant limitations in evaluating hands-on skills. Learners perceived online assessments as less fair, as cheating can be easier [128,129,130], or felt unable to showcase their skills online [131]. Open-book assessments focusing on thinking instead of memorization were preferred by learners [132] and may be more appropriate for the online setting. A different systematic review, including studies up to October 2021, reviewed adaptations of in-person and online clinical examinations of medical students; overall, online or modified in-person clinical assessment was deemed feasible, produced scores similar to prior in-person iterations, and was well received by trainees [133].

Although 62% of learners reported a willingness to volunteer, fewer than one in three actually did. This could be due to health risks, lockdowns, lack of opportunity or time, or other factors. As expected, undergraduates had more time to volunteer than other groups; however, willingness to volunteer was comparable across training levels. Volunteering activities made heavy use of technology and frequently involved telephone outreach and counseling of patients and the public [134,135,136,137]. Students were also employed clinically in hospitals or other settings [138] and assisted with food and PPE donation and other nonclinical activities such as babysitting [139]. Some accrediting institutions responded by recommending that volunteering activities be rewarded with academic credit and supervised adequately [140].

Strengths of our study

To our knowledge, this is the largest systematic review and meta-analysis exploring the impact of the pandemic on the education and mental health of health worker learners. The vast amount of data allowed us to perform multiple subgroup analyses and explore potential differences in training disruption, mental health and perceptions of educational innovations. We included health worker learners from all regions of the world, all occupations and all levels of training. We also undertook sensitivity analyses restricted to homogeneous samples of higher quality studies (e.g., pooling only low-risk-of-bias GAD-7/PHQ-9/MBI studies for anxiety/depression/burnout). These approaches demonstrated the robustness of our findings. Finally, we attempted to explore the effect of time on outcomes, given the dynamic character of the pandemic.

Limitations of our study

Although we excluded duplicate publications, a risk of overlap remains, as learners may have participated anonymously in multiple cross-sectional studies; we attempted to minimize this with sensitivity analyses excluding very large datasets. Satisfaction was extracted from a variety of definitions across studies, leading to considerable heterogeneity. While prior experience with virtual learning might have affected learners’ or faculty perceptions, its inconsistent reporting did not allow us to account for it. For similar reasons, we were unable to quantify mild forms of anxiety and depression. Although multiple significant subgroup differences emerged, heterogeneity remained largely unresolved. Heterogeneity is inherently high in meta-analyses of proportions, and the large sample of studies, along with the subjective nature of many outcomes, is partly responsible. The precision of the point estimates (i.e., the observed narrow CIs) is therefore mainly a consequence of the large sample rather than truly low variation; we therefore advise cautious interpretation and assess all our outcomes as providing very low certainty of evidence. Our sample mainly represented undergraduate students, learners in medicine, and institutions in Asia, with reduced representation from Africa, South America and Oceania, so our results should be generalized with caution; however, subgroup analyses provide some insight into intra-group differences. Lastly, we were unable to include studies published in Spanish, which may partly explain the scarcity of included studies from South America; we did, however, include studies in German and French.

Quality assessment revealed mostly observational studies and self-reported outcomes. RCTs were scarce, and a considerable subset of them was at high risk of bias. Publication bias was also evident in about one-fourth of our analyses, leading to potential overestimation of proportions (e.g., higher satisfaction may be reported more eagerly). These findings are consistent with a known challenge in the education literature, which tends to capture mostly Kirkpatrick Level 1 data (learner reaction) [141] rather than objective learning assessments or behavioral changes. At the early stages of the pandemic, the literature was especially likely to include such lower-level immediate outcomes; future studies will likely capture more objective outcomes, and similar reviews should be repeated. Educational experiences are difficult to standardize and measure, making strict evidence-informed practice difficult [142]. However, quantitative evidence of any form can be a significant contributor to policy change.

Conclusion

Our systematic review and meta-analysis quantified the widespread disruption of health worker education during the early phases of the COVID-19 pandemic. Clinical training was severely disrupted, with many learners redeployed and some expressing a need to prolong their training. About one in three learners screened positive for anxiety, depression, burnout or insomnia. Although learners from all occupations and countries were overall satisfied with new educational experiences, including online learning, indicating a cultural shift towards the acceptability of online learning, they ultimately preferred in-person or blended formats. Learners in regions with lower satisfaction with online learning (e.g., Asian countries, especially the EMR and SEAR) would need further support and resources to maximize learning opportunities. Our evidence supports the acceptability of a shift to blended learning, especially for postgraduate learners, which can combine the adaptability and personalization of online learning with the in-person consolidation of interpersonal and practical skills that both learners and educators agree is necessary. Policies should also prioritize prevention, screening and interventions for anxiety, depression, insomnia and burnout among not only health workers but also undergraduate and graduate learners, who are significantly affected. A repeated large-scale review in a few years will be able to capture a more representative sample of countries, occupations and experiences. Our review aspires to inform future studies that will objectively evaluate the effectiveness of ensuing policy and management responses.