Background

The learning environment is a vital component of medical school training. In medicine, it influences academic attainment, future outcomes, and physician behavior [1,2,3]. Thus, assessing the learning environment of medical schools is essential for continuous program evaluation and a prerequisite for educational reform [4,5,6].

Several definitions have been assigned to the phrase ‘learning environment’. Examples include: “Dynamic social system that includes not only teacher behavior and teacher-student interaction, but student-student interaction as well”; [7] “Product of the number of students, their levels of intelligence, and their types of personalities”; [8] and “Total learning activities that took place inside and outside the school” [9]. These thoughtful expressions laid the foundations for how the learning environment can be assessed in various educational settings [10, 11]. The definitions also consider the notions of ‘social universe’ and ‘students’ attributes’ [7, 8].

Several instruments have been used to assess the learning environment in medical schools, such as the ‘Medical School Learning Environment Survey’ (MSLES) [12], ‘Dundee Ready Education Environment Measure’ (DREEM) [13], and ‘Johns Hopkins Learning Environment Scale’ (JHLES) [14]. The MSLES was originally developed in 1978 [15] and subsequently modified in 1981 to include 55 items across seven domains: (1) Medical and personal breadth of interests, (2) Emotional climates, (3) Flexibility, (4) Meaningful learning experiences, (5) Organization, (6) Nurturance, and (7) Student-student interaction [12]. Compared with the JHLES, the MSLES is more sensitive and has validity evidence in various cultures [16]. With respect to student perceptions of the learning environment, the MSLES can differentiate between traditional and innovative curricula [4]. It has an overall reliability of 0.95 [15] and a test-retest reliability of 0.75 in the US [12, 17]. Previous studies were mainly aimed at understanding how the learning environment shapes medical students’ experiences in terms of performance [18], satisfaction [10], burnout [11, 19], emotion, and motivation [20].

Previous studies have assessed the learning environment in the Arabian Gulf region, including the UAE [5, 21,22,23,24], but not at the College of Medicine and Health Sciences (CMHS) of the United Arab Emirates (UAE) University. To fill this gap, we assessed the learning environment at the CMHS, UAE University. Determinants of academic success vary among medical schools [22, 25]; thus, the findings of this study can be channeled more directly into our needs for continuous educational initiatives and reforms.

The UAE is located in the Arabian Peninsula. It is composed of seven emirates, and the mother language is Arabic. Its estimated population is about 10 million, of whom roughly 10% are Emirati citizens and 90% expatriates. The UAE University, founded in 1976, is the oldest university in the country. It is a public research university located in Al Ain, a city in the capital emirate of Abu Dhabi. The CMHS, founded in 1984, is the first and highest-ranked medical school in the country. It is a fully funded governmental institution that officially accepts only Emirati citizens. The CMHS offers a six-year medical program divided into pre-medical, pre-clinical, and clinical phases. The first two years (pre-medical) focus on foundational medical sciences through traditional lectures, small-group tutorials, and practical sessions. The next two years (pre-clinical) cover clinical sciences, incorporating a variety of learning opportunities such as problem-based learning and simulation-based training. The final two years (clinical) involve clinical rotations across various medical specialties, providing hands-on experience and practical exposure to different fields of medicine.

Methods

This study was approved by the UAE University Social Sciences Ethics Committee (Ref. N: ERS_2021_7362; Sep 2021) and was conducted in September-December 2021. Its population was medical students (years 1 through 6) at the CMHS - UAE University. Participation was voluntary, and signed consent was obtained from each enrolled student.

Students were grouped according to the existing curriculum: Group 1, Pre-medical Y1; Group 2, Pre-medical Y2; Group 3, Pre-clinical (Y3 and Y4); and Group 4, Clinical (Y5 and Y6). A modified version of the MSLES with established psychometric characteristics was used [17]. This 17-item instrument uses a five-point Likert scale (1 = never, 2 = rarely, 3 = occasionally, 4 = fairly often, and 5 = very often). Scores for items 6, 8, 9, 10, 14, 15, and 17 were reverse coded in the analysis, as these items are expressed negatively. Thus, in the current format, the higher the score, the more positive the evaluation of the learning environment for all items (Table 1S). Permission to use the MSLES was obtained from Prof. Marcy Rosenbaum, University of Iowa, Carver College of Medicine.
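As a minimal illustration of this recoding step, the sketch below flips the negatively worded items in a pandas DataFrame of item responses; the column names (item6, item8, etc.) are hypothetical placeholders, not the variable names used in the actual dataset.

```python
import pandas as pd

# Negatively worded MSLES items to be reverse coded (hypothetical column names).
NEGATIVE_ITEMS = ["item6", "item8", "item9", "item10", "item14", "item15", "item17"]

def reverse_code(responses: pd.DataFrame) -> pd.DataFrame:
    """Return a copy in which negatively worded items are flipped on the 1-5 scale."""
    recoded = responses.copy()
    # On a 1-5 Likert scale, reversing is 6 - score: 1 <-> 5, 2 <-> 4, and 3 stays 3.
    recoded[NEGATIVE_ITEMS] = 6 - recoded[NEGATIVE_ITEMS]
    return recoded
```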

To confirm students’ understanding of the MSLES terms, cognitive interviews and pilot testing were conducted one-on-one with the principal investigator via recorded Zoom video conferencing [26]. These verbal techniques explored students’ understanding of the terms used when responding to the questionnaire [26]. In the cognitive interview, the student was asked to read each question aloud and express her/his understanding by paraphrasing the concept and its wording, and by giving examples to explain the intended content. The student was also asked to point out unclear or difficult wordings. In the pilot testing, the student was asked to read each question and answer it. The principal investigator clarified questions and wordings, and suggested alternative expressions that might be appreciated better than those in the survey. The investigator took up the suggested wordings and asked students in subsequent interviews whether they held the same or different points of view. The final version included all consensus changes and was used in this study.

Students were asked to complete the online MSLES based on their reflections on their experiences in the medical school. Data on gender, age, years in medical school, grade point average, repeated year(s), and the influence of a person in the medical field on the decision to enroll in the school were also collected.

The data were collected with the online survey tool Qualtrics XM and analyzed using STATA® BE, version 17.0. Missing data were assessed and imputed using the chained-equations method [27]. Statistical analysis included descriptive and inferential methods.
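The imputation itself was performed in Stata; as a rough, non-authoritative sketch of the same idea in Python, scikit-learn's IterativeImputer fills each incomplete item by iteratively regressing it on the remaining items, which mirrors the chained-equations approach (variable names are illustrative).

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  activates the estimator
from sklearn.impute import IterativeImputer

def impute_missing_items(responses: pd.DataFrame) -> pd.DataFrame:
    """Fill missing Likert responses by iteratively regressing each item on the others."""
    imputer = IterativeImputer(max_iter=10, random_state=0)
    imputed = imputer.fit_transform(responses)
    # The regressions return continuous values, so round back onto the 1-5 scale.
    return pd.DataFrame(imputed, columns=responses.columns).round().clip(1, 5)
```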

Exploratory factor analysis (EFA) was used to assess latent factor structures in the learning environment construct [28]. Rotations of the factor loading matrix were performed to produce a final matrix indicating which items reflect each latent factor. The varimax rotation method was used to simplify factors internally by maximizing the spread of variance across the items and maximizing the differences in loadings between factors. This method assumes orthogonality, i.e., mutual independence, of the factors [28].
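A minimal sketch of such an analysis using the Python factor_analyzer package is shown below, assuming the recoded responses are held in a pandas DataFrame; it illustrates the technique rather than reproducing the exact Stata procedure used in the study.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def run_efa(responses: pd.DataFrame, n_factors: int = 4):
    """Fit an exploratory factor analysis with varimax (orthogonal) rotation."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(responses)
    # Rotated loading matrix: rows are items, columns are latent factors.
    loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
    # Communalities: proportion of each item's variance explained by the retained factors.
    communalities = pd.Series(fa.get_communalities(), index=responses.columns)
    # Eigenvalues of the correlation matrix, used for the scree plot.
    eigenvalues, _ = fa.get_eigenvalues()
    return loadings, communalities, eigenvalues
```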

Item communalities (the proportion of each item’s variance explained by the extracted factors) were used to assess whether an individual item performed well in the exploratory factor analysis [29]. In this study, a minimum cut-off of 0.5 was used for retaining items in the analysis.

The appropriateness of exploratory factor analysis was checked using the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett’s test of sphericity. Factor analysis is appropriate when the variables are interrelated, i.e., the correlation matrix is not an identity matrix, as indicated by a statistically significant (p < 0.05) Bartlett’s test [30].
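These two checks can be illustrated in Python with the helper functions provided by factor_analyzer, again as a hedged sketch rather than the study's actual Stata commands.

```python
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def check_factorability(responses: pd.DataFrame) -> None:
    """Report Bartlett's test of sphericity and the KMO measure of sampling adequacy."""
    chi_square, p_value = calculate_bartlett_sphericity(responses)
    kmo_per_item, kmo_overall = calculate_kmo(responses)
    # A significant Bartlett's test (p < 0.05) and a high overall KMO support running EFA.
    print(f"Bartlett's test: chi2 = {chi_square:.2f}, p = {p_value:.4f}")
    print(f"Overall KMO = {kmo_overall:.3f}")
```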

Loading of a factor on any measured variable in the rotated factor matrix was considered significant if it was greater than 0.50. The reliability of the scale was measured using Cronbach’s alpha measure of internal consistency, with a cut-off of 0.70 indicating high reliability [28]. A single-factor ANOVA was used to test the significance of the differences among the groups. All statistical tests were performed at the 0.05 level of significance.
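For readers who prefer a concrete formulation, the sketch below shows a plain-Python version of these two steps, assuming a DataFrame of item scores and a Series of group labels; it illustrates the standard Cronbach's alpha formula and a one-way ANOVA, not the exact Stata output reported later.

```python
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def single_factor_anova(scores: pd.Series, groups: pd.Series):
    """One-way ANOVA comparing a factor score across the study groups."""
    samples = [scores[groups == g] for g in groups.unique()]
    return stats.f_oneway(*samples)  # F statistic and p-value
```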

Results

Of the 584 eligible students enrolled at the CMHS in the academic year 2021/2022, 377 (65%) completed the survey (Table 1). A descriptive summary of the available and missing responses is provided in the Supplement (Tables 2S and 3S).

Table 1 Response rates and demographics of the participants (n = 377)

In the studied sample (n = 377), females outnumbered males, with a male-to-female ratio of 1:3, which is consistent with the larger number of female students at the CMHS. Most participating students were between 17 and 25 years old, with only a few being 26 years or older (Table 1). The students’ academic performance was assessed using the grade point average (GPA) in the last academic year (2020–2021). The students’ GPA was 3.4 ± 0.4 (on a scale from a minimum of 1.0 to a maximum of 4.0), and 7.1% of students had repeated at least one year (see the reasons in Supplement, Table 4S). About a quarter of the students were inspired to enroll in the medical school because of a close person or family member in the medical field (Table 1).

Scores (mean ± SD) of the 17 items of the MSLES are summarized in Table 2. Reliability of the scale, estimated by Cronbach’s alpha measure of internal consistency, was α = 0.798. Exploratory factor analysis (EFA) of the scale revealed a four-factor structure as an appropriate reflection of the latent factor structure of the learning environment construct (Figs. 1 and 2; Table 3); these factors were given the following labels: (1) Learning experience (α = 0.71), (2) Student-student interaction (α = 0.69), (3) Student-faculty interaction (α = 0.62), and (4) Academic support (α = 0.62). While the values for Student-student interaction, Student-faculty interaction, and Academic support are below the 0.70 threshold, they are not excessively low: Cronbach’s alpha values of 0.7–0.8 are considered very good, values of 0.6–0.7 acceptable or satisfactory, and values below 0.6 suspect. The reliability of all factors was higher than 0.6; therefore, they all meet this expectation [31]. These slightly lower reliability scores can be attributed to the diverse and multifaceted nature of the constructs being measured, as well as potential variations in student responses. The KMO measure of sampling adequacy was 0.791, and Bartlett’s test was significant (p < 0.05), both indicating that factor analysis was appropriate. All communalities were higher than 0.50. The scree plot (Fig. 1) confirms that the four factors are adequate to explain the majority (81.3%) of the variance in the employed learning environment scale. The Akaike information criterion (AIC), an estimate of model parsimony, indicated the lowest value for the chosen four-factor solution.

Fig. 1
figure 1

Scree plot of eigenvalues of the factors (principal components in the exploratory factor analysis) versus component number (number of factors). The four factors with eigenvalues greater than one explained 81.3% of the total variance in the employed 17-item survey

Fig. 2
figure 2

Box plot (showing the values for maximum, upper quartile, median, lower quartile, minimum, and outliers [circles]) of the learning components in the exploratory factor analysis across the studied groups (see Table 3)

Table 2 MSLES scores (mean ± SD) per year in the medical school (n = 377)

Results of the ANOVA indicated significant differences in the ‘learning experience’ factor among the groups [F (3, 373) = 22.37, p < 0.001]. Post-hoc multiple comparisons using Bonferroni’s procedure indicated that the mean score of the ‘learning experience’ in Y3/Y4 or Y5/Y6 was significantly lower than that in Y1 or Y2. There was also a significant difference in the ‘student-student interaction’ factor among the groups [F (3, 373) = 10.20, p < 0.001], with the mean score in Y2 being significantly lower than that in the other groups. Similarly, there was a significant difference in the ‘student-faculty interaction’ factor among the groups [F (3, 373) = 6.19, p < 0.001], with the mean score in Y1 being significantly higher than that in Y2. Finally, there was a significant difference in the ‘academic support’ factor among the groups [F (3, 373) = 39.25, p < 0.001], with the mean score being significantly higher in Y1 than in the other groups (see Table 3).

Table 3 Scores (mean ± SD) of the components of the learning environment factors per year in medical school

Discussion

This study is the first to provide psychometric evidence for the MSLES at UAE University. It reveals student perceptions of the learning environment at the CMHS. Our findings highlight distinct educational components amenable to improvement. Principally, the study shows significant variability in the learning environment across the years. Notably, we found a decline in medical students’ perceptions of the learning experience, student-faculty interaction, and academic support over the years. Measures to ensure learning consistency are thus needed. Separate reform strategies (aiming at consistency and at achieving specified milestones [e.g., the identified factors]) are likely required for each of the groups: pre-medical, pre-clinical, and clinical years.

This study successfully overcomes several cultural barriers. The MSLES was applied for the first time in the UAE cultural setting, ensuring that the measured constructs are relevant and comparable to those in other cultural contexts. This application helps to mitigate potential biases that could arise from using non-validated or culturally specific instruments. Nevertheless, further advances are still needed to improve the generality of the conclusions. For example, larger participation, especially among male students, is necessary to ensure the measures cover the needs of all students. Reforms based on MSLES results need to be coupled with academic measures, thereby matching improved MSLES scores with targeted academic performance. We suggest regular implementation of the MSLES to monitor progress, especially with respect to the qualities of the learning environment.

The mean scores for all domains of the learning environment are lower in Y3 to Y6 than in Y1 and Y2. This suggests that educating students in the advanced clinical years is challenging. Early clinical exposure and a focus on integration between basic and clinical sciences may improve this domain [32], especially if hands-on experiences that improve motivation for learning are implemented [33].

Similarly, the academic support domain also declines after Y1. This indicates that faculty are heavily involved with Y1 students, which could be due to several reasons. One of them is the mentorship program implemented at the CMHS for all Y1 students, in which every Y1 student is assigned to a faculty member for frequent meetings and continuous guidance. Previous studies have shown that quality characteristics of medical schools improve with mentoring programs [34].

The mean score for student-student interaction was lower in Y2. A potential factor here is the heavy competition, as these students will not transition to the pre-clinical program unless they pass Y2. However, it is also possible that competition could enhance student-student interaction through collaborative group study methods. Further research is needed to explore the underlying reasons for this observation. It would be helpful to introduce more extracurricular activities and small group studies to break social barriers [35] and reduce stress in this difficult year [36, 37].

Student-faculty interaction is higher in Y1, as each student is assigned to a mentor. Continuing a similar mentorship program during Y2-6 will likely be beneficial [34, 38], increase student retention [39], and provide support [40]. Another aspect of this finding is that students in Y1 communicate more freely with mentors, being new to medical school and still carrying the experience of their high-school learning.

Study limitations include: (1) The use of a self-perception survey, which may introduce recall bias, where students might not remember events or activities correctly. (2) The response rate varied across the years, from 55% in Y2 to 84% in Y5/Y6; varying response rates could influence the findings, as the number of participants in each year was not standardized. (3) The paucity of regional studies on the topic that could lay foundations for potential research problems in various countries [16, 21], academic situations [10], and age groups [41]. (4) This study did not identify students who were currently repeating the year, a variable that may influence student perception of the learning environment [18]. (5) Students who have relative(s) in the medical field might find the learning environment more supportive compared with students who come from a different background; they are also more likely to be motivated beyond the learning environment to advance their education [42]. (6) Evaluating the medical school at UAE University may have introduced some bias, as previous research on its learning environment is deficient.

Conclusions

This study is the first to offer psychometric evidence for the MSLES in the UAE. It improves our understanding of the current medical school learning environment and offers suggestions to enhance the learning environment across the years. The findings offer actionable plans and pave the way for evidence-based curriculum reforms. Moreover, these insights are vital for medical educators, guiding efforts to optimize the medical school learning environment and promote lifelong learning in medical education.