To the Editor:

Academic Psychiatry’s October 2021 issue highlighted several topics important to understanding academic psychiatry’s response to COVID, such as online lectures [1], missed clinical rotations [2], and the mental health challenges learners faced [3]. In one article, authors wrote optimistically about the impact on professionalism and resilience “after COVID” [4]. Yet, years later we are still dealing with the COVID pandemic and its direct impact on psychiatric education rather than its aftermath.

Instead of discussing the pandemic as a past event that homogeneously and continuously affected all schools and programming alike, we therefore argue the opposite: first, while these overarching topics concern all of us, their impact depends on many confounding factors, including institutional, regional, and sociocultural parameters; second, there has been not one continuous pandemic but multiple phases that affected psychiatric education differently; third, the pandemic’s effects differ from those of other kinds of natural disasters; and fourth, medical students’ professional identity formation has been affected by these disruptions.

Our group used the existing end-of-clerkship survey to examine COVID’s impact on third-year medical students in psychiatry. The survey consisted of 10 five-point Likert-scale questions and 7 open-ended questions. The Likert questions asked about the quality of the syllabus, lectures, and clinical instruction; the timeliness of feedback; and the respect students perceived from faculty and residents. The open-ended questions addressed the most and least valuable aspects of the clinical sites and the clerkship as a whole, possible student mistreatment, and suggestions for improvement.

We examined surveys from four student blocks: T1 (pre-COVID, April–May 2019, n=26); T2 (the first COVID wave, all-virtual instruction, April–May 2020, n=23); T3 (“post”-COVID, live patients but virtual didactics, April–May 2021, n=18); and T4 (Delta wave, all-virtual instruction, July–August 2021, n=17). As students were not required to complete the survey, response rates varied (T1, 85%; T2, 98%; T3, 90%; T4, 90%).

Surveys were administered electronically, and results were de-identified (Institutional Review Board exemption was obtained). We used LIWC-22 (Pennebaker Conglomerates, Austin, TX, 2022) and MAXQDA 2020 (VERBI Software, Berlin, 2019) to analyze the open-ended questions. For each Likert question, the mean scores of the four groups were compared using one-way ANOVA and Tukey’s test.
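For readers interested in the mechanics of the per-item comparison, the following is a minimal sketch in Python of a one-way ANOVA followed by Tukey’s HSD. The cohort labels mirror ours, but the ratings, variable names, and setup are hypothetical illustrations, not our de-identified survey data or the exact software we used.

```python
# A minimal sketch of the per-item analysis, using hypothetical 1-5 Likert
# ratings; the values below are illustrative, not the actual survey data.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ratings for one item (e.g., "faculty instruction") by cohort
scores = pd.DataFrame({
    "cohort": ["T1"] * 4 + ["T2"] * 4 + ["T3"] * 4 + ["T4"] * 4,
    "rating": [5, 4, 5, 4, 3, 4, 3, 2, 5, 5, 4, 5, 3, 2, 4, 3],
})

# One-way ANOVA across the four cohorts
groups = [g["rating"].to_numpy() for _, g in scores.groupby("cohort")]
f_stat, p_val = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")

# Tukey's HSD for pairwise cohort comparisons
tukey = pairwise_tukeyhsd(endog=scores["rating"], groups=scores["cohort"], alpha=0.05)
print(tukey.summary())
```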

Among the Likert questions, significant differences were found between the two clerkship groups that experienced virtual patient instruction (T2 and T4) and the other two groups (T1 and T3). Ratings of the quality of “faculty instruction” (F=3.54, p<0.05), being “treated with respect” (F=3.68, p<0.05), and timely feedback (F=3.50, p<0.05) were all lower in both all-virtual COVID groups (T2, T4). Regarding the structure of the clerkship itself, T2 students found the syllabus much less useful (F=9.22, p<0.001), the sharpest difference among the means of any item. No differences in perceived lecture quality were observed.

The qualitative analysis provided more granular observations. Using LIWC and its psycholinguistic analysis [5] to examine responses to the open-ended questions, we found that the category “affect,” including its subcategories positive, negative, anger, anxiety, and sadness, varied across groups. T3 (live patients, virtual instruction) scored highest, both overall and in the subcategory “positive affect” (6.93%). The comments support this: T3 students, unlike the other cohorts, reported that the lectures needed no improvement. When asked, “What did you find most valuable?” their responses were more detailed, listing attendings and residents with specific, praiseworthy attributes. Conversely, both all-virtual COVID groups (T2 and T4) demonstrated more negative emotion (0.71% and 0.67% vs a mean of 0.43% for the other groups). The LIWC categories “tense” and “pronouns,” which indicate blame and finger-pointing [5], were highest in T2 (0.53%) and T4 (0.80%), further suggesting dissatisfaction. T4 (the Delta group) also had the most expressions conveying “sadness” (0.21% vs a mean of 0.11% for the other groups).
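For those wishing to run a similar comparison, the sketch below assumes LIWC-22 category percentages have been exported to a CSV with one row per open-ended response. The file name and column labels (e.g., emo_neg) are assumptions made for illustration rather than guaranteed LIWC-22 output, and the contrast shown parallels, but does not reproduce, our analysis.

```python
# A minimal sketch of a cohort comparison of LIWC category percentages;
# the CSV name and column labels are assumptions made for illustration.
import pandas as pd

# Hypothetical LIWC-22 export: one row per open-ended response,
# with a "cohort" column and affect categories as percent of words
liwc = pd.read_csv("liwc22_output.csv")
affect_cols = ["emo_pos", "emo_neg", "emo_anx", "emo_sad", "emo_anger"]

# Mean category percentage per cohort
print(liwc.groupby("cohort")[affect_cols].mean().round(2))

# Contrast the all-virtual cohorts (T2, T4) with the others on negative affect
virtual = liwc["cohort"].isin(["T2", "T4"])
print(f"negative affect, all-virtual cohorts: {liwc.loc[virtual, 'emo_neg'].mean():.2f}")
print(f"negative affect, other cohorts:       {liwc.loc[~virtual, 'emo_neg'].mean():.2f}")
```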

Although we expected the T2 (first COVID wave) and T4 (Delta) psychiatry clerks to find their virtual clinical experience challenging, both the extent of their dissatisfaction and the heterogeneity among the four groups were surprising. Among the three COVID-affected groups, the T2 and T4 clerks, with both virtual patients and virtual lectures, were the most negative, while the T3 group, with live patients and virtual lectures, was more positive than even the pre-COVID T1 group. Moreover, students unable to interact with live patients also had more negative views of experiences unrelated to patient care. For example, T2 clerks were distinctly less satisfied with the curriculum than students 12 months before and 12 months after them. Both T2 and T4 perceived less respect from faculty and residents; paired with the negative affect and content of their comments, we read these as possible indicators of issues related to professional identity formation. Finally, despite virtual lectures, T3 students were more satisfied than any other group with perceived respect and the quality of faculty instruction.

Thus, it is the T3 clerks, fully masked but working in person, whose observations stand out as most positive. Unfortunately, we are not able to identify the cause of their sanguinity, nor whether it was the students rather than the curriculum that the pandemic changed. Perhaps the pre-COVID ubiquity of recorded faculty lectures left students more distressed by losing live-patient interaction than by having all didactics go virtual. Our curriculum incorporated many ad hoc arrangements that became permanent avenues for student experience; the T4 clerks’ sharply negative observations might reflect the pausing of these reforms. Finally, further qualitative discourse analysis supported these findings and suggested a correlation among anxiety, critical responses, and fear about the future of their professional practice, indicating an impact on professional identity formation.

This study has limitations. Our observations are derived from routine surveys that were not designed to establish causal relations between COVID and clerkship satisfaction. Moreover, one cannot separate students’ responses from place-specific factors that have little to do with the pandemic. For example, hurricanes halted education completely for a few days in the fall of both 2020 and 2021. Identifying natural disaster-specific effects on medical student education is beyond the scope of this study. Nevertheless, given the frequency with which medical education is adversely affected by natural disasters (whether infectious disease, earthquake, or cyclone), intentional, prospective research is needed to examine their effects.