Abstract
Purpose
We sought to evaluate the feasibility and benefits of using video-based scenarios in Multiple Mini Interviews (MMIs) to assess candidates’ empathic abilities by investigating candidate perceptions and the acceptability, fairness, reliability, and validity of the test.
Methods
The study sample comprised candidates attending admission interviews held in the MMI format at a medical school in South Korea. In this six-station MMI, one station included a 2-min video clip of a patient-doctor communication scenario to assess candidates’ empathic abilities, whereas paper-based scenarios were used in the other stations. Candidates’ perceptions and the acceptability of using the video-based scenario in the empathy station were examined using a 41-item post-MMI questionnaire. Fairness of the test was assessed by examining differences in candidate perceptions and performance across different demographics and backgrounds. Construct validity was assessed by examining the relationship of candidate performance in the empathy station with that in the other stations. The G-coefficient was analyzed to estimate the reliability of the test.
Results
Eighty-two questionnaires were returned, a 97.6% response rate. Candidates showed overall positive perceptions of the video-based scenario and found it authentic and interesting. The test was fair: there were no differences across demographics or backgrounds in candidates’ perceptions of the patient-doctor relationship presented in the video clip, in their performance, or in their perceived difficulty of the station. Construct validity was established as candidate performance in the empathy station was not associated with performance in any other station. The G-coefficient was 0.74.
Conclusions
The present study demonstrates that a video-based scenario is a feasible tool for assessing candidates’ empathy in the MMI.
Introduction
Student selection is an important step in educating tomorrow’s doctors, and during this process, it is critical that candidates’ cognitive and non-cognitive attributes be assessed in order to select those with the qualities required of good doctors [1]. Several research studies have shown the Multiple Mini Interview (MMI) is a valid and reliable tool for assessing candidates’ non-cognitive qualities in medical student selection [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. Meanwhile, it has also been reported that the reliabilities of MMIs can vary widely depending on the way they are administered and structured [2, 6], which suggests research is needed to determine effective formats for MMIs.
Although MMIs are widely recognized as a selection tool in medical education, our review of the relevant literature indicates that little attention has been paid to using technology in MMIs. There has been a plethora of research and practice on using technology in medical education to improve student learning and assessment [18, 19]. Moreover, technology is increasingly incorporated into medical school admissions processes [20], including the use of video-based scenarios in situational judgement tests [21]. Furthermore, with the COVID-19 pandemic rapidly changing the landscape of medical education, it is likely that technology will be used increasingly in medical school admissions. Still, research on the use of technology in MMIs is scant. Thus, research is needed on MMI formats that make effective use of technology.
Candidates’ various non-cognitive attributes have been assessed in MMIs. In particular, empathy, which is “a personality trait that enables one to identify with another's situation, thoughts, or condition by placing oneself in their situation [22],” is an important attribute in doctors, and therefore, many medical schools assess candidates’ basic understanding of empathy at admission interviews [23]. However, research on how to assess empathy for student selection purposes is scant [23]. Therefore, research is warranted on developing a feasible tool to assess candidates’ empathy in the MMI.
In the present study, we examined the feasibility of using a video-based scenario in the MMI to assess candidates’ empathic abilities. In general, MMI scenarios are presented in a paper format, but technological developments offer opportunities to adopt other formats for presenting scenarios during MMIs. In particular, it has been advocated that candidates’ non-cognitive attributes be assessed in authentic contexts [24]. Consequently, some medical schools have reported using actors or standardized patients (SPs) in MMI stations to assess candidates’ empathy or communication skills [4, 6]. Still, using actors or SPs is resource-intensive, and there is a lack of research providing empirical evidence on the utility of MMIs that rely on such resources. Therefore, it is worthwhile to explore other feasible alternatives for assessing candidates’ empathic abilities in MMIs.
We considered that presenting candidates with a scenario involving human interactions in a video vignette would likely provide richer contexts than scenarios presented as text. The use of videos to present cases or problems has been studied in case-based learning (CBL) and problem-based learning (PBL) settings, and these studies have shown that video-based CBL and PBL are more effective than paper-based alternatives in terms of fostering critical thinking and interest [25,26,27,28,29]. We therefore speculated that using video-based scenarios in MMIs would provide candidates with more authentic contexts and enhance interest and engagement. Thus, this study aimed to investigate the feasibility of using video-based scenarios in MMIs to assess candidates’ empathic abilities by investigating their perceived benefits, acceptability, fairness, reliability, and validity.
Methods
Study Participants and Setting
This study was conducted on candidates who participated in admission interviews at Dongguk University School of Medicine (DUMS), a private medical school in South Korea, for matriculation in 2019. DUMS has a 4-year basic medical education program for graduate-entry students and an annual intake of approximately 50 students. DUMS holds admission interviews for those who pass an initial screening stage based on prior academic achievements, including undergraduate Grade Point Average (GPA) and performance on the Korean medical school entrance exam (the Medical Education Eligibility Test). As a result, 84 candidates attended admission interviews, which were conducted in December 2018.
DUMS has implemented MMIs for admission interviews since 2014. Each interview schedule comprised six mini-interviews conducted at separate stations, with 10 min allocated per station. One assessor at each station evaluated candidates’ performance on two to three scoresheet items using a 5-point scale from 1 (“unsuitable”) to 5 (“outstanding”). Candidates received a score for each station, and overall performance was determined by summing the scores across all stations. Previous experience with the MMI at DUMS has shown it to be a feasible tool for student selection [30, 31].
Study Design and Procedures
A video-based scenario was developed for one MMI station to assess candidates’ empathic abilities; the other five stations used the traditional paper-based format. Three medical faculty members participated in developing the video-based scenario for the empathy station. Two were experts in MMIs and the third was a psychiatrist, who wrote the script for the scenario. Two investigators with experience of MMIs reviewed and revised the draft scenario. The video was produced in-house and pilot tested on a volunteer medical education graduate student, who was asked to think aloud as she watched the video clip and to report whether the situation was presented clearly and whether there was any ambiguity in the dialogue.
The video vignette presented a fictive clinical situation in which a doctor interviewed a patient who appeared to be in a depressive mood. The vignette was kept to around 2 min because of the time candidates needed to view it, prepare, and then discuss it with the assessor. Candidates used a tablet and a headset to watch the video. During the interview, candidates were asked to assess the extent to which the doctor showed empathy and elicited the feelings and views of the patient shown in the video, which are considered key elements of empathy in doctor-patient interactions [23]. Candidates also discussed with the assessor the importance of empathic communication in the patient-doctor relationship.
Data on candidate perceptions and performance in the MMI stations were obtained and analyzed to investigate the acceptability, fairness, validity, and reliability of the test as evidence of its feasibility. Acceptability to candidates and their perceived benefits of the video-based scenario were examined using a post-MMI questionnaire. Fairness of the test was assessed by examining differences in candidate perceptions of the MMI and of the video-based scenario, and in their performance in the empathy station, across different demographics and backgrounds. Construct validity was assessed by examining the relationship of candidate scores in the empathy station with those in the other stations. Moreover, we calculated the G-coefficient using the variance components method to investigate the reliability of the test.
The post-MMI questionnaire used in this study consisted of 41 items with Likert-type responses ranging from “strongly disagree” (1) to “strongly agree” (5). The questionnaire was composed of the following four sections. The first section included 7 items on candidate demographics and backgrounds. The second section consisted of 17 items that elicited candidates’ overall perceptions of the MMI; these items were adapted from the instrument developed by Eva et al. [4], translated into Korean by Kim et al. [31], and have been used in other studies [30, 31]. The third section included 12 statements on respondents’ perceptions of the video-based scenario used in the empathy station, organized into four sub-scales: (a) station difficulty (3 items), (b) authenticity (3 items), (c) interest (3 items), and (d) overall satisfaction (3 items). This section also included five items regarding candidate perceptions of the patient-doctor relationship presented in the video clip. The items in this section were developed by the authors and pilot tested in the previous year with a sample of medical school applicants. The last item was a single open-ended question that elicited candidates’ overall opinions of the MMI.
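The internal consistency of these short sub-scales was summarized with Cronbach’s alpha (see Data Analysis and Table 1). As a minimal illustration of how that coefficient is computed for a 3-item Likert sub-scale, the sketch below uses simulated responses; it is not the study’s data or analysis code.

```python
# Minimal, illustrative Cronbach's alpha for a short Likert sub-scale.
# The responses below are simulated; they are not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert responses."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(1)
signal = rng.integers(2, 6, size=(82, 1))            # shared trait component
noise = rng.integers(-1, 2, size=(82, 3))            # item-specific noise
responses = np.clip(signal + noise, 1, 5)            # 3 items on a 1-5 scale
print(round(cronbach_alpha(responses.astype(float)), 2))
```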
The questionnaire was administered during a wrap-up session conducted immediately after all interviews had ended in the morning and afternoon sessions. Participation in the study was voluntary and consent was implied by return of the questionnaire as responses were collected anonymously. An ethical review was conducted and the study was exempted from the requirement for informed consent by the institutional review board of Dongguk University, Gyeongju.
Data Analysis
Descriptive statistics were used to analyze candidate responses to the post-MMI questionnaire and their test scores in the MMI stations. Reliability of the research instrument was assessed using Cronbach’s alpha coefficients. Independent t tests were conducted to compare candidates’ responses and performance with respect to gender, age (dichotomized at the median of 25 years), and geographic location (urban vs. rural areas). ANOVA (analysis of variance) was used to compare candidates’ perceptions with respect to undergraduate backgrounds, which were categorized into seven groups. The G-coefficient, which indicates the proportion of variance in MMI scores attributable to differences in candidates’ non-cognitive abilities [11], was analyzed to investigate the reliability of the test. The data were analyzed using SPSS version 23 for Windows (IBM Corp., Armonk, USA), and statistical significance was accepted for p values < 0.05.
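To illustrate the group comparisons described above, the sketch below runs an independent t test and a one-way ANOVA on simulated data; the variable names, groupings, and values are hypothetical and do not come from the study’s dataset (the actual analyses were run in SPSS).

```python
# Illustrative sketch of the group comparisons; simulated data, not the study's.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gender": rng.choice(["M", "F"], size=82),
    "undergrad": rng.choice(["life sciences", "engineering", "sciences", "health"], size=82),
    "empathy_score": rng.normal(10, 2, size=82),   # hypothetical empathy-station score
})

# Independent t test: empathy-station score by gender
male = df.loc[df["gender"] == "M", "empathy_score"]
female = df.loc[df["gender"] == "F", "empathy_score"]
t_stat, p_t = stats.ttest_ind(male, female)

# One-way ANOVA: empathy-station score across undergraduate backgrounds
groups = [g["empathy_score"].to_numpy() for _, g in df.groupby("undergrad")]
f_stat, p_f = stats.f_oneway(*groups)

print(f"t test by gender: t = {t_stat:.2f}, p = {p_t:.3f}")
print(f"ANOVA by background: F = {f_stat:.2f}, p = {p_f:.3f}")
```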
Results
Candidate Demographics and Backgrounds
A total of 82 questionnaires were returned, which yielded a 97.6% (82/84) response rate. Twenty-six of the respondents (31.7%) were female and 56 (68.3%) were male; their ages ranged from 22 to 36 years (M = 26.6, SD = 2.83). Candidates’ undergraduate backgrounds were as follows: life sciences (n = 35), engineering (n = 25), sciences (n = 15), health-related professions (n = 10), social sciences and humanities (n = 3), and others (n = 4). Thirty-eight (46.3%) of them were from urban areas, whereas 44 (53.7%) were from rural areas.
Candidate Perceptions of the Video-Based Scenario
Candidates were neutral with regard to whether the empathy station required specialized knowledge (M = 2.83, SD = 1.02) and with respect to station difficulty (M = 3.17, SD = .75). Nine candidates (10.7%) answered that the time allocated in the empathy station to prepare responses for the assessor was too short, whereas the remainder (89.3%) thought it adequate.
Table 1 shows the descriptive statistics regarding candidate perceptions of the video-based scenario and the results of the reliability analysis. Candidates disagreed slightly with the statement that it was difficult to understand the situation presented in the video. Candidates agreed with the statements that the video was authentic and interesting and that they were generally satisfied with it. Cronbach’s alpha values of the four sub-scales of candidate perceptions of the video-based scenario demonstrated acceptable internal consistency of the items.
Candidate Perceptions of Empathy from the Situation Portrayed in the Video
Table 2 describes candidates’ perceptions of the extent to which the doctor in the video showed empathy for the patient. The candidates generally evaluated the patient-doctor relationship presented in the video as not effective in terms of empathic communication.
Comparisons of Candidate Perceptions and Performance
Table 3 illustrates candidate perceptions of the video-based scenario for the empathy station across different demographics and backgrounds. Candidates’ overall perceptions did not differ across genders, ages, geographic locations, or undergraduate majors. However, male candidates were more satisfied with the video-based scenario than females (p < .05), and younger candidates showed more interest in it than their older counterparts (p < .05). Moreover, there were no differences in candidate perceptions of empathy in the patient-doctor relationship portrayed in the video clip across genders, ages, geographic locations, or undergraduate majors.
Candidates’ performances in the MMI are also presented in Table 3. Candidates’ performance in the empathy station did not differ across demographics or backgrounds, nor were there differences in their overall test scores.
Table 4 shows the relationships among candidate performances across MMI stations. Candidate performance in the empathy station was not associated with performance in any other station.
Reliability Analysis
Table 5 shows the results of the reliability analysis of the test using the variance components method. The G-coefficient of the MMI scores was 0.74, which is at an acceptable level.
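For readers who wish to reproduce this kind of estimate, the sketch below shows one common way to compute a G-coefficient for a fully crossed candidates-by-stations design with one rating per cell, using variance components derived from a two-way ANOVA. It is a minimal illustration with simulated ratings and the relative (norm-referenced) form of the coefficient; it is not the study’s analysis code.

```python
# Minimal G-coefficient sketch for a crossed candidates x stations design
# (one rating per cell). Simulated ratings; not the study's data or code.
import numpy as np

def g_coefficient(scores: np.ndarray) -> float:
    n_p, n_s = scores.shape                          # candidates, stations
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    station_means = scores.mean(axis=0)

    # Mean squares from a two-way ANOVA without replication
    ms_p = n_s * ((person_means - grand) ** 2).sum() / (n_p - 1)
    residual = scores - person_means[:, None] - station_means[None, :] + grand
    ms_res = (residual ** 2).sum() / ((n_p - 1) * (n_s - 1))

    # Candidate variance component; the residual confounds interaction and error
    var_p = max((ms_p - ms_res) / n_s, 0.0)

    # Relative G-coefficient: candidate variance over candidate variance
    # plus residual error averaged over the number of stations
    return var_p / (var_p + ms_res / n_s)

# Example: 82 candidates x 6 stations of simulated 5-point ratings
rng = np.random.default_rng(0)
ability = rng.normal(0.0, 0.8, size=(82, 1))         # candidate effect
noise = rng.normal(0.0, 0.8, size=(82, 6))           # station-specific error
scores = np.clip(np.round(3 + ability + noise), 1, 5)
print(round(g_coefficient(scores), 2))
```

The denominator here treats the candidate-by-station interaction (confounded with error) as the only error source relevant to ranking candidates; designs with additional facets, such as assessors, would require further variance components.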
Discussion
The present study illustrates the feasibility of using a video-based scenario in MMIs to assess candidates’ empathic abilities by demonstrating the acceptability, fairness, validity, and reliability of the test. This study also found benefits of using video-based scenarios in MMIs, as they present scenarios in an authentic way and hold candidates’ interest.
Acceptability and Perceived Benefits of Video-Based Scenarios
Our study showed that the new MMI station was well accepted by candidates. Candidates reported overall satisfaction with the use of the video-based scenario in the MMI and agreed that it was authentic and interesting. Our study thus demonstrates benefits of using video-based scenarios in MMIs: they present scenarios in an authentic way and hold candidates’ interest. These findings concur with those of other studies, which showed that using video-based cases in assessments for health professionals enhances authenticity [21] and offers benefits over the traditional text-based format in terms of student preference, cognitive engagement, and interest [25,26,27,28,29].
Fairness, Reliability, and Validity of the Test
In terms of candidate perceptions of the video-based scenario for the MMI, this study found that male candidates were more satisfied with it than females, and younger candidates showed more interest in it than their older counterparts. These findings may reflect a preference for digital technology among younger and male candidates [32]. Yet, there were no differences in candidates’ overall perceptions of the video-based scenario used in the MMI, nor in their station scores, across demographics or backgrounds; thus, such differences did not appear to affect the overall fairness of the test.
The candidates generally evaluated the patient-doctor relationship presented in the video as not effective in terms of empathic communication, as intended in the scenario design. This finding indicates candidates were generally able to identify the level of empathy the doctor showed in the patient-doctor communication depicted in the video. Candidates did not differ in their interpretations of the patient-doctor relationship depicted in the video with respect to age, gender, geographic, or undergraduate backgrounds. Moreover, candidates’ performance at the video-based MMI station and their perceived station difficulty did not differ with respect to their demographics or backgrounds. These findings indicate that this MMI station using a video-based scenario was fair, as it was not biased against candidates on the basis of age, gender, undergraduate major, or geographic location.
Our study showed the test was reliable in terms of generalizability theory, with the G-coefficient reaching an acceptable level. Moreover, candidate performance in the empathy station was not associated with performance in any other station, which indicates this station assessed candidate attributes different from those assessed elsewhere. This finding offers evidence for the construct validity of the test.
Study Limitations and Recommendations for Future Research
Several limitations should be acknowledged. First, the video vignette used in this study was designed to assess candidates’ empathic abilities. It is known that individuals’ non-cognitive abilities are context-specific [33], which means the results of this study cannot be generalized to assessments of candidates’ attributes in other domains. Thus, we recommend additional studies be undertaken to develop video-based MMI stations in various domains and establish their effectiveness. Second, although many psychometric measures are available for assessing empathic abilities, we could not compare the results of such measures with the test scores from our empathy station owing to the anonymity of our study participants. Such a comparison would offer further evidence for the validity of this MMI station. Third, our study does not provide evidence on the predictive validity of the test. Future studies are warranted to investigate relationships between candidate performance on the MMI using a video-based scenario and their empathic communication skills in clinical settings. Fourth, the video-based case was used for student selection purposes in this study, and it is not clear whether our findings are applicable to other teaching and learning contexts. Future research on using video-based cases to teach or assess student empathy is recommended to establish their utility in various contexts.
Conclusions
Our findings indicate that the use of a video-based scenario in the MMI to assess candidates’ empathy was perceived positively by candidates and that it assessed their empathic abilities fairly, as it was not biased against specific demographics or backgrounds. Furthermore, the test was found reliable in terms of generalizability theory, and it was valid in that it assessed candidate attributes different from those assessed in the other stations. As Kreiter and Axelson [34] suggest, MMI practice can be improved by research offering valid evidence. This study adds to the knowledge base on the feasibility of using video-based scenarios in MMIs.
This study demonstrates the benefits and feasibility of using a video-based scenario in MMIs to assess candidates’ non-cognitive attributes such as empathy. We believe that using video-based scenarios is more effective for assessing communication and interpersonal skills than using paper formats in MMIs, because they include the verbal and non-verbal cues that characterize such interactions and thereby allow for authentic assessment. Although some medical schools have reported using actors or SPs in the MMI to assess candidates’ interpersonal skills, budgetary constraints often prevent the use of such resources. Thus, we would argue that video-based scenarios provide a more cost-effective means of assessing candidates’ non-cognitive attributes in MMIs, especially in domains such as communication and interpersonal skills, including empathy.
References
Bardes CL, Best PC, Kremer SJ, Dienstag JL. Perspective: Medical school admissions and noncognitive testing: some open questions. Acad Med. 2009;84(10):1360–3.
Eva KW, Reiter HI, Trinh K, Wasi P, Rosenfeld J, Norman GR. Predictive validity of the multiple mini-interview for selecting medical trainees. Med Educ. 2009;43(8):767–75.
Reiter HI, Eva KW, Rosenfeld J, Norman GR. Multiple mini-interviews predict clerkship and licensing examination performance. Med Educ. 2007;41(4):378–84.
Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: the multiple mini-interview. Med Educ. 2004;38(3):314–26.
Knorr M, Hissbach J. Multiple mini-interviews: same concept, different approaches. Med Educ. 2014;48(12):1157–75.
Pau A, Jeevaratnam K, Chen YS, Fall AA, Khoo C, Nadarajah VD. The Multiple Mini-Interview (MMI) for student selection in health professions training - a systematic review. Med Teach. 2013;35(12):1027–41.
Sebok SS, Luu K, Klinger DA. Psychometric properties of the multiple mini-interview used for medical admissions: findings from generalizability and Rasch analyses. Adv Health Sci Educ Theory Pract. 2014;19(1):71–84.
Lee HJ, Park SB, Park SC, Park WS, Ryu SW, Yang JH, et al. Multiple mini-interviews as a predictor of academic achievements during the first 2 years of medical school. BMC Res Notes. 2016;9(1):93.
Pau A, Chen YS, Lee VK, Sow CF, De Alwis R. What does the multiple mini interview have to offer over the panel interview? Med Educ Online. 2016;21:29874.
Eva KW, Reiter HI, Rosenfeld J, Trinh K, Wood TJ, Norman GR. Association between a medical school admission process using the multiple mini-interview and national licensing examination scores. JAMA. 2012;308(21):2233–40.
Rees EL, Hawarden AW, Dent G, Hays R, Bates J, Hassell AB. Evidence regarding the utility of multiple mini-interview (MMI) for selection to undergraduate health programs: a BEME systematic review: BEME Guide No. 37. Med Teach. 2016;38(5):443–55.
Roberts C, Walton M, Rothnie I, Crossley J, Lyon P, Kumar K, et al. Factors affecting the utility of the multiple mini-interview in selecting candidates for graduate-entry medical school. Med Educ. 2008;42(4):396–404.
Urlings-Strop LC, Stijnen T, Themmen AP, Splinter TA. Selection of medical students: a controlled experiment. Med Educ. 2009;43(2):175–83.
Eva KW, Reiter HI, Rosenfeld J, Norman GR. The relationship between interviewers’ characteristics and ratings assigned during a multiple mini-interview. Acad Med. 2004;79(6):602–9.
Eva KW, Reiter HI, Rosenfeld J, Norman GR. The ability of the multiple mini-interview to predict preclerkship performance in medical school. Acad Med. 2004;79(10 Suppl):S40–2.
Patterson F, Knight A, Dowell J, Nicholson S, Cousans F, Cleland J. How effective are selection methods in medical education? A systematic review. Med Educ. 2016;50(1):36–60.
Pline ER, Whicker SA, Fogel S, Vari RC, Musick DW. Association of Multiple Mini-Interview scores with first year medical student success in problem-based learning. Med Sci Educ. 2016;26(2):221–7.
Bullock A, de Jong PGM. Technology-enhanced learning. In: Swanwick T, editor. Understanding medical education: evidence, theory and practice. 2nd ed. West Sussex: Wiley Blackwell; 2014. p. 149–60.
Amin Z. Technology enhanced assessment in medical education. In: Walsh K, editor. Oxford Textbook of Medical Education. London: Oxford University Press; 2013.
Hanson MD, Eva KW. A reflection upon the impact of early 21st-century technological innovations on medical school admissions. Acad Med. 2019;94(5):640–4.
Patterson F, Zibarras L, Ashworth V. Situational judgement tests in medical education and training: research, theory and practice: AMEE Guide No. 100. Med Teach. 2016;38(1):3–17.
Hemmerdinger JM, Stoddart SD, Lilford RJ. A systematic review of tests of empathy in medicine. BMC Med Educ. 2007;7:24.
Pounds G, Salter C, Platt MJ, Bryant P. Developing a new empathy-specific admissions test for applicants to medical schools: a discourse-pragmatic approach. Commun Med. 2017;14(2):165–80.
Ginsburg S. Evaluating professionalism. In: Dent JA, Harden RM, editors. A Practical guide for medical teachers. 4th ed. New York: Elsevier; 2014. p. 333–40.
Bizzocchi J, Schell R. Rich-narrative case study for online PBL in medical education. Acad Med. 2009;84(10):1412–8.
Kamin C, O'Sullivan P, Deterding R, Younger M. A comparison of critical thinking in groups of third-year medical students in text, video, and virtual PBL case modalities. Acad Med. 2003;78(2):204–11.
Balslev T, de Grave WS, Muijtjens AM, Scherpbier AJ. Comparison of text and video cases in a postgraduate problem-based learning format. Med Educ. 2005;39(11):1086–92.
Hassoulas A, Forty E, Hoskins M, Walters J, Riley S. A case-based medical curriculum for the 21st century: the use of innovative approaches in designing and developing a case on mental health. Med Teach. 2017;39(5):505–11.
Cook DA, Thompson WG, Thomas KG. Case-based or non-case-based questions for teaching postgraduate physicians: a randomized crossover trial. Acad Med. 2009;84(10):1419–25.
Kim KJ, Kwon BS. Does the sequence of rotations in Multiple Mini Interview stations influence the candidates’ performance? Med Educ Online. 2018;23(1):1485433.
Kim K-J, Nam K-S, Kwon BS. The utility of multiple mini-interviews: experience of a medical school. Korean J Med Educ. 2017;29(1):7–14.
Nasah A, DaCosta B, Kinsell C, Seok S. The digital literacy debate: an investigation of digital propensity and information and communication technology. Educ Technol Res Dev. 2010;58(5):531–55.
Eva KW. On the generality of specificity. Med Educ. 2003;37(7):587–8.
Kreiter CD, Axelson RD. A perspective on medical school admission research and practice over the last 25 years. Teach Learn Med. 2013;25(Suppl 1):S50–6.
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Ethical Approval and Informed Consent
An ethical review was conducted and the study was exempted from the requirement for informed consent by the institutional review board of Dongguk University, Gyeongju (DGU IRB 20190029-01).