Procedure-specific assessment tool for flexible pharyngo-laryngoscopy: gathering validity evidence and setting pass–fail standards
The attainment of specific, identifiable competencies is the primary measure of progress in modern medical education. For such a system to be feasible, it requires methods for accurately assessing competence. Before an assessment tool can be implemented in the training and assessment of physicians, evidence of validity must be gathered; according to contemporary validity theory, this evidence must be collected from specific sources in a structured and rigorous manner. Flexible pharyngo-laryngoscopy (FPL) is a central procedure for the otorhinolaryngologist. We aimed to evaluate the flexible pharyngo-laryngoscopy assessment tool (FLEXPAT) developed in a previous study and to establish a pass–fail level for proficiency.
Eighteen physicians with different levels of experience (novices, intermediates, and experienced) were recruited, and each performed an FPL on two patients. The procedures were video-recorded, blinded, and rated by two specialists, with each score expressed as a percentage of the maximum possible score. Cronbach’s α was used to analyze the internal consistency of the data, and a generalizability analysis was performed. The scores of the three groups were compared, and a pass–fail level was determined using the contrasting-groups standard-setting method.
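To make the reliability analysis concrete, the sketch below shows how Cronbach’s α can be computed for a subjects-by-items score matrix. This is a minimal illustration of the standard formula; the rating data, item count, and scale are hypothetical assumptions and not the study’s actual dataset.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    n_subjects, k = scores.shape
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item across subjects
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of subjects' total scores
    return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical example: six physicians rated on five checklist items (0-4 scale).
ratings = np.array([
    [4, 3, 4, 4, 3],
    [2, 2, 3, 2, 2],
    [1, 1, 2, 1, 1],
    [3, 3, 3, 4, 3],
    [0, 1, 1, 0, 1],
    [4, 4, 3, 4, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```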
Internal consistency was strong, with a Cronbach’s α of 0.86. The generalizability coefficient was 0.72, which is sufficient for moderate-stakes assessment. We found a significant difference between the novice and experienced groups (p < 0.001) and a strong correlation between experience and score (Pearson’s r = 0.75). The pass–fail level was established at 72% of the maximum score; applying this level to the test population resulted in half of the intermediate group receiving a failing score.
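The contrasting-groups method places the cutoff where the score distributions of the two reference groups intersect. The sketch below illustrates one common implementation, fitting normal distributions to novice and experienced scores and solving numerically for the crossing point; the group scores shown are hypothetical, and the study’s 72% cutoff was derived from its own data.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def contrasting_groups_cutoff(fail_scores, pass_scores):
    """Pass-fail score where the fitted normal densities of the two groups cross."""
    mu_f, sd_f = np.mean(fail_scores), np.std(fail_scores, ddof=1)
    mu_p, sd_p = np.mean(pass_scores), np.std(pass_scores, ddof=1)
    diff = lambda x: norm.pdf(x, mu_f, sd_f) - norm.pdf(x, mu_p, sd_p)
    # The crossing point of interest lies between the two group means.
    return brentq(diff, mu_f, mu_p)

# Hypothetical percentage scores for the two contrasting groups.
novice_scores      = [45, 52, 58, 60, 63, 66]
experienced_scores = [78, 82, 85, 88, 90, 95]
cutoff = contrasting_groups_cutoff(novice_scores, experienced_scores)
print(f"Pass-fail level: {cutoff:.0f}% of the maximum score")
```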
We gathered validity evidence for the FLEXPAT according to the contemporary framework described by Messick. Our results support a claim of validity and are comparable to those of other studies exploring clinical assessment tools. The high rate of underperformance in the intermediate group demonstrates the need for continued educational intervention.
Based on our work, we recommend the use of the FLEXPAT in clinical assessment of FPL and the application of a pass–fail level of 72% for proficiency.
Keywords: Flexible laryngoscopy · Assessment tool · Medical education · Validity · Technical skills · Mastery learning
The authors would like to thank Olympus (Tokyo, Japan) for generously lending a flexible video laryngoscope for the duration of the study.
The endoscope used for data collection was generously supplied by Olympus (Tokyo, Japan). No funding was provided for the completion of this study.
Compliance with ethical standards
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Conflict of interest
All authors declare that no conflicts of interest exist.
- 1. Sethi RKV, Kozin ED, Remenschneider AK, Lee DJ, Gray ST, Shrime MG et al (2014) Subspecialty emergency room as alternative model for otolaryngologic care: implications for emergency health care delivery. Am J Otolaryngol Head Neck Surg 35:758–765
- 2. Couch ME (2010) Cummings otolaryngology—head and neck surgery, 5th edn. Mosby Elsevier
- 6. McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB (2011) Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med 86(6):706–711
- 10. Messick S (1989) Validity. In: Linn RL (ed) Educational measurement, 3rd edn. American Council on Education and Macmillan, New York
- 11. Cook DA, Zendejas B, Hamstra SJ, Hatala R, Brydges R (2013) What counts as validity evidence? Examples and prevalence in a systematic review of simulation-based assessment. Adv Health Sci Educ 19:1–18
- 12. Downing S, Yudkowsky R (2009) Assessment in health professions education. Routledge, New York
- 13. Melchiors J, Hendriksen M, Charabi B, Konge L, Buchwald C (2018) Diagnostic flexible pharyngo-laryngoscopy: development of a procedure specific assessment tool using a Delphi methodology. Eur Arch Otorhinolaryngol. https://doi.org/10.1007/s00405-018-4904-9 (accepted for publication)
- 24. Todsen T, Tolsgaard MG, Olsen BH, Henriksen BM, Hillingsø JG, Konge L et al (2014) Reliable and valid assessment of point-of-care ultrasonography. Ann Surg 0(0):1–7
- 26. Magill RA, Anderson D (2014) Motor learning and control: concepts and applications, 10th edn. McGraw-Hill Education, New York
- 28. Livingston SA, Zieky MJ (1982) Passing scores: a manual for setting standards of performance. Educational Testing Service, Princeton