Designing, Choosing, and Using Assessment Tools in Healthcare Simulation Research
Healthcare simulation research often relies on performance scores. These scores can be used to compare provider groups, establish the efficacy of competing educational programs, and identify deficiencies in clinical skills. This chapter provides an overview of the development and use of assessment tools. Researchers need to select tools that align with the purpose of the assessment. Where human evaluators are employed, they should have sufficient expertise in the domains being assessed, and training is necessary to ensure that they apply the rubrics as intended. In the future, technology may help to gather accurate data from, and provide standardized scoring for, various simulation-based assessments. Healthcare simulation researchers who employ assessment tools need to evaluate whether the resulting scores represent reliable and valid estimates of ability. Without some assurance of the psychometric rigor of the scores, their use in any research study could be questioned.
Keywords: Simulation-based assessment · Scoring · Reliability · Validity