Overview
This chapter discusses impact evaluation, an important method for measuring the effectiveness of an educational intervention. A subset of program evaluation, impact evaluation focuses on the outcomes and consequential events of an intervention; it incorporates several quantitative methods and is typically reserved for stable, long-standing educational programs and curricula. Many of these methods are also used in program evaluation as a whole and in surgical research, and readers are directed to Chaps. 23 (“Demystifying Program Evaluation for Surgical Education”, Battista et al.) and 30 (“Researching in Surgical Education: An Orientation”, Ajjawi and McIllhenny) for more information on these subjects. In addition to providing a working definition of impact evaluation, this chapter defines key concepts related to its successful use and delineates the most useful quantitative methods to employ.
© 2019 Springer Nature Singapore Pte Ltd.
Cite this chapter
Martin, J.A. (2019). Measuring the Impact of Educational Interventions: A Quantitative Approach. In: Nestel, D., Dalrymple, K., Paige, J., Aggarwal, R. (eds) Advancing Surgical Education. Innovation and Change in Professional Education, vol 17. Springer, Singapore. https://doi.org/10.1007/978-981-13-3128-2_34
Print ISBN: 978-981-13-3127-5
Online ISBN: 978-981-13-3128-2