Abstract
Well-designed instructional programs seamlessly promote human performance. Students are often unaware of the countless iterations of formative evaluation completed to improve the effectiveness, efficiency, motivation, and flow of an instructional design. This paper examines one research-based, comprehensive, systematic evaluation approach as applied by students in two case studies in which they evaluated the instructional design of real-world training programs at a macro level. The 10-Step Evaluation for Training and Performance Improvement (Chyung 2019) was piloted in a graduate-level instructional design and technology class. Lessons learned include evaluators recognizing the importance of homing in on the right questions to ask when determining the dimensions to evaluate, while remaining unbiased in their assumptions. For example, the first case initially framed a summative evaluation hypothesizing that the program should be discontinued; instead, it uncovered unanticipated findings that led to recommendations that the program be revamped in an effort to continue improving human performance.
References
Chyung, S. Y. (2019). 10-step evaluation for training and performance improvement. Thousand Oaks: SAGE Publications.
W.K. Kellogg Foundation. (1998). Logic model development guide. Battle Creek: W.K. Kellogg Foundation. Retrieved from https://www.wkkf.org.
Kennedy, P. E., Chyung, S. Y., Winiecki, D. J., & Brinkerhoff, R. O. (2014). Training professionals' usage and understanding of Kirkpatrick's level 3 and level 4 evaluations. International Journal of Training and Development, 18(1), 1–21.
Kirkpatrick, D. (2007). The four levels of evaluation (No. 701). Pewaukee: American Society for Training and Development Press.
Kirkpatrick, D., & Kirkpatrick, J. (2006). Evaluating training programs: The four levels. San Francisco: Berrett-Koehler Publishers.
Morrison, G. R., Ross, S. M., Kalman, H. K., & Kemp, J. E. (2013). Designing effective instruction (7th ed.). Hoboken: John Wiley & Sons, Inc.
Scriven, M. (1991). Evaluation thesaurus. Point Reyes: Sage Publications.
Stufflebeam, D. L. (2007). CIPP evaluation model checklist. Retrieved from https://kwschochconsulting.com/wp-content/uploads/2017/04/cippchecklist_mar07.pdf.
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Research Involving Human Participants and/or Animals
All procedures performed in the program evaluations provided to the company and to the higher education institution program that involved human participants were in accordance with ethical standards and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.
Informed Consent
In compliance with ethical standards, informed consent was obtained from all individual participants included in the evaluations.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Ensmann, S., Ward, A., Fonseca, A. et al. A Case Study for the 10-Step Approach to Program Evaluation. TechTrends 64, 329–342 (2020). https://doi.org/10.1007/s11528-019-00473-4