Abstract
In post-secondary education, there is a widely held belief in a “gold standard” for evaluative studies of curricular innovations. In this context, “appropriate” assessment is understood to mean experimental designs and statistically significant differences in group outcomes. Yet in our evaluative study of an undergraduate medical program, we did not find these concepts to be particularly applicable. Based on our experience, we now believe it is appropriate to assemble an eclectic mix of scientific findings, show how they have been used for program improvement, and articulate the program’s theoretical rationale and social significance. In the absence of statistically significant differences, this comprehensive argument can be used to justify the deployment of curricular innovations. The same may be true of other educational programs that target hard-to-measure changes in affective domains.
Acknowledgements
The authors wish to thank the Max Bell Foundation for financial support. Donald J. Boudreau, as an Arnold P. Gold Associate Professor of Medicine, expresses his profound appreciation to Virginia Reed and colleagues at the Center for Educational Outcomes at the Dartmouth Medical College for having authorized the use of the MEAP core survey instrument. We are grateful to the reviewers at the editorial office of Educational Assessment, Evaluation & Accountability for their comments and suggestions.
Additional information
J. Donald Boudreau is an Arnold P. Gold Foundation Associate Professor of Medicine
Cite this article
Ruhe, V., & Boudreau, J. D. (2011). Curricular innovation in an undergraduate medical program: What is “appropriate” assessment? Educational Assessment, Evaluation and Accountability, 23, 187–200. https://doi.org/10.1007/s11092-011-9124-4