Measures of instruction for creative engagement: Making metacognition, modeling and creative thinking visible
The purpose of the current study was to estimate the reliability, internal consistency and construct validity of the Measure of Instruction for Creative Engagement (MICE) instrument. The MICE uses an iterative process of evidence collection and scoring through teacher observations to determine instructional domain ratings and overall scores. The results demonstrated sound inter-observer reliability, teacher stability and score validity for the MICE. We found (a) a low proportion of rater variance (0.14–5.99%), (b) moderately to highly correlated within-teacher ratings, ranging from r(17) = 0.663, p < 0.01 to r(17) = 1.000, p < 0.01, and (c) a statistically significant difference between classroom teachers and teaching artists, t(56) = 7.37, p < 0.001. These results bear on the development of classroom environment instruments and on the substantive development of pedagogy that supports creative thinking and behaviours, both of which are priorities for enhancing teacher accountability and student learning.
Keywords: Creativity · Instructional practices · Inter-rater reliability · Teacher evaluation
This research was supported by a grant from the U.S. Department of Education (PR/Award No. U351D140063).
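The reported comparison between classroom teachers and teaching artists is a standard independent-samples t test with pooled variance. As a minimal sketch of how such a statistic is computed (the scores below are hypothetical illustrations, not data from the study):

```python
import statistics

def independent_t(a, b):
    """Pooled-variance (Student's) independent-samples t statistic
    and its degrees of freedom, for comparing two group means."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    df = na + nb - 2
    pooled = ((na - 1) * va + (nb - 1) * vb) / df  # pooled variance estimate
    se = (pooled * (1 / na + 1 / nb)) ** 0.5       # standard error of mean difference
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return t, df

# Hypothetical instructional-domain ratings, for illustration only:
teaching_artists  = [3.0, 3.2, 2.9, 3.1, 3.3]
classroom_teachers = [2.1, 2.4, 2.0, 2.3, 2.2]
t, df = independent_t(teaching_artists, classroom_teachers)
```

In the study's reporting, t(56) implies 56 degrees of freedom, i.e. a combined sample of 58 observations across the two groups.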