Learning Environments Research, Volume 21, Issue 1, pp 43–59

Measures of instruction for creative engagement: Making metacognition, modeling and creative thinking visible

  • Christine Pitts
  • Ross Anderson
  • Michele Haney
Original Paper

Abstract

The purpose of this study was to estimate the reliability, internal consistency and construct validity of the Measure of Instruction for Creative Engagement (MICE) instrument. The MICE uses an iterative process of evidence collection and scoring through teacher observations to determine instructional domain ratings and overall scores. The results demonstrated sound inter-observer reliability, teacher stability and score validity for the MICE. We found (a) a low proportion of rater variance (0.14–5.99%), (b) moderately to highly correlated within-teacher ratings, ranging from r(17) = 0.663 to r(17) = 1.000 (both p < 0.01), and (c) a statistically significant difference between classroom teachers and teaching artists, t(56) = 7.37, p < 0.001. These results inform the development of classroom environment instruments and of pedagogy that supports creative thinking and behaviours, both priorities for enhancing teacher accountability and student learning.
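To make the reported statistics concrete, the sketch below shows one conventional way such quantities are computed: a two-way random-effects variance decomposition (teachers crossed with raters) yielding the proportion of rater variance, a within-teacher correlation across two raters, and an independent-samples comparison of classroom teachers and teaching artists. This is a minimal illustration, not the authors' analysis code; all data are simulated placeholders, and the group sizes (19 teachers with two raters each, implying r(17); group sizes of 39 and 19, implying t(56)) are assumptions inferred from the reported degrees of freedom.

```python
# Minimal sketch of the abstract's statistics on simulated data.
# NOT the authors' code; all values and group sizes are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# ratings[i, j] = simulated MICE overall score for teacher i from rater j
n_teachers, n_raters = 19, 2
teacher_effect = rng.normal(3.0, 0.5, (n_teachers, 1))
ratings = teacher_effect + rng.normal(0.0, 0.2, (n_teachers, n_raters))

# Two-way ANOVA mean squares, one observation per teacher-rater cell
grand = ratings.mean()
ms_teacher = n_raters * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n_teachers - 1)
ms_rater = n_teachers * ((ratings.mean(axis=0) - grand) ** 2).sum() / (n_raters - 1)
ss_error = ((ratings - grand) ** 2).sum() \
    - (n_teachers - 1) * ms_teacher - (n_raters - 1) * ms_rater
ms_error = ss_error / ((n_teachers - 1) * (n_raters - 1))

# Variance components for the two-way random model (Shrout & Fleiss, 1979)
var_teacher = (ms_teacher - ms_error) / n_raters
var_rater = max((ms_rater - ms_error) / n_teachers, 0.0)  # truncate at zero
total = var_teacher + var_rater + ms_error
print(f"rater variance share: {100 * var_rater / total:.2f}%")

# Within-teacher agreement across the two raters, reported as r(17)
r, p = stats.pearsonr(ratings[:, 0], ratings[:, 1])
print(f"r({n_teachers - 2}) = {r:.3f}, p = {p:.3f}")

# Independent-samples comparison of overall scores, reported as t(56)
classroom = rng.normal(2.6, 0.4, 39)   # simulated classroom teachers
artists = rng.normal(3.4, 0.4, 19)     # simulated teaching artists
t, p = stats.ttest_ind(classroom, artists)
print(f"t({len(classroom) + len(artists) - 2}) = {t:.2f}, p = {p:.3g}")
```

The variance-component step mirrors the standard intraclass-correlation decomposition: a small rater component relative to the teacher and error components corresponds to the low rater variance proportions (0.14–5.99%) reported above.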

Keywords

Creativity · Instructional practices · Inter-rater reliability · Teacher evaluation

Notes

Acknowledgements

This research was supported by a grant from the U.S. Department of Education (PR/Award No. U351D140063).


Copyright information

© Springer Science+Business Media Dordrecht 2017

Authors and Affiliations

  1. Educational Policy Improvement Center, Eugene, USA
