Improving course evaluations to improve instruction and complex learning in higher education
Recent research has touted the benefits of learner-centered instruction, problem-based learning, and a focus on complex learning, yet instructors often struggle to put these goals into practice and to measure the effectiveness of these new teaching strategies in terms of mastery of course objectives. Enter the course evaluation: typically a standardized tool that yields little practical information for an instructor, but one that is nonetheless used in high-stakes career decisions such as tenure and merit awards to faculty. The present researchers have developed a new instrument to measure teaching and learning quality (TALQ). In the current study of 464 students in 12 courses, students who agreed that their instructors used First Principles of Instruction and who also agreed that they experienced academic learning time (ALT) were about 5 times more likely to achieve high levels of mastery of course objectives, and 26 times less likely to achieve low levels of mastery, according to independent instructor assessments. TALQ can measure improvements in the use of First Principles in teaching and course design. Feedback from this instrument can assist teachers who wish to implement the recommendation made by Kuh et al. (2007) that universities and colleges focus their assessment efforts on factors that influence student success.
Keywords: Course evaluation · Teaching quality · First principles of instruction · Academic learning time · Complex learning · Higher education · Authentic problems
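The "5 times more likely" and "26 times less likely" findings are odds-ratio-style comparisons between students who agreed with the TALQ scale items and those who did not. As an illustration only, the sketch below shows how such a ratio is computed from a 2×2 contingency table; the counts here are invented for the example and are not the study's data.

```python
# Illustrative odds-ratio computation from a hypothetical 2x2 table.
# Rows: students agreeing vs. not agreeing with FPI + ALT scale items.
# Columns: high vs. low mastery per independent instructor assessment.
# All counts below are hypothetical, NOT taken from the study.
agreed_high, agreed_low = 80, 20        # agreed with scale items
disagreed_high, disagreed_low = 40, 60  # did not agree

odds_agreed = agreed_high / agreed_low          # odds of high mastery if agreed
odds_disagreed = disagreed_high / disagreed_low # odds of high mastery if not
odds_ratio = odds_agreed / odds_disagreed

print(f"Odds ratio: {odds_ratio:.1f}")  # students who agreed are this many
                                        # times more likely to show high mastery
```

With these hypothetical counts the odds ratio works out to 6.0; the study's reported factors of roughly 5 and 26 would come from the actual TALQ response and mastery counts.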
- American Institutes for Research. (2006, January 19). New study of the literacy of college students finds some are graduating with only basic skills. Retrieved January 20, 2007, from http://www.air.org/news/documents/Release200601pew.htm.
- Baer, J., Cook, A., & Baldi, S. (2006, January). The literacy of America’s college students. American Institutes for Research. Retrieved January 20, 2007, from http://www.air.org/news/documents/The%20Literacy%20of%20Americas%20College%20Students_final%20report.pdf.
- Berliner, D. (1990). What’s all the fuss about instructional time? In M. Ben-Peretz & R. Bromme (Eds.), The nature of time in schools: Theoretical concepts, practitioner perceptions. New York: Teachers College Press.
- Cohen, P. (1981). Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies. Review of Educational Research, 51(3), 281–309.
- Estep, M. (2003). A theory of immediate awareness: Self-organization and adaptation in natural intelligence. Boston: Kluwer Academic Publishers.
- Estep, M. (2006). Self-organizing natural intelligence: Issues of knowing, meaning and complexity. Dordrecht, The Netherlands: Springer.
- Fisher, C., Filby, N., Marliave, R., Cohen, L., Dishaw, M., Moore, J., et al. (1978). Teaching behaviors: Academic learning time and student achievement: Final report of phase III-B, beginning teacher evaluation study. San Francisco: Far West Laboratory for Educational Research and Development.
- Frick, T. (1990). Analysis of patterns in time (APT): A method of recording and quantifying temporal relations in education. American Educational Research Journal, 27(1), 180–204.
- Frick, T. W., Chadha, R., Watson, C., Wang, Y., & Green, P. (2008a). College student perceptions of teaching and learning quality. Educational Technology Research and Development (in press).
- Frick, T. W., Chadha, R., Watson, C., Wang, Y., & Green, P. (2008b). Theory-based course evaluation: Implications for improving student success in postsecondary education. Paper presented at the American Educational Research Association conference, New York.
- Greenspan, S., & Benderly, B. (1997). The growth of the mind and the endangered origins of intelligence. Reading, MA: Addison-Wesley.
- Kirkpatrick, D. (1994). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.
- Kuh, G., Kinzie, J., Buckley, J., Bridges, B., & Hayek, J. (2007). Piecing together the student success puzzle: Research, propositions, and recommendations. ASHE Higher Education Report, 32(5). San Francisco: Jossey-Bass.
- Maccia, G. S. (1987). Genetic epistemology of intelligent natural systems. Systems Research, 4(1), 213–281.
- Merrill, M. D. (2008). What makes e³ (effective, efficient, engaging) instruction? Keynote address at the AECT Research Symposium, Bloomington, IN.
- Merrill, M. D., Barclay, M., & van Schaak, A. (2008). Prescriptive principles for instructional design. In J. M. Spector, M. D. Merrill, J. van Merriënboer, & M. F. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 173–184). New York: Lawrence Erlbaum Associates.
- Rangel, E., & Berliner, D. (2007). Essential information for education policy: Time to learn. Research Points: American Educational Research Association, 5(2), 1–4.
- Sperber, M. (2001). Beer and circus: How big-time college sports is crippling undergraduate education. New York: Henry Holt & Co.
- Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston, MA: Allyn and Bacon.
- van Merriënboer, J. J. G., & Kirschner, P. A. (2007). Ten steps to complex learning: A systematic approach to four-component instructional design. Hillsdale, NJ: Lawrence Erlbaum Associates.
- Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
- Yazzie-Mintz, E. (2007). Voices of students on engagement: A report on the 2006 high school survey of student engagement. Retrieved January 8, 2008, from http://ceep.indiana.edu/hssse/pdf/HSSSE_2006_Report.pdf.