Improving course evaluations to improve instruction and complex learning in higher education

  • Theodore W. Frick
  • Rajat Chadha
  • Carol Watson
  • Emilija Zlatkovska
Research Article


Recent research has touted the benefits of learner-centered instruction, problem-based learning, and a focus on complex learning. Instructors often struggle to put these goals into practice as well as to measure the effectiveness of these new teaching strategies in terms of mastery of course objectives. Enter the course evaluation, often a standardized tool that yields little practical information for an instructor, but is nonetheless utilized in making high-level career decisions, such as tenure and monetary awards to faculty. The present researchers have developed a new instrument to measure teaching and learning quality (TALQ). In the current study of 464 students in 12 courses, if students agreed that their instructors used First Principles of Instruction and also agreed that they experienced academic learning time (ALT), then students were about 5 times more likely to achieve high levels of mastery of course objectives and 26 times less likely to achieve low levels of mastery, according to independent instructor assessments. TALQ can measure improvements in use of First Principles in teaching and course design. The feedback from this instrument can assist teachers who wish to implement the recommendation made by Kuh et al. (2007) that universities and colleges should focus their assessment efforts on factors that influence student success.
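The headline findings ("about 5 times more likely", "26 times less likely") are odds-ratio-style comparisons between students who agreed with the TALQ scales and those who did not. As a minimal sketch of how such a figure is derived from a 2×2 table, the following uses invented counts purely for illustration; these are not the study's data, and `odds_ratio` is not part of the authors' instrument:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 contingency table:

                 high mastery   not high mastery
    agreed            a               b
    did not agree     c               d
    """
    # Cross-product form avoids intermediate floating-point division.
    return (a * d) / (b * c)

# Hypothetical counts (for illustration only): students who agreed that
# First Principles and ALT occurred vs. those who did not, by mastery level.
print(odds_ratio(40, 60, 10, 90))  # → 6.0: agreeing students' odds of high
                                   #   mastery are 6x those of the others
```

In practice an analysis like this would also report a confidence interval for the ratio (e.g. via Fisher's exact test), since a point estimate alone says nothing about sampling error.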


Course evaluation · Teaching quality · First principles of instruction · Academic learning time · Complex learning · Higher education · Authentic problems


  1. American Institutes for Research. (2006, January 19). New study of the literacy of college students finds some are graduating with only basic skills. Retrieved January 20, 2007.
  2. Baer, J., Cook, A., & Baldi, S. (2006, January). The literacy of America’s college students. American Institutes for Research. Retrieved January 20, 2007.
  3. Berliner, D. (1990). What’s all the fuss about instructional time? In M. Ben-Peretz & R. Bromme (Eds.), The nature of time in schools: Theoretical concepts, practitioner perceptions. New York: Teachers College Press.
  4. Brown, B., & Saks, D. (1986). Measuring the effects of instructional time on student learning: Evidence from the beginning teacher evaluation study. American Journal of Education, 94(4), 480–500. doi: 10.1086/443863.
  5. Cohen, P. (1981). Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies. Review of Educational Research, 51(3), 281–309.
  6. Estep, M. (2003). A theory of immediate awareness: Self-organization and adaptation in natural intelligence. Boston: Kluwer Academic Publishers.
  7. Estep, M. (2006). Self-organizing natural intelligence: Issues of knowing, meaning and complexity. Dordrecht, The Netherlands: Springer.
  8. Feldman, K. A. (1989). The association between student ratings of specific instructional dimensions and student achievement: Refining and extending the synthesis of data from multisection validity studies. Research in Higher Education, 30, 583–645. doi: 10.1007/BF00992392.
  9. Fisher, C., Filby, N., Marliave, R., Cohen, L., Dishaw, M., Moore, J., et al. (1978). Teaching behaviors: Academic learning time and student achievement: Final report of phase III-B, beginning teacher evaluation study. San Francisco: Far West Laboratory for Educational Research and Development.
  10. Frick, T. (1990). Analysis of patterns in time (APT): A method of recording and quantifying temporal relations in education. American Educational Research Journal, 27(1), 180–204.
  11. Frick, T. (1997). Artificial tutoring systems: What computers can and can’t know. Journal of Educational Computing Research, 16(2), 107–124. doi: 10.2190/4CWM-6JF2-T2DN-QG8L.
  12. Frick, T. W., Chadha, R., Watson, C., Wang, Y., & Green, P. (2008a). College student perceptions of teaching and learning quality. Educational Technology Research and Development (in press).
  13. Frick, T. W., Chadha, R., Watson, C., Wang, Y., & Green, P. (2008b). Theory-based course evaluation: Implications for improving student success in postsecondary education. Paper presented at the American Educational Research Association conference, New York.
  14. Greenspan, S., & Benderly, B. (1997). The growth of the mind and the endangered origins of intelligence. Reading, MA: Addison-Wesley.
  15. Keller, J. M. (1987). The systematic process of motivational design. Performance & Instruction, 26(9), 1–8. doi: 10.1002/pfi.4160260902.
  16. Kirkpatrick, D. (1994). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.
  17. Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory into Practice, 41(4), 212–218. doi: 10.1207/s15430421tip4104_2.
  18. Kuh, G., Kinzie, J., Buckley, J., Bridges, B., & Hayek, J. (2007). Piecing together the student success puzzle: Research, propositions, and recommendations. ASHE Higher Education Report, 32(5). San Francisco: Jossey-Bass.
  19. Kulik, J. A. (2001). Student ratings: Validity, utility and controversy. New Directions for Institutional Research, 109, 9–25. doi: 10.1002/ir.1.
  20. Maccia, G. S. (1987). Genetic epistemology of intelligent natural systems. Systems Research, 4(1), 213–281.
  21. Merrill, M. D. (2002). First principles of instruction. Educational Technology Research and Development, 50(3), 43–59. doi: 10.1007/BF02505024.
  22. Merrill, M. D. (2008). What makes e³ (effective, efficient, engaging) instruction? Keynote address at the AECT Research Symposium, Bloomington, IN.
  23. Merrill, M. D., Barclay, M., & van Schaak, A. (2008). Prescriptive principles for instructional design. In J. M. Spector, M. D. Merrill, J. van Merriënboer, & M. F. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 173–184). New York: Lawrence Erlbaum Associates.
  24. Rangel, E., & Berliner, D. (2007). Essential information for education policy: Time to learn. Research Points: American Educational Research Association, 5(2), 1–4.
  25. Sperber, M. (2001). Beer and circus: How big-time college sports is crippling undergraduate education. New York: Henry Holt & Co.
  26. Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston, MA: Allyn and Bacon.
  27. van Merriënboer, J. J. G., Clark, R. E., & de Croock, M. B. M. (2002). Blueprints for complex learning: The 4C/ID model. Educational Technology Research and Development, 50(2), 39–64. doi: 10.1007/BF02504993.
  28. van Merriënboer, J. J. G., & Kirschner, P. A. (2007). Ten steps to complex learning: A systematic approach to four-component instructional design. Hillsdale, NJ: Lawrence Erlbaum Associates.
  29. Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
  30. Yazzie-Mintz, E. (2007). Voices of students on engagement: A report on the 2006 high school survey of student engagement. Retrieved January 8, 2008.

Copyright information

© Association for Educational Communications and Technology 2009

Authors and Affiliations

  • Theodore W. Frick (1)
  • Rajat Chadha (1)
  • Carol Watson (1)
  • Emilija Zlatkovska (1)

  1. School of Education, Indiana University Bloomington, Bloomington, USA
