Research in Higher Education, Volume 53, Issue 4, pp 471–486

Student Evaluation of Instruction: In the New Paradigm of Distance Education


Abstract

Distance education has expanded rapidly over the last decade. With millions of higher-education students enrolling in online courses, it has become critically important to understand student learning and experiences with online education. Drawing on a large sample of 11,351 students taught by 1,522 instructors at 29 colleges and universities, this study investigates the factors that affect student evaluation of instruction in distance education, using a two-level hierarchical linear model. Key findings reveal that in a distance education setting, gender and class size are no longer significant predictors of rated quality of instruction, whereas factors such as reasons for taking the course, student class status, and instructor academic rank have a significant impact on student evaluation of learning and instruction. These findings offer important implications for institutional administrators in interpreting evaluation results and in developing strategies to help faculty become effective online instructors.
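The two-level design described above (students at level 1, nested within instructors at level 2) rests on partitioning the variance in ratings between the two levels. A minimal sketch of that idea, computing the intraclass correlation from a one-way random-effects ANOVA; the data are simulated, and all sample sizes, means, and variance components here are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: students (level 1) nested within
# instructors (level 2). Sizes and variances are illustrative only.
n_instructors, n_students = 200, 8
instructor_effect = rng.normal(0.0, 0.4, n_instructors)      # level-2 variation
ratings = (3.8                                               # grand-mean rating
           + np.repeat(instructor_effect, n_students)
           + rng.normal(0.0, 0.8, n_instructors * n_students))  # level-1 noise

groups = ratings.reshape(n_instructors, n_students)

# One-way random-effects ANOVA estimates of the variance components.
ms_between = n_students * groups.mean(axis=1).var(ddof=1)
ms_within = groups.var(axis=1, ddof=1).mean()
var_between = (ms_between - ms_within) / n_students

# Intraclass correlation: share of rating variance at the instructor level.
icc = var_between / (var_between + ms_within)
print(f"estimated ICC: {icc:.3f}")
```

A nontrivial ICC is what justifies a two-level model over ordinary regression here: it signals that ratings of students sharing an instructor are correlated, so student-level observations are not independent.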

Keywords

Distance education · eSIR · Hierarchical linear modeling · Learning outcomes · Student evaluation of instruction


Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

Educational Testing Service, Princeton, USA
