Research in Higher Education, Volume 53, Issue 5, pp 576–591

Nonresponse and Online Student Evaluations of Teaching: Understanding the Influence of Salience, Fatigue, and Academic Environments

  • Meredith J. D. Adams
  • Paul D. Umbach

Abstract

Technological advances have enabled institutions of higher education to administer course evaluations online, forgoing traditional paper-and-pencil methods. Many of these institutions consequently suffer from low response rates, yet little research is available on the problem. To better understand course evaluation participation in the online environment, this study examined more than 22,000 undergraduates at a single university who were administered about 135,000 evaluations. Multilevel models were constructed to analyze the data, and several variables emerged as significant predictors of participation. The results were largely consistent with previous research and aligned with theories of survey nonresponse. However, the inclusion of variables uncommon in prior studies provided new perspectives on course evaluations in particular. Implications for research and practical applications for institutions are also addressed, including ways to combat survey fatigue, increase the salience of the survey, and increase participation in online course evaluations.
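
The multilevel models referenced above can be made concrete with a generic two-level specification. The form below is only an illustrative sketch of the class of hierarchical logistic models commonly used for binary participation outcomes, with placeholder predictors and grouping structure, not the study's actual specification:

$$\Pr(y_{ij} = 1) = \operatorname{logit}^{-1}\!\Big(\beta_{0j} + \textstyle\sum_{k=1}^{K} \beta_k x_{kij}\Big), \qquad \beta_{0j} = \gamma_{00} + \gamma_{01} w_j + u_j, \qquad u_j \sim N(0, \tau^2)$$

Here $y_{ij}$ would indicate whether evaluation $i$ in grouping unit $j$ (for example, a course) was completed, the $x_{kij}$ are evaluation-level predictors, $w_j$ is a group-level predictor, and the random intercept $u_j$ absorbs unexplained variation across groups. Partitioning variance this way is what allows group-level factors such as academic environment to be tested alongside individual-level factors such as survey fatigue.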

Keywords

Nonresponse · Course evaluations · Evaluations of teaching · Participation · Surveys · Online

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. Department of Leadership, Policy, and Adult and Higher Education, North Carolina State University, Raleigh, NC, USA
