Effects of variations in stem and response options on teaching evaluations
Abstract
This study investigated the effects of the strength of descriptors in teaching evaluation items on global ratings of the instructor and the course. Six teaching evaluation forms were selected, with items that varied in either the descriptor in the item stem or the descriptors labeling the scale points. The forms were distributed in nine university classes from two different colleges, yielding 586 usable returns. Analysis of variance indicated that the version of the evaluation form had a significant effect on both instructor ratings and course ratings. These findings suggest that caution should be exercised when comparing the ratings of one group of instructors with those of another without considering how the item stems and response options are worded.
Keywords
Student evaluations of teaching · Teaching evaluation scales · Instructor and course ratings · Faculty assessments