Advances in Health Sciences Education, Volume 7, Issue 2, pp 99–116

Medical Students' Ratings of Faculty Teaching in a Multi-Instructor Setting: An Examination of Monotonic Response Patterns

  • Terry D. Stratton
  • Donald B. Witzke
  • Robert J. Jacob
  • Marlene J. Sauer
  • Amy Murphy-Spencer

Abstract

The recognition that the psychometric properties of a measure may vary considerably is especially relevant in a multi-instructor context, where an implicit assumption is that student ratings are equally reliable and valid for all faculty ratees. As a possible indicator of nonattending (i.e. invalid) responses, the authors examined the effects of monotonic response patterns on the reliabilities of students' ratings of faculty teaching, including how an alternative presentation format may reduce the prevalence of this behavior. Second-year medical and dental students (n = 130) enrolled in a required basic science course during the 1998–99 academic year were randomly assigned to one of two groups, each of which evaluated the teaching of 6 different faculty across 6 distinct dimensions (i.e. overall quality, organization, preparation, stimulation, respectfulness, and helpfulness). Using a 'split ballot' design, two conceptually equivalent versions of the faculty evaluation form were distributed at random to students in each group: Form A used the 'traditional' items-within-faculty format, while Form B listed faculty-within-item.
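
As a concrete illustration of the behavior being screened for, the sketch below flags forms on which a student assigns the identical rating to all 6 dimensions for a given faculty ratee. The data layout, rating scale, and variable names are hypothetical; this is not the authors' instrument or analysis code.

```python
# A minimal sketch, assuming a hypothetical 5-point rating matrix
# (illustrative only, not the authors' data): flag "monotonic" forms,
# i.e. forms on which a student gives the identical rating to all
# 6 dimensions for a given faculty ratee.
import numpy as np

rng = np.random.default_rng(0)

# ratings[s, f, d]: student s rates faculty f on dimension d (1-5 scale)
n_students, n_faculty, n_dims = 130, 6, 6
ratings = rng.integers(1, 6, size=(n_students, n_faculty, n_dims))

def is_monotonic(form: np.ndarray) -> bool:
    """True when every dimension on the form carries the same rating."""
    return bool(np.all(form == form[0]))

# Tally monotonic forms separately for each faculty ratee
monotonic_counts = {
    f: sum(is_monotonic(ratings[s, f]) for s in range(n_students))
    for f in range(n_faculty)
}
print(monotonic_counts)
```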

The number of monotonic forms (i.e. identical ratings across all 6 items) varied measurably across faculty ratees, as did the respective effects on scale reliabilities. Alpha was especially inflated where a sizeable proportion of monotonic patterns fell on response categories that were either very high (> +1.28 z deviations) or very low (< −1.28 z deviations) relative to the group mean. Lastly, the prevalence of monotonic response patterns was significantly (p ≤ 0.01) lower when the faculty-within-item format (Form B) was used. These findings suggest that monotonic response patterns differentially affect the reliabilities and, hence, the validity of students' ratings of individual faculty in a multi-instructor context.
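
To make the reported mechanism concrete, the following sketch (simulated data, not the authors' analysis) shows how a cluster of straight-line forms located far from the group mean can inflate coefficient alpha for a single faculty ratee, and how a ±1.28 z cut-off might be applied to classify where such forms sit.

```python
# A minimal sketch with simulated ratings (not the authors' data):
# coefficient alpha with and without monotonic forms, plus a z-score
# classification of each monotonic form against the group mean.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of ratings."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
forms = rng.integers(1, 6, size=(65, 6)).astype(float)  # hypothetical group of raters
forms[:10] = 5.0                                         # inject extreme straight-line forms

mono = np.all(forms == forms[:, [0]], axis=1)            # identical rating on all 6 items
print("alpha, all forms:        ", round(cronbach_alpha(forms), 3))
print("alpha, monotonic removed:", round(cronbach_alpha(forms[~mono]), 3))

# Classify each monotonic form's total score relative to the group mean
totals = forms.sum(axis=1)
z = (totals - totals.mean()) / totals.std(ddof=1)
extreme = mono & (np.abs(z) > 1.28)
print("monotonic forms beyond +/-1.28 z:", int(extreme.sum()))
```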

Keywords: effects; faculty evaluation; monotonic response patterns; multi-instructor course; nonattending behaviors; response validity



Copyright information

© Kluwer Academic Publishers 2002

Authors and Affiliations

  • Terry D. Stratton (1)
  • Donald B. Witzke (2)
  • Robert J. Jacob (3)
  • Marlene J. Sauer (4)
  • Amy Murphy-Spencer (5)

  1. Office of Academic Affairs, Division of Testing and Evaluation, University of Kentucky College of Medicine, Lexington, U.S.A.
  2. Department of Pathology and Laboratory Medicine, University of Kentucky College of Medicine, Lexington, U.S.A.
  3. Department of Microbiology and Immunology and Division of Oral Health Sciences, University of Kentucky, Colleges of Medicine and Dentistry, Lexington, U.S.A.
  4. Office of Academic Affairs, University of Kentucky College of Medicine, Lexington, U.S.A.
  5. Office of Academic Affairs, Division of Testing and Evaluation, University of Kentucky College of Medicine, Lexington, U.S.A.
