
Experimental Effects of Student Evaluations Coupled with Collaborative Consultation on College Professors’ Instructional Skills

Research in Higher Education

Abstract

This experimental study examined the effects of repeated student evaluations of teaching, coupled with collaborative consultation, on professors’ instructional skills. Twenty-five psychology professors from a Dutch university were randomly assigned to either a control group or an experimental group. During their course, students evaluated them four times, each time immediately after a lecture (a class meeting in which lecturing was the teaching format), by completing the Instructional Skills Questionnaire (ISQ). Within two or three days after each rated lecture, the professors in the experimental group were informed of their ISQ results and received consultation. Each consultation, three in total, resulted in a plan to improve their teaching in the next lectures. Controls received neither their ISQ results nor consultation during their course. Multilevel regression analyses showed significant differences in ISQ ratings between the experimental and control groups, specifically on the instructional dimensions Explication, Comprehension, and Activation. In addition, the impact of each of the three consultations and differences between targeted and non-targeted dimensions were analyzed. This study complements recent non-experimental research on a collaborative consultation approach with experimental results, in order to provide evidence-based guidelines for faculty development practices.


Fig. 1
Fig. 2


Notes

  1. The deviance test is the likelihood-ratio test for comparing models: the −2 log-likelihood of one model is compared with that of the other. Under the null hypothesis, the difference follows a chi-square distribution with degrees of freedom equal to the difference in the number of parameters estimated in the two models.
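A minimal sketch of the deviance test described in this note. The log-likelihood values in the example are hypothetical, and the closed-form p-values cover only the one- and two-parameter cases; a general chi-square survival function (e.g. scipy.stats.chi2.sf) would be used in practice.

```python
import math

def deviance_test(loglik_restricted, loglik_full, df):
    """Deviance (likelihood-ratio) test for two nested models.

    The statistic is the difference in -2*log-likelihood; under the null
    it follows a chi-square distribution with df equal to the difference
    in the number of estimated parameters.
    """
    stat = -2 * loglik_restricted - (-2 * loglik_full)
    # Closed-form chi-square survival function for df = 1 or 2;
    # for larger df use a library routine such as scipy.stats.chi2.sf.
    if df == 1:
        p = math.erfc(math.sqrt(stat / 2))
    elif df == 2:
        p = math.exp(-stat / 2)
    else:
        raise ValueError("closed form implemented only for df = 1 or 2")
    return stat, p

# Hypothetical example: the full model adds two parameters.
stat, p = deviance_test(loglik_restricted=-105.0, loglik_full=-102.0, df=2)
# stat = 6.0; p = exp(-3) ≈ 0.0498, just under the conventional 0.05 cutoff
```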

  2. Tables of detailed results on the seven specific dimensions are available on request.



Acknowledgments

We thank Prof. Dr. Conor Dolan for valuable suggestions on the manuscript.

Author information


Corresponding author

Correspondence to Mariska H. Knol.


About this article

Cite this article

Knol, M.H., in’t Veld, R., Vorst, H.C.M. et al. Experimental Effects of Student Evaluations Coupled with Collaborative Consultation on College Professors’ Instructional Skills. Res High Educ 54, 825–850 (2013). https://doi.org/10.1007/s11162-013-9298-3


