Abstract
The aim of this volume was to give a comprehensive overview of the current state of the research on student perceptions of and student feedback on teaching. This chapter provides a summary of the important theoretical considerations and empirical evidence the authors contributed to this volume. First, evidence concerning the validity of student perceptions of teaching quality is discussed, highlighting the quality of the questionnaires used and accompanying materials provided by their authors. In the next step, empirical findings are summarized on student and teacher characteristics that can influence important processes within the feedback cycle. Subsequently, it is emphasized that the effectiveness of student feedback on teaching is significantly related to the nature of the individual school’s feedback culture. Furthermore, it is argued that the efficacy of student feedback depends on whether teachers are provided with a high level of support when making use of the feedback information to improve their teaching practices. As the literature review impressively documents, teachers, teaching, and ultimately students can benefit substantially from student feedback on teaching in schools.
1 Introduction
Although there exists a vast and differentiated literature about teachers’ feedback to students in schools and the ways to make productive use of it (Hattie, 2009), feedback from students to teachers has received far less attention. The aim of this volume, therefore, was to present an informative overview of state-of-the-art research in this area and important neighboring scientific fields. Central topics discussed in this volume are whether student perceptions of teaching in school are reliable and valid, what has to be considered to obtain valid information, and how to successfully make use of it for the professional development of teaching and teachers.
As Hattie points out in his foreword to this volume, the knowledge of variables which may influence the success and effectiveness of feedback is rather critical. The Process Model of Student Feedback on Teaching (SFT) suggested by Röhl, Bijlsma, and Rollett (Chap. 1 of this volume) is an attempt to provide a framework which describes the feedback cycle in such a way that it can provide an orientation for research on the efficacy of student feedback as well as for the effective implementation of intervention measures. Particular focus is put on variables which characterize the affective and cognitive processing of students’ feedback by the teachers and their readiness for considering improvement-orientated actions. A professional implementation of student feedback on teaching clearly has the potential to enrich the feedback and learning culture of schools substantially and, above and beyond that can contribute to their democratic culture. A corresponding approach is elaborated by Jones and Hall (Chap. 13 of this volume) advocating school and teaching practices of involving the “student voice,” i.e., involving students in the planning and implementation of their own education. But, as Uttl (Chap. 15 of this volume) summarizes, findings from higher education show how potential dangers arise when student perceptions of teaching are collected with an evaluative focus.
In this final chapter, we summarize the findings and conclusions drawn from the chapters in this volume to give an overview of what has been achieved in research on student feedback, what needs to be considered when implementing student feedback in practice, and where we see room for improvement. First, we discuss the validity of student perceptions of teaching quality and characteristics of survey instruments. Next, we highlight characteristics of students and teachers that influence the feedback process. We then discuss the organizational context of the evaluation and the presentation of the feedback information to stakeholders. Finally, we suggest directions forward for researchers, policymakers, and schools.
2 Validity of Student Perceptions of Teaching Quality and Characteristics of Survey Instruments
Regarding the question of the validity of measurements and tests in educational contexts, it has been emphasized that validity can only be assessed in regard to the intended interpretation and subsequent actions (AERA, 2014; Kane, 2012). In this sense we discuss the validity of student perceptions of teaching in terms of their value for improving teaching practices in a formative setting and, at the same time, we disregard a purely evaluative use of student ratings on teaching (see Chap. 8 by Wisniewski and Zierer, and Chap. 15 by Uttl in this volume).
The literature review provided across the chapters of this volume impressively illustrates how teachers and teaching can benefit from making use of formative student feedback. Nevertheless, there are still many researchers and practitioners raising concerns about the accuracy and fairness of student ratings of teaching quality, which—if tenable—would considerably limit their value for the proposed usage in the development of teaching and teaching skills. Indeed, there are good reasons to be skeptical about the results of student ratings of teaching used in the field, and several contributions to this volume address the topic of a valid measurement of student perception of teaching quality.
An important issue in this context is how well students are able to evaluate teaching practices. The referenced literature on the prognostic validity of student feedback measures points to the finding that student evaluations of teaching do indeed capture aspects of teaching quality which are relevant for students’ learning and development (e.g., Fauth et al., 2014; Praetorius et al., 2018; Wallace et al., 2016). As the analyses presented in this volume indicate, there is much which can be done to improve the measurement procedures and to increase the accuracy of student ratings. For example, Bijlsma et al. (Chap. 2 of this volume) point out that the underlying psychometric theory determines how the rater’s perception is conceptualized and captured. Göllner, Fauth, and Wagner (Chap. 7 of this volume) emphasize impressively that we have to be cautious and more aware of the way we ask students about their experiences in class. Different combinations of item referents (e.g., “I / We / The class understood the subject matter well”) and item addressees (e.g., “The teachers explained the subject matter clearly to me / the class”) are likely to induce different evaluation processes and different results, thus affecting the reliability and validity of the measurements. Accordingly, Schweig and Martínez (Chap. 6 of this volume) call for evaluating within-classroom variability of student experiences as an indicator of disparate instructional experiences and unequal participation opportunities of the students. The authors strongly argue that evaluating within-classroom variability should be considered a defining strength of the approach of using student-survey-based measures for the improvement of teaching.
One intensely discussed topic in the literature is the agreement or disagreement of the evaluations of students and observers (e.g., Clausen, 2002; Gitomer et al., 2014; Kuhfeld, 2017). van der Lans (Chap. 5 of this volume) provides important findings which may even have the potential to end this discussion. In his analyses, the results from students and observers converge when 25 students’ and seven different observers’ views are related to each other. Interestingly, the ordering of the item difficulties, or teaching competences, was very consistent across student and observer ratings, and could also be calibrated on the same continuum of instructional effectiveness (van der Lans et al., 2019). These findings indicate that the disagreement of students and observers often reported in the literature may be largely attributed to an insufficient number of observers.
It is nevertheless indisputable that the question has to be raised of how well students perceive different aspects of teaching quality, and how well they comprehend the corresponding items in the questionnaire they are processing. Unfortunately, little research has been done on these topics. Accordingly, Göllner and colleagues (Chap. 7 of this volume) call for studies on the students’ cognitive processing of survey items and its influence on their evaluation of teaching practices, while highlighting the necessity of age- and development-appropriate survey instruments. A closer look at the topic of how well students comprehend the items in a questionnaire would improve the survey instruments substantially and subsequently enhance the validity of the feedback information. Accordingly, Bijlsma et al. (under review) and Lenske (2016), for example, discussed the content of the items of their student perception questionnaires intensely with students to make sure that the items were understood and interpreted as intended.
In their review of the literature, both Göllner et al. (Chap. 7 of this volume) and Röhl and Rollett (Chap. 3 of this volume) raise the further questions of (1) whether students in school are actually able to distinguish between different teaching dimensions and (2) how reliable the ratings of different dimensions are. Students’ lack of ability to differentiate between different teaching dimensions would lead to empirically simpler factorial structures (see also Kuhfeld, 2017). Indeed, it is quite typical to see a two-factor structure, where a general factor covers all theoretically distinguishable teaching quality constructs with the exception of classroom management (e.g., Wallace et al., 2016). As Röhl and Rollett (Chap. 3 of this volume) demonstrate, students’ social perceptions of their teachers explain an important part of the common variance of different teaching quality dimensions in a second-order factor model. These results indicate that students’ evaluations of teaching quality might be influenced by their social perceptions of their teachers and so may lead to biased assessments. Their findings suggest emphasizing items which are less likely to be confounded by the students’ social perception of their teachers (e.g., by addressing individual experiences in a specific lesson) and counteracting the impact of how students socially perceive their teachers by administering suitable control scales. The literature, nevertheless, offers reliable information that the students’ assessments of teaching quality dimensions show characteristics of differential predictive validity, indicating the existence of a meaningful unique variance (e.g., Klieme & Rakoczy, 2003; Raudenbush & Jean, 2014; Yi & Lee, 2017).
In order to make use of student perceptions of teaching quality, the design of the survey instruments is crucial and critically determines the nature of the perceptions. In her informative systematic review, based on an extensive literature search, Bijlsma (Chap. 4 of this volume) analyzes the quality of 22 student perception questionnaires on teaching quality. Overall, most of the instruments were evaluated positively regarding their theoretical foundation, their design, and the information about their statistical quality. The review revealed, however, weaknesses concerning norm information, sampling specifications, and the availability of more detailed information on the features of the instruments (e.g., by providing a user manual). The analyses illustrate that more emphasis should be put on the quality of the presentation of the survey instruments to potential users.
As Röhl’s review of the research (Chap. 9 of this volume) shows, there is a substantial amount of evidence for the effects of student feedback on teachers’ behavior—e.g., initiating reflective thinking processes, learning about students’ perspectives, reviewing their goal setting, and changing their teaching practices accordingly. When provided with student ratings or feedback on their teaching, teachers also tend to engage more in communication with their classes on teaching practices and the changes which follow student feedback. An important pattern of results from a meta-analysis of intervention studies in schools presented by Röhl (ibid.) shows a mean weighted effect size of d = 0.21 for the impact of student feedback on students’ perceptions of the subsequent lessons. But, as an in-depth analysis showed, this effect size underestimates the potential of student feedback: A high level of support provided to the teachers when making use of feedback information yielded a significantly larger positive effect of d = 0.52. Medium or low levels of support, though, did not result in a better outcome. This pattern of findings, therefore, highlights how crucial an adequate level of support is for the effectiveness of student feedback measures.
3 Student and Teacher Characteristics Influencing the Feedback Process
As the research presented in this volume shows, student feedback on teaching can indeed provide a valuable basis for evaluating and improving teaching practices. It is not unusual for school students to welcome and value the opportunity to give teachers feedback regarding their teaching and to find their “student voice” recognized (Jones and Hall, Chap. 13 of this volume). Nevertheless, student ratings of teaching can be affected by a variety of student and class characteristics (Bijlsma et al., under review). For example, high-performing students rate their teachers’ teaching quality significantly higher than low- and middle-performing students. Students from socio-economically or educationally more privileged families tend to be more critical of teaching practices (Atlay et al., 2019). Male students seem to be more critical than female students (Kuhfeld, 2017). Moreover, differences in students’ language comprehension can affect whether items in the survey instrument are understood (Lenske, 2016). The perception or evaluation of teaching quality may differ by age or developmental stage (see Chap. 7 of this volume by Göllner et al.). Student ratings of teaching can also be influenced by certain individual teacher characteristics which are not systematically associated with differences in teaching performance—such as gender, age, or physical appearance; in more sophisticated evaluation contexts, procedures can be implemented to correct scores accordingly, but this is not typical. The expectation of erroneous results can be considered the most frequent reason why teachers are reluctant to use feedback. However, as Schweig and Martínez (Chap. 6 of this volume) conclude, “these biases are generally small in magnitude and do not greatly influence comparisons across teachers or student groups, or how aggregates relate with one another and with external variables.” But users should, of course, be aware of the biases which might occur, especially as minor differences may have severe consequences in evaluative contexts.
Ways to counteract these undesirable effects and prepare students to use the questionnaires appropriately—and thereby develop students’ feedback competences—are indeed advisable. Accordingly, Göbel et al. (Chap. 11 of this volume) call for training students to use the survey instruments adequately. Unfortunately, the authors of these survey instruments frequently do not provide the users with clear guidelines for implementing the instruments (Bijlsma, Chap. 4 of this volume).
Individual characteristics of teachers influence whether and how effectively teachers use student feedback measures to improve their teaching and teaching skills, as Röhl and Gärtner (Chap. 10 of this volume) document in their literature review. In their discussion, the authors particularly emphasize teachers’ attitudes toward students as feedback providers (e.g., regarding their trustworthiness or competence) and whether teachers perceive the function of the feedback as an opportunity to develop their teaching. The effectiveness of student feedback is also influenced by the teachers’ attitudes toward the measuring process of feedback. In general, teachers tend to show a positive attitude toward formative forms of student feedback on teaching (e.g., Göbel et al., Chap. 11 of this volume). But it is not uncommon that accuracy and trustworthiness are questioned, especially when it comes to feedback from younger students. Pre-service teachers, on the other hand, tend to be more positive when considering using student feedback on teaching than in-service teachers, as the findings of Göbel et al. (ibid.) indicate. In their insightful investigation, they demonstrate that experiences with student feedback can have a further positive impact on pre-service teachers’ attitudes concerning student feedback in general and on their willingness to reflect on and modify teaching practices. These results thereby illustrate the potential of a widespread implementation of student feedback on teaching within the practical parts of teacher education.
As the results in this volume show, the ways in which teachers perceive, process, and make use of the feedback information determine its impact on their teaching. Accordingly, the SFT Model (see Chap. 1 of this volume by Röhl et al.) puts an emphasis on teachers’ processing and handling of feedback information. At present, research on the ways in which teachers perceive and interpret feedback information affectively and/or cognitively, how this influences them, and how they deal with it is still rather scarce. But as the investigations of Röhl and Rollett (2021) show, teachers can differ very much in why and how they make use of student feedback on teaching. Their analysis evinced four paths of utilization: (1) Direct Formative Use (identifying aspects to improve, setting goals, evaluating target achievement); (2) Direct Communicative Use (discussing the results and looking for improvements in class); (3) Indirect Use (enabling a positive emotional experience, gathering of information); and (4) Symbolic Process Orientated Use (signaling a democratic or student-orientated attitude and an openness to criticism in classes). These results point to the importance of paying more attention to the goals individual teachers pursue when they ask their students for feedback on their teaching.
4 Organizational Context of the Evaluation and the Presentation of Feedback Information to Stakeholders
The conditions of the organizational context in schools can vary largely, and these differences can influence the effectiveness of student feedback (see Chap. 10 by Röhl and Gärtner; Chap. 11 by Göbel et al.; and Chap. 8 by Wisniewski and Zierer in this volume). The relevance of organizational characteristics for the effects of feedback is also evident from research on multisource feedback in business enterprises (Chap. 14 of this volume by Fleenor). The school setting can provide resources which strongly support teachers within the student feedback cycle, thus fostering its effectiveness (see Chap. 9 of this volume by Röhl). Schools may offer team structures which intensely accompany the process of reflecting on the feedback as well as the subsequent professional development and changes in teaching practices. Furthermore, it depends very much on the learning culture within the organization whether the feedback is considered as an opportunity to learn and whether sustainable support is provided to act on the results. Correspondingly, Röhl and Gärtner (Chap. 10 of this volume) highlight—in their literature review on the conditions of effectiveness of feedback—the importance of a positive feedback culture, organizational safety, and a focus on the professional development of teachers in contrast to a focus on control. In their approach, the feedback culture is considered as a crucial moderator for the effectiveness of feedback measures. Accordingly, the school management and leadership have a special responsibility for the success of student feedback measures by ensuring a safe learning environment and shaping a positive feedback culture within schools. Elstad and colleagues (2015, 2017) report a higher appreciation of the results of student feedback on teaching when a developmental purpose is perceived by the teachers, whereas perceiving a control purpose is linked to a rejecting attitude to the feedback measures and a lower recognition of the feedback information. 
Furthermore, other findings indicate that the effects of student feedback on teaching practices differ depending on whether teachers are intrinsically or extrinsically motivated to engage in using it, but that both motivational paths are related to positive changes in the classroom (Gärtner, 2014).
How much the organizational context matters is well illustrated by a project described by van der Lans (Chap. 5 of this volume). It followed a very sophisticated data-driven procedure: (1) determining reliable diagnostic results of individual teaching practices (from ineffective to effective); (2) allocating teachers on an empirically validated continuum of teaching effectiveness; (3) identifying the most effective development measures; and (4) tailoring the feedback procedure accordingly. Taken together, these measures provide a viable basis for the teacher’s further education and professional development. This aligns with the research field of data-based decision-making, which suggests that data (in this case student feedback data) can help improve teaching and further outcomes for students (Poortman & Schildkamp, 2016; Schildkamp, 2019; van Geel et al., 2016).
A critical issue for the effectiveness of student feedback is how it is presented to the teachers. Problems concerning accuracy and comprehensibility of feedback have been addressed for a long time (e.g., Frase & Streshly, 1994). Thereby, the designing of feedback and of support measures has to take into account the level of data literacy of teachers in order to overcome the typical struggles of making use of the data (e.g., Kippers et al., 2018). One way to reduce the complexity of the gathered student data is by condensing the information into a smaller number of performance levels, which makes it considerably easier to communicate individual strengths and weaknesses. However, this requires an adequate level of support to prepare the data accordingly. One should, nevertheless, be careful not to disregard the potentially meaningful variance of the student ratings within classes (see Chap. 6 of this volume by Schweig and Martínez).
Another important prerequisite for the effectiveness of student feedback is that the communication within the feedback cycle is performed in an appreciative and constructive manner. Yet the ability to formulate and provide feedback as well as the ability to receive and respond to feedback can vary considerably. These differences can significantly influence the cognitive and emotional dynamics within the feedback process and its effectiveness. Effective feedback can be characterized as task orientated, specific, clear, development orientated, and distinct in its implications for action (Cannon & Witherspoon, 2005). Röhl and Gärtner (Chap. 10 of this volume) discuss how characteristics of the feedback may influence its effectiveness in terms of the information format (e.g., means, boxplots), the timing of the feedback, its specificity, valence, and positivity. Unfortunately, only a few studies in the field of student feedback on teaching address these issues.
5 Concluding Remarks
The present volume is the first to provide a comprehensive overview of the current state of the research on student perceptions of and student feedback on teaching in schools. Its aim was to coherently present to a wider audience the extensive and important international research which has been done on using student perceptions of teaching for improving teaching practices. The authors contributing to this volume agree in granting student feedback a high potential for the improvement of teaching in schools. The empirical evidence for this claim, which is addressed across the chapters of this book, is impressive. If set up professionally, the implementation of student feedback on teaching can indeed be a very effective way to improve teaching quality.
But there are, of course, requirements which have to be met to achieve these positive results. On the one hand, a high quality of the survey instruments and the accompanying material provided by their authors is indispensable. On the other hand, high-quality support within the school setting is needed for gathering and evaluating the data, interpreting and reflecting on the information, and putting the results effectively into action. Accordingly, it has been shown that the availability of an adequate level of support is an important moderating variable concerning the effectiveness of student feedback on teaching (Röhl, Chap. 9; and Göbel et al., Chap. 11 in this volume).
In order to make better use of the potential of student feedback on teaching, different paths should be followed: Authors of student perception questionnaires should put more emphasis on providing users with sound and easily accessible information (concerning, e.g., the theoretical basis, measurement quality, reference norms, and guidelines for implementing and working with the instruments). Practitioners should be encouraged to use student feedback by establishing sustainable support structures within schools, which include powerful technical solutions for implementing, evaluating, and acting on student feedback. Researchers should intensify investigations on teachers’ ways of processing feedback information and on how a professional and ongoing implementation of student feedback in schools affects the longitudinal development of teachers, teaching, and, last but not least, students.
In several respects, the findings presented in this volume indicate that the summative use of student feedback for teacher accountability to supervisors is hardly appropriate. When teachers perceive a control function of the feedback, they tend to be more resistant to its developmental use (see Chap. 10 of this volume by Röhl and Gärtner). Further, student perceptions of teaching quality are subject to many idiosyncrasies or biasing factors, requiring highly expert and cautious interpretation (see Chap. 3 by Röhl and Rollett, and Chap. 7 by Göllner et al. in this volume). Therefore, the use of student feedback for accountability purposes should be avoided in order to prevent damaging its developmental potential.
A promising development to support the capturing, evaluating, and scrutinizing of student feedback data are online or smartphone-based survey instruments (e.g., the Impact! tool, Bijlsma et al., 2019; FeedbackSchule, Wisniewski et al., 2020). If set up accordingly, they can provide users with easily accessible, differentiated feedback information on the individual, group, and class level, or might even provide scores corrected for known biases. Digital solutions can be an excellent way to gather and evaluate student feedback on teaching, thus significantly reducing resource needs. Of course, they cannot in any way substitute for the professional reflection of individual teachers on the feedback results within collegial settings, but they can help schools substantially in creating the informational basis for these processes and help them to make better use of the information and their typically limited time and staff resources.
Another very inspiring perspective for the future of student feedback is provided by Schmidt and Gawrilow (see Chap. 12 of this volume) when advocating the implementation of measures for systematic reciprocal feedback between students and teachers, thereby addressing teachers and students as cooperative partners. The potential of combining approaches of feedback from students to teachers with those of feedback from teachers to students provides an important outlook for developments to come.
An important point arising from the research overview presented in this volume is the question of why some countries and regions seem to be more reluctant than others to use student feedback to improve teaching and professionalize teachers (see e.g. Chap. 4 by Bijlsma, and Chap. 9 by Röhl). Here, cultural aspects like the roles of teachers and students, but also characteristics of the school systems, could provide an explanatory framework. These issues should be elaborated on in further research in order to develop appropriate and effective forms of making use of student feedback on teaching for these cultural contexts and countries.
As outlined throughout this book, student feedback on teaching is a highly beneficial and—from our point of view indispensable—way to improve teaching practices. Based on the extensive body of research on the benefit and effectiveness of student feedback on teaching presented in this volume, the authors hope to contribute to a wide and systematic use of student feedback in schools to sustainably improve teaching quality and the learning experiences of students.
References
American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. (2014). Standards for educational and psychological testing. American Educational Research Association. https://doi.org/10.1037/e577932014-003.
Atlay, C., Tieben, N., Fauth, B., & Hillmert, S. (2019). The role of socioeconomic background and prior achievement for students’ perception of teacher support. British Journal of Sociology of Education,40(7), 970–991. https://doi.org/10.1080/01425692.2019.1642737.
Bijlsma, H. J. E., Glas, C. A. W., & Visscher, A. J. (under review). The factors influencing digitally measured student perceptions of teaching quality. Paper presented at the EARLI conference in Aachen.
Bijlsma, H. J. E., Visscher, A. J., Dobbelaer, M. J., & Veldkamp, B. P. (2019). Does smartphone-assisted student feedback affect teachers’ teaching quality? Technology, Pedagogy and Education,28(2), 217–236. https://doi.org/10.1080/1475939x.2019.1572534.
Cannon, M. D., & Witherspoon, R. (2005). Actionable feedback: Unlocking the power of learning and performance improvement. The Academy of Management Executive,19(2), 120–134. https://doi.org/10.5465/ame.2005.16965107.
Clausen, M. (2002). Unterrichtsqualität: eine Frage der Perspektive? Empirische Analysen zur Übereinstimmung, Konstrukt- und Kriteriumsvalidität [Teaching quality: A matter of perspective? Empirical analyses of agreement, construct and criterion validity]. Waxmann. https://doi.org/10.1080/0267152980130104.
Elstad, E., Lejonberg, E., & Christophersen, K.-A. (2015). Teaching evaluation as a contested practice: Teacher resistance to teaching evaluation schemes in Norway. Education Inquiry,6, 375–399. https://doi.org/10.3402/edui.v6.27850.
Elstad, E., Lejonberg, E., & Christophersen, K.-A. (2017). Student evaluation of high-school teaching: Which factors are associated with teachers’ perception of the usefulness of being evaluated? Journal for Educational Research Online,9(1), 99–117.
Fauth, B., Decristan, J., Rieser, S., Klieme, E., & Büttner, G. (2014). Student ratings of teaching quality in primary school: Dimensions and prediction of student outcomes. Learning and Instruction,29, 1–9. https://doi.org/10.1016/j.learninstruc.2013.07.001.
Frase, L. E., & Streshly, W. (1994). Lack of accuracy, feedback, and commitment in teacher evaluation. Journal of Personnel Evaluation in Education,8(1), 47–57. https://doi.org/10.1007/bf00972709.
Gärtner, H. (2014). Effects of student feedback as a method of self-evaluating the quality of teaching. Studies in Educational Evaluation,42, 91–99. https://doi.org/10.1016/j.stueduc.2014.04.003.
Gitomer, D. H., Bell, C. A., Qi, Y., McAffrey, D., Hamre, B. K., & Pianta, R. C. (2014). The instructional challenge in improving teaching quality: Lessons from a classroom observation protocol. Teachers College Record,116(6), 1–32.
Hattie, J. A. C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge. https://doi.org/10.1007/s11159-011-9198-8.
Kane, M. T. (2012). Validating score interpretations and uses. Language Testing,29, 3–17. https://doi.org/10.1177/0265532211417210.
Kippers, W. B., Poortman, C. L., Schildkamp, K., & Visscher, A. J. (2018). Data literacy: What do educators learn and struggle with during a data use intervention? Studies in Educational Evaluation,56, 21–31. https://doi.org/10.1016/j.stueduc.2017.11.001.
Klieme, E., & Rakoczy, K. (2003). Unterrichtsqualität aus Schülerperspektive: Kulturspezifische Profile, regionale Unterschiede und Zusammenhänge mit Effekten von Unterricht. In Deutsches PISA-Konsortium & J. Baumert (Eds.), PISA 2000—Ein differenzierter Blick auf die Länder der Bundesrepublik Deutschland (pp. 333–359). Springer. https://doi.org/10.1007/978-3-322-97590-4_12.
Kuhfeld, M. R. (2017). When students grade their teachers: A validity analysis of the Tripod student survey. Educational Assessment,22, 253–274. https://doi.org/10.1080/10627197.2017.1381555.
Lenske, G. (2016). Schülerfeedback in der Grundschule: Untersuchungen zur Validität [Student feedback in primary schools: Studies of validity]. Münster: Waxmann.
Poortman, C. L., & Schildkamp, K. (2016). Solving student achievement problems with a data use intervention for teachers. Teaching and Teacher Education,60, 425–433. https://doi.org/10.1016/j.tate.2016.06.010.
Praetorius, A.-K., Klieme, E., Herbert, B., & Pinger, P. (2018). Generic dimensions of teaching quality: The German framework of Three Basic Dimensions. ZDM Mathematics Education,50, 407–426. https://doi.org/10.1007/s11858-018-0918-4.
Raudenbush, S. W., & Jean, M. (2014). To what extend do student perceptions of classroom quality predict teacher value added? In T. J. Kane, K. A. Kerr, & R. C. Pianta (Eds.), Designing teacher evaluation systems: New guidance from the measures of effective teaching project (1st ed., pp. 170–202). Jossey-Bass. https://doi.org/10.1002/9781119210856.ch6.
Röhl, S., & Rollett, W. (2021). Jenseits von Unterrichtsentwicklung: Intendierte und nicht-intendierte Nutzungsformen von Schülerfeedback durch Lehrpersonen [Beyond teaching development: Teachers’ intended and unintended ways of student feedback use]. In K. Göbel, C. Wyss, K. Neuber, & M. Raaflaub (Eds.), Quo vadis Forschung zu Schülerrückmeldungen? Springer VS. https://doi.org/10.1007/978-3-658-32694-4.
Schildkamp, K. (2019). Data-based decision-making for school improvement: Research insights and gaps. Educational Research, 1–17. https://doi.org/10.1080/00131881.2019.1625716.
van der lans, R. M., van de Grift, W. J. C. M., & van Veen, K. (2019). Same, similar, or something completely different? Calibrating student surveys and classroom observations of teaching quality onto a common metric. Educational Measurement: Issues and Practice,38, 55–64. https://doi.org/10.1111/emip.12267.
van Geel, M., Keuning, T., Visscher, A. J., & Fox, J. P. (2016). Assessing the effects of a school-wide data-based decision-making intervention on student achievement growth in primary schools. American Educational Research Journal,53(2), 360–394. https://doi.org/10.3102/0002831216637346.
Wallace, T. L., Kelcey, B., & Ruzek, E. A. (2016). What can student perception surveys tell us about teaching? Empirically testing the underlying structure of the Tripod student perception survey. American Educational Research Journal,53, 1834–1868. https://doi.org/10.3102/0002831216671864.
Wisniewski, B., Zierer, K., Dresel, M., & Daumiller, M. H. (2020). Obtaining students’ perceptions of instructional quality: Two-level structure and measurement invariance. Learning and Instruction, 66. https://doi.org/10.1016/j.learninstruc.2020.101303.
Yi, H. S., & Lee, Y. (2017). A latent profile analysis and structural equation modeling of the instructional quality of mathematics classrooms based on the PISA 2012 results of Korea and Singapore. Asia Pacific Education Review,18, 23–39. https://doi.org/10.1007/s12564-016-9455-4.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2021 The Author(s)
About this chapter
Cite this chapter
Rollett, W., Bijlsma, H., Röhl, S. (2021). Student Feedback on Teaching in Schools: Current State of Research and Future Perspectives. In: Rollett, W., Bijlsma, H., Röhl, S. (eds) Student Feedback on Teaching in Schools. Springer, Cham. https://doi.org/10.1007/978-3-030-75150-0_16
DOI: https://doi.org/10.1007/978-3-030-75150-0_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-75149-4
Online ISBN: 978-3-030-75150-0