
ZDM, Volume 49, Issue 3, pp 475–489

Training effects on teachers’ feedback practice: the mediating function of feedback knowledge and the moderating role of self-efficacy

  • Birgit Schütze
  • Katrin Rakoczy
  • Eckhard Klieme
  • Michael Besser
  • Dominik Leiss
Original Article

Abstract

Formative assessment has been identified as a promising intervention to support students’ learning. How to implement this means of assessment successfully, however, is still an open issue. This study contributes to the implementation of formative assessment by analyzing the impact of a training measure on teachers’ formative feedback practice, with a special focus on mediating and moderating variables. The research questions are: (1) Is there an indirect training effect on teachers’ instructional feedback practice via (a) teachers’ declarative feedback knowledge and (b) the ability to generate feedback in a test situation? (2) Is this indirect effect moderated by teachers’ self-efficacy? A total of 67 secondary education mathematics teachers participated in the study, taking part in professional development either on formative assessment and feedback (PD-FA) or on mathematical modelling and problem solving (PD-PM). Training was provided in two sessions (T1 and T2, each lasting 3 days), with 10 weeks between them. Teachers’ self-efficacy regarding feedback was measured with a questionnaire before T1. Declarative feedback knowledge and the ability to apply this knowledge were tested after T2. Teachers’ instructional feedback practice was assessed with a student questionnaire (before T1 and 4–6 weeks after T2). Path analyses show that (1) there is no indirect training effect (PD-FA vs. PD-PM) on the development of teachers’ feedback practice in mathematics instruction, but there is an indirect effect on the ability to generate feedback in a test situation via teachers’ declarative feedback knowledge: teachers participating in PD-FA show a higher level of declarative feedback knowledge than teachers in the PD-PM condition, and declarative feedback knowledge in turn is positively related to the ability to generate feedback in a test situation. (2) This indirect effect is moderated by teachers’ self-efficacy: teachers with a high level of self-efficacy are better able to use their knowledge to generate feedback in a test situation than teachers with a low level of self-efficacy.
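The analysis described above is a moderated mediation: the training condition predicts the mediator (declarative feedback knowledge), and the mediator's effect on the outcome (ability to generate feedback) varies with the moderator (self-efficacy). The study itself used path analysis in Mplus; the following is only an illustrative sketch of how such a conditional indirect effect can be estimated with a percentile bootstrap. All data here are simulated and all variable names and effect sizes are hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 67  # sample size matching the study's 67 teachers (illustrative only)

# Hypothetical simulated data: treatment (PD-FA = 1 vs. PD-PM = 0),
# mediator (declarative feedback knowledge), moderator (self-efficacy),
# and outcome (ability to generate feedback in a test situation).
treat = rng.integers(0, 2, n).astype(float)
selfeff = rng.normal(0.0, 1.0, n)
knowledge = 0.5 * treat + rng.normal(0.0, 1.0, n)                 # a-path
ability = (0.4 * knowledge + 0.3 * knowledge * selfeff            # b-path,
           + rng.normal(0.0, 1.0, n))                             # moderated

def ols(y, predictors):
    """Least-squares coefficients; an intercept column is prepended."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect_effect(idx, w):
    """Conditional indirect effect a * (b1 + b3 * w) on resample idx."""
    a = ols(knowledge[idx], [treat[idx]])[1]
    b = ols(ability[idx],
            [knowledge[idx], selfeff[idx], knowledge[idx] * selfeff[idx]])
    return a * (b[1] + b[3] * w)  # b[3] is the interaction coefficient

# Percentile-bootstrap CIs for the indirect effect at -1 SD and +1 SD
# of the moderator, mirroring the "low vs. high self-efficacy" contrast.
boot = np.array([
    [indirect_effect(rng.integers(0, n, n), w) for w in (-1.0, 1.0)]
    for _ in range(2000)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)
print("indirect effect 95% CI at -1 SD self-efficacy:", ci_low[0], ci_high[0])
print("indirect effect 95% CI at +1 SD self-efficacy:", ci_low[1], ci_high[1])
```

If the bootstrap interval at +1 SD excludes zero while the interval at -1 SD does not, that pattern would correspond to the moderation result reported in the abstract: the knowledge-mediated effect operates mainly for teachers high in self-efficacy.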

Keywords

Professional development · Formative assessment · Teacher knowledge · Mathematics instruction · Self-efficacy

Notes

Acknowledgements

The present study is embedded in the project “Conditions and Consequences of Classroom Assessment” (Co2CA), which is collaboratively conducted by researchers at the German Institute for International Educational Research, University of Kassel, and Leuphana University of Lüneburg. The preparation of this paper was supported by grants from the German Research Foundation (DFG, KL1057/10-3, BL2751/17-3, LE2619/1-3) in the Priority Program “Models of Competencies for Assessment of Individual Learning Outcomes and the Evaluation of Educational Processes” (SPP 1293).


Copyright information

© FIZ Karlsruhe 2017

Authors and Affiliations

  1. Institute of Psychology in Education, University of Münster, Münster, Germany
  2. Department of Educational Quality and Evaluation, German Institute for International Educational Research, Frankfurt, Germany
  3. Institute for Didactics of Mathematics, Leuphana University of Lüneburg, Lüneburg, Germany
