
Long-term effects of the implementation of state-wide exit exams: a multilevel regression analysis of mediation effects of teaching practices on students’ motivational orientations

  • Katharina Maag Merki
  • Britta Oerke

Abstract

This study extends previous research on the effects of state-wide exit exams by examining the change from a class-based to a state-wide exit exam system over 5 years, using multilevel analyses and testing whether teachers’ practices mediate effects on students’ motivational orientations. In this multi-cohort study, we analyzed in particular the effects on students’ interest, scholastic self-efficacy, and persistence in advanced-level English courses (N = 1835) and mathematics courses (N = 1336) in two German states (28 schools). Descriptive analyses, multivariate hierarchical regression analyses, and difference-in-differences analyses were carried out. The results revealed long-term effects of the implementation of state-wide exit exams, particularly in the advanced-level English courses, where a close relationship can be identified between the change in all analyzed motivational orientations and the teacher support perceived by students. These results point to the ambivalent effects of state-wide exit exams: because teachers’ competence support increased, students’ interest was enhanced in the long term; had competence support not increased over time, however, scholastic self-efficacy and persistence might have been negatively affected. In the advanced-level mathematics courses, the results are mixed. Implications for further research are discussed.
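The analytic logic summarized above (exam reform → teachers’ practices → students’ motivational orientations, with students nested in schools) can be illustrated with a minimal multilevel sketch. The Python example below is an assumption-laden illustration, not the authors’ analysis: it assumes a hypothetical long-format student file with illustrative column names (interest, teacher_support, cohort, exam_state, school) and shows a Baron-and-Kenny style mediation check of a difference-in-differences term using mixed-effects regressions in statsmodels.

```python
# Hedged sketch of a multilevel mediation check with a difference-in-differences
# term. All column names are hypothetical placeholders, not the study's variables.
import pandas as pd
import statsmodels.formula.api as smf


def did_mediation(df: pd.DataFrame, outcome: str, mediator: str) -> None:
    """Compare the cohort x exam_state (DiD) effect with and without the mediator."""
    # Step 1: total effect of the reform (DiD term), students nested in schools.
    total = smf.mixedlm(
        f"{outcome} ~ cohort * exam_state", data=df, groups=df["school"]
    ).fit()

    # Step 2: does the reform predict the mediator (e.g., perceived teacher support)?
    a_path = smf.mixedlm(
        f"{mediator} ~ cohort * exam_state", data=df, groups=df["school"]
    ).fit()

    # Step 3: direct effect once the mediator is controlled; a DiD coefficient that
    # shrinks relative to step 1 is consistent with (partial) mediation.
    direct = smf.mixedlm(
        f"{outcome} ~ cohort * exam_state + {mediator}", data=df, groups=df["school"]
    ).fit()

    for label, res in [("total", total), ("a-path", a_path), ("direct", direct)]:
        print(label, res.params.filter(like="cohort"), sep="\n")


# Example call with a hypothetical long-format student file (one row per student):
# did_mediation(pd.read_csv("students.csv"), outcome="interest", mediator="teacher_support")
```

Comparing the cohort × exam_state coefficient across the first and third models indicates whether the reform effect is attenuated once perceived teacher support is controlled, which is the pattern a mediation argument of this kind requires.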

Keywords

State-wide exit exams · Multilevel regression analyses · Interest · Scholastic self-efficacy · Persistence

Notes

Acknowledgments

We would like to express our great appreciation to Prof. Dr. Eckhard Klieme, Dr. Monika Holmeier, Dr. Daniela J. Jäger, and Elisabeth Maué for their substantial contributions to the whole project.

Compliance with ethical standards

Disclosure of potential conflicts of interest

This work was supported by the German Research Foundation [MA 4184/3-1 and KL 1057/12-1] and by the two German states of Bremen and Hesse. Research independence was contractually agreed upon. Accordingly, we have no conflict of interest.

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Institute of Education, University of Zurich, Zürich, Switzerland
  2. Institute of Education, University of Bielefeld, Bielefeld, Germany
