Introduction

Student teachers enter classrooms with high expectations for themselves and for their students. Yet, we know that the first year of teaching is a sobering experience for most and that over the course of the year student teachers often feel their belief in their own capacities to successfully accomplish the teaching task diminish (Hoy & Woolfolk, 1990). Veenman (1984, p. 143) described this phenomenon as “a collapse of the missionary ideals formed by the harsh and rude reality of everyday classroom life.”

Teacher efficacy is defined as “the teacher’s belief in his or her ability to organize and execute the courses of action required to successfully accomplish a specific teaching task in a particular context” (Tschannen-Moran, Woolfolk Hoy, & Hoy, 1998, p. 233). Research shows that teachers’ efficacy for teaching is related to a number of important aspects of their professional lives. First, teachers’ efficacy beliefs are related to the effort they invest in teaching, the goals they set, their persistence when things do not go smoothly, and their resilience in the face of setbacks (Tschannen-Moran et al., 1998). In addition, practicing teachers with higher teacher efficacy are more committed to and satisfied with their profession, experience less stress, are more resourceful, are less likely to experience burnout, and are rated as more competent by their supervisors (Brouwers & Tomic, 2000; Caprara, Barbaranelli, Borgogni, & Steca, 2003; Coladarci & Breton, 1997; Egyed & Short, 2006; Pruski et al., 2013). Teachers’ efficacy beliefs have also been related to student outcomes such as students’ self-efficacy beliefs, and student engagement, motivation, and achievements (Tschannen-Moran & McMaster, 2009).

Attention to the factors that support the development of a strong sense of efficacy for teaching is worthwhile because, once established, the efficacy beliefs of teachers are difficult to change (Tschannen-Moran et al., 1998). For student teachers, mastery experiences are the most important source for building positive teacher efficacy beliefs (Mulholland & Wallace, 2001). The perception that teaching has been successful raises efficacy expectations, i.e., the expectation that their teaching will also be proficient in classes that they consider difficult. The problem is that mastery experiences cannot be given to student teachers: They have to be created by the student teachers themselves. Creating a mastery experience, particularly in classes experienced as difficult, starts with student teachers’ productive understanding of experiences that they consider to be problematic. The perception that their teaching has failed lowers teachers’ efficacy beliefs, contributing to the expectation that future performances will also fall short, unless the failure is viewed as providing clues about potentially more successful strategies. However, Janssen, de Hullu, and Tigelaar (2009) showed that it is difficult for student science teachers to develop productive resolutions to their problems based on reflection on their experiences.

In the study reported in this article, we investigated whether student science teachers’ efficacy for teaching in classes they consider difficult can be enhanced by providing them with an attribution support tool that allows them to explain failure more productively, so that their lessons will improve and they can develop mastery experiences.

Why is Teacher Efficacy Important?

Bandura (1977) introduced the idea of self-efficacy as an important factor in human motivation and performance. He defined perceived self-efficacy as “beliefs in one’s capabilities to organize and execute the courses of action required to produce given attainments” (Bandura, 1997, p. 3). In educational contexts, teachers with higher teacher efficacy choose to perform more challenging tasks, set themselves higher goals, and stick to them; they invest more effort and persist longer in difficult tasks. When they experience failure, they recover more quickly and remain committed to their goals (Schwarzer & Hallum, 2008).

The major influences on teacher efficacy are assumed to be interpretations of the four sources of efficacy information described by Bandura (1997): mastery experience, physiological arousal, vicarious experience, and verbal persuasion. The most influential source of efficacy information is personal mastery experiences, because these provide the most authentic evidence of whether one can master whatever it takes to succeed in a particular field of endeavor (Bandura, 1997). This also applies to teaching. When student teachers in a classroom achieve what they want to achieve, they experience this as a mastery experience, which contributes positively to the development of their teacher efficacy. Increased teacher efficacy contributes in turn to the setting of higher objectives and active realization of these objectives; therefore, it indirectly contributes to the development of student teachers’ performance in the classroom. As student teachers are still learning to teach, the reverse is also common, especially in classes that the student teachers consider difficult. Student teachers often find that the situation in the classroom is not as they desire, and this can lead to reduced teacher efficacy. This in turn can lead to the lowering of objectives, or avoidance of difficult situations, which may influence the development of student performance negatively. This in turn reduces the possibility of gaining mastery experiences, which may have negative effects on teacher efficacy and may ultimately result in feelings of helplessness.

The question now is how this downward spiral can be prevented or reversed by student teachers, especially in classes that they find difficult.

To increase teacher efficacy, mastery experiences are necessary. However, these cannot be given by teacher educators: They have to be created by student teachers themselves. Nevertheless, we can support student teachers by helping them better understand the problematic experience, and enabling them to formulate productive objectives that can lead to mastery experiences, which in turn can lead to a higher sense of teacher efficacy.

Toward Productive Attributions of Problematic Experiences in Difficult Classrooms

First, student teachers should regain enough confidence to be convinced that they can alter the situation in a difficult class. This depends on how they explain the causes of their failures.

Research on student teachers’ reflection has repeatedly demonstrated that it is very difficult for student teachers to reflect productively on problematic experiences. Productive reflection means that when student teachers reflect on a problematic situation, the resolution they formulate should address that specific problem (Janssen, de Hullu, & Tigelaar, 2008; Lee, 2005; Mansvelder-Longayroux, Beijaard, & Verloop, 2007). Student teachers’ reflection processes often remain limited to the descriptive level (Mena-Marcos, Garcia-Rodriguez, & Tillema, 2013). The consequence is that attributions made after problematic teaching experiences are not sufficiently explanatory and do not help student teachers find productive resolutions. Moreover, their resolutions are often phrased too narrowly or too generally (Janssen et al., 2008). In recent decades, various reflection models have been developed to stimulate reflection by student teachers (Beauchamp, 2015; Hatton & Smith, 1995; Jay & Johnson, 2002; Zeichner & Noffke, 2001). The disadvantage of these models is that they are too generic and do not give guidance on how to deal with problematic experiences (Janssen et al., 2009; Lee, 2005).

We therefore developed a tool that helps student teachers develop productive attributions about situations they experience as problematic. This tool is based on the work of Smedslund (1988, 1997), who, in his psycho-logic (Smedslund, 1988), analyzed and listed the necessary and sufficient conditions that can be used for both explaining and predicting human action. Smedslund’s model is attractive because it integrates theories on several conditions for action (can, try, and trust) in a parsimonious and conceptually coherent way [see Smedslund (2000) for a comparison of his psycho-logic with major existing theoretical frameworks in cognitive and social psychology]. Smedslund’s analysis can be seen as a refinement of the analysis of the conditions of human action by the “grandfather” of attribution research (Heider, 1958).

Smedslund shows that all personal activities can be subsumed under two broad categories: cognizing and doing. In cognizing the goal is to gather information about the situation; in doing the goal is to make a change in the situation. Almost all activities involve components of both cognizing and doing. Smedslund argues that from a psycho-logic point of view cognizing and doing have the same intentionality and can be treated together. Therefore, the necessary and sufficient conditions apply to both cognizing and doing. An act is executed only when the person is not only able to do it but also tries to do it. A person can do an act only when his or her ability is higher than the difficulty of the act. A person tries to do an act if, and only if, this act has the highest expected utility. Expected utility is defined as the product of the expected outcome value of the act for the person and the expected likelihood for the person that the act will in fact lead to that outcome. Many theories of motivation are based on similar value and expectancy constructs (Wigfield & Eccles, 2000).
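
For readers who find a formal summary helpful, these conditions can be paraphrased compactly as follows. The notation is ours, not Smedslund’s own formalization, and simply restates the text above: Value(A) stands for the expected outcome value of act A for the person, and Expectancy(A) for the person’s expected likelihood that A will in fact lead to that outcome.

```latex
% A compact paraphrase of the conditions stated above (notation ours, not Smedslund's):
% person P does act A iff P both can do A and tries to do A.
\begin{align*}
  \mathrm{Do}(P,A)  &\iff \mathrm{Can}(P,A) \wedge \mathrm{Try}(P,A) \\
  \mathrm{Can}(P,A) &\iff \mathrm{Ability}(P) > \mathrm{Difficulty}(A) \\
  \mathrm{Try}(P,A) &\iff \mathrm{EU}(A) = \max_{A'} \mathrm{EU}(A'),
  \quad \text{with } \mathrm{EU}(A') = \mathrm{Value}(A') \times \mathrm{Expectancy}(A')
\end{align*}
```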

Smedslund acknowledges that this analysis is incomplete because it lacks an explicit recognition of the fact that people are social beings; they develop and live in constant interaction with others. Hence, Smedslund added the psychology of interpersonal processes to his analysis of the conditions of acting. The basic concept in this respect is trust. You will trust other people if you believe they will not harm you. According to Smedslund, the necessary and sufficient conditions of trust are understanding, care, and self-control. If you think that another person understands you, cares for you, and has self-control, you trust that person. Smedslund’s analysis of acting and trust points to important factors a teacher should take into account for the processes of productive attributing as well as predicting student actions.

Janssen, De Boer, Dam, Westbroek, and Wieringa (2013) developed an attribution tool based on Smedslund’s analysis of the individual and social conditions for action. In this tool, the conditions of action and trust are reformulated as questions for teachers to answer in order to analyze their lesson experiences. The current study was designed to explore, through an intervention, how the teacher efficacy of student biology teachers with problematic experiences in classes they found difficult could be improved by shifting their self-formulated attributions from external, stable, and uncontrollable toward more internal, unstable, and controllable attributions. We used the so-called tool-based reflection cycle: a series of reflections on five lessons given by the student teachers in which the attribution tool was used. We hoped this tool-based reflection cycle would result in self-formulated attributions and resolutions, enabling student teachers to achieve mastery experiences in difficult classes and increase their sense of teacher efficacy.

The following research questions were formulated:

  • Does the use of the tool-based reflection cycle by student teachers when analyzing failures or successes after lessons lead to a higher sense of teacher efficacy with respect to situations experienced as problematic in difficult classes?

  • Does the use of the tool-based reflection cycle by student teachers lead to more mastery experiences?

  • How do the student teachers value the whole procedure of using the tool-based reflection cycle and setting resolutions to problems in the difficult classes?

We predicted that using the tool-based reflection cycle during the analysis of failures or successes would lead to a higher sense of teacher efficacy among student teachers. We also expected that the use of the tool-based reflection cycle would lead to more mastery experiences by making the student teachers look more closely at their lessons and, therefore, produce better self-formulated attributions. We expected that this would enable them to set achievable resolutions and goals, resulting in mastery experiences. Finally, we predicted that the student teachers would evaluate the whole procedure positively. If the feelings of helplessness could be broken through, goals would be achieved and mastery experiences gained, so that the student teachers would feel better about teaching their difficult classes.

Method

We conducted an explorative study to determine whether the attribution tool helped the student teachers make more productive attributions over a period of time, and so affected their teacher efficacy in a positive way.

Participants

The participants in our study were a group of nine student teachers of biology. Shortly before entering a one-year postgraduate teacher education program, all participants had obtained a master’s degree in biology. Five of the student teachers were female, four were male. None had previous teaching experience. All of the student teachers followed the institutional program at Leiden University. They attended sessions at the institute on Mondays, at which they reported and reflected on their teaching experiences and discussed their findings with each other. The rest of the week they were at their practice schools, teaching about five to ten lessons per week. The participants were in the second part of the program; they had already followed courses on basic classroom management, teaching skills, and biology teaching methods.

Design

Attribution Tool

We asked the student teachers to use the attribution support tool to analyze problematic experiences in classes they found difficult to teach. The features of the tool are described below; how the student teachers used the tool is described in the Procedure section.

Smedslund formulated three conditions for action: can, try, and trust. For the attribution support tool, we reformulated these conditions as questions for teachers to answer while analyzing their lessons (see “Appendix 2”).

The conditions for can are worked out in questions four and five. These are based on obtaining information about the situation, resulting in the following questions: “Do the students possess the essential knowledge?” and “Do the students possess the essential capabilities?”

The conditions for try are worked out in questions six and seven. These are about achieving a change in the existing situation, resulting in the questions: “Does the student think the assignment is worth doing?” and “Does the student think he can do it?”

The conditions for trust are worked out in questions eight, nine, and ten. These are about understanding, care, and self-control, resulting in the following questions: “Does the student feel understood?” “Does the student think that the actions taking place are in her interest?” and “Does the student feel that independence is stimulated?”

Smedslund’s conditions for action and trust are well supported by research on teaching (Brophy, 1999; Resnick, Besterfield-Sacre, Mehalik, Sherer, & Halverson, 2007). However, these studies on teaching effectiveness also point to three major characteristics of the tasks that explain the failure or success of lessons. So, in addition to the conditions (can, try, and trust) for persons, there are also conditions for tasks. We reformulated these conditions for tasks into questions. First, activities and content should be in line with the goals of the instruction (goal-activity alignment): “Are the activities necessary to reach the goals?” Second, what students learn depends on the time allocated for accomplishing the tasks (instructional time): “Is there enough time for each student to complete the tasks?” Finally, students’ performance depends critically on how well they understand what they are expected to do (clarity of instruction): “Is it clear what is expected from the student?” We expected that the attribution support tool would help the student teachers analyze their lessons more accurately, and direct attributions made after failures toward more internal, unstable, and controllable aspects.
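
For readers who prefer a schematic overview, the ten questions can be represented as a simple checklist grouped by condition, as in the sketch below (in Python). The question wording is taken from the text above; assigning numbers one to three to the task-related questions is our assumption, since the text numbers only the person-related questions (see “Appendix 2” for the actual tool).

```python
# A sketch of the attribution support tool as a checklist grouped by condition.
# Question wording is taken from the text; numbering the three task-related
# questions 1-3 is an assumption (the text explicitly numbers only questions 4-10).

ATTRIBUTION_TOOL = {
    "task": {
        1: "Are the activities necessary to reach the goals?",
        2: "Is there enough time for each student to complete the tasks?",
        3: "Is it clear what is expected from the student?",
    },
    "can": {
        4: "Do the students possess the essential knowledge?",
        5: "Do the students possess the essential capabilities?",
    },
    "try": {
        6: "Does the student think the assignment is worth doing?",
        7: "Does the student think he can do it?",
    },
    "trust": {
        8: "Does the student feel understood?",
        9: "Does the student think that the actions taking place are in her interest?",
        10: "Does the student feel that independence is stimulated?",
    },
}
```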

Procedure

The student teachers were asked to fill in questionnaires about their self-formulated attributions and lessons they taught in classes they found difficult. The difficult class could be one that made the student teachers nervous or increased their level of stress, but there was no restriction as to age or level for the “difficult” class. Having the participants choose a difficult class was intended to exclude the factor of luck: Student teachers could not attribute good lessons to the attitudes of the students in the class. Also, if the classes were already difficult for the student teachers, their sense of teacher efficacy for teaching in those classes would not be strong, making improvement possible.

The student teachers had to fill in lesson plans for their difficult classes, giving short descriptions of what they planned to do at what times, and how long they thought each particular part of the lesson would last. In these descriptions they also sketched how the students would be expected to work during these activities: for instance, individually or as a whole class. After that the lesson was given.

After the student teachers had given the lessons, they had to describe them. The questions to be answered were: What happened at what time? What did you do, and what did the students do at the same time? Was what happened in the classroom actually your intention at that moment? At the end of the questionnaire, all student teachers had to give themselves a mark for how they felt their lesson went, and explain why they gave themselves that particular mark.

The student teachers then filled in the attribution support tool (“Appendix 2”). For each of the ten aspects, it had to be stated whether that part of the lesson was a success or a failure, using yes or no. “No” meant this aspect failed during the lesson. Next, they had to describe what they thought was the cause of the failure (i.e., make an attribution), and if necessary explain what part of the lesson failed and for which students the lesson failed (individuals, groups, or the whole class). Resolutions for that particular failure had to be set. How can you turn this failure into a success during the next lesson? (See “Appendix 2” for an example of how to fill in the attribution support tool.) Our expectation was that when these resolutions were carried out the probability of student teachers having mastery experiences would increase, resulting in increased teacher efficacy.

If a student teacher came up with a large number of resolutions, “Appendix 3” was used to prioritize them. “Appendix 3” was also used to ask what mark the teacher had given herself after the lesson, and what mark she wanted to receive in the next lesson. Teachers’ priorities had to be based on the mark they wanted to receive for the next lesson, using the resolutions that could help them get that mark most easily. The maximum number of resolutions to be used in the next lesson plan was five; more than five would be too much to fit into the new lesson and would probably be too hard to accomplish.
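
As an illustration of this analysis and prioritization step, the sketch below models one pass through the forms: every aspect answered with “no” receives an attribution and a resolution, and the resolutions are then capped at five. The field names, and the explicit “expected_gain” ranking standing in for the teacher’s own judgment of which resolutions would most easily raise the desired mark, are our interpretation of “Appendix 3,” not the actual form.

```python
# A minimal sketch of the post-lesson analysis and prioritization step described above.
# Field names and the "expected_gain" ranking are our interpretation, not the authors' forms.

from dataclasses import dataclass

MAX_RESOLUTIONS = 5  # more than five resolutions was judged too much for one new lesson plan


@dataclass
class FailedAspect:
    question_nr: int      # which of the ten aspects was answered with "no"
    attribution: str      # the teacher's explanation of why this aspect failed
    resolution: str       # how to turn this failure into a success in the next lesson
    expected_gain: float  # teacher's own estimate of how much this helps reach the target mark


def prioritize(failed_aspects: list[FailedAspect]) -> list[FailedAspect]:
    """Keep at most five resolutions, ranked by the teacher's own estimate."""
    ranked = sorted(failed_aspects, key=lambda a: a.expected_gain, reverse=True)
    return ranked[:MAX_RESOLUTIONS]


# Example: two failures after a lesson, both kept because they fit within the cap.
failures = [
    FailedAspect(3, "Instructions were unclear", "Write the steps of the task on the board", 0.8),
    FailedAspect(8, "Some students felt ignored", "Check in with the back rows during seatwork", 0.5),
]
next_lesson_resolutions = prioritize(failures)
```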

After that, the participants were asked to design a new lesson plan including their most important resolutions. The next lesson had to be given in the same difficult class, and then analyzed again using the attribution support tool. If the set resolutions succeeded during the new lesson, other attributions and resolutions had to be set after any new failures, resulting in a new lesson plan including new resolutions. This cycle had to be completed five times by each student teacher over a period of three months. The student teachers were also asked to fill in a teacher efficacy questionnaire (“Appendix 1”) after each lesson. We expected that in this way the student teachers would see the results of their efforts in the difficult classes; they would be rewarded with mastery experiences in each lesson; and they would see that varying their methods and performance in the difficult classes could have some positive effects. Thus, we expected that it would be possible to break through the feelings of helplessness often felt by student teachers with difficult classes.

Instruments

Measuring Teacher Efficacy

The participants were asked to fill in a questionnaire on their teacher efficacy in the difficult class after each lesson. The teacher efficacy questionnaire we used was the Dutch version of the Ohio State Teacher Efficacy Scale (OSTES) developed by Tschannen-Moran and Woolfolk Hoy (2001). The Dutch version was obtained via forward and backward translation procedures and validated by Goei, Bekebrede, and Bosma (2011). It consisted of 24 items spread over three categories.

  • Factor 1: Efficacy for instructional strategies

  • Factor 2: Efficacy for classroom management

  • Factor 3: Efficacy for student engagement

We added one last overall statement to the list: “I can adequately teach this class.” Teachers were asked to rate each item according to the strength of their belief in their own capabilities, on a four-point scale: “weak belief,” “moderate belief,” “strong belief,” or “very strong belief in my capabilities” (Dellinger, Bobbett, Olivier, & Ellett, 2008). For the complete list see “Appendix 1.”

Analyzing Teacher Efficacy

After the participants had filled in the teacher efficacy questionnaires, we analyzed the three categories separately for each teacher by computing the average for each category (eight items per category). We also computed the average efficacy rating across all categories, and the score on the overall statement, for each teacher separately. Finally, we computed the averages over all teachers for teacher efficacy, both in general and separated into the three categories. The average efficacy ratings across all categories were plotted in a graph; the average efficacy ratings for each category separately were put in a table.

Because the data were non-normal for one of the variables, Wilcoxon signed-rank tests were run to see whether there was a significant increase in general teacher efficacy as well as for all categories separately.
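
The sketch below illustrates this analysis under assumptions about the data layout that are not stated in the article: per-teacher item scores are held in an array of shape (teachers, lessons, items), the three categories are taken as consecutive blocks of eight items, and the values themselves are synthetic placeholders rather than the study’s data. The paired Wilcoxon signed-rank test from scipy matches the test named above (though it reports the W statistic rather than the Z values given in the Results), but not necessarily the software the authors used.

```python
# A sketch of the efficacy analysis described above, using synthetic placeholder data.

import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Synthetic stand-in data: 9 teachers x 5 lessons x 24 OSTES items on a 1-4 scale,
# with a mild upward trend over the lessons (illustration only, not the study's data).
items = np.clip(
    rng.normal(loc=2.0, scale=0.5, size=(9, 5, 24)) + np.linspace(0.0, 0.8, 5)[None, :, None],
    1.0, 4.0,
)

# Category means (here taken as consecutive blocks of eight items, which is an assumption)
# and the overall mean per teacher per lesson, as described above.
categories = items.reshape(9, 5, 3, 8).mean(axis=-1)  # shape: (teacher, lesson, category)
overall = items.mean(axis=-1)                         # shape: (teacher, lesson)

# Paired comparison of lesson 1 with each later lesson (cf. Table 2 in the text).
for lesson in range(1, overall.shape[1]):
    stat, p = wilcoxon(overall[:, 0], overall[:, lesson])
    print(f"lesson 1 vs lesson {lesson + 1}: W = {stat:.1f}, p = {p:.3f}")
```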

Analyzing Mastery Experiences

To measure mastery experiences, we looked at three different aspects: (1) differences in the number of aspects of the attribution support tool that failed or succeeded during the lesson cycles, (2) development in the attributions and resolutions set by the student teachers after each lesson, and (3) the self-awarded marks of the student teachers after each lesson.

First we checked whether there was a difference in the number of aspects of the attribution support tool that failed or succeeded during the lessons. For this we used the list in “Appendix 2,” which the student teachers used to analyze the lesson. If five aspects of the attribution support tool failed during a student teacher’s first lesson, would this number change during the cycle? The more aspects of the attribution support tool that succeeded during lessons, the fewer the attributions and resolutions that would be needed. A possible hypothesis is that the more aspects of the attribution support tool that succeeded during the lesson, the better the lesson went, since there were fewer failures to describe.

To see whether there was development in the attributions and resolutions set by the participants, a list of attributions and corresponding resolutions was drawn up for each student teacher by the first author. Attributions and resolutions had to be made by the student teachers after they had checked which aspects of the attribution support tool were scored negatively during their lesson. The resolutions set after each lesson were then listed and compared with each other to see whether there was development over time. When, for example, a particular teacher set resolutions for classroom management problems at the beginning of the study, these resolutions could have developed into resolutions related to pedagogy by the end of the study. A possible hypothesis is that resolutions made at the beginning will cease to be necessary toward the end of the lesson cycles because of small mastery experiences, after which other types of failures may occur, for which new resolutions have to be set. This would show a possible development of the student teacher in the difficult class.

To analyze the self-awarded marks, we used the marks the participants gave themselves in their post-lesson descriptions. We looked at how the self-awarded marks changed during the five-lesson cycles. An average mark per lesson was calculated over all teachers and plotted in a graph. The development of the average self-awarded marks was compared with the development of the average teacher efficacy scores over time.
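
A corresponding sketch for the two quantitative mastery-experience measures is given below, again with synthetic placeholder values: the per-lesson averages, across teachers, of tool aspects scored as successful and of self-awarded marks. The 1–10 mark scale is an assumption based on the marks reported in the Results.

```python
# Per-lesson averaging of the two quantitative mastery-experience measures,
# using synthetic placeholder values (9 teachers x 5 lessons; values ours, not the study's).

import numpy as np

rng = np.random.default_rng(1)
succeeded_aspects = rng.integers(5, 11, size=(9, 5))            # tool aspects answered "yes" (0-10 possible)
self_marks = np.clip(rng.normal(6.5, 1.0, size=(9, 5)), 1, 10)  # self-awarded marks, assumed on a 1-10 scale

print("mean succeeded aspects per lesson:", succeeded_aspects.mean(axis=0).round(1))
print("mean self-awarded mark per lesson:", self_marks.mean(axis=0).round(1))
```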

Evaluation of the Procedure

Finally, we asked our participants to answer several questions about their views on the whole process of using the attribution support tool and setting resolutions for the difficult classes: They answered the questions from “Appendix 4” after the lessons in the difficult classes. The answers were classified as positive or negative by the first author and presented in percentages or as quotes.

Results

Results Concerning Teacher Efficacy

The results for teacher efficacy are presented as an average for all teachers and show that teacher efficacy in general increases as the cycle of lessons in the difficult classes continues. All categories are combined in Fig. 1. Teacher efficacy starts at an average of 1.78 and ends at 2.67. On a four-point scale, this is a substantial improvement.

Fig. 1 Average increase of teacher efficacy in general during the lesson cycles

Because the data were non-normal for one of the variables, Wilcoxon signed-rank tests were run; the output indicated that the scores at later measurement points were statistically significantly higher than at the first measurement. For teacher efficacy in general, the results were as follows: interventions 1 and 2, Z = −1.960, p < .05; interventions 1 and 3, Z = −2.429, p < .015; interventions 1 and 4, Z = −2.111, p < .035; interventions 1 and 5, Z = −2.201, p < .028. This indicates that teacher efficacy in general improved significantly between interventions 1 and 2, 1 and 3, 1 and 4, and 1 and 5.

Table 1 shows that teacher efficacy values increase for each of the three categories. Teacher efficacy for instructional strategies starts at an average of 1.83 and ends at 2.77; efficacy for classroom management starts at an average of 1.78 and ends at 2.60; efficacy for student engagement starts at an average of 1.72 and ends at 2.58.

Table 1 Average increase in teacher efficacy on a four-point scale for instructional strategies, classroom management, and student engagement during the lesson cycles

Table 2 shows that teacher efficacy for instructional strategies improved significantly between interventions 1 and 2, Z = −2.079, p < .038; 1 and 3, Z = −2.677, p < .007; 1 and 4, Z = −2.197, p < .028; and 1 and 5, Z = −2.214, p < .027. For classroom management, a significant improvement was seen between interventions 1 and 3, Z = −1.973, p < .049; and 1 and 5, Z = −1.997, p < .046. For student engagement, efficacy increased significantly between interventions 1 and 3, Z = −2.371, p < .018; and 1 and 5, Z = −1.997, p < .046.

Table 2 Wilcoxon signed-rank test scores for teacher efficacy (n = 6) for instructional strategies, classroom management, and student engagement

Results Concerning Mastery Experiences

The results concerning the aspects of the attribution support tool that succeeded during the lesson cycles are presented as an average for all teachers. The number of aspects scored as successful during the lessons increased: in the first lesson an average of 6.0 aspects of the attribution support tool succeeded, and this rose to 7.9 aspects at the end of the lesson cycles.

The results concerning development in the set resolutions could not be represented in a figure. Seven of the nine student teachers (77 %) made progress over time: They set different goals, or made more specific resolutions for parts of lessons or for a few students rather than the whole class. See “Two of Our Student Teachers” below for examples.

The marks the teachers gave themselves for their lessons also show an average improvement during the lesson cycles. The average mark starts at 5.7, rises to 7.3, and ends at 6.9, as shown in Fig. 2.

Fig. 2 Student teachers’ average self-awarded marks after each lesson in the difficult class

Results Concerning the Evaluation of the Procedure

Below, we present the answers to the questions, classified as positive or negative and expressed in percentages, and quote some participants to illustrate the answers. We also introduce two of our student teachers, taking a closer look at their “difficult” classes and showing the development of their teacher efficacy and the changes in their self-awarded marks with each lesson.

The answers to the questions concerning the whole process could only be categorized as positive or negative and presented as either percentages or personal statements. See “Appendix 4” for all questions.

All teachers responded positively when asked whether they thought their lessons were becoming more effective and whether they felt that they were becoming more capable of reaching their goals during the lessons in the difficult classes. Using the resolutions set after one lesson in the difficult class helped them achieve part or all of their goals for the next lesson in that class. For example, “Yes, while preparing my lessons and during teaching I pay more attention to my set resolutions. The practical and easily applicable resolutions, like checking homework, were especially useful.”

In response to the second question, about managing the difficult class and whether they enjoyed teaching the difficult class more because of the changes that had occurred, the majority (75 %) of the student teachers answered that they enjoyed teaching the difficult class more, mainly because the class was easier to manage: “Teaching goes better, in particular when I focus on designing more challenging and stimulating lessons for the students.” However, some were still not entirely satisfied: “I can handle the difficult class better, but the teaching itself has not become more pleasant; I think this is because most of the set resolutions were about managing the class and being consistent. From now on, I want to focus on making attractive lesson plans.”

With regard to the benefits and drawbacks of the procedure, all participants came to the same conclusion about the drawbacks: The whole procedure takes time and only focuses on one difficult class, while most student teachers had more classes to teach. The most important benefits, however, were the quick improvements in the difficult classes, which in turn increased the student teachers’ motivation to teach in these particular classes. Also, the attribution support tool inspired the participants to use the same procedure in other classes they taught during the same period.

The last question was “Would you apply this procedure again if you came across a difficult class?” All teachers said they would. Some would use the whole procedure again, others only parts of it. For example: “Yes, I would use the procedure again, but I would only use the attribution support tool and set resolutions for the next lessons, which I would try to achieve.”

Two of Our Student Teachers

Above we saw the average results for all participating student teachers. Below, to illustrate the development of teacher efficacy and self-awarded marks over time, we show figures and provide quotes from two of our student teachers, who showed development over time in their set resolutions, and whose teacher efficacy and self-awarded marks improved in different ways.

Teacher 1 (female) teaches at a school for higher general secondary education. At the beginning of the lesson cycle, she had problems mainly with classroom management. After she used the attribution support tool her set resolutions included “I am going to formulate rules for the students to hold on to” and “I will be consistent and clear regarding the implementation of the rules.” Between the first and the fourth lessons her management skills improved slightly with every lesson; during the second lesson she had a mastery experience in classroom management, which is reflected in her self-awarded mark for this lesson. Teacher efficacy also increased.

The resolutions set by this student teacher shifted from mostly classroom management at the beginning of the lesson cycles to more pedagogical goals at the end of the cycles. At the end, resolutions to her problems included “I am going to try out different methods of teaching, which might appeal more to the students” and “I will use a competition more often as a way to revise for a test.”

During the whole procedure her teacher efficacy increased from 1.625 to 2.76 (Fig. 3). During the lesson cycles her self-awarded mark increased from 5.5 to 7.5, as shown in Fig. 4.

Fig. 3 Increase in teacher efficacy for student teacher 1

Fig. 4 Increase in student teacher 1’s self-awarded marks during the lesson cycle

Her answers to the procedure evaluation questions were as follows: “My lessons are more effective, mostly due to improved classroom management, so that I am better able to achieve my goals. Some lessons go more smoothly than others; however, at the moment the worst lessons are already better than the lessons before the start of the lesson cycles. I like teaching the difficult class better; I can be myself more often. I would apply this procedure again; the set resolutions are applied in the next lesson, which is motivating.”

Teacher 2 (female) teaches in the upper grades of secondary school. The problems she had at the beginning of the lesson cycle concerned not classroom management, but instructional strategies. Her resolutions were as follows: “I have to change my lesson plan during the lesson, when it is not going the way I imagined it” and “I should not be afraid to answer difficult questions from the students.”

Her teacher efficacy increased during the lesson cycles, but she had a negative experience with time management in the fourth lesson. Students had to give presentations, which took too much time, but she did not interfere. She saw this as a huge failure, so that both her teacher efficacy and her self-awarded mark decreased at that point (Figs. 5, 6). At the end of the lesson cycle most of her problems had disappeared; no more resolutions were needed. Both her teacher efficacy and her self-awarded mark increased again; her teacher efficacy increased from 2.21 to 3.48, and her self-awarded mark from 6.0 to 8.0 during the lesson cycles.

Fig. 5 Increase in teacher efficacy for student teacher 2, with a collapse at lesson four

Fig. 6 Increase in student teacher 2’s self-awarded marks, with a relapse at lesson four

Her answers to the evaluation questions were as follows: “Yes, the lessons are more effective, most goals are achieved during the lessons. The difficult class is more manageable, and students pay more attention to my lessons. Teaching has become more enjoyable, mostly because of the improvement in instructional strategies and my increased confidence in my own subject expertise. The whole procedure is quite extensive and takes time to accomplish. However, the benefits are that it can be used not only to analyze whole lessons, but also to focus on specific details per lesson. Depending on the type of class, I would use this procedure again.”

Conclusion and Discussion

In the process of learning to teach, particularly in difficult classes, problematic teaching experiences are inevitable. It is important that student teachers learn productively from these problematic experiences; otherwise, such experiences may lower their teacher efficacy. A lowering of teacher efficacy results in student teachers setting lower goals for themselves and avoiding difficult situations. It is therefore crucial that student teachers are supported in productively attributing and learning from problematic experiences, so that they can create their own mastery experiences, which will contribute to increasing their teacher efficacy. In this study, we investigated whether an attribution tool based on Smedslund’s conditions of action and trust helps student teachers to learn productively from problematic experiences in classes they find difficult to teach. We expected that using this tool would result in more mastery experiences in these classes and would increase the student teachers’ efficacy for teaching.

Our findings show that use of the attribution tool resulted in an increased number of mastery experiences. The average number of aspects of the attribution support tool scored as successful during the lessons increased from 6.0 to 7.9 across all teachers, meaning that the total number of failures during the lessons in the difficult classes declined. The set resolutions became more specific, or shifted in focus, for 77 % of the student teachers. Moreover, the average self-awarded mark per lesson increased from 5.7 to 6.9. Our results also show that the teacher efficacy of the participants did indeed increase, starting at an average of 1.78 for all participants and ending at an average of 2.67.

Finally, we asked our student teachers to evaluate the whole procedure. Participants’ responses to the questionnaire were positive: All saw improvements in their teaching performance and teacher efficacy. All student teachers declared that they had succeeded in applying some or all of their set resolutions in the next lesson in the difficult classes. All student teachers said they would use the procedure again. However, they pointed out that the whole procedure, including all the forms to be filled in, took a lot of time; they would prefer to use only parts of the procedure, in particular the attribution tool.

The current findings show that the attribution tool can make a positive contribution by increasing student teachers’ efficacy for teaching in classes they find difficult. What is remarkable is that the student teachers used this tool independently, without further support from the teacher educator. The development of such tools is new in research on teacher efficacy, even though the importance of mastery experiences and productive attributions is widely recognized (Cakiroglu, Capa-Aydin, & Hoy, 2012). Research into teacher efficacy has not yet contributed to the development of tools that enable (student) teachers to develop productive attributions themselves and hence create mastery experiences. The current findings show that this is an important and accessible route.

Of course, our study has some limitations as well. We will now discuss these and present several suggestions for future improvements. To measure the participants’ teacher efficacy we used the OSTES, which consists of 24 items spread over three categories, to which we added one general question. The items had to be rated on a four-point scale, with the student teachers choosing from “weak,” “moderate,” “strong,” and “very strong belief in my capabilities.” Some participants said that the gap between “moderate” and “strong” was too large and suggested that an extra option such as “sufficient” could be added. On the other hand, with a five-point scale teachers might have more doubts about which rating to choose, which would only make the questionnaire harder to fill in.

To measure the increase in mastery experiences, we looked at whether there was a difference from one lesson to the next in the number of aspects of the attribution support tool that were scored as successful. When a question had been answered with no, we asked the participant to make an attribution and set an objective for the next lesson. However, when a question had been answered with yes, we did not ask for an explanation. An improvement for future research may be to also ask the participants to explain the questions answered with yes, so that changes in attributions can be followed more closely during the lesson cycles.

We also looked at the number of resolutions thought up after each lesson. We had expected that the number of set resolutions would decrease during the lesson cycles, because there would be fewer failures to describe and therefore fewer resolutions or goals to be set for the next lesson. However, this proved not to be the case. The number of set resolutions did not change and remained at an average of three per lesson per teacher. This may be because lessons will probably never be perfect, even for teachers who have been in the profession a long time. Another possible explanation is that when student teachers have higher teacher efficacy they feel more confident to attempt more ambitious teaching practices and are likely to set a greater number of goals or possibly attempt more challenging resolutions.

Further research is also needed to better understand the attributions made by teachers. Exactly what aspects of a lesson do teachers consider to be successful? Which attribution questions were answered positively and which often received a negative answer? And how do these attributions relate to the types of goals teachers formulate for the following lessons?

In spite of these limitations, and although further research is needed to clarify some aspects of our study, important lessons for reflection in teacher education can be drawn from our findings.

For almost 20 years, student teachers in many teacher education programs have been encouraged to reflect upon their experiences and to formulate and try out new resolutions. Research on reflection has shown that it is very difficult for student teachers to reflect productively on problematic experiences. The various reflection models developed to stimulate reflection by student teachers are too generic and do not provide guidelines on how to deal with problematic experiences. Our approach to promoting student teachers’ reflection deviates from regular and prescribed practice in two important respects, with corresponding implications for the promotion of productive reflection in teacher training.

First, the focus remains on one difficult class for a longer period, rather than dividing attention among all classes in which the student teachers give lessons. The cyclical form of following the same difficult class for a period of at least five lessons made the student teachers very aware of what happened in that particular class. This sustained reflection turned out to be productive for the student teachers.

Second, and most importantly, through using the attribution support tool to explain their experiences the teachers were directed to a more accurate and deeper way of looking at their own lessons. In contrast, most existing reflection strategies contain steps or stages in which the student teacher is asked to describe essential characteristics, explain why this or that happened, and draw up resolutions; however, these strategies do not provide guidelines or tools explaining how to perform these steps in a productive way, which in turn may hamper their usefulness (Janssen et al., 2009; Lee, 2005; Marcos, Sanchez, & Tillema, 2011). Our attribution tool (“Appendix 2”) helps student teachers to focus on what is important in an experience and what possible factors can help to explain what happened, and hence also to formulate productive resolutions. Thus, the attribution tool can be used to guide student teachers through the steps of general reflection strategies and so help them to learn from experience and develop productive resolutions.