39.1 Introduction

Pre-service teachers (PSTs) must develop a range of skills, including a noticing skill of a particular kind that differs from that of lay people. When observing a mathematics lesson, either live or on video, during their mathematics education programme, they are expected to notice aspects of the lesson that are deemed important for the development of pupils’ knowledge. Yet the complexity of a mathematics lesson is such that directing attention to one thing comes at the expense of another. Much research in recent years has focused on what it is that PSTs do and do not notice in a mathematics lesson and how they make sense of it.

Teacher education programmes contribute to the development of noticing in different ways. For example, in our previous research (Simpson et al. 2017), we found that a two-year master’s programme for future mathematics teachers without a focus on noticing and with two short school practice placements does not significantly influence PSTs’ patterns of attention when observing a videoed lesson. On the other hand, there is a body of research showing that video-based interventions do influence noticing in important ways (see Sects. 39.2.4 and 39.5). In the same research, we also investigated whether and how a fairly short video intervention spanning three seminars would influence future mathematics teachers’ patterns of attention. In many ways, we confirmed the results found in the existing literature: the PSTs increasingly focused on the mathematical aspects of the lesson and on students rather than on the teacher, and their comments became more specific and less general. We also looked briefly at their knowledge-based reasoning and found that the PSTs described more and evaluated less, but also interpreted less, when reflecting on what they saw in the video. The aim of this article is to elaborate on that research and to present some further findings about the nature of PSTs’ knowledge-based reasoning and how it was affected by the video intervention.

39.2 Theoretical Background and a Review of Literature

39.2.1 Teacher Noticing

The concept of noticing is usually described as consisting of the processes of attending to particular events in the lesson and making sense of these events. Probably the most influential characterisation in the field is that of van Es and Sherin (2002; cited in Sherin and Star 2011), who propose three aspects of noticing:

(a) identifying what is important or noteworthy about a classroom situation; (b) making connections between the specifics of classroom interactions and the broader principles of teaching and learning they represent; and (c) using what one knows about the context to reason about classroom events. (p. 573)

In this paper, ‘pattern of attention’ will be used to refer to the first process, while the other two processes will both be referred to as ‘knowledge-based reasoning’. The concept of noticing is often conflated with professional vision, characterised as the ability to see, in a scene from one’s area of expertise, phenomena different from those a lay viewer of the same scene would see (Goodwin 1994). For example, Sherin et al. (2011) understand noticing as ‘professional vision in which teachers selectively attend to events that take place and then draw on existing knowledge to interpret these noticed events’ (p. 80).

Given the complexity of observing a lesson, observers must inevitably choose what to treat as noteworthy and what to neglect; observation is not a passive process. Moreover, as Schoenfeld (2011) points out, the observer’s knowledge, beliefs and orientations will have an impact on where attention is actually directed.

Much research has been aimed at PSTs’ patterns of attention in general. It mostly concludes that PSTs pay more attention to the teacher and classroom management than to students or to the mathematical content and its implementation in the lesson (e.g., Santagata et al. 2007; Alsawaie and Alghazo 2010). Moreover, PSTs’ comments tend to be rather evaluative. Generally, studies do not distinguish between ‘more and less important’ moments to be noticed. Some even argue that before teachers are able to attend to important moments, they have to develop the ability to notice trivial classroom features (Star et al. 2011). However, these authors add that it is not clear ‘whether it is better to focus first on improving teachers’ awareness of the full range of (trivial and important) events (as was done here [in their course]) or to focus explicitly on only important events from the outset’ (Star et al. 2011, p. 132). In some studies, the authors speak about salient features of mathematical instruction to be noticed (e.g., Mitchell and Marin 2015).

Naturally, the ability to ‘identify noteworthy events in a teaching situation depends on one’s image of what is important in teaching’ (Alsawaie and Alghazo 2010, p. 227). Further, what the authors of a video-based intervention see as important in teaching mathematics will also shape what its participants learn to identify as important. For the present study, the important moments are those generally accepted as playing a key role in pupils’ learning of mathematics: the types of tasks used by teachers and the kinds of discourses that teachers orchestrate when implementing them (Hiebert et al. 2003). Moreover, in line with the constructivist view of learning, pupils’ active role in developing their mathematical knowledge is emphasised. Thus, the concept of opportunity to learn is important, namely ‘[the] circumstances that allow students to engage in and spend time on academic tasks such as working on problems, exploring situations and gathering data, listening to explanations, reading texts, or conjecturing and justifying’ (Kilpatrick et al. 2001, p. 333).

It includes ‘considerations of students’ entry knowledge, the nature and purpose of the tasks and activities, the likelihood of engagement, and so on’ (Hiebert and Grouws 2007, p. 379) and is seen as the single most important predictor of pupils’ achievement.

In the following review of the literature, I restrict myself to studies of video-based interventions with future mathematics teachers that are aimed at noticing.

39.2.2 Structure of Video-Based Interventions

The instructional strategy employed when embedding classroom videos into a course is informed by the learning goal and purpose at hand (Blomberg et al. 2014). In the literature, we can find courses based on situated cognition learning theory, which suggests ‘that learning should be rooted in authentic activity; that learning occurs within a community of individuals engaged in inquiry and practice; that more knowledgeable “masters” guide or scaffold the learning of novices; and that expertise is often distributed across individuals’ (Whitcomb 2003, p. 538). In such a case, ‘video is used as a problem anchor to elicit learners’ mental action. Video thus represents a complex example from which learners can collectively derive principles or rules’ (Blomberg et al. 2014, p. 447). Another approach is based on cognitive learning theory, according to which learning involves the storage and retrieval of knowledge in long-term memory; because it is necessary to avoid overloading the learner’s working memory, prompts are used, explicit guidelines are given, and so on. In such interventions, videos are used as illustrations of previously taught principles and rules (Blomberg et al. 2014).

The learning goal and purpose influence the way videos are embedded in tasks. Video-based interventions utilise various scaffolds to develop noticing: their leaders provide participants with a framework that draws their attention to particular features of the lesson and that they can use to account for what they notice. An example is the Mathematical Quality of Instruction (MQI) analysis framework (Mitchell and Marin 2015), which covers aspects such as teacher mathematical error or imprecision, use of mathematics with pupils, cognitive demand of the task and student work with mathematics. Roth McDuffie et al. (2014) provided participants with ‘four lenses of analysis of lessons’ (namely, teaching, learning, task, and power and participation). The Lesson Analysis Framework draws attention to four aspects of the lesson, and particularly to connections between them: the learning goal of the lesson, pupils’ learning, specific instructional activities and alternative strategies (Santagata et al. 2007; Santagata and Angelici 2010; Santagata and Guarino 2011; Santagata and Yeh 2014; Yeh and Santagata 2014). Participants in Star and Strickland’s (2008) study used a framework of five observation categories, namely, classroom environment, classroom management, tasks, mathematical content and communication.

Video-based interventions vary in length, ranging from short interventions, such as four sessions within one month (Santagata and Angelici 2010), five sessions in ten weeks (Mitchell and Marin 2015) or eight sessions in three months (Blomberg et al. 2014), to whole-semester courses (e.g., Star and Strickland 2008), and they differ in the type and number of videos used. For example, Santagata and Angelici (2010) used only one video of a whole mathematics lesson, while Santagata et al. (2007) used three and Alsawaie and Alghazo (2010) used ten. Others (e.g., Blomberg et al. 2014; Roth McDuffie et al. 2014) used only video clips. Some also used videos of interviews with individual pupils (e.g., Santagata and Guarino 2011; Yeh and Santagata 2014). Some used videos of the participants’ own teaching (e.g., Mitchell and Marin 2015), and some interventions were complemented with a field experience (Stockero 2008; Santagata and Guarino 2011).

39.2.3 Measuring Effects of Video-Based Interventions

Some video-based interventions use an ‘experimental vs. control group’ design (Alsawaie and Alghazo 2010; Blomberg et al. 2014; Santagata and Yeh 2014). Others investigate effects of two different types of scaffolds (Blomberg et al. 2014; Santagata and Angelici 2010). Still others do not have a control group and examine the effect of the intervention only (Santagata et al. 2007; Mitchell and Marin 2015; Roth McDuffie et al. 2014).

There are basically two types of measures used in video-based interventions. In the first (Stockero 2008; Roth McDuffie et al. 2014), the participants’ responses are treated together, and the development of noticing is usually studied through group discussions. The second measures how individual responses differ before and after the intervention (e.g., Santagata et al. 2007; Star and Strickland 2008; Alsawaie and Alghazo 2010; Santagata and Angelici 2010; Santagata and Guarino 2011; Blomberg et al. 2014; Mitchell and Marin 2015). The tasks used for the individual pre- and post-tests are usually based on the analysis of videos of a whole lesson or its parts (an exception is the use of learning journals in Blomberg et al. 2014). In some studies, the same video is used in both tests (Santagata et al. 2007; Santagata and Angelici 2010; Santagata and Guarino 2011; Yeh and Santagata 2014; Mitchell and Marin 2015), while in others, videos of different lessons are analysed (Star and Strickland 2008; Stockero 2008; Alsawaie and Alghazo 2010; Santagata and Yeh 2014). Simpson et al. (2017) have noted problems with both approaches. In the former, it is difficult to discount the learning effect (especially for short-term interventions): is any change in the participants’ pattern of attention the result of the intervention, or of the fact that they are seeing the video for the second time? In the latter, it is not taken into account that different videos may provide participants with different stimuli: the post-test video may include more moments (and/or more visible moments) in which students appear in the foreground than the pre-test video, so it is no wonder that more student-centred comments appear in the post-test. In the study presented in this text, this problem is dealt with by counterbalancing the videos (see Sect. 39.3.3).

For the analysis of comments, different frameworks are used. The van Es and Sherin (2008, 2010) framework is widely used. It identifies four dimensions, each with several codes. The first is Actor, which splits into the foci of Teacher, Student (or students), Curriculum Developer (a comment referring to a textbook author, curriculum documents, etc.), Self (observers discuss themselves in relation to the video) and Other. The second dimension is Topic, which includes Classroom Management, Climate (the social environment), Mathematical Thinking, Pedagogy and Other. The third dimension is Stance, which cuts across the pattern of attention and knowledge-based reasoning. It includes Describe (a recounting of what is seen), Evaluate (a judgement about what is seen) and Interpret (making inferences about, or links to, what is seen which might help account for it or understand it). Finally, the dimension of Specificity captures whether the comment relates to a specific event in the lesson (Specific) or to some aspect of the whole class or whole lesson, or makes a generalisation beyond the class (General). In some studies (such as van Es and Sherin 2010), the authors also coded whether the comment was related to the video or not.

Other authors have developed the framework further. An example is Stockero (2008) who, using Manouchehri’s (2002) levels of reflection to better capture the quality of reflection, elaborated Stance into: Description; Explanation (connecting interrelated events and exploring cause-and-effect issues); Theorizing (supporting an analysis by reference to research or course readings, or providing ‘substantial evidence from transcripts and/or student written work as justification’, p. 377); Confronting (considering alternative explanations for events and/or considering others’ points of view); and Restructuring (showing evidence of Theorizing and Confronting by considering alternative instructional decisions and ‘of re-examining his or her fundamental beliefs and assumptions about teaching and learning’, p. 377). Similarly, Roth McDuffie et al. (2014) distinguished four quality levels, from descriptions with general impressions and evaluative comments at Level 1 to analyses and interpretations of relationships between teaching strategies and students’ thinking at Level 4.

39.2.4 Results of Video-Based Interventions and Our Previous Work

All the studies cited above report changes in the pattern of attention and knowledge-based reasoning after video-based interventions. PSTs increasingly focus on students rather than the teacher, and they become better observers of the mathematical content. They use fewer subjective evaluative comments. The evidence on the development of interpretation skills is mixed. To avoid repetition, the results of related research will be further elaborated in Sect. 39.5.

Simpson et al. (2017) report on an intervention study with a pre- and post-test design whose aim was to find out how pre-service mathematics teachers’ patterns of attention developed following their participation in a video-based intervention.Footnote 1 The data were coded using Sherin and van Es’ framework, and quantitative methods were used to look for statistically significant differences between PSTs’ comments in the pre- and post-tests.

The PSTs’ written reflections were significantly longer after the intervention, on average more than 50% longer than before it. The PSTs commented less on themselves in relation to the video and more on the students in the video. There was an increase in the frequency of the Mathematical Thinking code after the intervention; that is, the PSTs noticed mathematical aspects of the lesson more, at the expense of Classroom Management and Pedagogy. Their comments became more descriptive and less evaluative, but at the same time also less interpretative. The responses were significantly more specific after the intervention. To sum up, the study suggested a markedly similar shift in attention to that seen in other studies (e.g., Santagata et al. 2007; Mitchell and Marin 2015), except for interpretation, which has been reported to increase in some studies (e.g., Stockero 2008; Alsawaie and Alghazo 2010; Roth McDuffie et al. 2014).

Our previous work has mostly focused on the pattern of attention. In this article, I look at the second process of professional vision, that is, knowledge-based reasoning. I revisit the same data from the intervention study for further analysis to answer the research question:

In what way is the PSTs’ knowledge-based reasoning as demonstrated in the written analysis of a lesson on video affected by a video-based intervention?

Our previous research has also shown that there are differences in the pattern of attention which depend on the lesson observed. Thus, the second research question is:

Are there any differences in PSTs’ knowledge-based reasoning that depend on the lesson observed?

39.3 Methodology

39.3.1 Participants

The participants were Czech mathematics PSTs in the first semester of a four-semester master’s programme. They had completed bachelor’s degrees in either Mathematics or Mathematics with a Focus on Education, but had had no formal education in the teaching of mathematics. Most were in their early or mid-20s. Five students were already qualified as teachers of other subjects and wanted to extend their qualification to mathematics. Six students had limited experience of teaching mathematics. In total, 32 PSTs participated in the study. No selection was made; all the PSTs at that year level participated.

39.3.2 Intervention

The intervention formed part of a mathematics education course that was taught by the author and attended by all participants of the study. No school practice placement was assigned to the participants during the intervention. In this course, prior to the intervention, the PSTs were introduced to a theory of the concept development process in mathematics, constructivist approaches to the teaching of mathematics and the division of mathematics tasks into procedural and making-connections types (taken from TIMSS). The theory was illustrated by either written cases or short video clips of mathematics lessons. Concrete prompts directing the PSTs’ attention were used, and thus this use of video was more aligned with the cognitive learning theory approach.

As stated above, the course’s main aim was for the PSTs to develop their ability to notice features of a lesson salient to its success, and the tasks prepared for the intervention were designed accordingly. The intervention itself was based on situated cognition learning theory: there were online (through the Workshop module of the Moodle virtual learning environment [VLE]) and in-person (in sessions) opportunities for participants to work cooperatively, in line with the situated cognition approach.

The intervention spanned three guided-observation sessions (each about 120 min), with home study tasks, over a three-month period (see Table 39.1). The tasks consisted of watching videos of mathematics lessons and were set within the Moodle VLE. The videos were purposefully selected from a pool of videos used with PSTs in preceding years; they contained visible noteworthy events that motivated the PSTs to comment. In line with video-based courses in teacher education, the lessons were not used as examples of good practice. For example, Seago (2004) found that ‘the most useful video clips were based on situations where there was some element of confusion (either the students’ or the teachers’) that typically arises in classrooms’ (p. 267). This was confirmed by Sherin et al. (2009), who say that ‘video clips should provide something for teachers to puzzle over or speculate about… it is through this process of inquiry that teacher learning will likely occur’ (p. 215).

Table 39.1 Description of the video-based intervention

Some videos were of lessons from other countries, as ‘the exposure to alternative practices helps observers to become aware of their own cultural routines’ (Santagata et al. 2007, p. 127). Recordings of whole lessons were used for home tasks and clips for sessions. In a whole-lesson video, all the elements important for understanding the lesson are present: ‘goals for students’ learning, instructional activities, strategies for monitoring students’ thinking and assessing their learning, curriculum and pedagogy, and so on’ (Santagata et al. 2007, p. 127).

During the sessions, the PSTs’ responses to the home tasks were discussed. The course teacher drew their attention to important moments (see Sect. 39.2.1) that they might not have noticed, e.g., by asking one PST to present his or her comments and inviting others to comment on them, or by showing the relevant part of the video. To reduce the PSTs’ inclination towards criticism of the teacher in the video, the course teacher repeatedly reinforced norms that ‘included respecting others’ ideas and providing evidence for claims’ (Stockero 2008, p. 376).

39.3.3 The Pre- and Post-tasks

Two videos were selected for the pre- and post-intervention tasks: HK01 and HK04 (both from the TIMSS 1999 Video Study). They capture Grade 8 lessons and are each about half an hour long. The topic of HK01 is square roots; HK04 is about linear identities. Piloting the videos with an earlier group of PSTs had shown that they were lessons with which the participants could identify (Brophy 2004): despite the cultural differences between Hong Kong and the Czech Republic, the approach taken to teaching mathematics and to managing and organizing the class resembled a common approach taken in the Czech Republic.

The lessons were provided to the participants on a disk and were accompanied by Czech subtitles. The PSTs were asked to write a reflection on the lesson; no prompts for the reflection were provided. There was no time or word limit, and the PSTs were assured that they were not being assessed or judged on their responses. They were encouraged to write about what they found interesting and/or important.

To balance the videos and to avoid the possible confounds caused by using the same or different videos for the pre- and post-tasks (see Sect. 39.2.3), the PSTs were randomly assigned to comment on one of the two videos before the first session and on the complementary video after the last session. The videos were not discussed during the sessions.
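As a rough sketch of this counterbalanced assignment (the text does not detail the procedure beyond its being random, and the participant identifiers here are hypothetical):

```python
import random

# Hypothetical identifiers for the 32 participating PSTs.
pst_ids = list(range(1, 33))
random.shuffle(pst_ids)
half = len(pst_ids) // 2

# Half of the PSTs comment on HK01 before the first session and on HK04
# after the last session; the other half get the complementary order.
assignment = {pid: ("HK01", "HK04") for pid in pst_ids[:half]}
assignment.update({pid: ("HK04", "HK01") for pid in pst_ids[half:]})

for pid, (pre_video, post_video) in sorted(assignment.items()):
    print(f"PST {pid:2d}: pre = {pre_video}, post = {post_video}")
```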

39.3.4 Analysis

The PSTs’ written responses were divided into units of analysis, each representing a single articulated observation. These were usually whole sentences; however, in some cases they were clauses, where a sentence appeared to contain a shift of focus (e.g., from the teacher to a student). Across the pre- and post-intervention responses, there were 1591 units of analysis.
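As a rough illustration of this segmentation step, the sketch below splits a reflection into sentence-level units with a simple regular expression; this is a hypothetical first pass only, since in the study the splitting, including clause-level splits at shifts of focus, required human judgement:

```python
import re

def split_into_units(reflection: str) -> list[str]:
    """Naive first-pass segmentation of a written reflection into
    sentence-level units; clause-level splits at shifts of focus
    would still need to be made manually."""
    units = re.split(r"(?<=[.!?])\s+", reflection.strip())
    return [u for u in units if u]

reflection = ("The teacher explains the rule clearly. "
              "One pupil, however, seems confused by the second example.")
print(split_into_units(reflection))
# ['The teacher explains the rule clearly.',
#  'One pupil, however, seems confused by the second example.']
```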

39.3.4.1 Pattern of Attention

We used the framework developed by van Es, Sherin and their colleagues. The process of analysis is described in detail in Simpson et al. (2017); however, as our new analysis builds on it, it is briefly described here. Each unit of analysis was allocated codes on the four dimensions given in Sect. 39.2.3. The descriptions of the categories in van Es and Sherin (2008, 2010) were used to create a coding manual, and an inductive process of coding scripts and agreeing on the meanings of codes was undertaken by two coders. ‘Inter-rater reliability was assessed using Janson and Olsson (2001) multidimensional extension of Cohen’s kappa, and once a good-to-excellent level of agreement (ι = 0.71) was achieved, the coders were randomly assigned all remaining responses to code’ (Simpson et al. 2017).
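For readers who wish to run a similar agreement check, the sketch below computes ordinary Cohen’s kappa for a single coding dimension with scikit-learn. Note that this is a simpler statistic than Janson and Olsson’s (2001) multidimensional ι used in the study, which is not available in common libraries, and the coder labels shown are hypothetical:

```python
# A minimal per-dimension agreement check, assuming two coders' labels
# for the same units are held in parallel lists. This uses ordinary
# Cohen's kappa, not the multidimensional extension applied in the study.
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two coders on the Actor dimension.
coder1 = ["Teacher", "Student", "Self", "Teacher", "Other", "Student"]
coder2 = ["Teacher", "Student", "Teacher", "Teacher", "Other", "Student"]

kappa = cohen_kappa_score(coder1, coder2)
print(f"Cohen's kappa (Actor dimension): {kappa:.2f}")
```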

Examples of units of analysis and their codes are in Table 39.2.

Table 39.2 Examples of coding for the pattern of attention

39.3.4.2 Knowledge-Based Reasoning

Based on a study of the literature and mainly on Stockero (2008),Footnote 2 a more refined framework was developed to capture the nature of PSTs’ reasoning about events. It was used for the units of analysis that had been coded as Evaluate or Interpret in the Stance category. Table 39.3 presents the framework and gives examples. For the sake of completeness, I also include Description, even though only statements that go beyond description, in which an observer engages with the information, makes judgements about it and/or interprets it, are relevant in this text.

Table 39.3 Framework for knowledge-based reasoning

Statements from all the categories (except for Alteration I) were also given a value describing the way the PST viewed their content: negative (‘The teacher does not prompt further to find out if the pupil understands what the mistake was.’ Q2), positive (‘I appreciate that the lesson was conducted through simple questions, clear for pupils.’ Q2) or neutral (‘The approach I offer seems to be oriented more to a concept: what I am learning, rather than process; this is the way to proceed.’ Q4). Alteration I statements were rather negative in nature, as the PSTs typically suggested an alternative when they did not like what had been done in the lesson. However, there were cases in which the PST did not openly criticise the event but suggested an alternative action anyway. Thus, Alteration I statements were not given any value.

After excluding the units coded Describe in the first stage of analysis, 1046 units of analysis remained. They were coded by the two coders in a manner similar to that of the first stage.

39.3.4.3 Quantitative Methods

Finally, I used statistical methods to look for differences in PSTs’ knowledge-based reasoning between the pre- and post-tests. Due to the small number of comments coded as Interpretation, Alteration II and Prediction, this was done only for Evaluation, Explanation and Alteration I.

First, the assumptions of normality and homogeneity of variance were assessed. The result of the Shapiro-Wilk test was significant for Q1 Evaluation (W = 0.93, p = 0.035), suggesting that these data are unlikely to have come from a normal distribution. A Q–Q scatterplot was used to further evaluate normality and confirmed that it could not be assumed. Thus, the related-samples Wilcoxon signed-rank test, which does not require normality, was used for Evaluation. For Q2 Explanation (W = 0.97, p = 0.413) and Q2a Alteration I (W = 0.95, p = 0.138), the Shapiro-Wilk test showed that normality could be assumed, so a paired-samples t-test was conducted in these cases. The same applies to the negative/positive/neutral nature of comments, for which the assumptions of normality and homogeneity of variances were met (neither the Shapiro-Wilk nor Levene’s test was significant).
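The following is a minimal sketch of this test-selection logic in Python with scipy (the study itself used SPSS). The arrays of per-PST proportions are hypothetical stand-ins for the coded data, and the Shapiro-Wilk test is assumed here to be applied to the paired differences:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 32  # number of participating PSTs

# Hypothetical per-PST proportions of one code (e.g., Q1 Evaluation)
# in the pre- and post-tests; the real values came from the coded
# written reflections.
pre = rng.beta(2, 8, n)
post = rng.beta(2, 10, n)

# Step 1: assess normality of the paired differences (Shapiro-Wilk).
w, p_norm = stats.shapiro(pre - post)

if p_norm < 0.05:
    # Normality rejected, as for Q1 Evaluation: use the related-samples
    # Wilcoxon signed-rank test, which does not require normality.
    stat, p = stats.wilcoxon(pre, post)
    print(f"Wilcoxon: statistic = {stat:.1f}, p = {p:.3f}")
else:
    # Normality not rejected, as for Q2 Explanation and Q2a Alteration I:
    # use a paired-samples t-test.
    t, p = stats.ttest_rel(pre, post)
    print(f"Paired t-test: t({n - 1}) = {t:.2f}, p = {p:.3f}")
```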

As observed in Simpson et al. (2017), there was a difference between the comments related to HK01 and those related to HK04 in the PSTs’ pattern of attention. I was therefore interested to see whether any differences might also occur in knowledge-based reasoning. The Shapiro-Wilk test showed that normality could be assumed, and thus a paired-samples t-test was used.

39.4 Results

39.4.1 Knowledge-Based Reasoning

Table 39.4 shows that prior to the intervention, most statements on average were coded as Explanation, followed by subjective Evaluation. The PSTs provided little interpretation of what they saw. Furthermore, they neither suggested elaborated alternative actions nor made predictions; on the other hand, the pre-test task did not explicitly encourage them to do so.

Table 39.4 Relative number of codes in the pre- and post-tests

It is also worth pointing out that one fifth of the comments were of an evaluative nature; that is, the PSTs made a judgement without providing a plausible explanation for it. Statements such as ‘I like the structure of the lesson’ or ‘I did not like the lesson at all’ were common.

Table 39.4 and Fig. 39.1 depict the development in the quality of comments beyond description between the pre- and post-tests. The only significant difference was found in Q1 Evaluation (the result of the related-samples Wilcoxon signed-rank test was significant, with a standardised test statistic of 2.724 and p = 0.006). This suggests that the PSTs made significantly more evaluative comments before the intervention. In the pre-test, evaluative comments appeared on average in 20% of cases (M = 0.20, SD = 0.13), while in the post-test the figure was only 13% (M = 0.13, SD = 0.10). For 72% of individual PSTs, the percentage of evaluative comments decreased in the post-test compared with the pre-test (Fig. 39.1, right: black represents the PSTs with more evaluative comments in the post-test and grey the PSTs with more evaluation in the pre-test).

Fig. 39.1 Codes for knowledge-based reasoning and their distribution in the pre- and post-tests (left) and the change in Q1 Evaluation between pre- and post-tests (right). Source SPSS

It should be noted that the PSTs described more and evaluated less after the intervention (Simpson et al. 2017), which was taken as a sign of learning. The lower level of Q1 subjective evaluation observed here can be seen in a similar way. The PSTs gained new knowledge in the course and the intervention and were more cautious about jumping to evaluative conclusions than before. Instead, they included more descriptions in their reflections, showing that they noticed more events, and these descriptions were more specific after the intervention.

There are small shifts in the other code values, but none of them is significant. There is a small increase in Explanation and Prediction, which goes in the direction of expert-like, knowledge-based reasoning. The decrease in Alteration I need not be a negative result either, as the PSTs had suggested rather general or trivial alternative actions not based on theory.

In terms of the negative/positive/neutral nature of comments, the only difference that proved significant was in neutral comments (t(31) = 4.36, p < 0.001). After the intervention, the PSTs made significantly more neutral comments (on average 41%; M = 0.41, SD = 0.14) compared with the beginning, when only 24% of all non-descriptive comments were neutral (M = 0.24, SD = 0.17). Looking at individual PSTs, for 26 of them (81%) the percentage of neutral comments increased in the post-test (Fig. 39.2).

Fig. 39.2 The change in neutral comments between pre- and post-tests

39.4.2 Differences for HK01 and HK04

In Simpson et al. (2017), we found differences between the comments about the two lessons used in the pre- and post-tests. Namely, responses to HK04 focused less on Teacher and more on Curriculum Developer (Actor dimension), and less on Classroom Management and more on Mathematical Thinking (Topic). There was no difference in Stance or Specificity between the two lessons. In the re-analysis of the data, no statistically significant difference in knowledge-based reasoning (Q1–Q5) was found. The only significant difference appeared in comments with a negative tinge (t(31) = 4.29, p < 0.001). The mean proportion of negative comments related to HK01 (M = 0.37, SD = 0.23) was significantly higher than that for HK04 (M = 0.21, SD = 0.23): for HK01, on average 37% of comments that went beyond description were critical in nature, compared with only 21% for HK04. This adds a further argument for caution when using different videos in the pre- and post-tests: the content of the lesson matters and might distort results.

39.5 Discussion and Conclusions

The study explored the influence of a video-based intervention on PSTs’ knowledge-based reasoning. Previous research established that this particular intervention led to changes in the pattern of attention markedly similar to those reported in the literature, with the exception of the category related to the levels of knowledge-based reasoning, i.e., Interpretation. In contrast to some other studies, the PSTs provided less, rather than more, interpretation after the intervention. With a more refined framework, I reached the conclusion that the PSTs did make progress in their knowledge-based reasoning, but only at a lower level: they provided fewer evaluative judgements and more explanation. This finding accords with, for example, Mitchell and Marin (2015) and Stockero (2008). Still, the participants in the presented study did not display more interpretation, elaborated alternatives or predictions.

The question is why statements coded as Interpretation disappeared after the intervention, considering that the participants underwent the intervention and the mathematics education course within which it was embedded, in which some theoretical notions were introduced. Why were there at least some attempts to interpret events before the intervention and none after it, considering that the task of the pre- and post-tests was the same? At least two explanations are plausible. First, the pre-test was done immediately after the part of the mathematics education course in which the theory mentioned was introduced (see Sect. 39.3.2). Thus, the PSTs had it fresh in their minds and used it in the pre-test. The post-test followed three months later, and the theory may have been forgotten. This has an important implication for the course: the theory was probably not continually reinforced, so the PSTs could not apply it. Another explanation is that the PSTs not only became more reluctant to make judgements but also grew reluctant to interpret, as if the more they had learned, the more they had realised how complex teaching-learning situations were and that there were no easy interpretations. In fact, the significant increase in the neutralityFootnote 3 of comments further supports this consideration. After the intervention, the PSTs did not jump to conclusions as easily and did not make as many critical comments. The increase in neutral comments at the expense of both negative and positive ones (although these individual changes were not statistically significant) may point to the PSTs’ attempt to avoid evaluation and be impartial.

In Simpson et al. (2017), two patterns of results across relevant studies were noted:

In the studies by Sherin and van Es (2009) and van Es and Sherin (2010), there is a very direct movement towards increased interpretation with roughly balanced decreases in description and evaluation. However, Mitchell and Marin (2015), Blomberg et al. (2014), and our [video-based intervention] all show increases in description, generally at the expense of evaluation.

These differences may be attributed to several factors, some of which concern research methodology. First, some researchers may have a different threshold for coding a comment as interpretation. However, in the presented study, there was no significant change even when the codes of Explanation and Interpretation were taken together. Second, Sherin and van Es (2009, 2010) used a group measure in their studies. The same applies to Roth McDuffie et al. (2014), who report that by the end of their course PSTs regularly analysed and interpreted what they attended to; the data in their study were transcriptions of group discussions throughout the course. Group discussions may be deeper in that participants pick up on one another’s ideas and develop an interpretation further. For instance, if one person provides an interpretation, even those who would not have been able to come up with it themselves may grasp it. Such a discussion is likely to produce more cases of interpretation.

Third, if we look at another study that reports a significant change from no interpretation to interpretation supported by evidence and offering pedagogical alternatives (Alsawaie and Alghazo 2010), we can see that, unlike in the presented study, their task in the pre- and post-tests directly called for such knowledge-based reasoning (‘Highlight and critique important events in the lesson. If you were the teacher, how would you handle things differently? If you were a student in this class, would you be able to learn what was taught in the lesson well?’, p. 229). Thus, seeing more interpretation in their case might be at least in part due to the task itself.

Next, Simpson et al. (2017) suggest that one possible reason for increased interpretation might lie in the way a video-based intervention is organised and in the tasks used. Indeed, studies comparing two different forms of video-based interventions, such as Santagata and Angelici (2010) and Blomberg et al. (2014), showed important differences in knowledge-based reasoning between the participants of the two interventions. Thus, the tasks the PSTs undertook in our intervention might not have motivated them to use theory, so they did not feel they needed to do so in the post-task. The same may help account for the increased interpretation reported by Mitchell and Marin (2015). Their intervention was similar to the presented study in that it aimed at salient features of mathematical instruction, but they used a very specific framework (MQI), which provided PSTs with guidance in the analysis of lessons during the intervention and which they could draw on when doing the post-test. A more detailed look at the intervention tasks would be needed to make this conclusion more secure.

Interestingly, Blomberg et al. (2014) found that only the group of PSTs who took the situated-strategy course were able to engage consistently in the higher-level categories of evaluating (which in their case included detailed explanations) and integrating. This would suggest that the intervention in the presented study did not sufficiently implement the situated strategy. However, there was also an important difference between our study and that of Blomberg et al. (2014). Our data consisted of a written analysis of one lesson at one point in time, while Blomberg and colleagues coded a learning journal that the participants wrote during the course. After each session, participants were asked to write down what they had learned, guided by eight questions (e.g., ‘Provide examples… that confirm and/or contradict what you learned today.’, p. 451). This may motivate PSTs towards deeper reflection, adding to the effects of the intervention itself. An implication may be to include learning journals in video-based interventions. This does not explain, however, why the cognitive group in Blomberg et al. (2014) did not do as well as the situated learning group.

Note that our previous research found that the two-year master’s programme, which included two four-week school practice placements, did not lead to any changes in the pattern of attention (Simpson et al. 2017). Stockero (2008) showed that a video-based intervention followed immediately by a field experience leads to significant gains in higher-level reflection; the same applies to research by Santagata and Guarino (2011). An implication for teacher education would be to couple video-based interventions with school practice placements.

For Sherin and van Es’ research (2008, 2009, 2010), there is another possible reason for increased interpretation: their participants were experienced teachers and, moreover, they learned from reflecting on videos of their own teaching.Footnote 4

To sum up, the question is whether it is reasonable to expect that PSTs at the beginning of their master’s studies, with very limited or no teaching experience, are able to engage in high-level professional reasoning. In Simpson et al. (2017), we speculate that ‘teacher education programs might need two distinct phases to develop noticing: the first concentrated on shifting attention and second on theorizing’. Thus, the question arises of whether we would see greater gains in knowledge-based reasoning if the video-based intervention were organised later in the master’s programme, when PSTs have more knowledge of mathematics education concepts.

The study presented above has its limitations. First, a one-to-one correspondence is presumed between what is written down and what is actually noticed; a PST may have noticed an event but, for whatever reason, chosen not to record it. Second, different measures might have yielded different results, for example, a group measure or more targeted questions in the pre- and post-tasks. Third, even though the study provided more insight into the nature of PSTs’ knowledge-based reasoning, its quality was not investigated. For example, further exploration could examine whether the explanations or alternatives proposed by the PSTs are plausible and coincide with an expert’s view (e.g., that of experienced teachers or teacher educators). The question also arises of what the results for interpretation would be if no theory were presented in the course prior to the intervention, or if the intervention were organised later in the two-year master’s programme.