Introduction

What teachers do makes a difference to student outcomes. Approaches to teaching that emphasise feedback have been identified as powerful influences on student achievement (Hattie 2014). Much research on feedback has been carried out in higher education (Ridder et al. 2015) or English language teaching contexts (Haifaa and Marsden 2014), or has focused on written feedback (Gioka 2006). In the UK, policy requires that teachers ‘give pupils regular feedback, both orally and through accurate marking’ (Department for Education 2013), and this is supported by research that suggests that feedback can have an impact on learner outcomes (Black and Wiliam 1998; Hattie and Timperley 2007; Hattie 2014)—although this is not always positive (Kluger and DeNisi 1996). There has been recent interest in understanding oral feedback in authentic situations in the Netherlands and Norway (Voerman et al. 2012; Skovholt 2018), and in this study, we are interested in teacher-student feedback in secondary science lessons—what types of feedback are used, in which situations, and how this relates to other types of oral interaction in the classroom.

The OECD’s Teaching and Learning International Survey (TALIS) found that teachers in England report high levels of use of assessments involving observing students working and providing immediate feedback, with 89% of teachers reporting doing this frequently or in all or nearly all lessons and 82% reporting providing written feedback on work at the same frequency (Department for Education 2014). Previous research on feedback in secondary science has focused on peer, rather than teacher, feedback (Gan Joo Seng and Hill 2014) and on the role of technology in supporting feedback (Zhu et al. 2017). Research in other disciplines suggests that oral feedback is more effective than written feedback (Boulet et al. 1990).

We aim to contribute to understanding these oral feedback practices in order to inform teachers, via initial teacher education or continuing professional development (CPD), of specific types of practice that might help support students in learning. We also aim to develop the work of Torrance and Pryor (2001) by extending their model beyond assessment to understand the practical and theoretical basis of feedback. The present study aims to contribute to the understanding of how science teachers conceptualise and practise feedback against the background of all of their oral interactions with students. The research questions guiding the study were (i) how do science teachers define feedback and perceive their oral feedback practices? (ii) what are students’ perceptions of how teacher oral interactions help them learn? and (iii) to what extent, and in what ways, do science teachers provide oral feedback to students?

Defining Feedback

Teachers’ oral feedback has been characterised differently across studies, and this presents a challenge both for researchers seeking to make sense of classroom practice and for teachers seeking to make sense of, and enact, policies that demand that they provide regular feedback. As Knight (2003) argues, feedback definitions appear to lie along a continuum:

At one end, Askew and Lodge (2000) claiming feedback is almost everything that happens in a classroom. At the other end, Ramaprasad’s (1983) definition, modified by Sadler in 1989 for educational purposes, focuses quite specifically on an improvement model; that of closing the gap between desired and actual performance.

In observational studies, feedback is commonly used to refer to the final move in triadic dialogue (Lemke 1990). Triadic dialogues, or IRF (initiation-response-feedback) exchanges, are those consisting of speech in the familiar order of teacher initiation-student response-teacher feedback. Typically, a teacher initiates talk by asking a question (I). A student then responds (R) and the teacher follows up with some feedback (F) regarding how well the student’s response meets the teacher’s expectation. IRF exchanges have been studied in secondary science contexts (Chin 2006; Salloum and BouJaoude 2017). In their synthesis of meta-analyses, Hattie and Timperley (2007, p.81) define feedback more specifically as ‘information that is provided from a range of different sources...that relates to aspects of the learner’s performance or understanding’. This definition highlights the role of information in feedback, but does not require the learner to use the information or to change their understanding as a result. As Chin recognises, only some responses require action from students (Chin 2006). Ruiz-Primo and Furtak (2007) found that students whose teachers use exchanges involving eliciting a question and student response, recognising the student’s response and using that information perform better in science than students in classes whose practice is more consistent with IRF exchanges, highlighting the importance of students as active participants in feedback processes when learning is the goal. This implies feedback as a practice in which teaching is modified based on identifying learner needs (Andersson and Palm 2017; Heitink et al. 2016), and in which the information provided has led to a change in the student’s thinking. In contrast to these studies, the present research analyses all oral classroom interactions in order to identify and understand the frequency and types of feedback practices used by teachers.

Ramaprasad (1983) and Wiliam (2011) argue that for information to be classified as feedback in an educational context, action must be taken by the student. This positions feedback as a process in which the learner has an active role to play and means that oral interactions must be considered in the context of what the student does with the information. For this reason, we have grounded our understanding of what constitutes oral feedback in an analysis of what students report that teachers say to help them learn. The sense in which we use feedback, then, is useful information that supports learning, relates to learning goals and to aspects of the learner’s performance or understanding, and is used to improve the student’s learning of science. This is consistent with Hattie and Timperley’s consideration of feedback as ‘information about the content and/or understanding of the constructions that students have made from the learning experience’ (Hattie and Timperley 2007, p.82).

Feedback is sometimes used synonymously with formative assessment. We draw a distinction between these. Formative assessment is used here to mean the process of obtaining information from students with the purpose of promoting learning (Black and Wiliam 2003), whether by teachers or the students themselves (Black et al. 2004). Feedback is an important component of formative assessment as it refers to the information provided to students in order to promote their learning (using the definition of Hattie and Timperley 2007). In other words, feedback is a necessary but not sufficient component of formative assessment. Feedback can also be provided outside a formative assessment situation, i.e. outside the intentional situation in which a teacher gathers information and evidence in relation to what a student knows or is able to do, in everyday classroom talk. However, formative assessment is not possible without feedback, because formative assessment is specifically intended to provide feedback to improve learning (Sadler 1998).

The characteristics of feedback matter. In their review, Black and Wiliam (1998) found that students learnt more when feedback contained specific information about strengths and weaknesses and how to improve. Research in language education has found that feedback containing the correct answer is effective, and that feedback containing an explanation is more effective still than correct-answer feedback alone (Butler et al. 2013). Hattie and Timperley (2007) found that feedback to the self (praise) was rarely effective. Although positive environments should be more conducive to constructive feedback, they do not necessarily bring about quality instructional support (Gamlem and Munthe 2014). Some studies have identified feedback practices which may be ineffective or even detrimental to learning. For example, a negative correlation has been found between rewards and performance on task (Deci et al. 1999), and praise may lead to children with low self-esteem avoiding important learning experiences (Brummelman et al. 2014). If feedback is to be considered as useful information about performance or understanding that relates to learning goals and supports learning, this evidence suggests that praise may not be feedback.

Hattie and Timperley (2007) propose a model in which feedback is characterised according to its focus. They define four levels of feedback: feedback about the task, the process, self-regulation, and the self. Hattie and Timperley (2007) argue that feedback to the self (of the type ‘well done, you are a good student’) is rarely effective and that feedback to the task, process and self-regulation are interrelated. Feedback to the process (how to complete the task) and to self-regulation (where students are asked to self-evaluate, for example) are thought to be most effective. Feedback to the task is thought to be effective where the task has been carefully designed with learning goals in mind. Hattie and Timperley’s model provides a useful way of understanding the effectiveness of different levels of feedback, but does not discuss feedback in real classroom contexts, nor does it examine what teachers actually do. For research to inform practice, it is important for teachers to have examples and resources, including examples of success criteria, and to know what different feedback strategies look like (See et al. 2016). It is therefore important to understand what teachers do at present.

One study that focuses on teachers’ practice is that of Voerman et al. (2012), who investigated the frequency of different types of feedback used by teachers in the Netherlands. They distinguished between two specific types of feedback: discrepancy (the difference between a student’s current level of performance and a desired level of performance) and progress (the difference between the current level of performance and an earlier level of performance), and classified feedback as positive or negative. They found that teachers in their study most often provided non-specific feedback, which they argue results in missed teaching opportunities, and their study prompts questions about classroom feedback in other contexts, in our case English science classrooms.

Theoretical and Conceptual Framework

Research into feedback has drawn on a range of theoretical perspectives on learning, including behaviourism and social constructivism (Thurlings et al. 2013). These are based on different sets of assumptions and therefore emphasise different aspects of learning (Agarkar and Brock 2017). This has implications for how feedback is understood. Behaviourist approaches focus on immediacy, correction and performance, whereas social constructivist approaches focus on helping students to build knowledge. Teachers, however, tend not to act in alignment with a single theoretical approach but rather use a range of practices in which one theoretical approach might dominate (Niederhauser and Stoddart 2001). The present study, in common with Mortimer and Scott (2003), recognises that science learning involves a change from the ‘everyday’ ideas that students hold towards a currently accepted scientific idea. This can be achieved in ways consistent with different learning theories. For example, in a social constructivist approach, students’ views are placed at the centre of analysis, recognising that talk plays an important role in learning (Driver et al. 2000), that learning is active, that students come to it with existing knowledge and ideas, and that a more knowledgeable other can shape the student’s understanding.

These theoretical assumptions can be used to characterise teachers’ practices. Working with primary teachers, Torrance and Pryor (2001) proposed a model to characterise approaches to classroom formative assessment, identifying two ideal-typical approaches: convergent and divergent. Convergent approaches, aligned with behaviourism, focus on discovering whether the learner knows, understands or can do a predetermined task, and are associated with closed questions, contrasting errors with correct responses and judgemental evaluation. In contrast, divergent approaches are aligned with constructivist theories, are associated with teaching in the zone of proximal development (Vygotsky 1986), and incorporate analyses of alternative views, open questioning and descriptive evaluation. These correspond to different notions of feedback: in convergent approaches, feedback is a unidirectional ‘gift’ from teacher to student and students are seen as passive recipients, whereas in divergent approaches students are participants and feedback is a more expanded discourse between teachers and students (Askew and Lodge 2000). These ideal-typical categories present a useful way of classifying what teachers do and how they use feedback. In this study, Torrance and Pryor’s ideal-typical model provides the basis for analysis of teachers’ perceptions and practices, although not for classifying teachers themselves. Teachers use a wide variety of approaches in different situations and use both convergent and divergent approaches.

Methodology

This study is grounded in real classroom life. We were interested in teachers’ use of oral feedback, i.e. information in relation to a learning goal, shared orally, that relates to aspects of the learner’s performance or understanding and which provokes a response from learners. We therefore started from the perspective of students and grounded the analysis in categories of feedback based on their reports of what helped them to learn. The case study teachers were interviewed, observed and audio-recorded. Students were interviewed at the end of each lesson, and these interviews were analysed before the audio-recordings of lessons, which were then analysed using a framework derived from definitions of feedback in the literature and from students’ interview responses.

Research Questions

The study aims to contribute to the understanding of how science teachers conceptualise and practise feedback. The research questions guiding the study were (i) how do science teachers define feedback and perceive their oral feedback practices? (ii) what are students’ perceptions of how teacher oral interactions help them learn? and (iii) to what extent, and in what ways, do science teachers provide oral feedback to students?

Participants

This study was carried out in two purposively sampled mixed comprehensive ‘outstanding’ state schools for students aged 11–18 in England. The Ofsted (Office for Standards in Education) inspection framework states that in a school graded as outstanding ‘teachers provide pupils with incisive feedback’ and that pupils use feedback effectively. Although the grading of a school is not a judgement on individual teachers, it was anticipated that a range of effective feedback practices would be observed in schools graded as ‘outstanding’.

All science teachers in these schools were invited to participate, and 3 out of 10 science teachers in one school and 7 out of 13 in the other consented to take part, resulting in a sample of 10 teachers (Table 1).

Table 1 Characteristics of case study teachers

A purposive sample of 84 students was selected by identifying individuals who had engaged in an individual oral interaction with their teacher. These students were invited for interview at the end of the observed lesson. In schools in England, ‘feedback’ is commonly interpreted as written comments on students’ work, to the extent that the school inspectorate (Ofsted) has issued clarifications and guidance to bust ‘myths’ about what they expect to see in relation to feedback on inspection (Ofsted 2018). We wanted to ensure that data pertinent to oral feedback was collected; therefore, students were asked what the teacher had said that helped them to learn, as well as what they thought the teacher had wanted them to learn, what they had learnt, what had helped them and what they did as a result of what teachers had said to them. This is in common with the approach taken by Tunstall and Gipps (1996), and contrasts with Voerman et al. (2012), who centred their analysis on the perspective of the feedback providers (teachers) rather than the feedback recipients (students). Interviews were necessarily brief (lasting a maximum of 10 minutes) and took place between lessons to minimise disruption to students’ routines. Although students identified a range of experiences that had helped them to learn, including visuals and peer collaboration, this study focuses only on their responses that relate to what teachers said. Whilst there are limitations in using perceptions as a proxy for what students actually learnt, some studies have shown that students are capable of conceptualising and articulating strategies and processes that are beneficial to their learning (Gipps et al. 2000; Williams 2010), and this approach has been used elsewhere (cf. Eriksson et al. 2017; Peterson and Irving 2008) where learning outcomes are varied and the demands of authentic teaching situations militate against robust measures of learning. Whilst this needs to be considered in interpreting the findings, we interviewed a large number of students to account for the many different things that students might report learning. This is important because individual students learn quite different things in the same class, as they come to it with different background knowledge and experiences (Nuthall 2007).

Data Collection

Teachers were asked about their understanding of feedback and its purpose, and to describe their oral feedback practices, during interviews lasting between 30 and 90 minutes. We also asked about their teaching experience, their experiences of CPD and the challenges they experienced in relation to feedback.

All teachers were observed for 3 (n = 2) or 4 (n = 8) hour-long lessons to allow continued and repeated observation of their practice with different groups of students. Teachers were asked to teach as they usually would to ensure high ecological validity. A total of 38 hour-long lessons were audio-recorded, with field notes kept.

A sample of students, purposively selected as above, was interviewed immediately after each observed lesson.

Analytical Framework

All student and teacher interviews were transcribed and analysed thematically using an inductive approach prior to the analysis of classroom observations. Each interview was transcribed verbatim, and each utterance coded. Coding of teacher interview transcripts was organised around two central categories: convergent- and divergent-type conceptualisations of feedback. Student interview transcripts were used to identify the types of oral interaction that students reported helped them learn (namely discrepancy comments, sharing success criteria and open questions). These were subsequently used to identify oral feedback in the analysis of classroom observations. Interview transcripts were analysed by an independent researcher using the codes provided. This generated a value of 0.77 for Cohen’s kappa, indicating excellent agreement (Cohen 1968).
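For readers unfamiliar with the statistic, Cohen’s kappa expresses agreement between two coders after correcting for the agreement expected by chance. In its unweighted form (a minimal statement for reference; the notation here is ours):

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the observed proportion of utterances coded identically by the two coders and $p_e$ is the proportion of agreement expected by chance from the coders’ marginal code frequencies. A value of 0.77 therefore indicates agreement well above chance.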

Rather than coding live, we developed a post-observation analytical framework, drawing on the codes from student and teacher interviews in order to identify different types of oral interaction, including those classified as feedback, i.e. information in relation to a learning goal, shared orally, that relates to aspects of the learner’s performance or understanding, that provokes a response from learners and that corresponds to what students reported teachers said that helped them to learn. These are presented in Table 2, along with definitions and examples. This is distinct from existing frameworks (cf. Hattie and Timperley 2007; Ruiz-Primo and Furtak 2007) as it is grounded in students’ perceptions of what helped them to learn, which implies the use of the information on the part of the student. Previous studies of oral feedback in school science (Chin 2006; Ruiz-Primo and Furtak 2007) have imputed cognitive processes onto students in describing feedback interactions as such. We instead asked students to reflect on what they learnt, how they learnt and specifically what the teacher said that helped them learn, in order to identify feedback interactions from the perspective of students. These responses were used to inform the construction of the framework used to analyse observations. To be coded as feedback, an oral interaction had to meet both criteria: it met the definition of feedback above, and it was of a type students reported had helped them to learn.

Table 2 Categories of oral interaction observed in science lessons

It was important to identify the range of types of oral interaction observed because (i) we were interested in the extent to which teachers used feedback in their oral interactions with students and (ii) scholars operationalise ‘feedback’ in different ways, with some (cf. Askew and Lodge 2000) classifying almost all oral interactions as feedback. Identifying and coding all teachers’ oral interactions allows the findings to be interpreted by those using alternative conceptualisations of feedback. All oral interactions were classified, counted and identified as taking place within either whole class or small group/individual teaching using the framework developed for the study (Table 2). Only the type of interaction was recorded, not its length. The analytical framework, including the codes, definitions and examples from Table 2, was shared with an independent researcher who coded the observations of three teachers. A Cohen’s kappa value of 0.79 was achieved, indicating excellent inter-rater agreement (Cohen 1968).

Results

The results are presented in three sections: teachers’ perceptions of feedback, student perceptions of feedback and analysis of teachers’ feedback practices, set in the wider context of all teachers’ oral interactions.

Teacher Perceptions of Feedback

Interviews with teachers revealed a range of conceptualisations of feedback. They described oral feedback as an immediate, non-threatening, two-way interaction which is specific. Teachers identified different purposes of feedback relating to improvement and learning, namely to improve task performance, to improve learning, or to improve student autonomy. Teachers identified three main feedback practices: asking the students questions (9 teachers), assessing current levels of understanding (6 teachers) and promoting independent learning, in particular by not giving students answers (5 teachers). Teacher conceptions of feedback were organised into two main categories drawing on Torrance and Pryor (2001).

Convergent-Type Conceptualisations of Feedback

In convergent-type conceptualisations of feedback (Torrance and Pryor 2001), learning-as-attaining-outcomes (Hargreaves 2005) was evident in teachers’ responses. These responses prioritised providing students with answers, for example:

Just checking and getting a feel that they’ve understood what the point of the lesson was … I would make a note in my planner if I felt that they’d not done it and then we’d go over it next lesson. (Charis)

Similarly, in convergent-type responses, there was a focus on assessment, or finding out what the students know, and on what teachers would do rather than on what they expected students to do in response, for example:

At the beginning of a lesson [I] do some work and check on understanding. (Isobel)

I’ve got a few things for getting feedback, I’ve got the mini whiteboards, I’ve got some little coloured cards in sets that are red, green, yellow with true, false and nobody knows on one side and ABC on the other…so that’s where we’re actually polling everybody [about] what are you thinking. (Jacob)

Teachers who had more convergent-type conceptualisations tended to speak at a very general level about students being engaged rather than about what they were learning or thinking during feedback interactions:

If I’m going round the classroom they know they can ask me questions pretty much on anything, but I do try and generally have it related to stuff and assessing how they’re thinking about things, and to check that they are actually thinking about it rather than just passively taking in this information (Henry)

That said, teachers were often aware of the limitations of their oral feedback approaches, for example:

Using the hands up approach, that tends to be one of my fall backs, of going, ok, like ‘Hands up what do you think about?’ … but generally your quality of feedback from that is fairly poor… you can get anywhere between everyone having a go and or the entire class sitting there completely stone faced. (Jacob)

Some teachers reported that they preferred written feedback situations as these allowed for greater specificity and a better understanding of what students know and are able to do:

I do really like the DIRT [Dedicated Improvement and Reflection Time] time we call it, cos like I said I do think that is really valuable. (Charis)

I try to give feedback in the lesson but that’s often more generalised … but written feedback is where I really get to know my students better. (Kris)

Teachers who described their practice in ways consistent with a convergent understanding of feedback perceived that feedback was important, but found it easier to engage in specific learning conversations in a written format. This indicates a need for professional development on the ways in which oral interactions can be better used to promote learning.

Divergent-Type Conceptualisations of Feedback

In divergent-type conceptualisations of feedback (Torrance and Pryor 2001), learning-as-the-construction-of-knowledge (Hargreaves 2005) was evident in teachers’ responses. These teachers saw their role as helping students to generate their own solutions:

I help them get that information from somewhere else, like their friend or somewhere else, about something that’s going to help them move forward from where they're at with their learning. (Eric)

Although teachers with more divergent-type conceptualisations also valued finding out what students knew, their focus lay on helping students to understand their own errors and helping them to improve:

Like a two way kind of process, the teacher gives them some advice and then they improve a piece of work or they answer a question with more detail, something for them. (Belle)

There’s no point giving it if you’re not expecting some response from it, do you know what I mean? Otherwise you’re just saying ‘Oh yeah that was just fine’ and they won’t bother doing anything else with it. Yeah, you’re giving them feedback because there’s something that you want them to add or to do. (Flora)

Flora and Belle emphasise student action (adding or doing) in response to feedback, whereas Dillon focuses more on what students are thinking:

Feedback for me is looking at an outcome, spotting patterns, checking for misunderstandings and then helping the student to realise what mistakes they have made…. hopefully getting them to come up with the corrections and the changes in their understanding... I do like it when the students have light bulb moments, and a light bulb moment is not me giving them the answer but they come up with the answer themselves. (Dillon)

Although not mutually exclusive, this difference in emphasis on doing and thinking is important. Where the focus is on task performance, it is important for tasks to be designed in alignment with what teachers want students to learn.

Garry indicated that requiring students to take a central role was unpopular with some students:

I get them to think about it so they kind of create their own feedback in a way, I just give them a nudge in the right direction... which winds some of them up, ‘cos they just want the answer, but I find you are more likely to see that eureka moment where they go ‘Oh yeah!’ (Garry)

Specific practices that teachers with divergent conceptualisations of feedback reported using were: assessing student understanding and indicating to students what they were doing well or what they could improve, i.e. identifying progress made and work needed to extend it (Garry); promoting independent learning (Eric); and questioning (Flora):

It’s communicating to students either what they have done well so they can continue doing that, or what they could do better so that they could make improvements, some way of looping back to them. (Garry)

In some ways I’m not giving him feedback because I’m not really telling him anything, but at the same time you’re creating a context in which he is actually getting feedback, it’s just he’s doing it himself as he’s going through. (Eric)

I think questioning. Yeah if you go and look at someone’s work and it’s looking shocking, how do you think you can make that look a little bit better? You know, have you included this? Did you remember to make sure that? You know, those sorts of questions so they can look at it and now think ‘Oh yeah that’s not right or that bit’s not there’. (Flora)

Teachers with a divergent conceptualisation of feedback and an understanding of learning-as-construction-of-knowledge were able to describe specific practices that they believed helped students to learn. These teachers also reported challenges with their oral feedback practice, most notably balancing the desire to help everyone with limited time given the class size:

I give feedback every lesson but I don't think I could have enough time to see everyone in the class if I was giving personalised feedback to everyone, so it's usually the people who are struggling or are confident enough to ask, or they're interested enough to ask, are the ones I give oral feedback to. (Belle)

Of the ten teachers, Belle, Dillon, Eric, Flora and Garry had more divergent views about feedback whereas Henry, Isobel, Jacob, Kris and Charis had more convergent conceptualisations. Previous research in science education has found that teachers’ perceptions are not always consistent with their practices (Mansour 2013), so it is important to interpret teachers’ responses in relation to their practice. In contrast to other research on teachers’ perceptions of feedback (cf. Hargreaves 2005), these teachers did not discuss feedback in relation to boosting self-esteem or inspiring or motivating students, although some noted that their approach was based on sensitivity to individual needs, for example:

You’re registering to them, you know, ‘I value the importance of your understanding and I’m going to engage with you as a person to help you move forward’. So I think there’s a lot there about relationships as well. (Eric)

The following section reports students’ accounts of what the teachers in the study said to help them learn, and contrasts these with teachers’ perspectives.

Student Perceptions of Feedback

Students identified a number of interactions with the teacher that they perceived helped them to learn, many relating to instructional design (Table 3). Indeed, students widely reported these non-oral interactions as being important in helping them learn—more so than what teachers said. However, this study focuses on the oral interactions that help students learn. In this section, student comments are attributed according to the teacher and lesson number.

Table 3 Student reports of what helped them learn in their science lesson

Students reported that they learnt when teachers identified errors or misunderstandings and gave them ideas about how to improve:

She picked up a few things, we did it in centimetres by accident when we wrote the results down, so that helped us to realise we had done it wrong so we could do it right again. (Charis, Lesson 3)

This type of discrepancy comment, in which students find out what they have done incorrectly, what they have misunderstood or what they need to do to improve, was identified most frequently as an oral interaction that helped learning. This supports literature from outside science education that argues that discrepancy feedback improves learning (Hattie and Timperley 2007; Kluger and DeNisi 1996; Shute 2008; Voerman et al. 2012). Interestingly, these students did not identify comments that identified what they had achieved (progress comments) as important. Such comments have been found to be influential in terms of improving learning strategies and motivation (Voerman et al. 2012).

Open questions were identified by students as important in helping them learn (10 references for 6 lessons). For example:

Yvette: He asks us questions and makes us think for ourselves, I think that’s one of the things about Sir, he kind of like makes us, you know, and asks us and then points to people and says what’s this?

Zoë: Yeah, yeah instead of saying the answer and stuff yeah. (Dillon, Lesson 3)

Some students linked this open questioning and lack of answer provision to their developing independence, for example:

Euan: It’s the instead of, here’s the answer, think about it and get it done yourself because in an exam we haven’t got Sir just stood there. (Garry, Lesson 3)

When discussing open questions, two characteristics were important to students: the questions made them think, and students were made to work out answers for themselves. For example:

Charlotte: Well when he didn’t give us the answer and we kind of had to work it out on our own.

Daisy: And that’s probably a good thing. (Garry, Lesson 1)

Open questioning, in which teachers inquire about what the students understand in order to develop their understanding further, is consistent with Torrance and Pryor’s (2001) divergent approach. Open questions have been found to require further cognitive input from students (Chin 2006), so it is unsurprising that students report that this type of question helps them learn.

Comments about success criteria were also important for students. These typically related to conversations about rubrics or mark schemes. For example:

What we needed to do to get the distinction, we were doing the pass stuff today but going to like, exactly what we needed to do. (Flora, Lesson 1)

The student below discusses the importance of knowing how answers are judged in an examination context, i.e. process level feedback:

It was when he talked to the whole class and like pointed out in each answer where the marks have come from. So it was kind of helpful for him to say … ‘Oh you’ve got full marks but it’s when you know why you’ve got full marks or no marks’. (Eric, Lesson 4)

The comments from both Eric’s and Flora’s students relate to externally defined success criteria. In common with Sadler (1989), these comments suggest that students find value in discussing the desired quality of work. This theme did not emerge from interviews with teachers in relation to discussions about their feedback practice.

There may be other interactions that helped students learn but students did not mention these in interviews. Some students were unable to identify anything that the teacher said to help them learn:

Not really helped me learn, it helped me do the task yeah, but I think things to help you learn you sort of find that on your own. (Isobel, Lesson 2)

This highlights the importance of task design. Of course, more of what the teachers said may have resulted in learning either directly or indirectly, but in this grounded approach to understanding feedback, the framework was developed only from the interactions that students reported to help them learn. Feedback is just one component of teachers’ practice—through their use of visuals, design of collaborative work, use of practical work and independent practice, students identified that these teachers were helping them learn.

Student reports of what teachers said that helped them learn corresponded well with how teachers described their practice in terms of the use of open questions and discrepancy comments. There was less correspondence in relation to progress comments. Teachers reported this as a feedback practice, but it did not emerge as a key theme from student reports. Conversely, students identified discussion of success criteria as important, whereas teachers did not mention this in relation to their practice, although it may have been implicit where teachers discussed moving or nudging students ‘in the right direction’. In terms of Hattie and Timperley’s (2007) levels of feedback, students did not identify feedback to the self as useful, whereas feedback to process or self-regulation were seen as useful (see quotes relating to Garry and Eric above), and feedback to the task as dependent upon the nature of the task (see quote relating to Isobel above).

In the section that follows, we examine teachers’ feedback practices in light of these findings. The categories of oral interaction that are identified as feedback are those in which the teacher said something that provided information that led to student (internal or external) action in relation to learning, i.e. open questions, comments about success criteria and discrepancies.

Teacher Practice

Categories of oral interaction classified in the analysis of lesson recordings, along with examples drawn from the qualitative data, are presented in Table 2. In coding teachers’ oral interactions, sensitivity to the teachers’ purpose in terms of what they wanted students to learn was required, and this was inferred from recordings, interviews and field notes. We applied two criteria: (i) that students reported the type of oral interaction helped them learn and (ii) that the interaction met our definition of feedback, i.e. useful information which supports learning, relates to learning goals and to aspects of the learner’s performance or understanding, and is used to improve the student’s learning of science. As a result, three types of oral interaction were identified as feedback in this study: discrepancy comments, sharing success criteria and open questions. As Table 4 shows, feedback interactions make up a minority of all teachers’ oral interactions.

Table 4 Frequency of all oral interactions across all teachers and lessons

Table 5 shows the distribution of these different interaction types across the ten case study teachers, presented in columns from the most common to the least common, and in rows from most feedback interactions to fewest. All teachers engaged in oral feedback practices. These constituted a minority of classroom oral interactions. Much teacher talk focused on telling, e.g. giving task instructions and describing or explaining content or asking closed questions in relation to this content. The types of oral interaction identified by students as helping them learn were discrepancy comments, comments on success criteria and open questions. These made up 3.3%, 5.8% and 12.3% respectively of all teacher talk across these classes. There was considerable diversity amongst teachers. Eric, Garry and Dillon used oral feedback most frequently and were teachers who also held divergent conceptualisations of feedback. It is important to note that more feedback is not necessarily better. At high frequency levels, feedback can reduce effort on task, and thereby reduce task performance (Lam et al. 2011).
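Taken together, the three feedback types therefore account for just over one fifth of all teacher talk (a simple check of the figures above):

$$3.3\% + 5.8\% + 12.3\% = 21.4\%$$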

Table 5 Frequency of types of oral interaction for each teacher

Questioning constituted a major part of the oral interactions that occurred within the teachers’ lessons, and open questions were the most frequently used oral feedback practice. However, only Eric and Garry asked more open than closed questions during their lessons. This aligns with previous research finding that cognitively demanding questions that challenge students to think for themselves are rarely used (Alexander 2014).

Although teachers did not mention success criteria, or indeed learning goals, in relation to their feedback practices, students reported that these comments helped them to learn, consistent with findings from higher education research (Rust et al. 2003). Teachers might consider their learning goals implicit, but our findings suggest a greater focus on sharing these might be helpful for students.

Discrepancy comments made up only 3.3% of all oral classroom interactions. For most teachers, this was the least common oral feedback practice. This is of interest because this type of comment was most frequently reported by the students as being helpful. Laboratory studies involving undergraduates have found that such comments improve student performance by helping students to set goals for themselves at a higher level of performance than their current level (Nicklin and Williams 2011). If teachers could find a way of incorporating more discrepancy comments into their interactions, there is the possibility of making classroom talk more powerful for students.

Table 6 presents the contexts (whole class or individual/small group) in which the three types of oral feedback interaction were used. Fifty-six percent of oral feedback interactions occurred in individual or small group contexts. Comments on the success criteria were seen more frequently in whole class interactions, whereas discrepancy discussions and open questions were observed more frequently in small group or individual interactions. Teachers perceived that they used feedback more in small group and individual interactions, but this was the case only for discrepancy feedback and open questions.

Table 6 Oral feedback in whole class and small group/individual situations

The finding that oral feedback constitutes only around one fifth of all oral interactions is important because, whilst much teacher talk might support activities that help students learn, there are ways for teachers to make greater use of oral feedback interactions, even in whole class situations.

Characterising Feedback Practices across Teachers

There was considerable diversity amongst the case study teachers. This diversity is linked to how teachers empower and enable students to act within the feedback process, and how they use the different oral feedback types within it, and can be aligned with the two ideal-typical approaches to assessment: divergent and convergent (Torrance and Pryor 2001). In both divergent and convergent interactions, the students are active, as they are required to respond and improve their performance as a consequence of oral feedback taking place. Alongside this, the oral interactions are two-way, as they require both teachers and students to interact with each other. However, these two-way discussions may result in unidirectional information being provided, i.e. from the teacher to the student, rather than a true two-way co-construction in which the students could direct their next actions. Indeed, all three oral feedback types could be used in either a divergent or convergent way. It is therefore not just what the teachers discuss with their students, but also how they construct these interactions, that is important.

Two brief but representative vignettes have been selected from the 38 hours of recorded lessons to illustrate the different ways teachers used the oral feedback types. Both represent interactions relating to difficulties arising from practical work.

Garry Lesson 3

Whilst carrying out some practical work, a student spots that their results are not showing the pattern they expected.

Phil: ‘Sir I don’t think I’ve done it right, I think it might be a bit wrong’.

Garry: ‘Why?’

Phil: ‘I don’t know’.

Garry: ‘So what are you going to do?’

Phil: ‘Sulk. No, I need to charge it all up again and I need to start again and turn the wires around. I really don’t see the point.’

Garry: ‘Of what?’

Phil: ‘This experiment’.

Garry: ‘Well the point of the experiment is to gather results and to be able to do the section B paper, and overall in the long term is to teach you the skills of how to carry out an experiment. Already you have noticed that is not fitting a pattern, not everybody can do that, you have actually used your judgement to say that is not correct and I need to start again. That’s good.’

Phil then continued with his practical work and resolved the issue he had encountered. Garry interacts with his student in a divergent way. He asks the student open questions and makes the student identify what they need to do to improve the data they have collected. Garry then concludes by describing the behaviours that this experience is developing in the student that will be transferable to future learning. This is consistent with research in mathematics education which has found that the more feedback includes information and cues on how to proceed, the more students elaborate on content and feel interested in the subject (Rakoczy et al. 2008).

Charis Lesson 4

In contrast, Charis displays a more convergent type of practice. Whilst writing up conclusions after some practical work, a student is having difficulties in evaluating her method.

Heather: ‘What should I say Miss about the problems?’

Charis: ‘So just say it wasn’t a perfect experiment to carry out, which could have led to these anomalies.’

Heather: ‘So it wasn’t a perfect experiment’...

Heather: ‘Miss, I’ve put for my primary error it couldn’t have been a perfect experiment because there could have been issues with the elastic band?’

Charis: ‘With pulling it, it’s the amount of force you use isn’t it’.

Charis uses directive teaching and closed questions, and provides answers for Heather. She does all the explaining; Heather’s required engagement is procedural rather than intellectual, and the interaction is more monologic. Charis engages the student in a convergent way, missing the opportunity to make Heather think because she is given the answers needed to complete the task. The student is reliant on the teacher, and subsequently completes her conclusion using the teacher’s answers.

From these two examples, the different behaviours of the teachers and students in oral interactions can be seen to align with divergent and convergent ideal-typical approaches. However, as Torrance and Pryor (2001) noted, these approaches are not necessarily mutually exclusive, and evidence of both oral interaction approaches did occur across all of the teachers throughout their lessons.

Discussion

This case study of ten teachers sought to understand what oral interactions were acting as feedback, and the prevalence of feedback in teachers’ oral interactions in science lessons.

We found that alongside questioning, the most frequent oral interactions undertaken by the teachers were directive teaching and praise, which made up approximately 77% of all oral interactions. Directive teaching is associated with telling students what to do rather than facilitating discussions, and is coupled with unidirectional, transactional information such as giving task instructions, providing the answer or explaining scientific ideas to students. Given the practical nature of science and the importance of safeguarding during this work, this is perhaps unsurprising.

From a review of the literature on feedback and interviews with students, we define feedback as useful information which supports learning, relates to learning goals and to aspects of one’s performance or understanding, and is used to improve the student’s learning of science. Drawing on this definition and on the student interviews (which indicate what information helped them to learn), we found that open questions, discrepancy comments and success criteria interactions are types of oral interaction that can act as feedback. These are used infrequently, accounting for only 21% of all classroom interactions. Discrepancy comments and open questions were used more frequently in individual and small group situations, whereas success criteria interactions were provided more often in whole class interactions. This was consistent with teachers’ perceptions: they reported a preference for oral feedback in individual or small group situations, which was when they were observed providing discrepancy feedback and asking open questions; however, they did not identify discussions about success criteria as feedback, and tended to use these more frequently in whole class situations.

None of the teachers had either discrepancy or success criteria interactions among their three most frequent types of oral interaction. That is not to say that the teachers did not engage in these types of interaction; on the contrary, they all did so to varying degrees. However, these interactions were less common than other types of oral interaction. This finding concurs with other studies in which convergent evaluative interactions were most common (Chin 2006; Gamlem and Munthe 2014; Knight 2003; Ruiz-Primo and Li 2013). The teachers with more convergent practices tended to use closed questions more frequently, with the teacher providing the answers or explanation and, as such, doing the thinking for the student. A minority of teachers engaged with learners in joint activities and used open questions through which students were challenged to think and to construct and reconstruct explanations in order to develop scientific understanding.

The least used oral feedback interaction for eight of the ten teachers was discrepancy feedback. This is of interest as discrepancy feedback was the oral interaction type cited by most students as being helpful for their learning. If teachers could find a way of using the time that they have with learners to increase the number of interactions focusing on discrepancy feedback, rather than on some of the other forms of oral interaction perceived as being less beneficial to learners, there is the possibility of making classroom talk more effective for both teachers and students, with a shift away from feedback at the self level (which students did not report as helpful) towards feedback at the process or self-regulation levels.

In common with Voerman et al. (2012), we find little use of specific feedback in science classrooms, with just over a fifth of interactions classified as feedback and discrepancy comments provided least frequently among these. Although our students did not report that progress comments helped them learn (and these were therefore not classified as feedback), these were the least commonly used oral interactions across the teachers in the study. However, as Hattie and Timperley (2007) point out, in some circumstances instruction is more effective than feedback: feedback must build on something.

As a result of the cross-sectional analysis of findings generated from the data, this study contributes to our understanding of oral feedback by developing Torrance and Pryor’s (2001) model of ideal-typical approaches to assessment. We have elaborated this model, setting out the practical and theorised implications of both ideal-typical approaches to feedback (Table 7). In common with Torrance and Pryor’s (2001) classroom assessment model, teachers’ feedback repertoires drew on multiple practices aligned to differing learning assumptions, indicating that the ideal-typical approaches to feedback are not ‘necessarily mutually exclusive in practice’ (Torrance and Pryor 2001, p. 616).

Table 7 Elaboration of Torrance and Pryor’s (2001) ideal-typical model of approaches to assessment and application to oral feedback practices

The present study raises the possibility that approaches associated with divergent feedback practices, aligned to constructivist learning theories, may be more beneficial for students’ learning in science. A divergent feedback process is more likely to involve co-constructed feedback loops (Askew and Lodge 2000), dialogic rather than monologic interactions, and the locus of responsibility being shifted towards the students. This increases student independence and empowers them to behave as co-agents: being sources of evaluative knowledge, co-constructing the way forward with the teacher and, most importantly, being made to think. Such interactions align with constructivist theories and views of learning, which are thought to be more effective in helping students to learn in science. Conversely, a convergent feedback process is more likely to involve feedback provided as a gift to the student (Askew and Lodge 2000), directive forms of teaching, and the locus of responsibility remaining with the teacher. The students are treated and behave as recipients, with the teacher being the source of evaluative knowledge whilst directing students how to act and doing the thinking for them. Such interactions have roots in a behaviourist view of learning, perceived to be less effective for learners in science.

On the whole, teachers’ perceptions of their oral feedback practices were inconsistent with their behaviour. For example, whilst they described progress information as important, this was rarely used in their teaching. A noteworthy difference between teachers’ conceptualisations of feedback and the definition derived from the literature related to the inclusion of discussions of learning goals and the success criteria attached to them. As learning goals are perceived to be a vital facet of feedback (Askew and Lodge 2000; Hattie and Timperley 2007; Voerman et al. 2014), the exclusion of this aspect was notable in the teachers’ perceptions, particularly as it was found to be important for students. Current practices appear to address the feedback dimensions of ‘How am I going?’ and ‘Where to next?’ (Hattie and Timperley 2007), but seem to be lacking with respect to addressing the question ‘Where am I going?’ One reason for this may be the predominance of feedback practices within the current English education system that encourage teachers to provide progress and discrepancy information as W.W.W./E.B.I. (what went well/even better if). These practices involve teachers and students reviewing progress or noting action to undertake to improve in relation to a goal, but do not explicitly discuss what quality would look like with respect to the learning goal.

Conclusions

This study contributes to our understanding of oral feedback in science classrooms, and specifically to the prevalence of feedback in everyday classroom talk. Rather than using existing frameworks, which commonly classify the majority of teacher utterances as feedback, this study presents an analysis of teachers’ practice grounded in what students say helps them learn, and identifies three specific feedback practices, namely open questions, comments about the success criteria and discrepancy comments. This attention to learning as purpose allowed us to identify and exemplify specific practices that are more likely to lead to learning. These are useful to highlight in teacher education and during classroom observations carried out as professional development.

The analysis of teachers’ practice found that non-feedback oral interactions such as directive teaching (describing, explaining and issuing task instructions), closed questions and giving answers dominate in science classrooms. Approximately one-fifth of all oral interactions were identified as feedback, with open questions the most commonly used oral feedback practice, consistent with findings from the Netherlands. There was diversity in teachers’ practice, with those who had more divergent conceptions of feedback using feedback more frequently than those with convergent conceptions. This study can help educators (teachers, providers of professional development and teacher educators) outside of the English context identify oral interactions that are more productive in terms of their likely impact on learning in science lessons.

We asked: how do science teachers define feedback and perceive their oral feedback practices? Teachers identified feedback as a two-way interaction between themselves and their students, and as a process to support improvement in learning or performance of a task. They tended not to discuss learning goals or intentions. Our research suggests that encouraging teachers to share goals or intentions could have a positive impact on learning. Teachers with convergent conceptualisations of feedback emphasised finding out what students knew, and helping them find the correct answer by telling. They tended to speak about learning in a non-specific way. Teachers with divergent conceptualisations of feedback saw their role as provoking students to find answers for themselves through specific strategies such as open questions, discrepancy comments and suggestions about what students could do to reach a better understanding. This suggests a need for more specific discussions about the nature and types of feedback in teacher education, with exemplification and modelling to support this. The exemplification in this study provides a resource for such work.

Our second research question asked what students’ perceptions are of how teacher oral interactions help them learn. The types of oral interaction identified by students as helping them learn included discrepancy comments, comments on success criteria and open questions. This needs to be interpreted in light of the finding that other, non-oral, practices were reported more frequently as helping them learn, and that much of the classroom discussion related to supporting these activities. Oral feedback might not always be the most appropriate strategy. For example, in chemistry, feedback involving models has been found to be more effective than oral feedback (Padalkar et al. 2015), and this was reflected by students in this study, who found visuals to be important in their learning. Likewise, some of the case study teachers preferred to provide written feedback, consistent with findings that open-format prompts elicit a greater range of student responses when presented in written rather than oral format (Furtak and Ruiz-Primo 2008). That said, there may be value in encouraging teachers to reflect on their classroom talk in order to identify ways in which they can increase the proportion of feedback in lessons. Recording technologies such as IRIS Connect could be used to support this self- and peer-reflection on teacher talk.

Finally, we aimed to ascertain to what extent, and in what ways, science teachers provide oral feedback to students. In an analysis of teachers’ practice, we found that all teachers used oral feedback during their teaching. However, only a minority of teacher talk in science classrooms was identified as feedback (on average, 21% of all talk); rather, task instructions, closed questions and the direct teaching of science dominated. Of the types of oral interaction that were considered feedback, open questions were most frequently used, followed by discussions about success criteria and discrepancy comments.

This study demonstrates that there is considerable diversity amongst teachers in the way they perceive and use oral feedback practices. Those who make greater use of oral feedback have divergent conceptualisations of feedback, grounded in constructivist assumptions in which students have an active role, whereas those who make the least use of oral feedback have convergent conceptualisations, grounded in behaviourist assumptions in which students have a passive role. Attention to specific feedback practices through continuing professional development, including peer observation, could help teachers and those involved in teacher education shift practice towards the type of interaction that is more likely to engage students with learning. However, this should not come at the expense of other ways of teaching: the key is to shift the locus of responsibility from teachers towards students, so that the role of teacher interaction is to promote thinking by the student.

Much research on feedback has been carried out in laboratory or higher education settings, with less attention to practice in real science classrooms for the duration of whole lessons. Bridging the gap between research and practice by carrying out research in authentic settings is important if research is to have relevance to what teachers do. However, studying classroom interactions is challenging: the context in which teachers are most accustomed to being observed, i.e. in relation to high-stakes internal and external assessments of performance, makes it difficult to recruit participants.

Observation of classroom talk is inferential at best, as it does not reveal what is happening in the minds of teachers and learners, and it requires an understanding of the function of what was said. Classifying oral interactions requires interpretation on the part of the observer, which presents a difficulty given the ambiguity of meaning and the way meanings change over time (Mercer 2010). Several steps were taken to address this. Student and teacher interviews were carried out to gain insights into participants’ thinking, to put them at ease, to demonstrate awareness of classroom customs and pedagogical challenges, and to emphasise the researchers’ separation from the institutional community and from accountability mechanisms.

The analytical framework used presents a tool for analysing oral feedback practices based on a stated definition of feedback and grounded in student reports of what teachers say that helps them learn. The framework discriminates well between individuals and, therefore, has the potential to be used to stimulate discussion about departmental practices and to highlight individuals with particular strengths.
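To illustrate how such a framework might be operationalised for self- or peer-reflection, the sketch below tallies lesson utterances that have already been coded and reports the proportion classified as feedback. It is a minimal illustration only: the category labels, function name and example data are assumptions made for the purpose of the sketch, not the coding scheme used in this study.

```python
from collections import Counter

# Illustrative category labels only; the study's exact codes may differ.
FEEDBACK_CODES = {"open_question", "success_criteria", "discrepancy_comment"}
NON_FEEDBACK_CODES = {"directive_teaching", "closed_question", "gives_answer", "task_instruction"}


def summarise_coded_talk(coded_utterances):
    """Tally coded utterances for one lesson and return counts plus the
    proportion of talk classified as feedback."""
    counts = Counter(coded_utterances)
    total = sum(counts.values())
    feedback_total = sum(n for code, n in counts.items() if code in FEEDBACK_CODES)
    return {
        "counts": dict(counts),
        "total_utterances": total,
        "proportion_feedback": feedback_total / total if total else 0.0,
    }


# Hypothetical coded transcript for one lesson, for illustration only.
example_lesson = [
    "closed_question", "gives_answer", "directive_teaching",
    "open_question", "success_criteria", "directive_teaching",
    "discrepancy_comment", "task_instruction", "closed_question",
]
print(summarise_coded_talk(example_lesson))
```

Comparing such summaries across teachers or lessons is one simple way a department could ground discussion of oral feedback in evidence from its own classrooms.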

Notwithstanding the relatively limited sample, this work offers valuable insights into oral feedback in authentic science classrooms in England. It extends Torrance and Pryor’s (2001) ideal-typical model of approaches to assessment to approaches to oral feedback, with associated implications for practice, and provides a framework that can be used to analyse the totality of teacher talk in science lessons in other national contexts.

Finally, the findings of this study suggest that there may be particular types of oral interactions and divergent practices, occurring during dynamic interactions between teachers and their students, that are beneficial in helping learners develop understanding in science. These include: more ‘visible’ discussion throughout lessons of the learning goals, including what quality and success look like in relation to them; increased use of all three oral feedback types (discrepancy feedback, success criteria interactions and open questions); encouragement of dynamic student behaviours, so that students are cognitively engaged, active and involved in generating evaluative knowledge that identifies how they should subsequently act; and the promotion of student-directed learning, especially by not providing answers, so that students are provoked to work on their own or with peers. We recommend that future studies examine whether there is an association between the frequency of use of the oral feedback categories and student learning outcomes, and whether professional development on the use of feedback has a positive impact on those outcomes.

The findings will be of interest to policy makers, practitioners, researchers and those involved in providing professional support for educators. If sustained change in practice is to be the aim of professional development for teachers, the focus needs to be on shifting teachers’ beliefs (Niederhauser and Stoddart 2001). This requires professional development that gives teachers the opportunity to reflect on practice and engage in dialogue, is based on work with students, and provides opportunities for peer observation, coaching and feedback (Joyce and Showers 1980). One way this can be facilitated is by engaging teachers in collaborative action research projects with like-minded peers (Harrison 2013), supporting them in making sense of and developing their classroom practices.