
Students’ engagement across a typology of teacher feedback practices


The provision of feedback is widely practised as part of formative assessment. However, studies that examine the impact of feedback are usually from the teachers’ perspective, focusing on why and how they provide feedback. Fewer studies examine feedback from the students’ perspective, especially in the way they experience, make sense of and take up their teachers’ feedback. This paper provides empirical evidence of student engagement with different patterns of teacher feedback on their written essays. Data were gathered from 45 students (from 5 different schools) through group interviews and analysis of student artefacts from three rounds of writing tasks. The findings on affective, behavioural and cognitive engagement surfaced the conditions that contribute to students’ will and skill to act on their teachers’ feedback. The implications for both teacher and student assessment literacy are discussed. The discussion will provide professional development providers and policy makers with new perspectives on and approaches to strengthening formative assessment practices in ways that are more cognizant of students’ experience of feedback.


There has been much interest and development in the field of instructional feedback. When the term was first used in psychology, feedback was thought of as “information given to individuals or groups about their own performance” (Wiliam, 2018, p. 5). Decades later, feedback was still described as information “specifically relating to the task or process of learning that fills a gap between what is understood and is aimed to be understood” (Hattie & Timperley, 2007, p. 82). The interest is in how and why feedback is given, with little emphasis on the giver and/or the recipient. However, feedback leads to improvement only when learners use it (Kluger & DeNisi, 1996; Sadler, 1989; Shute, 2008). Given that students are active participants in the teaching and learning process who will process and respond to the information provided (Shute, 2008), a more encompassing definition conceives of feedback as “Any information about a performance that a learner can use to improve that performance or grow in the general domain of the performance” (Smith & Lipnevich, 2018, p. 593). This definition assigns an active role to the learner in feedback and is the definition adopted for this paper.

Currently, research into learners’ use of feedback focuses on moderating factors both external (e.g. timing of feedback) and internal to the learner (e.g. the learner’s ability) (Lipnevich et al., 2016). In addition, these factors interact in very complex ways (Jonsson, 2013; Jonsson & Panadero, 2018; Winstone et al., 2016). This paper seeks to contribute to the field by investigating how school-going students engage with various patterns of teacher feedback in the context of English language essay writing. It begins by reviewing relevant literature on the two key constructs: teachers’ feedback practices and student engagement.

Literature review

The learner’s role in feedback

The value of feedback is widely acknowledged, and its use and effectiveness have been collated in a number of reviews, notably those of Hattie and Timperley (2007) and Shute (2008). These two reviews report impressive effect sizes related to its use, with values ranging from an average of 0.5 (Hattie & Timperley, 2007) to 0.8 or higher (Shute, 2008). Feedback is most effective when it is directly related to a task and shows students how to complete or conduct the assignment (Hattie & Timperley, 2007). Conversely, low effect sizes are noted when feedback is given as a form of extrinsic motivation, such as praise, rewards and punishment (Hattie & Timperley, 2007).

Ramaprasad (1983) defined feedback as “information about the gap between the actual level and the reference level of a system parameter which is used to alter the gap in some way” (p. 4). Sadler (1989) postulates that for feedback to be effective, learners should have an idea of their current level of performance in relation to the desired level and know how to close the gap. Interestingly, while he emphasised the student’s role, the subsequent literature emphasises the teacher’s role, for example, how to give effective feedback (Brookhart, 2008) and what to do and what to avoid (Shute, 2008). A seminal piece often quoted in relation to feedback explicated four different levels of teacher feedback (Hattie & Timperley, 2007). These comprise task, process and self-regulation levels of feedback on the students’ performance as well as self-level comments. The latter are not directly related to the task but are teachers’ comments on the student as a person (e.g. “Good effort”). This seminal piece, again, focuses on the role of the teacher.

This emphasis on teachers’ role can be also seen in studies that investigated the types of feedback teachers gave students, such as those that examined teachers’ written corrective feedback which focused specifically on grammar correction (Ferris et al., 2013; Truscott, 1996). These studies and others also analysed students’ revision of their writing after receiving various types of feedback (Nicolás-Conesa, Manchón & Cerezo, 2019; Shintani & Ellis, 2013). One such study by Faigley and Witte (1981) distinguished between revisions in terms of surface changes (which do not change the meaning) and changes that altered the meaning of the text. Yusoff and Daud (2013) subsequently used Faigley and Witte’s (1981) taxonomy of revisions to investigate the efficacy of using wikis to improve engineering undergraduates’ writing.

The message in the extant literature appears to be that certain types of teacher input will result in better output, often in terms of performance in tasks or tests. However, this assumption is debatable because students “differ in their capacity and willingness to use feedback” (Jonsson & Panadero, 2018, p. 549). In fact, current thinking conceives of feedback as “a process through which learners make sense of information from various sources and use it to enhance their work or learning strategies” (Carless & Boud, 2018, p. 1). As such, it is important that studies investigating feedback also look into how students interact with it.

Of the studies cited previously, few sought additional information from the students’ perspective. In one study, a questionnaire was used to find out students’ perceptions of the study and their focus while they were writing the tasks, rather than how they engaged with the feedback (Shintani & Ellis, 2013). Some studies drew entirely on the students’ perceptions through self-reports (Beaumont et al., 2011; Dann, 2018; Lizzio & Wilson, 2008; Zumbrunn et al., 2016). One qualitative study interviewed school-going students on their perceptions of classroom assessments (Brookhart & Bronowicz, 2003). However, while these interviews and self-reports can offer valuable insights into the students’ point of view, they cannot reveal whether students acted in a way consistent with what they had reported.

Learners are complex characters in the way they feel about and act on feedback (Beaumont et al., 2011; Dann, 2018; Esterhazy & Damsa, 2019; Nicol, 2010). As such, there has been greater interest in exploring how students engage with feedback (Lipnevich et al., 2016; Lizzio & Wilson, 2008; Nicol & Macfarlane-Dick, 2006; Zumbrunn et al., 2016). In fact, examining students’ perceptions of and responses to feedback is beneficial, as students believe that they have information for teachers that would help teachers better support their learning (Wong, 2012a, 2012b, 2016, 2017).

Defining engagement

Engagement is not a stable character trait but a state of being that is malleable, depending on contextual factors (Furlong & Christenson, 2008). In its simplest conception, engagement can be seen as an overall motivation to learn (MOE & ASCD, 2007). It is also a complex construct comprising several dimensions. First, there is an affective dimension, described as psychological by Anderson et al. (2004), involving a sense of belonging and relationships with teachers and peers, or as “interest, values and emotions” (Fredricks, Blumenfeld & Paris, 2004, p. 65). There are also two other dimensions: behavioural engagement (such as attendance, doing work and following rules) and cognitive engagement, which is manifested when students spend time and effort on task (Fredricks et al., 2004). For the purposes of this study, students’ engagement with feedback has been delimited to these three aspects: affective, referring to students’ emotions towards teachers’ comments; behavioural, seen in their uptake of feedback (e.g. revisions, help seeking and strategies); and cognitive, in terms of their processing, attention, recall and understanding of feedback (Winstone & Lipnevich, 2020).

However, the literature on engagement does not offer a theory to rationalise the focus on these dimensions of affect, behaviour and cognition (Tay, 2016). Since the ultimate aim of teachers’ feedback is for students to take ownership and act upon it, this paper posits that we look at engagement from an agentic perspective involving triadic reciprocity (Bandura, 2001). The latter theorises that three constructs, namely personal factors (i.e. cognitive processes and affect), behaviour and environment (e.g. situational context), mutually interact to influence each other. In this study, these three factors are seen in the way students can or cannot act on feedback (behaviour) if they do not feel efficacious (personal); but if they get help through the teacher’s comments or lesson activities (environment), they can then act. Adopting Bandura’s (2001) theory for this present study provides the framework to discuss each dimension in detail and, more importantly, to explicate the interaction among them.

Significance of study

The literature on feedback shows a shift in interest from what teachers do to how learners engage with feedback. It is also generally accepted that engagement involves not just behaviour (e.g. editing) but also the affective and cognitive elements that explain the behaviour. However, current studies offer limited insight into this because they generally involve only analysis of student revisions or student self-reports. Each of these approaches lacks an in-depth investigation into the relations among what learners feel, what they understand and how they act on their teachers’ feedback.

The present study seeks to fill this gap by gathering evidence from student interviews, complemented by an analysis of student artefacts to check the alignment between what students say and do. In addition, research in this area generally stops at one or two tasks (e.g. Yusoff & Daud, 2013). A longitudinal study spanning three writing tasks is needed to verify whether students transfer their learning from the feedback given over a longer period of time and across different tasks. The findings will be valuable in helping teachers understand which feedback practices are efficacious. They will also contribute to research into sustainable feedback that supports students on the current task while also developing their ability to self-regulate performance on subsequent tasks (Carless et al., 2011).

Apart from adding to the extant literature on feedback practices, the understanding generated can help contribute to policy at school and national levels to create conditions more conducive to efficacious formative assessment practices. This is timely especially in contexts such as Singapore where recent national policies seek to balance the summative focus on testing with a greater focus on formative aspects of assessments.


This study was conducted over three to five months. During this time, the teachers gave students writing tasks consisting of short texts or full essays as part of class assignments planned for the term. These written assignments were submitted and subsequently returned with teachers’ feedback. In all, the study collected three consecutive writing tasks and analysed them for patterns of teacher feedback practices. Adopting Bandura’s (2001) triadic reciprocity theory, which includes an examination of the effect of the environment on the learner’s affect and behaviour, the study also examined the context of activities that took place before the task and after the feedback was given. Such information was gleaned from student artefacts as well as interview data. After the three rounds of writing tasks were completed, three selected students per class were interviewed as a group through a video conferencing platform because of Covid-19 safe distancing measures enforced in schools. During the interviews, students’ own artefacts were used as a stimulus for discussion. For example, they were asked to comment on their teachers’ feedback and their actions after receiving it. Both the interviews and artefacts were used to triangulate the practices that better facilitated learners’ engagement with the feedback affectively, cognitively and behaviourally. In short, the research questions were:

  • RQ1: What are the different feedback practices used by teachers?

  • RQ2: How do students respond affectively, cognitively and behaviourally to such feedback practices?


Based on the principle of maximal variation (Miles & Huberman, 1994), five participating secondary schools were chosen to reflect the cross section of schools in Singapore. One was a high-performing all-girls’ school while another was an all-boys school, both affiliated to churches. The rest were co-educational government schools which drew a range of students.

In each school, three Secondary 3 classes (students aged 14 to 15 years) were involved, with each class comprising about 40 students and taught by a different teacher. As reported earlier, all students were assigned three rounds of writing tasks. However, this study reports only the artefacts of the students who were nominated by their teachers to participate in the focus group discussions. These students were selected based on their willingness to speak up during the interviews. From the analysis of their artefacts and the teachers’ written feedback on them, it can be inferred that the student participants were drawn from a range of ability and motivational levels.

During the group interview, students were asked about their general understanding and experience of feedback in English lessons (see “Appendix” for the detailed interview schedule). They were also asked about their responses to specific instances based on the artefacts collected from the three rounds of writing. Lastly, they were asked about the factors that would influence their response to feedback. The work submitted by each of these student interviewees (spanning at least two tasks) was also analysed for their follow-up to the feedback given.

Data analysis

The 15 focus group discussion sessions (comprising 45 students in total) were recorded and transcribed before being analysed, initially with an a priori coding template suggested by the affective, cognitive and behavioural framework adopted for this study. The unit of analysis was usually a sentence, recorded in a Microsoft Excel sheet. Any relevant observations from the same student’s artefacts were recorded in the adjacent column. For example, a student’s comments on redrafting were checked against that student’s artefacts.

To ensure inter-rater reliability, the first and second authors separately coded a subset of feedback comments before meeting to compare and discuss their codes. Subsequent discussion focused on significant patterns among the codes that suggested potential themes to answer the research questions (Maguire & Delahunt, 2017).

Ethics protocol

The study complied with the ethics protocols set by the first author’s affiliated institution. Researchers met the potential teacher participants to explain the objectives and procedures of the research. Matters pertaining to consent, anonymity, confidentiality and the right to withdraw were explicitly detailed, and any concerns that participants had were addressed. These details were written on a consent form given to the participants, which they signed once they agreed. A similar consent form was also given to the students and their parents.


This section will begin with answering RQ1 on the patterns of feedback practices used by teachers. These practices are presented in three broad categories: written feedback, the feedback-related activities that preceded the written task (pre-task feedback practices) and those that followed the written task (post-task feedback practices). The second half of this section focuses on the findings to RQ2 on student participants’ engagement in these three categories of feedback practices as evidenced through what they said during the focus group discussions and did as seen in the artefacts. The names reported here are pseudonyms. This section concludes with a table to summarise the key findings of students’ affective, behavioural and cognitive engagement with the types of feedback practices.

Typology of feedback practices

Written comments

There were many similarities in the kinds of teacher feedback seen across the five schools. There was evidence of personal level feedback directed at motivating students, for example “Well done!” or “Good effort”. Another common practice was corrective feedback through highlighting grammatical errors with a circle or underscore (e.g. a lot). Oftentimes, these would be accompanied by a symbol to indicate the error (e.g. sp for spelling errors) or an indication of how the error could be corrected (e.g. “You do not need to include two punctuation marks. One will do.”). Some teachers would focus on giving feedback on selected areas only, while others highlighted every mistake made. At the end of the essay, teachers typically included a statement summarising or highlighting the strengths and areas of weakness (e.g. “Accurate language as a whole”).

Pre-task feedback practices

In order to help students understand the demands of the upcoming writing task, teachers would show them the rubrics used for grading it. Alternatively, such understanding would be facilitated through a class discussion of success criteria (see example in Fig. 1).

Fig. 1
figure 1

Excerpt of a success criteria checklist for self-assessment

In addition, students may be required to self-assess their own performance against the checklist of success criteria (as can be seen on the right column of Fig. 1). The completed checklist was to be submitted along with the new writing assignment.

Students may also be invited to reflect on their own work using a “Feedback Cover Sheet” (see Fig. 2). In contrast to the success criteria checklist, this is an open-ended response where students report what they are satisfied with in their work and what feedback they would specifically like their teachers to give regarding the work that accompanied this cover sheet. Teachers would then respond to the students’ feedback query on the same sheet after they had marked the accompanying essay.

Fig. 2
figure 2

Feedback cover sheet

Post-task feedback practices

Upon returning the students’ graded work, some teachers designed activities to help students make sense of the feedback. One common approach was a class discussion highlighting the common errors made by most students. Teachers took the opportunity to explain the symbols used in their written feedback (e.g. “SP” indicated spelling errors). Sometimes, teachers designed an accompanying worksheet (see Fig. 3) to help students focus on and record the correct forms. During such lessons, good and poor examples of student work were also shown. Student participants mentioned that they would clarify doubts during these class discussions or individually with the teacher after class. Teachers also singled out weaker students for individual consultations after lessons. There was evidence that such post-task feedback practices were more routinised with some teachers than others.

Fig. 3
figure 3

Worksheet that accompanied class discussion

Students’ engagement with feedback

Affective engagement

Students reported various emotions on receiving written feedback, ranging from nonchalance (“you ponder it for like a few minutes…(then) it’s not important…any more”) to feeling “a bit excited and a bit scared”. Where they agreed was that they all looked at the marks first. One student explained, “When I look at the marks, I sort of had like a certain expectation as to what the feedback will be”. Gerard commented that the worse the marks are, the more important the feedback becomes. Wendy’s comments explain why, “You get like a 10 out of 30, then you are just wondering why you go wrong, where you went wrong. So in order to understand why you are awarded that mark, you have to look at the feedback as well.”

While teachers intended personal level statements like “Good effort” to motivate students, it appeared that not all learners found them helpful. One student participant, Rita, said, “’Good effort’ … doesn't really benefit me. It doesn't pull up my self-esteem. It doesn't make me feel good about my writing. No. It also isn't helping me to improve anything”. Her classmate, Alice, concurred, “When I read the ‘decent attempt, keep it up’ …I didn't feel motivated…I felt great about my work, but it didn't push me to further continue it.”

One exception to this appears to be when the teacher used rubber stamps with motivating messages, for example “Keep on trying” or “Good effort”. Even then, it seemed to be a novelty as indicated by Messi who commented they were “Very cute” and how classmates would ask around, “Eh, what kind of stamp did you get?”.

Generally, other students preferred that teachers complement the “keep it up” with specific details on areas to be improved. Alice gave an example of such follow-up comments: “Like, you can start the sentence by, I think you can ‘dot dot dot’, so that it is more targeted and so that I can know what to focus on.” Such specific instruction appeared to be helpful in influencing their affective and behavioural engagement:

So, if I don’t do well enough according to my expectations, honestly I will feel really like, dejected and really sad because I did not live up to a certain expectation. But if like, at the bottom, it states what you are good at, or it says maybe you can try this or just some small encouragement, like ‘Good try’ or ‘Good job’, that kind of thing, then I think I’ll feel more… encouraged to do better. (Ella)

Students also reported being encouraged when teachers indicated where specifically they had done well:

If I am feeling really dejected, because she really has a lot of comments, then I’ll look at the end to see if she has any good points to say about my essay, for example, ‘Oh you’ve elaborated well.’ Then, I’ll actually feel quite proud and work on improving that part. (Ariel)

Their teachers’ affirmation of their improvement in subsequent tasks also raised their sense of self-efficacy. One student reported feeling pleased at his teacher’s comment (“Most of the errors were corrected”). Another student felt a great sense of satisfaction on reading his teacher’s affirmation on his revised version: “I was like, actually I still remember when I got the paper back. Finally, like all the problems over here are solved”.

Behavioural engagement

The interviews and artefacts were analysed for evidence of student uptake of feedback and the conditions that would facilitate such actions. One common theme was that students acted on the feedback only if instructed to, and even then, only when they could understand how to make the correction. Artefacts such as Fig. 4 show students editing some areas while appearing to ignore others.

Fig. 4
figure 4

Example of selective revision

They variously commented, “Sometimes, I understand her annotations but sometimes, I don’t” and “It’s …important for the teachers to realise that students aren’t in their heads. So they don't know what the teacher might mean in certain ways.” The artefacts bore out that at best, students would attend to the editing that the teacher suggested, but often without understanding. Figure 5 shows a student who did not understand that the word “sometime” was to be corrected to “sometimes”.

Fig. 5
figure 5

Example of revision without comprehension

Even when the learner understands and edits correctly, there is a risk of limited transfer, as seen in the example in Fig. 6, where in the following task the student continues to make the same mistake of using two punctuation marks to end a sentence.

Fig. 6
figure 6

Example of lack of transfer

Students also commented that “it takes time … to actually absorb the feedback” and that they would seek clarification from peers or their teachers. This was particularly the case when there were many written comments throughout the returned work. In such cases, students tended to focus on the summary statements rather “than look at everything from each paragraph” (Connor). Some others read the summary before reading the entire essay.

From the interview data, it was clear that certain post-feedback practices encouraged student action. For example, some teachers were more explicit in expecting students to engage behaviourally with their feedback by designing follow-up tasks such as a rewrite or a similar writing piece. In the absence of such instructions, students tended to glance at the feedback and file away the returned piece of work. But if they had to do follow-up work, they felt there was “no point of writing the whole essay again” because they “do not actually read the whole thing again”. They preferred to “write their wrong sentence structure and then write beside of it…the correct version of it” or to be given a choice of which part they wanted to revise. The latter gave students a sense of autonomy and self-efficacy:

It’s like, you take the original and you improve it, but sometimes I’ll just do a rework, because sometimes I feel like the original was so bad that I could not see any way on how to improve it. (Jerry)

Cognitive engagement

As mentioned in the earlier section, certain post-feedback practices helped students better process their teachers’ written feedback. One common routine mentioned across schools was teacher-led class discussion after the graded writing task was returned:

Because after every written assignment, the teacher will prepare slides for us, and she will go through the general feedback on what the class has done well and what the class hasn’t done well. And I think the general feedback is useful because it does apply for every student. But the specified feedback she gives us, I think it’s the most helpful because it is specialized for us. (Emily)

Such verbal explanations were also preferred by students who commented that “because sometimes (with) writing, you don’t understand”.

As these were live interactions, students could raise questions for further clarification. This explains why some students reported that one-to-one consultations with teachers were the most helpful of the feedback practices:

I might not understand then maybe she don’t have time for me, to answer that question (in class), right? So if it’s verbal then she can just straightaway tell me. Then she also can write it down. Then I will notice. Then I will know what (it means) because she explain to me on the spot. (Dan)

Class discussions were sometimes accompanied by worksheets that helped students focus on correcting common mistakes made by the class. These activities helped students who made these mistakes understand better how to correct them.

In addition to post-feedback activities, some pre-feedback activities were helpful in preparing students to make sense of the feedback they would be receiving. For example, some students found that a success criteria checklist issued before they started writing helped them know what teachers were looking out for and hence “get … good marks” (Sophie). However, it was less clear if students used it intentionally to self-assess their work before handing it up. Sam confessed that he ticked the checklist “for the sake of doing it”. Others, having forgotten that they had to do it earlier, scrambled to tick the boxes just before handing up the self-assessment checklist along with the homework.

As for the pre-feedback practice of requiring students to write a feedback cover sheet (in which students report on areas done well and ask for specific feedback from teachers), there were mixed reactions. One student found it bothersome since, to him, it duplicated the reflection required in the success criteria checklist. Others found it helpful:

Yes, it can help you…you can tell yourself two things you did well, like maintain it in the next essay and then, you can add another two things that you did well again. So, you can keep adding to it and … (Finally) it becomes like a perfect essay. (Tom)

In fact, the teacher’s targeted feedback made such a lasting impact that one student remembered it during a subsequent task, which was a test:

I want my teacher to tell me how I to improve on my first and last paragraph and she did emphasise on my first. She said my first was okay and I had to improve on my last one. So, she did actually told me what I want. (Ginny)

It appears that if the teachers made the reflection part of the lesson routine, these pre-task feedback activities would be more efficacious.

I think analyzing our work before handing it up is really helpful, because it helps us to reflect on our work and it allows us to see what we did well and what we are missing. I don’t think she made us do it for the later ones, but for this first piece, she made us do it and I find it really helpful. (Ariel)

In summary, the study found patterns in students’ affective, behavioural and cognitive engagement with the three broad categories of feedback practices. The key findings are summarised in Table 1.

Table 1 Students’ engagement with different feedback practices


The present study sought to examine students’ engagement with different patterns of teacher feedback. It was longitudinal in nature, looking at how forty-five student participants responded to their teachers’ feedback practices over three rounds of written tasks, as evidenced by the interview data, complemented by their follow-up action to the feedback.

Bandura’s (2001) model of triadic reciprocity was used to guide the analysis of the data to see the interaction among teachers’ feedback practices, students’ self-efficacy and behaviour. The findings surfaced two themes of will (as in motivation to take action and volition to persist) and skill (as in strategies and knowledge) to follow up on the feedback. In terms of will, the students appeared more motivated to attend to feedback when they wanted to improve (for example, after receiving a disappointing mark). Contrary to what teachers hoped, personal level statements like “Good effort” were not motivating, consistent with the meta-synthesis by Hattie and Timperley (2007). In addition, students were more willing to invest effort if they could edit specific areas they wanted to improve, rather than rewrite the whole essay. Zimmerman and Cleary (2006) emphasised how important such opportunities are for adolescents to develop agency and efficacy, as the latter, in particular, plays “a major role in their transition from childhood dependency to adulthood self-sufficiency” (p. 65).

However, students’ will to act on the given feedback was also contingent on their skill, that is, whether they knew what to do to close the gap highlighted by the feedback. They preferred specific instructions on how to improve the current piece. However, too many comments could be overwhelming and sometimes demotivating. The summary statements provided a better focus on what and how to improve for the next piece of work. Such feedback practices will go some way towards developing in students the feedback literacy which Carless and Boud (2018) define as “the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies” (p. 1316).

Some practices that helped students self-regulate involved self-assessment against a checklist or self-reflection. However, these practices were more efficacious when teachers built them into the lesson routine. Teachers’ affirmation of students’ efforts also helped sustain their motivation to engage with the feedback. In short, efficacious classroom practices that promote students’ agentic engagement with feedback need to go “beyond addressing their awareness and cognisance to engage their affective and behavioural dimensions” (Goh, 2021a, p. 36).

The findings suggest implications at both the practice and policy levels if we hope for students to take ownership of feedback to direct their learning.

Implications for practice

The study mapped out a typology of feedback practices to understand how different practices engage students. However, no one particular type of feedback appears universally effective. Instead, feedback is effective insofar as it results in positive student engagement. The discussion earlier analysed engagement in terms of three dimensions (affective, behavioural and cognitive) and how they interact. The findings suggest that teachers need to review how they prepare their students to receive feedback, and to set aside lesson time for learners to make sense of and act on it.

In doing so, we also revise students’ role as passive recipients of feedback. Far from being dispassionate, students can be engaged in feedback as dialogue, instead of teachers merely providing feedback as information (Tan & Wong, 2018). To be involved in the feedback process, students often need to react to feedback on an emotional level first (To, 2016). Paying attention to their affective engagement is also a way of ensuring students’ well-being during assessment, so that they can reframe the role of assessment as helping them to learn.

To reinforce the students’ active role, teachers also need to create opportunities for them to direct their learning by connecting the feedback to future episodes of learning and performance. For example, teachers can instruct students to highlight (by colour-coding) the parts of the next essay where they have applied the feedback from the previous piece. Another possibility is for teachers to create exit cards after class discussions on common mistakes, to assess whether students can transfer their learning to new contexts. Alternatively, a pre-writing task can help strengthen students’ grasp of a target area, which they can then transfer to their actual submissions (Goh, 2021b).

Further, based on the findings on students’ behavioural engagement, teachers or their schools could intentionally plan how to guide students from piecemeal action (e.g. responding by paragraphs) to a more comprehensive response (e.g. reflecting on the whole piece of work, or revising a larger portion of the task). Indeed, if assessment tasks were designed as integrative patches, such that the feedback from each piece is treated as continuous learning to be reflected upon and transferred across time, such a practice would result in deep learning (Trevelyan & Wilson, 2011).

Implications for policy

National assessment policy

Feedback is recognised as an important formative assessment practice and is associated with improving learning (e.g. Hattie & Timperley, 2007). It plays an important role in formative assessment (Black & Wiliam, 1998; Brookhart, 2008; Sadler, 1989), has been touted in policy circles (e.g. OECD, 2005) and has been adopted in education systems such as Singapore and Hong Kong. While its role in learning and attainment has been widely emphasised (e.g. Hattie & Timperley, 2007), the findings from this study proffer another perspective on improving the effectiveness of learning through feedback. Specifically, beyond the provision of feedback, teachers need to be cognizant of, and understand, how students perceive, interact with, respond to and act on feedback in attaining the desired learning goals. Additionally, the way in which students affectively, behaviourally and cognitively engage with the range of feedback typologies presented in this paper has implications for their overall learning and growth, such as developing lifelong learning skills and twenty-first century competencies, not just for attainment. For instance, the findings showed that when engaged in feedback, students would think metacognitively by looking for specific patterns in feedback across different pieces of work, or be motivated to act on the feedback. Some even went beyond what was expected in their follow-up, illustrating a level of self-directedness and self-regulation that could be strengthened. Given that many national, state and local education jurisdictions emphasise these twenty-first century competencies and dispositions, developing policy guidance and direction for local authorities or schools to increase student engagement with feedback, especially attending to their affective and behavioural engagement with feedback for growth and development, is one step towards realising them.

In addition, research has reported that teachers face difficulties with assessment for learning and feedback because students do not exhibit improvements or do not appear to benefit from the feedback. As many of these studies examined feedback from the provider’s angle, the findings from the present study alert researchers, policy makers and practitioners to understand the provision of feedback from the recipients’ perspective, and suggest that, beyond the cognitive, attention be paid to students’ affective and behavioural responses. Paying attention to affective engagement with feedback enables educators to identify aspects that require attention, including test preparation and anxiety, and thereby provide handles and support structures to motivate and nudge students to interact more actively with the feedback. This emphasis is particularly important in the post-COVID-19 world, given that education systems worldwide have been sensitised to the need for mental well-being in students.

Policy for teacher education

Related to national assessment policy is teacher education policy. We propose that schools of education include a focus on students’ engagement with feedback in pre- and in-service education, especially in courses on assessment for learning. This would better enable educators, both novice and experienced, to be attuned to the way students respond and react to feedback, and therefore strengthen the value of feedback in students’ learning. While we note that feedback is most effective in one-to-one sessions between teachers and students, we are also mindful of the time constraints on teachers. A way forward would be to tap the affordances of technology so that students could have access to present and past feedback, or be engaged in a digital dialogue with their teachers.

Limitations and recommendations for future research

The study was conducted in the context of English language lessons and written essays with secondary school students. It has been reported in detail to facilitate transferability and replicability to other similar contexts. However, it may not be generalisable to other subjects and age groups. Hence, it is recommended that this study be replicated in other subject areas and with younger children; it is conceivable that younger learners may need more support in how they receive and act upon feedback. Additionally, the fact that student participants were volunteered by their teachers could raise the question of selection bias. As such, it would be instructive if the conclusions could be verified with further intervention studies.


Feedback is integral to the teaching and learning process and is given formally and informally by teachers. It was first conceived and used in engineering as a mechanism for checking if the action has produced the required effect in the system (Wiliam, 2018). The term is now adopted in an educational setting where the context is arguably less predictable, given that learners are more complex than machine parts.

Much of the research so far has focused on what feedback teachers provide and how, whether written, oral or demonstrated. However, research on the way students receive and respond to feedback has gained interest only recently. This study has attempted to plug this gap by examining the way secondary students think about and act on the written feedback provided by their English Language teachers. Triangulating students’ comments with the actions they take provides a window into the extent to which students are cognitively engaged with the feedback provided, as well as the extent to which they are behaviourally engaged in acting on it. While this study focused on a small sample of students and on one academic subject, the findings suggest ways to understand how learners interact with feedback. They contribute to the nascent field of involving students in the assessment process, highlighting the importance of treating students as active agents rather than passive absorbers of information.

Availability of data and material

Data will be available via NIE Data Repository.

Code availability

Not applicable.


  • Anderson, A. R., Christenson, S. L., Sinclair, M. F., & Lehr, C. A. (2004). Check & connect: The importance of relationships for promoting engagement with school. Journal of School Psychology, 42(2), 95–113.

  • Bandura, A. (2001). Social cognitive theory: An agentic perspective. Annual Review of Psychology, 52, 1–26.

  • Beaumont, C., O’Doherty, M., & Shannon, L. (2011). Reconceptualising assessment feedback: A key to improving student learning? Studies in Higher Education, 36(6), 671–687.

  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7–74.

  • Brookhart, S. (2008). How to give effective feedback to your students (1st ed.). Association for Supervision and Curriculum Development (ASCD).

  • Brookhart, S. M., & Bronowicz, D. L. (2003). “I don’t like writing. It makes my fingers hurt”: Students talk about their classroom assessments. Assessment in Education: Principles, Policy & Practice, 10(2), 221–242.

  • Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325.

  • Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395–407.

  • Dann, R. (2018). Developing feedback for pupil learning: Teaching, learning, and assessment in schools. Routledge.

  • Esterhazy, R., & Damsa, C. (2019). Unpacking the feedback process: An analysis of undergraduate students’ interactional meaning-making of feedback comments. Studies in Higher Education, 44(2), 260–274.

  • Faigley, L., & Witte, S. (1981). Analyzing revision. College Composition and Communication, 32(4), 400–414.

  • Ferris, D. R., Liu, H., Sinha, A., & Senna, M. (2013). Written corrective feedback for individual L2 writers. Journal of Second Language Writing, 22(3), 307–329.

  • Fredricks, J., Blumenfeld, P., & Paris, A. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59–109.

  • Furlong, M. J., & Christenson, S. L. (2008). Engaging students at school and with learning: A relevant construct for all students. Psychology in the Schools, 45, 365–368.

  • Goh, R. (2021a). Why bother with students’ cognitive, affective, and behavioural engagement with feedback. In R. Goh (Ed.), Designing quality assessment feedback practices in schools (pp. 35–49). Pearson.

  • Goh, R. (2021b). Assessment feedback practices in a secondary school: Helping learners become more independent across discipline. In R. Goh (Ed.), Designing quality assessment feedback practices in schools (pp. 169–200). Pearson.

  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

  • Jonsson, A. (2013). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14(1), 63–76.

  • Jonsson, A., & Panadero, E. (2018). Facilitating students’ active engagement with feedback. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 531–553). Cambridge University Press.

  • Kluger, A., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.

  • Lipnevich, A. A., Berg, D. A., & Smith, J. K. (2016). Toward a model of student response to feedback. In G. T. L. Brown & L. R. Harris (Eds.), Handbook of human and social conditions in assessment (pp. 169–185). Routledge.

  • Lizzio, A., & Wilson, K. (2008). Feedback on assessment: Students’ perceptions of quality and effectiveness. Assessment & Evaluation in Higher Education.

  • Maguire, M., & Delahunt, B. (2017). Doing a thematic analysis: A practical, step-by-step guide for learning and teaching scholars. All Ireland Journal of Higher Education, 9(3), 3351–3514.

  • Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Sage.

  • Ministry of Education and Association for Supervision and Curriculum Development (Singapore). (2007). The PETALS™ Primer. Singapore: Ministry of Education.

  • Nicol, D. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501–517.

  • Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.

  • Nicolás-Conesa, F., Manchón, R. M., & Cerezo, L. (2019). The effect of unfocused direct and indirect written corrective feedback on rewritten texts and new texts: Looking into feedback for accuracy and feedback for acquisition. The Modern Language Journal, 103(4), 848–873.

  • OECD. (2005). Formative assessment: Improving learning in secondary classrooms [Policy brief].

  • Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4–13.

  • Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.

  • Shintani, N., & Ellis, R. (2013). The comparative effect of direct written corrective feedback and metalinguistic explanation on learners’ explicit and implicit knowledge of the English indefinite article. Journal of Second Language Writing, 22(3), 286–306.

  • Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.

  • Smith, J., & Lipnevich, A. (2018). Instructional feedback. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 591–603). Cambridge University Press.

  • Tan, K. H. K., & Wong, H. M. (2018). Assessment feedback in primary schools in Singapore and beyond. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 123–144). Cambridge University Press.

  • Tay, H. Y. (2016). Investigating engagement in a blended learning course. Cogent Education, 3, 1135772.

  • To, J. (2016). ‘This is not what I need’: Conflicting assessment feedback beliefs in a post-secondary institution in Hong Kong. Research in Post-Compulsory Education, 21(4), 447–467.

  • Trevelyan, R., & Wilson, A. (2011). Using patchwork texts in assessment: Clarifying and categorising choices in their use. Assessment & Evaluation in Higher Education.

  • Truscott, J. (1996). The case against grammar correction in L2 writing classes. Language Learning, 46(2), 327–369.

  • Wiliam, D. (2018). Feedback: At the heart of—but definitely not all of—formative assessment. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 1–28). Cambridge University Press.

  • Winstone, N., & Lipnevich, A. (2020). Feedback: How do we know that it makes a difference. The Black Box of Feedback.

  • Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2016). Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52(1), 17–37.

  • Wong, H. M. (2012a). The perceptions of Singaporean teachers and students toward academic self-assessment (Doctoral dissertation). National Institute of Education, Nanyang Technological University.

  • Wong, H. M. (2012b). Students’ interview transcripts. In The perceptions of Singaporean teachers and students toward academic self-assessments (Doctoral dissertation). National Institute of Education, Nanyang Technological University.

  • Wong, H. M. (2016). I can assess myself: Singaporean primary students’ and teachers’ perceptions of students’ self-assessment ability. Education 3–13: International Journal of Primary, Elementary and Early Years Education, 44, 442–457.

  • Wong, H. M. (2017). Implementing self-assessment in Singapore primary schools: Effects on students’ perceptions on self-assessment. Pedagogies: An International Journal, 12, 391–409.

  • Yusoff, Z. S., & Daud, N. M. (2013). Frequency and types of revision made in Wiki assisted writing classroom. World Applied Sciences Journal, 21(13), 153–160.

  • Zimmerman, B. J., & Cleary, T. J. (2006). Adolescents’ development of personal agency. In F. Pajares & T. Urdan (Eds.), Adolescence and education: Self-efficacy beliefs of adolescents (Vol. 5, pp. 45–69). Information Age Publishing.

  • Zumbrunn, S., Marrs, S., & Mewborn, C. (2016). Toward a better understanding of student perceptions of writing feedback: A mixed methods study. Reading and Writing, 29(2), 349–370.



This study was funded by Singapore Ministry of Education (MOE) under the Education Research Funding Programme (OER 02/19 KTHK) and administered by National Institute of Education, Nanyang Technological University, Singapore. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Singapore MOE and NIE.



Author information



Corresponding author

Correspondence to Hui Yong Tay.

Ethics declarations

Conflict of interest

We confirm that there are no known conflicts of interest associated with this publication.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.




  1. How long have you been in this school?

  2. What is your general experience of learning English in this school?

  3. How are you usually assessed for English in this school?

Part 1: General understanding of students’ ways of using and understanding assessment feedback in English lessons.

  4. Can you describe the assessment feedback you have experienced in your English lessons?

  5. What is important to you about assessment feedback?

Part 2: Specific examples of how students experience assessment feedback practice.

  6. Do you have artefacts of the assessment feedback? What does it look like?

     (If students do not bring any, researchers will retrieve the students’ work for discussion)

  7. What do you usually do after you’ve received assessment feedback?

     [If students say they don’t do anything, probe why]

  8. Based on the example you have just described, what do you think was the purpose(s) of the assessment feedback?

Part 3: Alignment and corroboration between teachers’ practices and teachers’ purposes of assessment feedback.

[Still in relation to the examples mentioned] Do you think the purpose(s) of the assessment feedback was achieved?

  9. What are the factors that influence how you respond to assessment feedback?

  10. What would encourage you to respond to assessment feedback and take it seriously?

  11. How can assessment feedback be done differently from what you’ve experienced?

  12. Is there anything you might want to say about assessment feedback that we have not discussed?

Cite this article

Tay, H.Y., Lam, K.W.L. Students’ engagement across a typology of teacher feedback practices. Educ Res Policy Prac (2022).



  • Formative assessment
  • Formative feedback
  • Student engagement
  • Feedback practices
  • Feedback typology