Abstract
This study aims to explore the effects of pre-service teachers’ use of a rubric in self-assessment, combined with instructor feedback, on academic achievement and self-regulated learning. The pre-service teachers’ perceptions and experiences of the self-assessment intervention were also investigated. A total of 79 pre-service teachers participated in the study. A mixed methods approach blending experimental and qualitative designs was used. The quantitative phase employed a quasi-experimental model with a pretest/posttest control group design: the pre-service teachers were assigned either to a condition that used a rubric in self-assessment with instructor feedback or to a non-self-assessment condition for their essay assignments. In the qualitative phase, the pre-service teachers’ perceptions and experiences of using self-assessment with instructor feedback were explored. Data were collected using a rubric, an achievement test, a self-regulation in learning subscale and reflective journals. The results indicated that the group that used the rubric in self-assessment with instructor feedback showed higher achievement and greater use of self-regulated learning strategies than the no-intervention group. The reflective journals also revealed that most of the pre-service teachers found self-assessment a useful learning tool. They felt that it helped them to improve their learning by guiding them to set their own goals, monitor their progress and reflect on their learning through their own tasks. The implications for educational research and practice are discussed.
Introduction
Self-assessment as a learning regulatory strategy (Nicol & McFarlane-Dick, 2006; Panadero et al., 2013; Panadero et al., 2017) has the potential to increase cognitive and affective outcomes (Andrade, 2019; McMillan & Hearn, 2008; Panadero et al., 2017; Sitzmann et al., 2010; Yan et al., 2022). Students using a simple form of self-assessment just grade/mark their own tasks (Falchikov & Boud, 1989). However, students using a more complex form engage in multiple actions, such as seeking feedback, evaluating and reflecting on their own performance against assessment criteria (Panadero et al., 2016a, b, 2023b). Self-assessment helps students to seek and gather information about their tasks, evaluate and reflect on their own work and revise it accordingly (Andrade & Valtcheva, 2009; Yan & Brown, 2017). Using self-assessment for formative purposes is more likely to enhance student learning than using it for summative purposes (Andrade, 2019; Panadero et al., 2019; Yan & Brown, 2017).
Self-assessment is a core component of the self-regulated learning cycle (Andrade & Valtcheva, 2009; Panadero et al., 2013). It exists not only in the final stage of self-regulated learning but in the whole process of self-regulated learning (Panadero et al., 2018; Yan, 2020). Seeking feedback and self-reflection, rather than scoring/marking accurately through self-assessment, improve learners’ self-regulated learning skills and metacognitive skills (Kostons et al., 2012; Panadero et al., 2016a, b; Zimmerman & Moylan, 2009). There are three main cyclical actions in the self-assessment process (Yan, 2020; Yan & Brown, 2017): (1) defining self-assessment criteria, (2) self-directed feedback, and (3) self-reflection. Students need to seek and use feedback from different sources in order to complete their own tasks successfully. This feedback, called external feedback, can come from teachers, peers or tools. Students’ critical reflections based on relevant feedback help them to identify their academic strengths and weaknesses (McMillan & Hearn, 2008; Yan & Brown, 2017; Yan & Carless, 2022). Previous studies have shown that self-assessment practices play a crucial role in self-regulated learning and academic achievement (Panadero et al., 2012, 2017; Yan et al., 2020b). Self-assessment also influences student motivation through its close relationship with self-regulated learning (Leenknecht et al., 2020; McMillan & Hearn, 2008; Panadero et al., 2012, 2017). Several meta-analyses have also found evidence of the effectiveness of self-assessment practices (Brown & Harris, 2013; Karaman, 2021; Panadero et al., 2017; Sitzmann et al., 2010; Yan et al., 2023b). However, narrative reviews have pointed out that the benefits of self-assessment interventions on self-regulated learning and academic achievement are still not clear (Andrade, 2019; Brown & Harris, 2013).
In fact, there is a gap in research studies that investigate the design of effective self-assessment interventions to improve student learning (Panadero et al., 2023a; Yan et al., 2023b).
Constructing effective self-assessment practices
There are several ways in which effective self-assessment processes can improve student learning and teaching, such as clearly defining assessment criteria; providing external feedback from teachers, peers or tools; and providing self-assessment training for students (Boud & Falchikov, 1989; Panadero et al., 2017; Yan et al., 2023b). Self-assessment without assessment criteria does not provide accurate self-assessment results (Andrade & Valtcheva, 2009). Students assess their own performance against clearly defined assessment criteria. Well-structured assessment criteria can help students to know their goals and to plan their own performance tasks accordingly from the beginning of this process. Students set their goals, monitor their own tasks closely and then evaluate their completed tasks objectively against the assessment criteria (Andrade, 2010; Panadero et al., 2013).
Feedback is an important factor in students’ tasks to improve their self-regulated learning and academic performance (Hattie & Timperley, 2007). Students’ self-assessment practices require external feedback from teachers, peers or tools (Andrade, 2018; Panadero et al., 2016a, b). However, effectively integrating self-assessment into feedback is a complex process (Panadero et al., 2016a, b).
Self-assessment tools
Self-assessment tools that include assessment criteria, such as rubrics, scripts, checklists and standardized diaries, are used to promote self-assessment and learning (Goodrich, 1996; Panadero et al., 2012; Yan et al., 2020b). Students grade their own tasks using rubrics with assessment criteria (Krebs et al., 2022; Reddy & Andrade, 2010). Rubrics created by instructors for formative purposes usually provide useful information to students and guide them in their own tasks (Brown et al., 2015; Panadero et al., 2023a; Stevens & Levi, 2013). With rubrics, students can better understand their own tasks, improve their reflective thinking and show higher performance (Alonso-Tapia & Panadero, 2010; Leach, 2012; Panadero & Romero, 2014). Theoretically, a tool used to promote self-assessment or peer assessment supports students in setting their goals, monitoring their progress and developing their self-regulated learning and metacognition, regardless of its accuracy (Krebs et al., 2022; Panadero & Jonsson, 2013; Yan, 2022). Scripts, in addition, contain a series of questions for students to answer step by step as they work on their own tasks. Rubrics are mostly recommended for tasks of low or medium complexity, whereas scripts are recommended for highly complex tasks (Panadero et al., 2013; Reitmeier & Vrchota, 2009). Instructors can therefore use such self-assessment tools in a variety of ways to modify courses and assessment. Findings from previous studies on the effects of rubrics on students’ learning, motivation and academic performance are inconclusive (Andrade et al., 2009; Jonsson & Svingby, 2007; Panadero et al., 2012). However, researchers agree that the use of rubrics in self-assessment or peer assessment has a positive impact on student learning (Nicol, 2021; Panadero & Jonsson, 2013; Panadero et al., 2023a).
Several research studies have reported that the use of rubrics resulted in greater gains in student learning or academic performance compared to a control group (Andrade et al., 2008, 2010; Bradford et al., 2016; Brookhart & Chen, 2015; Fraile et al., 2023). Moreover, Panadero et al.’s (2023a) meta-analysis showed that the use of rubrics has a moderate impact on student academic performance (g = 0.45, k = 21) but a small impact on self-regulated learning (g = 0.23, k = 5) and self-efficacy (g = 0.18, k = 3). Their results revealed that more studies are needed to explore different moderators. It is also suggested that future research should report more details about the design of the rubric and the intervention (Panadero et al., 2023a, b).
Feedback
Effective feedback promotes efficient learning (Black & Wiliam, 1998; Sadler, 1989). It provides valuable answers to three fundamental questions: where learners are going, where they are now and how they will get there (Hattie & Timperley, 2007; Sadler, 1989). Self-assessment processes that include feedback from teachers or peers (external feedback) are more powerful than self-assessment without external feedback (Dinsmore & Wilson, 2016; Taras, 2003; Yan et al., 2023b). Self-assessment with external feedback helps students to correct their own tasks and improve their academic performance and self-regulated learning (Dinsmore & Wilson, 2016; Panadero et al., 2019). Teachers can provide several types of feedback on student self-assessment. Students receive feedback from teachers on how they use the self-assessment criteria (i.e. the rubric), on their self-assessment practices and on how they can revise their self-assessment results (Boud et al., 2013). Students can also receive additional feedback from teachers on their performance (Andrade et al., 2008; Panadero et al., 2023b; Wollenschläger et al., 2016).
However, the number of studies focusing on the relationship between external feedback and self-assessment is limited (e.g. Panadero et al., 2020). Moreover, some empirical studies showed that external feedback had little effect (Panadero et al., 2012; Raaijmakers et al., 2019). A few meta-analyses have examined the effectiveness of self-assessment interventions (Karaman, 2021; Yan et al., 2023b). For instance, Yan et al. (2023b) investigated whether observable or explicit self-assessment interventions such as discussing assessment criteria, seeking external feedback, self-reflection and calibrating self-assessment judgements were effective on academic performance. Their results showed that self-assessment interventions with explicit feedback from peers or teachers had a significantly higher effect on academic performance than those with implicit feedback. Therefore, it is important to understand the potential impact of external feedback on the self-assessment process.
Self-reflection
According to Yan and Brown’s (2017) model, determining the assessment criteria, seeking feedback and self-reflection are essential stages in the self-assessment process. The self-assessment process helps students to develop their learning skills through their own reflection and metacognitive monitoring (Wang, 2017; Yan, 2018). Reflection is one of the most important stages of the self-assessment process, in which students construct and reconstruct their own practical knowledge (van Diggelen et al., 2013). Self-reflection based on feedback in the self-assessment process has a potential impact on students’ self-regulated learning and metacognition (Dunlosky & Rawson, 2012; Yan & Brown, 2017). During this stage, students check their completed tasks against the criteria (Zimmerman & Moylan, 2009). They reflect on themselves, evaluate the quality of their learning process and products and identify their own strengths and weaknesses (McMillan & Hearn, 2008).
Students’ perceptions of self-assessment
Students play an active role in the self-assessment process (Harris & Brown, 2018; Panadero et al., 2019). Students’ perceptions of self-assessment may influence their behaviour and its implementation (Yan et al., 2020a, 2023a). Several student-perceived factors (e.g. attitude, self-efficacy and the perceived usefulness of self-assessment) have a potential influence on their self-assessment practices (Harris & Brown, 2013; Logan, 2015; Yan et al., 2020a). Students’ misconceptions about self-assessment can also have a negative impact on its implementation, as well as on their further learning (Panadero et al., 2016a, b; Ross, 2019; Yan et al., 2023a). There is an increasing number of studies focusing on students’ perceptions of self-assessment (e.g. Hanrahan & Isaacs, 2001; Wanner & Palmer, 2018; Wong, 2017). For instance, Yan et al.’s (2023a) systematic review of 44 eligible studies on students’ perceptions of self-assessment addressed two main points: (1) students’ perceived usefulness of self-assessment and (2) factors influencing the implementation of self-assessment. The results showed that both individual and instructional factors influence students’ perceived usefulness of self-assessment and their use of it. Therefore, understanding students’ perceptions of self-assessment is crucial to maximize its positive impact on their academic performance and learning (Wanner & Palmer, 2018; Yan et al., 2023a).
Given the importance of self-assessment and its potential to enhance student learning and academic achievement, examining the effectiveness of self-assessment practices theoretically and practically through a mixed design is essential. Exploring the effectiveness of a self-assessment design (using a rubric with instructor feedback), which has been little studied, may provide empirical support to the literature. Thus, the main goal of this study was to investigate the effect of using a rubric in self-assessment involving instructor feedback on pre-service teachers’ academic achievement and self-regulated learning, as well as their perceptions of self-assessment. The following research questions (RQs) were addressed in this study:
RQ1. Do pre-service teachers using rubrics in self-assessment with instructor feedback improve their academic performance and self-regulated learning compared to the control group?
RQ2. What are pre-service teachers’ perceptions and experiences of using the rubric in self-assessment with instructor feedback?
Method
A mixed methods approach blending quantitative and qualitative methods in the research process was employed in this study. Both experimental and qualitative designs were used to assess and interpret the effectiveness of the experimental variable (Lund, 2012; Onwuegbuzie & Leech, 2006). The study investigated not only the effect of the experimental variable (self-assessment intervention) on academic achievement and self-regulated learning, but also the experimental group’s experiences and perceptions of the self-assessment process.
The quasi-experimental research model with pretest/posttest control group design was used in the quantitative phase of the study. Two existing classes consisting of pre-service teachers were randomly assigned as experimental and control groups. In contrast to the lack of intervention in the control group, pre-service teachers in the experimental group were allowed to engage in self-assessment practices with instructor feedback. In the qualitative phase of the study, an inductive approach of grounded theory (Strauss & Corbin, 1998) was used to analyse the participating pre-service teachers’ perceptions about and experiences of the self-assessment rubric with instructor feedback.
Participants
The sample consisted of 79 pre-service teachers enrolled in the Faculty of Education at a state university in the northern region of Turkey. The participants were 55 women (69.6%) and 24 men (30.4%). Two of the existing classes in the Faculty of Education were randomly assigned as experimental (N = 44) and control (N = 35) groups.
The participants in both groups were enrolled in the same course called “Research Methods in Education”. The compulsory course is offered in the second year of the teacher education programme. The course lasts 15 weeks per semester, excluding 2 weeks for mid-term and final exams. Both classes were taught by the same instructor/researcher. Throughout the course, the participating pre-service teachers were supported to understand, analyse and critique different research methodologies.
Research procedure
Ethical approval was obtained from the Human Research Ethics Committee of Sinop University before collecting the research data (approval number: 2022/017). In the research process (see Fig. 1), the pre-service teachers in the experimental and control groups completed the Achievement Test and the Self-Regulation in Learning subscale as pretest and posttest. The pre-service teachers in both groups were required to complete two essay assignments related to the course content during the 10-week period. The assignments were of medium complexity, and the researcher/instructor provided instructions for them. Pre-service teachers in both groups were required to write five main sections for their essays: (1) the research title; (2) the purpose and significance of the study; (3) the research problem; (4) the research model, study group, data collection tools, and data analysis; and (5) the references in their research proposals. While the pre-service teachers in the experimental group were asked to complete their tasks using self-assessment rubrics with instructor feedback, the pre-service teachers in the control group completed their tasks without any self-assessment intervention. The rubric was given only to the pre-service teachers in the experimental group, when they received their tasks in the classroom. After receiving the rubric, they were explicitly asked not to share it with their colleagues in the control group; the instructor emphasized that sharing the rubric with the control group would be considered unauthorized collaboration and was not permitted. The control group wrote their essay assignments as usual, although the assessment criteria were also given to them orally. The instructor explained to the experimental group how to use the rubrics to plan, monitor and evaluate their own tasks.
They were asked to submit a draft of their assignment, together with the self-assessment rubric, within 3 weeks. Then, the instructor gave the students feedback on how they had used the rubric in their assignments. The instructor helped them to self-assess their tasks against the assessment criteria and to revise their self-assessment results. To facilitate the self-assessment process, the instructor’s feedback was designed to enable students to apply the relevant skills (Panadero & Romero, 2014). They eventually submitted their final essays with the rubric within a week of receiving feedback from their instructor. The assignments completed using the rubric in self-assessment with instructor feedback were included in the students’ final grades for the course.
In addition, the participating pre-service teachers in the experimental group wrote reflective journals about their perceptions and experiences of using the rubric in self-assessment with instructor feedback after completing each task. These journals were used to explore the pre-service teachers’ perceptions and experiences of self-assessment.
Data collection tools
This study used both quantitative (i.e. instruments) and qualitative (i.e. reflective journals) data collection tools to answer the research questions.
Rubric
The rubric was created based on the assessment criteria for the tasks. Two experts in the fields of curriculum and instruction and assessment in education designed the rubric. It was used in the form of self-assessment and was shared with the participating pre-service teachers in the experimental group. The instructor also introduced the rubric to them in order to facilitate its use. The rubric had the following components (Appendix Table 4):
1) Two essay assignments with the self-assessment rubric
2) The rubric with 10 assessment criteria
3) Self-grading with four levels of performance on the tasks (1—unsuccessful, 2—almost successful, 3—successful and 4—very successful)
Pre-service teachers in the intervention group completed the rubrics. Then, they wrote reflective journals after completing each task. The rubric was used to calculate self-grading performance, but self-grading was not included in students’ final grades for the course. The quality of the rubric design and intervention was also reported according to the instrument developed in Panadero et al.’s (2023a) meta-analysis (Appendix 2).
Achievement test
An achievement test was developed by the instructor to assess the pre-service teachers’ knowledge and skills related to the course content. The first part of the achievement test consisted of multiple-choice questions and the second part consisted of open-ended questions. Two experts reviewed the test and concluded that it was appropriate for assessing knowledge and skills of the course content and that it had content validity. The test was administered to both groups (experimental and control) as pretest and posttest. An answer key for the multiple-choice items and a rubric for the open-ended items were prepared by the instructor to assess the students’ performance. The KR-20 reliability coefficient of the multiple-choice items in the achievement test was calculated as 0.82. Two raters independently scored the open-ended responses; Cohen’s kappa was calculated as 0.86, indicating a high level of agreement between the raters (McHugh, 2012).
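The KR-20 coefficient reported above can be reproduced from an item-response matrix with the standard formula. The sketch below is illustrative only: the response matrix is hypothetical, since the study’s raw test data are not reproduced here.

```python
import numpy as np

def kr20(item_matrix):
    """Kuder-Richardson 20 reliability for dichotomously scored (0/1) items.

    Rows are respondents, columns are items. Uses item variances p*q and
    the sample variance of the total scores.
    """
    x = np.asarray(item_matrix, dtype=float)
    k = x.shape[1]                             # number of items
    p = x.mean(axis=0)                         # proportion correct per item
    pq_sum = (p * (1.0 - p)).sum()             # sum of item variances p*q
    total_var = x.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1.0 - pq_sum / total_var)

# Hypothetical responses: 6 test-takers x 4 multiple-choice items
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
]
print(round(kr20(responses), 3))  # → 0.872
```

Conventions for the variance denominators vary slightly across textbooks; this sketch uses the sample variance for total scores.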
Self-regulation in learning scale (SLS)
One of the aims of this study was to examine whether self-assessment practices have an impact on pre-service teachers’ self-regulated learning. Therefore, this study used the scale developed by Erdogan and Senemoglu (2016). It consisted of 67 items across 17 dimensions, rated on a 5-point scale ranging from strongly agree (5) to strongly disagree (1). Confirmatory factor analysis showed that the final model was an acceptable fit. The whole scale has a reliability of 0.91 (Cronbach’s alpha). The scale had two subscales: self-regulated learning skills and motivation. The self-regulated learning skills subscale consisted of three main dimensions: before study, during study and after study. Environmental structuring, planning and arrangement of study time were investigated in the before-study dimension, which has 13 items and a reliability of 0.78 (Cronbach’s alpha). Organizing and transforming; seeking appropriate information; seeking easily accessible information; seeking peer, teacher or adult assistance; self-monitoring; and rehearsing and memorizing were investigated in the during-study dimension, which has 19 items and a reliability of 0.77. Self-evaluation, self-consequences after success and self-consequences after failure were investigated in the after-study dimension, which has a reliability of 0.82. The motivation subscale included task value, self-efficacy, anxiety, attributions for failure and goal orientations; Cronbach’s alpha for this 22-item subscale was computed as 0.81. The current study selected the self-regulated learning skills subscale of the SLS as the dependent variables.
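Cronbach’s alpha values such as those reported for the subscales can be computed directly from an item-response matrix. The rating matrix below is hypothetical, as the scale data themselves are not available here; only the formula corresponds to the statistic named above.

```python
import numpy as np

def cronbach_alpha(item_matrix):
    """Cronbach's alpha: rows are respondents, columns are Likert items."""
    x = np.asarray(item_matrix, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)      # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical 5-point-scale responses: 5 respondents x 3 items
ratings = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 4],
]
print(round(cronbach_alpha(ratings), 3))  # → 0.968
```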
Reflective journals
Reflective journals were used to explore pre-service teachers’ perceptions and experiences of using self-assessment with instructor feedback. Pre-service teachers in the experimental group wrote reflective journals about their perspectives and experiences of self-assessment after completing each task. In addition to using self-assessment with rubrics (i.e. using rubrics for self-grading) to improve learning, the study used reflective journals to report self-assessment results. Reflective journals allow pre-service teachers to reflect deeply on self-assessment (McMillan & Hearn, 2008; Wang, 2017) and provide details of students’ strengths and weaknesses in their self-assessment of tasks (Wang, 2017; Yan et al., 2023a).
Data analyses
The quasi-experimental research model with a pretest/posttest control group design was used for the quantitative phase of the study. An analysis of covariance (ANCOVA) for the achievement test and a multivariate analysis of covariance (MANCOVA) for the self-regulation in learning subscale were performed to detect any significant differences between the experimental and control groups. Before conducting the data analysis, the assumptions of the analysis of covariance were examined. First, it was determined whether the achievement test data from the experimental and control groups were normally distributed. The results showed that the posttest scores of both groups were normally distributed (experimental group: Kolmogorov–Smirnov = 0.977, N = 44, p > 0.05; control group: Kolmogorov–Smirnov = 0.959, N = 35, p > 0.05). Levene’s test showed that the variances of the posttest scores of the experimental and control groups were equal (F(1,77) = 0.086, p > 0.05). Another assumption of ANCOVA is that the slopes of the regression lines are equal. The results showed that the effect of the group × pretest interaction on pre-service teachers’ achievement scores was not significant (F(1,74) = 0.599, p > 0.05), meaning that the assumption of homogeneity of regression slopes was not violated.
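The ANCOVA assumption checks described above can be scripted with SciPy. The scores below are simulated stand-ins (the study’s raw data are not reproduced here), and the Shapiro–Wilk test is used in place of the Kolmogorov–Smirnov test reported in the paper, purely for illustration of the workflow.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated pretest/posttest scores (illustrative only; not the study's data)
pre_exp   = rng.normal(60, 10, 44)
post_exp  = pre_exp + rng.normal(5, 8, 44)
pre_ctrl  = rng.normal(60, 10, 35)
post_ctrl = pre_ctrl + rng.normal(1, 8, 35)

# 1) Normality of posttest scores in each group
p_norm_exp  = stats.shapiro(post_exp).pvalue
p_norm_ctrl = stats.shapiro(post_ctrl).pvalue

# 2) Homogeneity of variance across groups (Levene's test)
p_levene = stats.levene(post_exp, post_ctrl).pvalue

# 3) Homogeneity of regression slopes: the pretest-posttest slope
#    should be similar in both groups for ANCOVA to be appropriate
slope_exp  = stats.linregress(pre_exp, post_exp).slope
slope_ctrl = stats.linregress(pre_ctrl, post_ctrl).slope

print(p_norm_exp, p_norm_ctrl, p_levene, slope_exp, slope_ctrl)
```

A formal test of the slope assumption would fit a model with a group × pretest interaction term, as the paper reports; comparing per-group slopes is a quick informal check.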
Prior to examining the multivariate effects on students’ self-reported use of self-regulated learning strategies, all assumptions of MANCOVA were examined. MANCOVA must meet all the assumptions of multivariate analysis of variance (MANOVA). The data were checked for violations of normality and homogeneity of variances, the assumptions associated with MANOVA. The results showed that the dependent variables were normally distributed (p > 0.05). Box’s M test for equality of covariance matrices was not statistically significant, indicating homogeneity of the variance and covariance matrices between groups (p > 0.05). The dependent and covariate variables were also examined to determine whether the additional assumptions of MANCOVA were met. The findings showed that the dependent and covariate variables were linearly related, and equal slopes and variances between the two groups satisfied the remaining assumptions.
First, an independent samples t-test for the achievement test and a multivariate analysis of variance (MANOVA) for the self-regulated learning skills subscales were performed on the pretest scores of the experimental and control groups. The findings revealed no statistically significant differences between the groups’ pretest scores on the achievement test [t(78) = 0.54, p = 0.957] or the self-regulated learning skills subscales [Wilks’ lambda (λ) = 0.975, F(3,70) = 0.589, p = 0.625]. The pretest achievement scores were then used as covariates in an analysis of covariance (ANCOVA) to compare the posttest achievement scores.
In the qualitative phase of the study, an inductive approach of grounded theory was used to analyse the pre-service teachers’ reflective journals on their completed essay assignments. The reflective journals helped to reveal their feelings and attitudes towards the learning process. The data were carefully read with reference to the research question and Yan and Brown’s (2017) model of the self-assessment process. A coding scheme was developed by the researcher. The researcher and a research assistant acted as independent coders and used the coding scheme to code all of the qualitative data. Cohen’s kappa, a measure of inter-coder reliability, was then calculated as 0.88 (Cohen, 1960; Nili et al., 2020), indicating almost perfect agreement between the coders (Landis & Koch, 1977; McHugh, 2012). Clearly defined themes emerged from the analysis, and the themes and sub-themes are presented in the study.
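Cohen’s kappa for two independent coders can be computed directly from the paired codes. The category labels and codings below are hypothetical, invented for illustration; only the formula matches the procedure described above.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning nominal codes to the same items."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement if each coder labelled independently at their base rates
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical codes assigned by two coders to ten journal excerpts
a = ["goal", "goal", "feedback", "reflect", "goal",
     "feedback", "reflect", "goal", "feedback", "reflect"]
b = ["goal", "goal", "feedback", "reflect", "goal",
     "feedback", "goal", "goal", "feedback", "reflect"]
print(round(cohens_kappa(a, b), 2))  # → 0.85
```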
Results
RQ1: Do pre-service teachers using rubrics in self-assessment with instructor feedback improve their academic performance and self-regulated learning compared to the control group?
Descriptive statistics for the Academic Achievement Test and the Self-Regulation in Learning subscale of the pre and posttest scores for the experimental and control groups are presented in Table 1.
Analysis of covariance (ANCOVA) and multivariate analysis of covariance (MANCOVA) were used to examine the effects of self-assessment practices on students’ academic achievement and self-regulated learning skills. The data analyses are summarized in Tables 2 and 3.
Effect of self-assessment using a rubric on pre-service teachers’ academic performance
The effect of rubric-based self-assessment on pre-service teachers’ academic performance was examined in the study. An analysis of covariance (ANCOVA) was used to examine the differences in academic achievement between the control and experimental groups across the pretest and posttest administrations. The results of the univariate ANCOVA showed that the difference between the mean academic achievement test scores of students in the experimental and control groups was statistically significant (F(1, 76) = 4.36, p < 0.05, partial η2 = 0.054), indicating a significant difference in favour of the experimental group. According to the Bonferroni results, the mean academic performance test score in the experimental group (M = 64.38) was higher than in the control group (M = 60.11). The partial eta-squared value (partial η2 = 0.054) indicated a moderate effect on student achievement (Cohen, 1988). The results are presented in Table 2.
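For a one-way ANCOVA, partial eta-squared can be recovered from the reported F statistic and its degrees of freedom, which makes the effect size above easy to verify:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta-squared from an F statistic: F*df1 / (F*df1 + df2)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# ANCOVA on achievement reported above: F(1, 76) = 4.36
print(round(partial_eta_squared(4.36, 1, 76), 3))  # → 0.054
```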
Effect of self-assessment using a rubric on pre-service teachers’ self-regulated learning
A MANCOVA was conducted to assess the students’ self-regulated learning skills on the three posttest means, using the pretest means as covariates (see Table 3). Data analyses revealed a statistically significant difference between the means for self-regulation skills in the experimental and control groups (Wilks’ λ = 0.85, multivariate F(3,67) = 3.75, p = 0.015, partial η2 = 0.144). The results showed that the use of rubrics in self-assessment had a large effect on students’ self-regulated learning skills. The analysis revealed that the experimental group had a significantly higher score than the control group on the during-study and after-study subscales at the Bonferroni-adjusted α level (during-study subscale: F(1,69) = 5.98, p = 0.017, partial η2 = 0.08; after-study subscale: F(1,69) = 5.65, p = 0.02, partial η2 = 0.076). Each subscale within the during-study and after-study dimensions was also examined. The results showed that the experimental group scored significantly higher than the control group on rehearsal and memorization [F(1,60) = 16.35, p < 0.001, partial η2 = 0.214] in the during-study subscales. However, the experimental group scored significantly lower than the control group on self-consequences after success [F(1,60) = 17.51, p < 0.001, partial η2 = 0.226] in the after-study subscale.
RQ2: What are pre-service teachers’ perceptions and experiences of using the rubric in self-assessment with instructor feedback?
An inductive approach of grounded theory was used to analyse the pre-service teachers’ reflective journals of their completed assignments in the qualitative part of the study. The pre-service teachers in the experimental group revealed their opinions and experiences about the completion of their assignments using rubrics in self-assessment with instructor feedback. The documented data from their reflective journals were analysed.
Themes and codes of the data from reflective journals
Pre-service teachers’ perceptions and feelings about using rubrics for their essay assignments
Most of the pre-service teachers (PS) found self-assessment with a rubric (assessment criteria provided by the teacher) useful for their essay assignments (n = 30). One participant reported:
I evaluated my assignments through the rubric. It helped me [understand] what I am supposed to do in my assignments (PS2).
Effectiveness of self-assessment practices on self-regulation and academic performance
Guidance for students in their goal-setting: Students felt that the rubric was a clear guide to help them set their goals and plan their essay tasks (n = 6). As one pre-service teacher noted:
Before doing my assignment, I read the rubric carefully. Rubric like a map. It helps me to decide where I should start, and what I am going to do on my essay assignment. I relied on the assessment criteria in the rubrics and tried to reach the expected high level of performance for the tasks (PS5).
Developing self-directed feedback seeking: Several pre-service teachers reported that using the assessment criteria (rubrics) formulated by the instructor as external feedback for their assignments was beneficial (n = 8). As one of the participants mentioned:
I studied my assignments step by step based on the assessment criteria provided by my tutor (PS3).
Another participant also stated:
I have clearly stated the critical parts on my research assignment since I understood the rubrics well (PS14).
Some of the students believed that the feedback they received from the instructor on their self-assessment results was helpful for their assignments (n = 7). As some of the pre-service teachers noted:
When I begin to my assignment, I needed to understand the rubric. I received feedback several times from my instructor on self-assessment rubric. So I can easily finished my task (PS3).
Before doing the second homework, I received feedback from my tutor about my previous homework based on the rubric and I think that I learned what I need to pay attention while doing this homework (PS6).
The pre-service teachers also drew on internal feedback that came from within themselves while using the rubric to perform their tasks. Students’ perceptions and feelings about their own performance were reported (n = 4). For example:
I did not feel that I am sufficient while doing my assignment (PS13).
I was confused when I begin to my first research assignment according to rubric. However, I had less difficulty on my second assignment since I understood rubric well (PS18).
Self-reflection: After seeking sources of feedback, rubrics helped pre-service teachers reflect on their own performance. They evaluated the quality of their own tasks and identified their strengths and weaknesses (n = 8). Some of the participants expressed:
I had chance to self-evaluate myself through the assignments. And also I learned how to write research methodology with the rubric (PS7).
I can evaluate the quality of my own research paper through the rubric. Generally, I have created good product in that assignments. However, I had difficulty when conducting a literature search in that process (PS16).
I realized that I had serious problems using quotations in my essay. Without rubric I would not know my shortcomings. If I use more references in my essay, I think that my work is going to be better. Therefore, I decided to improve myself (PS20).
Pre-service teachers mentioned in their reflective journals that the rubric helped them to grade themselves on their own tasks and to identify their difficulties objectively (n = 3). One of them stated:
… I tried to self-grade my own performance by comparing the assessment criteria in the rubric objectively (PS4).
Discussion
The current study investigated the effects of pre-service teachers’ use of rubrics in self-assessment with instructor feedback on their academic performance and self-regulated learning (RQ1). The results showed that pre-service teachers’ use of rubrics in self-assessment significantly improved their academic performance. Rubrics are an important teaching and learning tool for promoting self-assessment, and they can help improve students’ academic performance and self-regulation (Panadero & Jonsson, 2013; Panadero et al., 2023a). This study supports previous findings that asking students to generate their own feedback through the use of rubrics in self-assessment has the potential to improve their academic performance (Andrade et al., 2010; Fraile et al., 2023; Lipnevich et al., 2023).
The current study also showed that asking pre-service teachers to self-assess their own work using rubrics effectively increased their use of self-regulated learning strategies (i.e. cognitive factors). In other words, the pre-service teachers who engaged in self-assessment reported greater use of learning strategies across the whole self-regulated learning process (before study, during study, after study) than the control group. Although the difference between the experimental and control groups did not reach statistical significance at every SRL phase, it can be concluded that the use of rubrics in self-assessment promotes the use of self-regulated learning strategies (i.e. rehearsal and memorization). Previous experimental studies have mostly compared the effects of different self-assessment tools (rubrics, scripts, exemplars, etc.) on students’ learning outcomes at different educational levels (Lipnevich et al., 2014, 2023; Panadero & Romero, 2014; Panadero et al., 2012). For instance, Lipnevich et al. (2023) found that students in the rubric condition showed higher writing performance than students in the exemplar or combined conditions in compulsory education. Other studies have compared the effects of using versus not using rubrics in self-assessment on students’ learning outcomes (e.g. Andrade et al., 2010; Panadero & Romero, 2014). Panadero and Romero (2014) found that the group of pre-service teachers who used rubrics in their self-assessment reported greater use of learning strategies, higher performance and greater accuracy than the group who did not. Unlike these studies, the current study compared a rubric-based self-assessment with instructor feedback condition against a no self-assessment condition on academic performance and self-regulated learning.
Thus, this study clarified that well-implemented self-assessment in higher education can improve students’ academic performance and their use of self-regulated learning strategies. This finding highlights the crucial role of combining rubric feedback and instructor feedback in the self-assessment process (Brown & Harris, 2013; Hattie & Timperley, 2007). To some extent, this finding is in line with recent meta-analyses (Panadero et al., 2023a; Yan et al., 2023b). Panadero et al. (2023a) showed that the use of rubrics had a moderate effect on academic performance but a smaller effect on students’ self-regulated learning and self-efficacy. Yan et al. (2023b) also provided evidence that self-assessment interventions with explicit feedback from others had a larger effect on academic performance than those without explicit feedback.
In RQ2, the study also examined the pre-service teachers’ perceptions and experiences of completing their assignments using a self-assessment rubric. Exploring how students perceive the implementation of self-assessment is crucial for increasing its positive effects. By examining how pre-service teachers perceive the implementation of self-assessment, this study contributes to previous research on the use of rubrics with external (i.e. instructor) feedback. Most of the participants found the process useful. The pre-service teachers also received additional feedback from the instructor on their use of the rubric. Their perceptions of using the rubric in self-assessment involving instructor feedback showed that it helped to promote their self-regulated learning by guiding them to set their own goals, monitor their progress and reflect on their learning through their own tasks. These results are in line with previous studies on students’ positive perceptions of the implementation of self-assessment (Andrade & Du, 2007; Hill, 2016; Wang, 2017).
Students’ individual factors (e.g. educational level) may also influence their perceptions, beliefs and implementation of self-assessment (Andrade, 2019; Brown & Harris, 2013; Yan et al., 2023a). This may be because older students tend to have higher academic ability and make better use of self-regulated learning strategies than younger students (Brown & Harris, 2013). Based on the results of the study, the use of rubrics in self-assessment with instructor feedback is recommended for student tasks of low or medium complexity in higher education.
Limitations and future directions
Although the results of the current study have theoretical and practical implications, several limitations should be considered. First, the study relied on a small sample from a single higher education context. Second, the intervention period was short, although the study nevertheless showed significant effects.
The study focused on the impact of self-assessment practices on self-regulation (e.g. self-regulated learning strategies) and academic achievement, as well as on pre-service teachers’ perceptions and experiences of those practices. Other dependent variables related to self-regulation, motivation and cognition could be investigated in further studies.
This study investigated one condition (use of a rubric in self-assessment with instructor feedback) using a controlled experimental design. To maximize the positive effects of self-assessment practices, further studies are needed to investigate the effects of different self-assessment interventions (rubrics, scripts and/or exemplars, with or without external feedback) and students’ perceptions of self-assessment practices at different levels of education. Further research is also needed to investigate more specifically how innovative and effective self-assessment interventions can be designed to promote self-regulated learning, motivation and academic achievement. Future studies could extend the generalizability of the findings by focusing on different subjects, different learning outcomes and students at different educational levels.
Conclusion
In this study, the self-assessment design (using a rubric with instructor feedback) showed the potential to improve pre-service teachers’ academic performance and self-regulated learning. The study contributes to the literature on feedback and self-regulation by investigating the effect of this self-assessment design on students’ academic performance and self-regulation, as well as their perceptions of self-assessment. It provides empirical support for the self-regulatory cycle, as well as for Yan and Brown’s (2017) theoretical model of the self-assessment process. Instructor feedback in the self-assessment process led to greater progress in students’ academic performance and self-regulated learning (Andrade et al., 2008; Wollenschläger et al., 2016). The combination of rubric feedback and instructor feedback, which has been little studied, produced positive effects. Innovative self-assessment interventions are needed to further improve students’ learning outcomes.
References
Alonso-Tapia, J., & Panadero, E. (2010). Effect of self-assessment scripts on self-regulation and learning. Infancia y Aprendizaje, 33(3), 385–397. https://doi.org/10.1174/021037010792215145
Andrade, H. (2010). Students as the definitive source of formative assessment: Academic self-assessment and the self-regulation of learning. In H. J. Andrade & G. J. Cizek (Eds.), Handbook of formative assessment (pp. 90–105). Routledge.
Andrade, H., & Du, Y. (2007). Student responses to criteria-referenced self-assessment. Assessment & Evaluation in Higher Education, 32(2), 159–181. https://doi.org/10.1080/02602930600801928
Andrade, H., & Valtcheva, A. (2009). Promoting learning and achievement through self-assessment. Theory into Practice, 48(1), 12–19. https://doi.org/10.1080/00405840802577544
Andrade, H. L., Du, Y., & Wang, X. (2008). Putting rubrics to the test: The effect of a model, criteria generation, and rubric-referenced self-assessment on elementary school students’ writing. Educational Measurement: Issues and Practice, 27(2), 3–13. https://doi.org/10.1111/j.1745-3992.2008.00118.x
Andrade, H. L., Wang, X., Du, Y., & Akawi, R. L. (2009). Rubric-referenced self-assessment and self-efficacy for writing. The Journal of Educational Research, 102(4), 287–302. https://doi.org/10.3200/JOER.102.4.287-302
Andrade, H. L., Du, Y., & Mycek, K. (2010). Rubric-referenced self-assessment and middle school students’ writing. Assessment in Education: Principles, Policy & Practice, 17(2), 199–214. https://doi.org/10.1080/09695941003696172
Andrade, H. L. (2018). Feedback in the context of self-assessment. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 376–408). Cambridge University Press. https://doi.org/10.1017/9781316832134.019
Andrade, H. L. (2019). A critical review of research on student self-assessment. Frontiers in Education, 4. https://doi.org/10.3389/feduc.2019.00087
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–148.
Boud, D., & Falchikov, N. (1989). Quantitative studies of student self-assessment in higher education: A critical analysis of findings. Higher Education, 18(5), 529–549.
Boud, D., Lawson, R., & Thompson, D. G. (2013). Does student engagement in self-assessment calibrate their judgement over time? Assessment & Evaluation in Higher Education, 38(8), 941–956. https://doi.org/10.1080/02602938.2013.769198
Bradford, K. L., Newland, A. C., Rule, A. C., & Montgomery, S. E. (2016). Rubrics as a tool in writing instruction: Effects on the opinion essays of first and second graders. Early Childhood Education Journal, 44, 463–472. https://doi.org/10.1007/s10643-015-0727-0
Brookhart, S. M., & Chen, F. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343–368. https://doi.org/10.1080/00131911.2014.929565
Brown, G. T. L., & Harris, L. R. (2013). Student self-assessment. In J. H. McMillan (Ed.), The SAGE handbook of research on classroom assessment (pp. 367–393). Sage.
Brown, G. T., Andrade, H. L., & Chen, F. (2015). Accuracy in student self-assessment: Directions and cautions for research. Assessment in Education: Principles, Policy & Practice, 22(4), 444–457. https://doi.org/10.1080/0969594X.2014.996523
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Erlbaum.
Dinsmore, D. L., & Wilson, H. E. (2016). Student participation in assessment: Does it influence self-regulation? In G. T. L. Brown & L. R. Harris (Eds.), Handbook of human and social factors in assessment (pp. 145–168). Routledge.
Dunlosky, J., & Rawson, K. A. (2012). Overconfidence produces underachievement: Inaccurate self evaluations undermine students’ learning and retention. Learning and Instruction, 22(4), 271–280. https://doi.org/10.1016/j.learninstruc.2011.08.003
Erdogan, T., & Senemoglu, N. (2016). Development and validation of a scale on self-regulation in learning (SSRL). Springerplus, 5, 1686. https://doi.org/10.1186/s40064-016-3367-y
Falchikov, N., & Boud, D. (1989). Student self-assessment in higher education: A meta-analysis. Review of Educational Research, 59(4), 395–430.
Fraile, J., Gil-Izquierdo, M., & Medina-Moral, E. (2023). The impact of rubrics and scripts on self-regulation, self-efficacy and performance in collaborative problem-solving tasks. Assessment & Evaluation in Higher Education, 48(8), 1223–1239. https://doi.org/10.1080/02602938.2023.2236335
Goodrich, H. W. (1996). Student self-assessment: At the intersection of metacognition and authentic assessment. Harvard University.
Hanrahan, S. J., & Isaacs, G. (2001). Assessing self-and peer-assessment: The students’ views. Higher Education Research & Development, 20(1), 53–70.
Harris, L. R., & Brown, G. T. (2013). Opportunities and obstacles to consider when using peer-and self-assessment to improve student learning: Case studies into teachers’ implementation. Teaching and Teacher Education, 36, 101–111. https://doi.org/10.1016/j.tate.2013.07.008
Harris, L. R., & Brown, G. T. (2018). Using self-assessment to improve student learning. Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Hill, T. (2016). Do accounting students believe in self-assessment? Accounting Education, 25(4), 291–305. https://doi.org/10.1080/09639284.2016.1191271
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144. https://doi.org/10.1016/j.edurev.2007.05.002
Karaman, P. (2021). The impact of self-assessment on academic performance: A meta-analysis study. International Journal of Research in Education and Science (IJRES), 7(4), 1151–1166. https://doi.org/10.46328/ijres.2344
Kostons, D., Van Gog, T., & Paas, F. (2012). Training self-assessment and task-selection skills: A cognitive approach to improving self-regulated learning. Learning and Instruction, 22(2), 121–132. https://doi.org/10.1016/j.learninstruc.2011.08.004
Krebs, R., Rothstein, B., & Roelle, J. (2022). Rubrics enhance accuracy and reduce cognitive load in self-assessment. Metacognition and Learning, 17(2), 627–650. https://doi.org/10.1007/s11409-022-09302-1
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.
Leach, L. (2012). Optional self-assessment: Some tensions and dilemmas. Assessment & Evaluation in Higher Education, 37(2), 137–147. https://doi.org/10.1080/02602938.2010.515013
Leenknecht, M., Wijnia, L., Köhlen, M., Fryer, L., Rikers, R., & Loyens, S. (2020). Formative assessment as practice: The role of students’ motivation. Assessment & Evaluation in Higher Education, 46(2), 1–20. https://doi.org/10.1080/02602938.2020.1765228
Lipnevich, A. A., McCallen, L. N., Miles, K. P., & Smith, J. K. (2014). Mind the gap! Students’ use of exemplars and detailed rubrics as formative assessment. Instructional Science, 42, 539–559.
Lipnevich, A. A., Panadero, E., & Calistro, T. (2023). Unraveling the effects of rubrics and exemplars on student writing performance. Journal of Experimental Psychology: Applied, 29(1), 136–148. https://doi.org/10.1037/xap0000434
Logan, B. (2015). Reviewing the value of self-assessments: Do they matter in the classroom? Research in Higher Education Journal, 29, 1–11.
Lund, T. (2012). Combining qualitative and quantitative approaches: Some arguments for mixed methods research. Scandinavian Journal of Educational Research, 56(2), 155–165. https://doi.org/10.1080/00313831.2011.568674
McHugh, M. L. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), 276–282.
McMillan, J. H., & Hearn, J. (2008). Student self-assessment: The key to stronger student motivation and higher achievement. Educational Horizons, 87(1), 40–49. Retrieved January 10, 2024, from https://www.jstor.org/stable/42923742
Nicol, D. (2021). The power of internal feedback: Exploiting natural comparison processes. Assessment & Evaluation in Higher Education, 46(5), 756–778. https://doi.org/10.1080/02602938.2020.1823314
Nicol, D., & McFarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
Nili, A., Tate, M., Barros, A., & Johnstone, D. (2020). An approach for selecting and using a method of inter-coder reliability in information management research. International Journal of Information Management, 54, 102154.
Onwuegbuzie, A. J., & Leech, N. L. (2006). Linking research questions to mixed methods data analysis procedures. The Qualitative Report, 11(3), 474–498.
Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129–144. https://doi.org/10.1016/j.edurev.2013.01.002
Panadero, E., & Romero, M. (2014). To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. Assessment in Education: Principles, Policy & Practice, 21(2), 133–148. https://doi.org/10.1080/0969594X.2013.877872
Panadero, E., Tapia, J. A., & Huertas, J. A. (2012). Rubrics and self-assessment scripts effects on self-regulation, learning and self-efficacy in secondary education. Learning and Individual Differences, 22(6), 806–813. https://doi.org/10.1016/j.lindif.2012.04.007
Panadero, E., Alonso-Tapia, J., & Reche, E. (2013). Rubrics vs. self-assessment scripts effect on self-regulation, performance and self-efficacy in pre-service teachers. Studies in Educational Evaluation, 39(3), 125–132. https://doi.org/10.1016/j.stueduc.2013.04.001
Panadero, E., Brown, G. T. L., & Strijbos, J. W. (2016a). The future of student self-assessment: A review of known unknowns and potential directions. Educational Psychology Review, 28(4), 803–830. https://doi.org/10.1007/s10648-015-9350-2
Panadero, E., Jonsson, A., & Strijbos, J. (2016b). Scaffolding self-regulated learning through self-assessment and peer assessment: Guidelines for classroom implementation. In D. Laveault & L. Allal (Eds.), Assessment for learning: Meeting the challenge of implementation (pp. 311–326). Springer.
Panadero, E., Jonsson, A., & Botella, J. (2017). Effects of self-assessment on self-regulated learning and self-efficacy: Four meta-analyses. Educational Research Review, 22, 74–98. https://doi.org/10.1016/j.edurev.2017.08.004
Panadero, E., Andrade, H., & Brookhart, S. (2018). Fusing self-regulated learning and formative assessment: A roadmap of where we are, how we got here, and where we are going. The Australian Educational Researcher, 45, 13–31. https://doi.org/10.1007/s13384-018-0258-y
Panadero, E., Lipnevich, A. A., & Broadbent, J. (2019). Turning self-assessment into self-feedback. In D. Boud, M. D. Henderson, R. Ajjawi, & E. Molloy (Eds.), The Impact of feedback in higher education: improving assessment outcomes for learners (pp. 147–163). Springer.
Panadero, E., Fernández-Ruiz, J., & Sánchez-Iglesias, I. (2020). Secondary education students’ self-assessment: The effects of feedback, subject matter, year level, and gender. Assessment in Education: Principles, Policy & Practice, 27(6), 607–634. https://doi.org/10.1080/0969594X.2020.1835823
Panadero, E., Jonsson, A., Pinedo, L., & Fernández-Castilla, B. (2023a). Effects of rubrics on academic performance, self-regulated learning, and self-efficacy: A meta-analytic review. Educational Psychology Review, 35, 113. https://doi.org/10.1007/s10648-023-09823-4
Panadero, E., Pérez, D. G., Ruiz, J. F., Fraile, J., Sánchez-Iglesias, I., & Brown, G. T. (2023b). University students’ strategies and criteria during self-assessment: Instructor’s feedback, rubrics, and year level effects. European Journal of Psychology of Education, 38(3), 1031–1051. https://doi.org/10.1007/s10212-022-00639-4
Raaijmakers, S. F., Baars, M., Paas, F., van Merriënboer, J. J., & Van Gog, T. (2019). Effects of self-assessment feedback on self-assessment and task-selection accuracy. Metacognition and Learning, 14, 21–42. https://doi.org/10.1007/s11409-019-09189-5
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448. https://doi.org/10.1080/02602930902862859
Reitmeier, C. A., & Vrchota, D. A. (2009). Self-assessment of oral communication presentations in food science and nutrition. Journal of Food Science Education, 8(4), 88–92. https://doi.org/10.1111/j.1541-4329.2009.00080.x
Ross, J. A. (2019). The reliability, validity, and utility of self-assessment. Practical Assessment, Research, and Evaluation, 11(1), 10. https://doi.org/10.7275/9wph-vv65
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.
Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. N. (2010). Self-assessment of knowledge: A cognitive learning or affective measure? Academy of Management Learning & Education, 9(2), 169–191. https://doi.org/10.5465/amle.9.2.zqr169
Stevens, D. D., & Levi, A. J. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (2nd ed.). Stylus Publishing.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research techniques: Techniques and procedures for developing grounded theory. Sage.
Taras, M. (2003). To feedback or not to feedback in student self-assessment. Assessment & Evaluation in Higher Education, 28(5), 549–565. https://doi.org/10.1080/02602930301678
van Diggelen, M., den Brok, P., & Beijaard, D. (2013). Teachers’ use of a self-assessment procedure: The role of criteria, standards, feedback and reflection. Teachers and Teaching, 19(2), 115–134. https://doi.org/10.1080/13540602.2013.741834
Wang, W. (2017). Using rubrics in student self-assessment: Student perceptions in the English as a foreign language writing context. Assessment & Evaluation in Higher Education, 42(8), 1280–1292. https://doi.org/10.1080/02602938.2016.1261993
Wanner, T., & Palmer, E. (2018). Formative self-and peer assessment for improved student learning: The crucial factors of design, teacher participation and feedback. Assessment & Evaluation in Higher Education, 43(7), 1032–1047. https://doi.org/10.1080/02602938.2018.1427698
Wollenschläger, M., Hattie, J., Machts, N., Möller, J., & Harms, U. (2016). What makes rubrics effective in teacher-feedback? Transparency of learning goals is not enough. Contemporary Educational Psychology, 44, 1–11. https://doi.org/10.1016/j.cedpsych.2015.11.003
Wong, H. M. (2017). Implementing self-assessment in Singapore primary schools: Effects on students’ perceptions of self-assessment. Pedagogies: An International Journal, 12(4), 391–409. https://doi.org/10.1080/1554480X.2017.1362348
Yan, Z. (2018). Student self-assessment practices: The role of gender, school level and goal orientation. Assessment in Education: Principles, Policy & Practice, 25(2), 183–199. https://doi.org/10.1080/0969594X.2016.1218324
Yan, Z. (2020). Self-assessment in the process of self-regulated learning and its relationship with academic achievement. Assessment & Evaluation in Higher Education, 45(2), 224–238. https://doi.org/10.1080/02602938.2019.1629390
Yan, Z. (2022). Student self-assessment as a process for learning. Routledge.
Yan, Z., & Brown, G. T. (2017). A cyclical self-assessment process: Towards a model of how students engage in self-assessment. Assessment & Evaluation in Higher Education, 42(8), 1247–1262. https://doi.org/10.1080/02602938.2016.1260091
Yan, Z., & Carless, D. (2022). Self-assessment is about more than self: The enabling role of feedback literacy. Assessment & Evaluation in Higher Education, 47(7), 1116–1128. https://doi.org/10.1080/02602938.2021.2001431
Yan, Z., Brown, G. T. L., Lee, C. K. J., & Qiu, X. L. (2020a). Student self-assessment: Why do they do it? Educational Psychology, 40(4), 509–532. https://doi.org/10.1080/01443410.2019.1672038
Yan, Z., Chiu, M. M., & Ko, P. Y. (2020b). Effects of self-assessment diaries on academic achievement, self-regulation, and motivation. Assessment in Education: Principles, Policy & Practice, 27(5), 562–583. https://doi.org/10.1080/0969594X.2020.1827221
Yan, Z., Lao, H., Panadero, E., Fernández-Castilla, B., Yang, L., & Yang, M. (2022). Effects of self-assessment and peer-assessment interventions on academic performance: A meta-analysis. Educational Research Review, 37, 100484. https://doi.org/10.1016/j.edurev.2022.100484
Yan, Z., Panadero, E., Wang, X., & Zhan, Y. (2023a). A systematic review on students’ perceptions of self-assessment: Usefulness and factors influencing implementation. Educational Psychology Review, 35, 81. https://doi.org/10.1007/s10648-023-09799-1
Yan, Z., Wang, X., Boud, D., & Lao, H. (2023b). The effect of self-assessment on academic performance and the role of explicitness: A meta-analysis. Assessment & Evaluation in Higher Education, 48(1), 1–15. https://doi.org/10.1080/02602938.2021.2012644
Zimmerman, B. J., & Moylan, A. R. (2009). Self-regulation: When metacognition and motivation intersect. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Handbook of metacognition in education (pp. 299–315). Routledge.
Funding
Open access funding provided by the Scientific and Technological Research Council of Türkiye (TÜBİTAK).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The author declares no competing interests.
Additional information
Pınar Karaman. Sinop University, Faculty of Education, Department of Educational Sciences/Curriculum and Instruction, Sinop, Türkiye. E-mail: pkaraman1626@gmail.com.
Current themes of research:
Formative assessment. Self-assessment. Feedback.
Most relevant publications in the field of Psychology of Education:
Karaman, P. (2021). The Impact of Self-Assessment on Academic Performance: A Meta-Analysis Study. International Journal of Research in Education and Science, 7(4), 1151–1166.
Karaman, P. (2021). The effect of formative assessment practices on student learning: A meta-analysis study. International Journal of Assessment Tools in Education, 8(4), 801–817.
Karaman, P., & Şahin, Ç. (2017). Adaptation of teachers' conceptions and practices of formative assessment scale into Turkish culture and a structural equation modeling. International Electronic Journal of Elementary Education, 10(2), 185–194.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix 1
Appendix 2: The report on the quality of rubric intervention
Instrument to report the characteristics of rubric design and implementation
Created by Panadero, E., Jonsson, A., Pinedo, L., & Fernández-Castilla, B. (2023). Effects of rubrics on academic performance, self-regulated learning, and self-efficacy: A meta-analytic review. Educational Psychology Review.
The study investigates:
Rubrics and scoring accuracy
✓ Rubrics and academic performance
✓ Rubrics and students’ perceptions
✓ Rubrics and self-regulated learning
| # | Category | Description | This study |
|---|---|---|---|
| | **Design** | | |
| 1 | Rubric presence | Have you included the rubric in the publication as supplementary material? | ✓ Yes / ☐ No. Reason: |
| 2 | Assessment criteria | Number of assessment criteria included in the rubric | 10 |
| 3 | Performance levels | How many performance levels are included in the rubric? Also list the headings | 4 levels: 1 = unsuccessful, 2 = almost successful, 3 = successful, 4 = very successful |
| 4 | Creation | Was the rubric created for this study? If not, please indicate the original source | ✓ Yes / ☐ No |
| 5 | Scoring strategy | If the rubric contains an explicit scoring strategy, provide a brief description | Self-grading of task performance using the four performance levels |
| 6 | Type | How was the assessment communicated to the students: holistic (i.e. as an overall assessment across all criteria) or analytical (i.e. separately for each criterion assessed)? | ☐ Holistic / ✓ Analytical |
| 7 | Type 2 | Was the rubric general (i.e. for a general skill such as writing), task-generic (i.e. applicable to several similar tasks) or task-specific (i.e. only applicable to one particular task)? | ☐ General / ✓ Task-generic / ☐ Task-specific |
| | **Implementation** | | |
| 8 | Self-assessment | Was the rubric used for self-assessment? | ✓ Yes / ☐ No |
| 9 | Self-scoring | Was the rubric used to calculate a self-score? | ✓ Yes, but the self-score was not included in the final grade / ☐ Yes, and the self-score represented _% of the final grade / ☐ No |
| 10 | Peer assessment | Was the rubric used for peer assessment? | ☐ Yes / ✓ No |
| 11 | Peer score | Was the rubric used to score a peer? | ☐ Yes, but the peer score was not included in the final grade / ☐ Yes, and the peer score represented _% of the final grade / ✓ No |
| 12 | Feedback | Did the students receive additional feedback about their performance or on how they used the rubric? | ☐ Yes, on both / ☐ Only on their performance / ✓ Only on how they used the rubric / ☐ No. Description of the additional feedback: the instructor gave feedback on the pre-service teachers' use of the rubric in self-assessment |
| 13 | Official weight | Did the activity assessed with the rubric count towards the students' grade? | ✓ Yes, for 30% of the total grade / ☐ No |
| 14 | Frequency | How many times was the rubric used (once, twice, etc.)? | Twice |
| 15 | Training | Did the participants receive training about the rubric? If yes, describe the training and the specific moment at which they received it | Yes. Before the tasks, the instructor explained to the pre-service teachers how to use the rubric to plan, monitor, and evaluate their own tasks |
| 16 | Revision | Did learners revise their work after using the rubric? | ✓ Yes / ☐ No |
| 17 | Extent of involvement | How were learners involved in the rubric design and implementation? | ✓ Students just received and used the rubric / ☐ Students were allowed to make small changes to the rubric / ☐ Students made substantial changes / ☐ Students co-created the rubric / ☐ Other: |
| 18 | Use of other instruments | Were any additional instruments employed to further strengthen the intervention effects, or to make comparisons with the rubric? If so, please explain the characteristics of those instruments | Students wrote reflective journals on their perceptions and experiences of using the rubric in self-assessment |
| 19 | Technology | Was any type of technology (online web platform, e-mail, etc.) used for the design and/or the implementation of the rubric? If so, please provide the details | Yes. Students used e-mail and an online web platform to submit their tasks and to receive instructor feedback on their use of the rubric in self-assessment |
| | **Outcomes** | | |
| 20 | Study outcomes | These variables are directly measured as outcomes of the rubric activity. Select all the options that apply to your study from the right column | ✓ Beliefs & perceptions: including perceptions of learning and capacity to use the rubric (e.g. fairness, usefulness), metacognition and self-regulation, attitudes and beliefs (e.g. self-efficacy), teachers' perceptions/conceptions / ☐ Emotions and motivation: emotions experienced by learners (e.g. achievement emotions, social emotions) & motivational beliefs (e.g. learning motivation) / ✓ Performance: academic/domain-specific performance, achievement, improved draft/work (i.e. revision) / ✓ Skills: quality of contribution to the group, professional behaviour, problem-solving skills, work habits, interpersonal skills, metacognitive & self-regulatory skills / ☐ Reliability of rubric: consistency of rubric scores among different raters (e.g. several teachers) / ☐ Validity of rubric: aspects related to testing the validity, such as content validity, comparing students' and teachers' assessments, etc. / ☐ Other: |
| | **Moderators/mediators** | | |
| 21 | Moderators/mediators | Variables that are not usually manipulated but are taken into account when investigating rubrics. Select the variables that have been explored in your study from the right column | ☐ Gender: of assessor/assessee / ☐ Ability & skills: includes prior knowledge, prior performance, achievement level, GPA, finished high school, previous level of education, year of enrolment, etc. / ☐ Skills: reviewing ability, computer skills, etc. / ✓ Age/grade level: of assessor/assessee / ☐ Other: |
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Karaman, P. Effects of using rubrics in self-assessment with instructor feedback on pre-service teachers’ academic performance, self-regulated learning and perceptions of self-assessment. Eur J Psychol Educ (2024). https://doi.org/10.1007/s10212-024-00867-w