
1 Introduction

Traditionally, feedback from peers has been used when teachers cannot provide proper feedback themselves, usually because of large class sizes (e.g., Falchikov & Goldfinch, 2000). This way of using peer assessment has not been fully adopted by teachers, due to its low reliability and validity (e.g., Liu & Carless, 2006). However, there has been a shift in goals, from using peer feedback as an assessment tool replacing teacher feedback, to using it for the benefits of learning, as a learning tool (e.g., Adachi et al., 2018; van Popta et al., 2017). As peer assessment consists of two processes—giving feedback and receiving feedback—both can (and do) contribute to learning (e.g., Li et al., 2020). However, several studies have indicated that giving feedback can lead to comparable or even greater learning than receiving feedback (e.g., Ion et al., 2019; Li & Grion, 2019; Phillips, 2016). Therefore, when used as a learning tool, giving feedback to peers can be a learning experience for feedback providers (or reviewers). Even though the contribution that giving feedback makes to learning has been shown, that part of the peer assessment process has been less studied than the receiving-feedback part. Separating these two parts and focusing only on giving feedback could lead to better understanding of the factors that might influence reviewers’ learning.

Such learning can be attributed to several factors. One is that while giving feedback to peers, students need to be cognitively involved with the material and the task. They need to compare the product to be reviewed with their own understanding and/or self-created product; this comparison leads to deeper thinking and thereby to learning (Nicol & McCallum, 2022). Another factor that can lead to learning is the process of thinking of and formulating appropriate feedback, which can again stimulate deeper thinking about the material (e.g., Lundstrom & Baker, 2009).

To maximize learning originating from giving feedback to peers, it is important first to study how the design of the feedback-giving procedure can influence its outcomes. To do that, it is important to deconstruct this process and study each phase separately. We conceptualised the feedback-giving process using the model suggested by Sluijsmans (2002) that includes three steps: define assessment criteria, judge the performance of a peer, and provide feedback for future learning.

Usually a feedback-giving task is included in a course as a separate activity that requires specially allocated time, because it covers larger-scale products, such as essays, reports, or team projects. This also means that such an activity should be planned appropriately: enough time should be allotted to it, so quite often it is set as homework or self-study. Teachers may be reluctant to include giving feedback to peers in their courses, as the feedback can be unreliable and too time-consuming for students (e.g., Liu & Carless, 2006). However, if giving feedback can fit into a regular 50-min class and have a formative nature, this would give students an opportunity to interact with the material at a meta-level and learn from it, and still proceed with the usual classroom activities. To achieve this goal, the activity must be designed so that the feedback-giving moment is not too long. Therefore, the reviewed products should be relatively small, so that giving feedback on them does not occupy too much time.

Using smaller scale products and a shorter feedback-giving interaction could also influence the learning of feedback providers. Therefore, studying what learning can be triggered by reviewing such products is valuable for practice. Below, the results of a series of (quasi-)experimental studies investigating the process of giving feedback on smaller products are presented. Each study focused on one particular design feature related to one of the steps of the feedback-giving model by Sluijsmans (2002) mentioned above: defining assessment criteria, judging the performance of a peer, and providing feedback for future learning. The following features of the feedback-giving process were studied: being provided with assessment criteria (Step 1); the quality and the type of reviewed product (Step 2); and the form of providing feedback (Step 3). The rationale for each study is described below.

Step 1: When faced with the task of giving feedback, students can either be provided with assessment criteria or come up with their own. There is no clear opinion as to which of these is more beneficial for learning. Some studies have indicated that thinking of their own assessment criteria leads to greater ownership of learning for students and results in more involvement and, thus, more learning (e.g., Canty et al., 2017; Tsivitanidou et al., 2011). However, other studies have suggested that assessment criteria can guide students in the process of giving feedback and provide them with required structure (e.g., Gan & Hattie, 2014; Panadero et al., 2013). Therefore, the question about the role that the source of assessment criteria plays in reviewers’ learning is not clearly answered.

Step 2: The quality and type of a reviewed product can influence the quality and content of the feedback that reviewers provide and, as a result, the learning that arises from it (e.g., Patchan & Schunn, 2015). Some studies have shown that giving feedback on higher quality products can lead to more learning, as students see good examples and understand the material better (e.g., Alqassab et al., 2018a; Tsivitanidou et al., 2018). However, if the level of the reviewed product is too high, students may not be able to find mistakes and, thus, learn (e.g., Cho & Cho, 2011), which may mean that products of mediocre quality can stimulate learning more than those of high quality. Similarly, the type of product may affect learning. For example, students may find familiar and straightforward products, such as answers to open-ended questions, easy to review, as they understand the format and the expectations, and can find more mistakes. Some research has shown that identifying more mistakes in a reviewed product leads to more learning (e.g., Adams et al., 2019). However, reviewing a more challenging product such as a concept map may lead to more conceptual understanding and trigger more learning (e.g., Chen & Allen, 2017). This makes the effect of different levels and types of reviewed products on learning interesting to investigate.

Step 3: Giving feedback can be done in the form of comments or grades. Previous research has shown that providing cognitive feedback, that is, focusing on the task and not on the evaluation, and identifying mistakes in the reviewed products leads to learning for feedback providers (e.g., Lu & Law, 2012; Lu & Zhang, 2012; Wooley et al., 2008). However, it is not clear if the form of giving feedback will influence the learning when reviewing smaller scale products.

In all of the studies conducted, one factor was consistently taken into account: students’ prior knowledge. Previous research has shown that reviewers’ prior knowledge can influence the way they interact with the material and the feedback they give (e.g., Alqassab, 2017; Patchan & Schunn, 2015). This is most obvious if we look at the quality of the reviewed product: the same product can be too difficult and not understandable for lower prior-knowledge students, yet stimulating and inspiring for higher prior-knowledge students. The first case would lead to less cognitive involvement and, thus, less learning, while the second could trigger more cognitive engagement and, thus, more learning. Similar influences can be seen for the other steps of the feedback-giving process. Therefore, the level of prior knowledge of feedback providers was taken into account in the analyses.

Nowadays, giving feedback to peers is often done with the help of technologies—online platforms, apps, or specially developed tools—a plug-in in Canvas (an LMS), Eduflow, or PeerGrade, to name just a few. One distinguishing feature of using such products is the possibility for the teacher to adjust and adapt the process of giving feedback to their current goals by changing several parameters: anonymous or not, synchronous or not, using specific assessment criteria or not, reciprocal peer feedback or not—the list of adjustable parameters goes on. Moreover, such settings can be applied to all students or to specific groups of students. Therefore, knowing what settings lead to more learning for a specific group of students or in a specific context can have a clear translation into practice. This makes investigating the feedback-giving process conducted with a technology-based tool quite topical, as we use established methods to study the feedback-giving process in a new context to enrich both theoretical knowledge about it and practical implementation procedures.

In the sections below, we present our research, the goal of which was to investigate the learning of feedback providers in an online environment and how to increase such learning by designing the feedback-giving process in a particular way. First, we describe the studies conducted and the unique features they had. Second, we introduce the findings and their meaning for classroom practice. Finally, we draw conclusions and indicate the limitations of the studies, as well as directions for future research.

1.1 Design of the Studies Conducted

1.1.1 Common Features

The studies were conducted in an online inquiry-learning context, with each of the four studies focusing on one of the steps of the feedback-giving process:

  • Comparing learning from giving feedback to peers while being provided with assessment criteria or not—Step 1;

  • Investigating the effect of the level and type of reviewed products on reviewers’ learning—Step 2;

  • Comparing reviewers’ learning when providing feedback in the form of comments and grades—Step 3.

In all studies, students gave feedback using an online tool. According to a meta-analysis by H. Li et al. (2020), computer-facilitated methods of giving feedback had positive effects on students’ learning, in some cases even greater than those of paper-based methods. In our contexts, the choice of an online tool was also supported by several considerations. First, with the help of this tool, students could give feedback anonymously. Previous research has shown that interpersonal relationships can influence the process of giving feedback, and anonymity helps to eliminate possible negative influences (e.g., Rotsaert et al., 2018). Second, students could give feedback at their own pace, which not only makes the process convenient, but could also increase their ownership of their learning (e.g., Rosa et al., 2016). Giving students an opportunity to work at their own pace can be especially welcome during a standard lesson, as it is not always easy to differentiate students’ work in this way. Finally, the use of an online tool for giving feedback allowed smooth embedding in an inquiry-learning lesson.

Inquiry learning imitates the scientific research cycle and supports students in following it. Inquiry learning with appropriate guidance can be beneficial for students’ cognitive development; for example, a meta-analysis by Furtak et al. (2012) reported an overall mean effect size of 0.5. Adding a feedback-giving activity in an inquiry-learning context makes the inquiry-learning cycle even closer to the real research cycle, as giving feedback on peers’ products (such as articles, presentations, proposals, etc.) is a natural part of scientists’ work. Critiquing peers’ learning products and providing suggestions for their improvement allow students to develop conceptual understanding of a topic and scientific reasoning skills (e.g., Dunbar, 2000; Friesen & Scott, 2013). Moreover, giving feedback on peers’ products provides students with another opportunity to reflect on and revise their own products, which may also stimulate learning. Therefore, studying the process and learning outcomes of giving feedback to peers in an inquiry-learning context might lead to better understanding of the different aspects involved in giving feedback than studying it in the context of traditional instruction.

Students gave feedback on concept maps in all four studies. This product was chosen for several reasons. First, as creating a concept map is a natural activity during the conceptualisation phase of an inquiry cycle, including this exercise did not break the flow of the lesson (e.g., Pedaste et al., 2015). Second, the product is quite compact, but at the same time requires understanding of the topic. Therefore, reviewing a concept map may be a relatively brief task, yet demonstrate a deeper level of understanding. Finally, research has shown that reviewing concept maps can add conceptual understanding compared to reviewing other products or just creating a concept map (e.g., Chen & Allen, 2017).

1.2 Participants

All studies were conducted with upper secondary-school students as participants, who are not the usual target group for peer-feedback research. Studies on peer feedback more often involve university students. There can be different reasons for that: researchers teaching at a university may have easier access to this audience, university students may seem more ready for feedback-giving activities, or university courses may seem better suited to such tasks than school lessons. The present series of studies allows for a better understanding of the feedback-giving process in secondary school and the factors that influence the learning stimulated by it.

Participants were secondary school children (14–15 years old) from Dutch and Russian schools. They worked on a lesson on physics or chemistry from their curriculum in which a feedback-giving activity was included. For each study, students in each class were randomly assigned to the experimental conditions of that particular study. This was done to balance possible differences between the classes.

1.3 Design and Procedure

The studies were experimental, using a pre-test/post-test design. Participants worked individually in an online inquiry-learning environment that covered a topic from their physics or chemistry curriculum. The environment was built using the Go-Lab ecosystem (www.golabz.eu) and followed the stages of an inquiry cycle: orientation, conceptualisation, investigation, conclusion, and discussion (Pedaste et al., 2015). In each stage students were provided with some guidance for the inquiry process via specifically designed tools, but the learning process was still regulated by the students themselves, as they could decide how to interact with the material and at what pace to move through it.

In the conceptualisation phase, students were asked to create a concept map with the key concepts of the topic they were studying. They made their concept maps using a special tool—Concept Mapper. The tool had some pre-defined concepts and link names, but also gave students an opportunity to add new concepts and link names. A view of the tool is given in Fig. 13.1.

Fig. 13.1 View of the Concept Mapper tool (covering the topic of Study 3)

In the investigation phase, students worked in an online lab checking the hypotheses they had created to answer the research question for the lesson. Figure 13.2 presents an example of an online lab.

Fig. 13.2 View of the online lab “Vertical temperature gradients” (used in Study 4). Images by The Concord Consortium, licensed under CC-BY 4.0. https://concord.org/

In the discussion phase, students were asked to give feedback on two learning products (mainly concept maps; answers to open-ended questions were used in one condition in one study) attributed to fictitious peers and created by the researchers. To make the context more realistic, students were told that these concept maps came from students in a different class or a different school. The products were created in collaboration with the teachers of the participating classes. One reason for this was to ensure that the products were similar to ones created by students and fit the learning material; the other was to give the reviewed products a specific level of quality. In particular, all learning products (concept maps and answers to open-ended questions) included some misconceptions and had some room for improvement. Students were guided through the feedback-giving process by assessment criteria (apart from one condition in one study) formulated as questions and aimed at indicating the desired features of the product. Such prompts have been shown to be helpful for the feedback-giving process (e.g., Gan & Hattie, 2014). The whole process of giving feedback was done in a special peer-assessment tool. This tool allowed students to see the reviewed product and the assessment criteria, and to provide their comments about the product. An example of a fictitious-peer concept map (covering the topic of Study 3) with assessment criteria is given in Fig. 13.3.

Fig. 13.3 View of the feedback tool

After providing feedback on peers’ products, students were encouraged but not obliged to revisit their own concept map and change it based on their newly acquired knowledge.

The design of the studies and their target group create a unique combination that allows us to see in what ways giving feedback to peers can be used in less-usual settings (such as an online inquiry environment), and what lessons can be learned for more general usage. This contributes to knowledge about and understanding of the feedback-giving process.

1.4 Results and Recommendations for Practice

For each study, this section presents a rationale based on the existing research, a brief description of the specific details distinguishing it from the other studies, the results obtained, and our recommendations based on those results.

1.5 The Role of Assessment Criteria

The first step of the feedback-giving model used in the studies is to define assessment criteria (Sluijsmans, 2002). The literature presents two opposite approaches. According to several studies, assessment criteria can support and guide students in the evaluation process, as they indicate the desired characteristics of the reviewed products; students need such guidance, as providing meaningful feedback can be a challenging task (e.g., Gan & Hattie, 2014; Gielen & De Wever, 2015; Panadero et al., 2013). One approach, therefore, takes providing assessment criteria as necessary for better learning results. The other approach points out that using students’ own criteria might be easier for them than understanding ones that are given, especially for complex subjects (e.g., Jones & Alcock, 2014; Orsmond et al., 2000). And if students cannot interpret criteria that are given because these criteria are too difficult or abstract for their level of knowledge, they cannot provide feedback and learn from that process.

Previous research has not led to a clear conclusion about the contribution that being provided with assessment criteria makes to reviewers’ learning, and our study did not clarify the situation. In the study investigating that aspect, one group of students (n = 49) gave feedback on concept maps using provided assessment criteria. These assessment criteria were not topic-dependent, but focused on the important features of concept maps instead (see Fig. 13.3 for a view of assessment criteria). The other group of students (n = 44) had to come up with their own assessment criteria to review the same concept maps. We found no statistically significant difference in post-test scores (controlling for prior knowledge) between the participants who had been provided with assessment criteria and those who had not. However, the results indicated that students could still give meaningful and content-related feedback even if they were not supported by assessment criteria. These findings are in line with previous research suggesting that secondary school students do not necessarily have to be given assessment criteria to provide usable feedback to peers (e.g., Tsivitanidou et al., 2011).

These results are important for designing a feedback-giving activity in a real classroom. As no difference in learning was found between the two conditions, we can say that in our case, not providing students with assessment criteria did not lead to less learning. In other words, this may suggest that teachers can choose whether to give assessment criteria or not, depending on the situation. For small-scale products, not giving assessment criteria may even be more time-efficient, as teachers and students do not spend time on explaining and understanding the criteria. Teachers may instead focus their effort on explaining to students the benefits of giving feedback or discussing what helpful feedback can look like.

1.6 The Role of the Quality and Type of Reviewed Products

The second step of the feedback-giving model concerns judging a peer’s performance or product. Two studies were conducted to investigate this step.

The first study zoomed in on reviewing products of different quality. According to Hattie and Timperley (2007), evaluating peers’ products includes several cognitive activities, such as analysing the existing state of a product, comparing it against assessment criteria, and thinking of directions for improvement based on identified problems or mistakes. These activities can definitely be influenced by the quality of the products under review. Low- and high-quality products not only have a different number of mistakes, but the mistakes (or areas for improvement) are different and may require different types of analysis and solutions. In other words, they may require different thinking processes from a reviewer and different content in the feedback provided. To fully interact with products of different levels, students should have enough knowledge and understanding to give meaningful feedback (e.g., Alqassab et al., 2018b), which may mean that reviewers’ prior knowledge can play a role in the reviewing process and its outcomes. The same product may be challenging yet understandable for a student with higher prior knowledge, and beyond understanding for a student with lower prior knowledge. In such a case, the former student may learn a lot by analysing the product and thinking of possible improvements, while the latter may be overwhelmed and quit the process. However, finding mistakes and providing recommendations is not the only way of learning by reviewing. Students can learn when reviewing good examples, as they can see successful strategies for completing the task and may implement them later (Alqassab et al., 2018a; Tsivitanidou et al., 2018).

In our study about the level of reviewed products, students had to review one of three pairs of concept maps: two low-quality concept maps (29 students), two high-quality concept maps (25 students) or a mixed-quality set (23 students). The results showed that students reviewing a lower quality set had higher post-test scores (controlling for prior knowledge) than students reviewing a higher quality set [p = 0.048; MLOW = 6.39, SE = 0.50, MHIGH = 5.01, SE = 0.47]. In addition, the quality of the feedback provided by these students was also higher than in the other two conditions, with a statistically significant difference between groups reviewing low-quality and mixed-quality concept maps [p = 0.033; MLOW = 2.43, SD = 1.07, MMIXED = 1.82, SD = 0.90].
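For readers who want to run this kind of comparison on their own data, a minimal sketch of the analysis described above (comparing post-test scores between conditions while controlling for prior knowledge, i.e., an ANCOVA) is given below. The data file, data frame, and column names are hypothetical assumptions for illustration, not the materials or software used in the original studies; the sketch assumes Python with pandas and statsmodels.

```python
# Minimal ANCOVA sketch: post-test scores by condition, controlling for
# pre-test (prior knowledge). File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Expected columns: 'condition' (e.g., 'low', 'high', 'mixed'),
# 'pretest' (prior-knowledge score), 'posttest' (post-test score).
df = pd.read_csv("feedback_study.csv")

# Linear model with the experimental condition as a factor and the
# pre-test score as a covariate.
model = smf.ols("posttest ~ C(condition) + pretest", data=df).fit()

# Type-II ANOVA table: the C(condition) row tests the between-group
# difference in post-test scores adjusted for prior knowledge.
print(anova_lm(model, typ=2))

# Covariate-adjusted (estimated marginal) means per condition,
# evaluated at the overall mean pre-test score.
grid = pd.DataFrame({"condition": sorted(df["condition"].unique()),
                     "pretest": df["pretest"].mean()})
print(grid.assign(adjusted_mean=model.predict(grid)))
```

The adjusted means printed at the end correspond to the kind of condition means (with prior knowledge held constant) reported in the text.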

A similar rationale led to studying learning from reviewing different types of products, as the contribution to the reviewer’s learning could differ. In this study, one group of students was asked to give feedback on concept maps (n = 66), while the other group reviewed answers to open-ended questions (n = 61). On the one hand, concept maps can stimulate deeper thinking because of their nature. Giving feedback on a product that visualises connections between key concepts for the topic may lead to deeper understanding and, thus, to greater conceptual learning (e.g., Chen & Allen, 2017). On the other hand, identifying mistakes or misconceptions in such a complex product as a concept map can be more (or too) challenging than in a more straightforward and familiar product such as answers to open-ended questions. As the ability to spot mistakes and provide suggestions is connected to learning (e.g., Adams et al., 2019), reviewing a more complex product (concept map) could lead to less learning than reviewing a less complex one (answers to test questions).

The study did not show a statistically significant difference in mean post-test scores (controlling for prior knowledge) between the conditions reviewing concept maps and answers to open-ended questions. However, it is noteworthy that the quality of the feedback provided was found to predict post-test scores in both conditions [F(2, 122) = 7.95, p < 0.01, R2 = 0.12], with a regression coefficient of 0.57. This quality was higher in the condition reviewing answers to test questions than in the condition reviewing concept maps [t(123) = −2.37, p = 0.019; MTEST = 3.18, SD = 1.90, MCONCEPT = 2.53, SD = 1.14].
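As an illustration of the regression reported above, the sketch below shows one way to regress post-test scores on the quality of the feedback given while controlling for prior knowledge. Again, the data file and column names are hypothetical assumptions for illustration, not the instruments used in the original study.

```python
# Sketch: does feedback quality predict post-test scores, controlling for
# prior knowledge? File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: 'feedback_quality' (rated quality of the feedback given),
# 'pretest' (prior knowledge), 'posttest' (learning outcome).
df = pd.read_csv("feedback_quality_study.csv")

model = smf.ols("posttest ~ feedback_quality + pretest", data=df).fit()

# The coefficient on feedback_quality corresponds to the kind of regression
# weight reported in the text; R-squared and the overall F-test appear in summary().
print(model.summary())
```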

These findings could suggest that students felt more comfortable with and, as a result, were better at giving feedback on lower quality and more familiar and straightforward products than on higher quality and more complex ones, as they could see more mistakes and make more suggestions. Being able to give better feedback led to better learning outcomes.

There are several implications for practice based on these results. First, as the quality of the feedback given predicted reviewers’ learning, it is important to encourage students to give feedback thoughtfully. Second, the type of product to review does not seem to influence learning as long as students give high-quality feedback. There is no known universal way to increase the quality of feedback provided by students. Apart from explaining to students the benefits of giving feedback, teachers may introduce elements of evaluative judgement into the classroom routine as a way to practice this. In this way, students may develop their assessment skills without being specifically trained for peer assessment. Finally, to maximise reviewers’ learning, students should provide feedback on products at or below their own current level of performance. This means that if teachers use fictitious-peer work for reviewing, they need to find pieces at the average or below-average level. And if they implement a full peer-assessment process, their matching strategy should assign students of approximately the same level to give feedback to each other.

1.7 The Way of Giving Feedback

The third step of the model is to provide feedback for future learning. The form this feedback takes can influence the learning arising from it. In our study, one group of students gave feedback in the form of comments (n = 46), while the other group provided feedback with grades using smileys (n = 47). In both conditions, students were supported by assessment criteria, which were formulated as questions for the comment condition and as statements for the smiley condition.

Several studies have shown that commenting leads to more learning by reviewers than grading (e.g., Wooley et al., 2008; Xiao & Lucking, 2008). This body of research suggests that commenting triggers more learning because students are cognitively involved with the material for a longer time than while grading: they not only have to evaluate their peer’s work and identify areas for improvement, but also have to think of solutions. However, with smaller scale products, the difference in time (and probably effort) between reviewing by commenting and by grading might not be as pronounced as with larger scale products. Therefore, checking whether these findings still hold for small-scale products can enrich our understanding of the feedback-giving process.

Our study confirmed the existing point of view: students in the commenting condition had higher post-test scores (controlling for prior knowledge) than students who graded peers’ concept maps with smileys [F(1, 87) = 5.84, p = 0.018, ηp2 = 0.06; MCOMMENT = 5.23, SD = 0.33; MSMILEY = 4.09, SD = 0.34]. Moreover, a differential effect of commenting for different prior knowledge groups was found, with low-prior-knowledge students benefiting from commenting the most [F(2, 87) = 4.19, p = 0.018, ηp2 = 0.09]. This backs up our idea that prior knowledge can be an influential factor in the learning of feedback providers. Obviously, students need to be knowledgeable enough to provide meaningful feedback (e.g., Alqassab et al., 2018b; van Zundert et al., 2012), but apparently commenting helped even low-prior-knowledge students to get cognitively involved with the concept maps. The fact that they could see some mistakes and comment on them was most likely enough to trigger their learning. These findings support our belief that students with any level of prior knowledge can benefit from giving feedback if this process is properly designed.

These results can be used as a basis for recommendations on incorporating the feedback-giving process into classroom practice. First, teachers should be aware of the fact that students may learn differently from giving feedback depending on their prior knowledge. When organising a feedback-giving activity in an online platform, this can be taken into account by using different settings for different groups of students. And second, as commenting was shown to contribute to reviewers’ learning more than grading, students should be given an opportunity to write comments when asked to provide feedback. Reviewing small-scale products is a brief activity that can fit within the usual classroom routine, but still confer all of the benefits of reviewing for students’ learning.

2 Conclusion

When properly organized, giving feedback to peers can be a learning experience for a feedback provider even when reviewing a small-scale product. This makes giving peer feedback more applicable in a real classroom situation, as teachers do not have to change a lot in the lesson to include a feedback-giving activity for a smaller product. This may allow students not only to be cognitively involved with the material, but also to be involved at a meta-level, as evaluating a peer’s product with given or self-created assessment criteria and providing appropriate feedback may require higher order thinking than just completing a task. Peer feedback can also be a valuable addition to an inquiry-learning lesson, as it allows students to reflect on their exploration process and in that way to deepen it.

Using online platforms (such as www.golabz.eu) can make giving feedback more natural and easier than in traditional instruction, due to the ability to configure parameters of the feedback-giving process according to the learning goals. Although research on this topic is ongoing, peer assessment should be implemented in secondary schools more often, with a view to benefiting feedback providers.

There are several limitations or considerations regarding the studies conducted. First, the studies isolated the feedback-giving part of the peer-assessment process, while in a real-life situation students usually fulfil both roles: feedback provider and feedback recipient. In a real classroom, teachers have two choices: they can either follow the experimental settings and ask students to give feedback only (for example, on learning products from previous cohorts), or they can use a full peer-assessment process with the idea that at least the feedback-giving part could stimulate learning. Moreover, an interesting direction for further research in this area could be to check the findings of these studies in a reciprocal peer-assessment process.

Second, the experimental studies used fictitious products. The limitation associated with this is that even though the products to be reviewed were created in cooperation with teachers, they might still have differed from those created by students. Therefore, an interesting follow-up to this series of studies could be an experiment comparing students’ feedback given on fictitious and on real peers’ products. This would help to explore whether students’ responses differ and in what way. If teachers would like to use the results of the conducted studies and control the quality of reviewed products in a real classroom, they can do so by using pieces of work by students from previous cohorts, for example.

Third, the instruments used to measure students’ learning were researcher-developed and differed between studies. As our intention was to study the process of giving feedback to peers in as natural an environment as possible, we always developed lessons based on the curriculum used by the participating classes. Using different STEM topics covered in secondary school supports the idea that our approach can be implemented across domains. However, the drawback of this approach was that we could not use the same testing instruments; they had to be developed specifically to address the learning of the content in the isolated lesson or series of lessons used for each study. It could be interesting to validate these instruments by conducting a larger scale study; however, that could also be quite challenging in practice.

Finally, due to the scarcity of studies conducted with secondary schoolchildren as a target group, we sometimes used findings obtained with university students to set the expectations for our studies. The differences between these target groups may pose risks to the external validity of the studies conducted. This means that more experimental studies should be carried out in the field of peer assessment, targeting different groups and domains, to enrich our knowledge about this process.

At a more general level, further research on the feedback-giving process can take several directions. First, as higher quality feedback provided by students was associated with higher learning gains for them, it is important to investigate the factors that lead to giving poor-quality feedback. Knowing this may help with developing ways to increase the quality of feedback given. Second, as the inquiry-learning context could have provided a unique and quite natural context for giving feedback, it could be interesting to check whether the results obtained in these studies hold for giving feedback on other products in an inquiry context. Finally, several studies have indicated a positive effect of training students in giving feedback, but these studies targeted a quite elaborate procedure of giving feedback on larger-scale products, such as an essay, a report, or even a thesis. With complex, elaborate products, training seems to be an important contributor to learning, but it is worth studying whether training is equally important when feedback is given on smaller-scale products and what the desired format for such training could be.