1 Introduction

Feedback is a critical component of students' learning and performance (Hattie & Timperley, 2007) that has become increasingly important in higher education (Maringe, 2010). Due to increasing teaching workloads and growing class sizes in higher education (Shi, 2019), peer feedback has become a crucial form of feedback (Noroozi et al., 2023; Cho & Schunn, 2007). Peer feedback is an effective instructional strategy to support students' learning processes and outcomes at a large scale (Er et al., 2021; Noroozi et al., 2016, 2023; Taghizadeh Kerman et al., 2022a). Implementing peer feedback in classrooms not only helps teachers activate students' engagement but also helps students broaden and deepen their understanding of the topic (Bayat et al., 2022; Noroozi et al., 2022).

The literature reveals that peer feedback has positive impacts on students' learning, such as improving professional skills (Brill, 2016; Lowell & Ashby, 2018), enhancing writing performance (Huisman et al., 2018; Nelson & Schunn, 2009; Noroozi et al., 2023; Shang, 2019), and fostering argumentation skills (Noroozi & Hatami, 2019). Peer feedback also provides opportunities for active interactions and meaningful negotiations (Al Qunayeer, 2020), and improves judgment skills, decision-making skills (Bayat et al., 2022), self-regulation skills (Ku & Lohr, 2003), and communication skills (Ritzhaupt & Kumar, 2015).

In parallel to the growing popularity of online modalities in higher education, the implementation of online peer feedback has increased exponentially over the last decade due to its convenience, flexibility, and accessibility (Latifi et al., 2021; Noroozi et al., 2016; Taghizadeh Kerman et al., 2022a). Online tools have provided an effective, time-saving, and easy way to set up peer feedback activities, particularly in classes with a large number of students (Er et al., 2021; Noroozi et al., 2016; Latifi et al., 2021). When peer feedback is implemented as an online activity, learners can take advantage of the flexibility to choose when and where they want to participate in the feedback tasks (Tsai et al., 2002). Additionally, the data collected from students' online peer feedback activities can be recorded, later used, and reflected upon for a better understanding of the feedback processes and any emerging issues (Banihashem et al., 2022a; Er et al., 2021).

Although peer feedback offers numerous benefits for students' learning and performance, its application in higher education is not without challenges (see Cho et al., 2006; Noroozi et al., 2016, 2023). Some of these challenges concern students' attitudes towards and perceptions of peers and their feedback, such as low trust, low tolerance for critical feedback, and resistance to peer feedback (Hu & Lam, 2010; Panadero & Alonso-Tapia, 2013). Moreover, issues in the implementation of peer feedback can arise from students' inadequate skill and knowledge levels, which may include limited feedback literacy, limited familiarity with criteria, and limited experience with providing and receiving feedback (Winstone et al., 2017), insufficient specialized knowledge and literacy about the topic (Van Zundert et al., 2010; Valero Haro et al., 2019, 2023), and weak writing and language skills (Allen & Mills, 2016; Lundstrom & Baker, 2009). Another significant challenge frequently mentioned in the literature is the complexity of the feedback task, which requires higher-order thinking skills (Er et al., 2021; Zhu & Carless, 2018) that not all students can readily apply. If not properly addressed, these challenges can result in superficial feedback or impede the effective implementation and uptake of peer feedback.

Many theoretical models and frameworks of peer feedback have been proposed in the literature (e.g., Panadero & Lipnevich, 2022; Wu & Schunn, 2023), which may help to tackle these challenges. These models and frameworks aim to clarify how students engage in peer feedback activities, how they analyze and process feedback, and how such feedback from peers is incorporated into the revised works of students. However, there is a need for greater clarity regarding the operationalization of these models and frameworks in real educational contexts. Identifying factors that can influence the processes and outcomes of peer feedback could help teachers implement effective peer feedback activities in their classrooms (Cui et al., 2022). It is particularly important to examine how the unique characteristics of students and learning environments impact their engagement during peer feedback processes and how this engagement affects their learning outcomes. While several systematic reviews have been conducted in the field of peer feedback (e.g., Topping, 2021; Zhang et al., 2021), they differ from the present review study in terms of scope and focus. Our systematic review takes a comprehensive approach to examine the role of students' characteristics, learning environment, learning processes, and outcomes of online peer feedback in higher education.

2 Conceptualizing the review

We adopted Biggs' (2003) model as the basis for conceptualizing our review. This model provided a framework that helped identify the critical dimensions to be addressed in our review, ultimately yielding practical results for teachers. It entails four dimensions, (a) student characteristics, (b) learning environment, (c) learning processes and activities, and (d) learning outcomes, that fit well with the aim of our peer feedback study.

In Biggs' (2003) model, students' characteristics refer to prior knowledge, abilities, intelligence, personality, and background, and represent students' incoming personal learning influences. These characteristics differ from one person to another, inevitably resulting in different performances. In the case of peer feedback, students' characteristics such as attitude, motivation, and gender may affect peer feedback processes and outcomes (e.g., Lane et al., 2018). The learning environment includes different features such as instructional mode, subject area, course structure, and learning tasks. Although the literature confirms the impacts of online learning environments on peer feedback performance (e.g., Lin, 2016, 2018a; Noroozi & Mulder, 2017), it does not specify how different elements of learning environments can influence the design and implementation of peer feedback. An overview of the impacts of different elements of online learning environments on students' peer feedback performance is therefore needed. Learning processes and activities explain how students approach learning and what strategies and techniques they follow to learn. Identifying and understanding the learning processes involved in peer feedback engagement is necessary to better understand student behavior.

Finally, learning outcomes are the last dimension of Biggs' model. Providing an overview of the learning outcomes obtained through the implementation of online peer feedback can guide teachers in deciding for what purposes and for what kinds of learning outcomes online peer feedback can assist them. In general, the learning outcomes attained by students can be classified into three overarching domains, a classification rooted in Bloom's Taxonomy (1956). First, the affective domain covers the feelings, perceptions, and emotions that students experience when engaging with online peer feedback. Second, the cognitive domain encapsulates the acquisition of knowledge and the cultivation of intellectual proficiencies throughout the learning process; it encompasses six progressively complex levels, starting from foundational knowledge and comprehension, extending to application, analysis, and synthesis, and culminating in evaluation. Third, behavioral outcomes pertain to the observable actions, behaviors, or responses that students exhibit as a consequence of their engagement with online peer feedback. Grounding students' learning outcomes from online peer feedback in Bloom's Taxonomy (1956) provides an organized framework for understanding the outcomes and enriches their interpretation with pedagogical insights.

By taking all four dimensions of Biggs' (2003) model into account, our systematic review provides a general framework for teachers on how to effectively account for students' characteristics and provide an optimal learning environment so that students engage in desirable peer feedback activities and achieve the intended learning outcomes. The following research questions are formulated to achieve the main goal of this review study:

  • RQ1. What are the students’ characteristics that influence online peer feedback in higher education?

  • RQ2. How do the conditions of the learning environment impact online peer feedback in higher education?

  • RQ3. What are the learning processes and activities that influence online peer feedback in higher education?

  • RQ4. How does online peer feedback influence the learning outcomes in higher education?

3 Method

We first followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines (Moher et al., 2009) to systematically review the literature. Then, we used a quality appraisal strategy to further refine the set of identified publications (Theelen et al., 2019). Finally, we developed a coding scheme based on Biggs' model (2003) to analyze the publications included in the final selection.

3.1 Search strategy

To find relevant publications, Web of Science (WOS) and Scopus were first selected as the main databases, since these two databases cover most relevant peer-reviewed publications. Second, we defined the search query as: (improve* OR develop* OR foster* OR promot* OR support* OR enhance* OR train*) AND (“peer feedback” OR “peer review” OR “peer assessment” OR “peer learning”) AND (“higher education*” OR university* OR college* OR academy* OR “tertiary* education*”) AND (online* OR electronic* OR internet* OR computer* OR “e-learning*” OR virtual* OR “web*-based”). All publications retrieved from WOS and Scopus were imported into EndNote X9.0 reference management software for further analysis.
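
As an illustration only, the sketch below shows how the query above could be assembled programmatically in Python before being pasted into the databases' advanced-search fields; the term groups are taken directly from the query, while the function and variable names are ours and the database-specific field tags (e.g., TS= for WOS or TITLE-ABS-KEY for Scopus) are deliberately omitted.

```python
# Illustrative sketch: assembling the boolean search query used for WOS and Scopus.
# Term groups mirror the query reported above; database-specific field tags are omitted.

intervention = ["improve*", "develop*", "foster*", "promot*", "support*", "enhance*", "train*"]
feedback = ['"peer feedback"', '"peer review"', '"peer assessment"', '"peer learning"']
context = ['"higher education*"', "university*", "college*", "academy*", '"tertiary* education*"']
modality = ["online*", "electronic*", "internet*", "computer*", '"e-learning*"', "virtual*", '"web*-based"']

def or_group(terms: list[str]) -> str:
    """Join search terms into a single parenthesized OR group."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_group(group) for group in [intervention, feedback, context, modality])
print(query)  # paste the printed string into the advanced-search field of each database
```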

3.2 Inclusion and exclusion criteria

For screening the identified publications, three primary inclusion and exclusion criteria were applied first: (1) only peer-reviewed publications in the English language were included; (2) only publications from 2000 to 2023 were included; and (3) only empirical articles were included. This means that book chapters, proceedings, reports, dissertations, and conceptual articles were excluded. In the second phase of screening, we only selected empirical studies with intervention designs to obtain more valid and reliable findings. This means that non-experimental studies, analytical research, and studies that only reported qualitative or descriptive results were excluded. We also focused exclusively on studies undertaken in higher education contexts; therefore, studies in the context of K-12 education were excluded. In addition, we only focused on studies conducted in online learning environments, which means that other types of learning settings, such as blended, hybrid, or face-to-face education, were excluded. A hypothetical sketch of this screening logic is given below.
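
To make the screening logic concrete, the hypothetical sketch below encodes the criteria above as a single predicate applied to each candidate record; the record fields and example values are illustrative and do not come from the authors' actual screening sheet.

```python
# Hypothetical sketch: the inclusion/exclusion criteria expressed as one screening predicate.
# Field names and example records are illustrative only.

def meets_criteria(record: dict) -> bool:
    return (
        record["peer_reviewed"]
        and record["language"] == "English"
        and 2000 <= record["year"] <= 2023
        and record["type"] == "empirical article"   # excludes chapters, proceedings, reports, dissertations
        and record["design"] == "intervention"      # excludes non-experimental and purely descriptive studies
        and record["level"] == "higher education"   # excludes K-12 studies
        and record["setting"] == "online"           # excludes blended, hybrid, and face-to-face settings
    )

candidates = [
    {"peer_reviewed": True, "language": "English", "year": 2021, "type": "empirical article",
     "design": "intervention", "level": "higher education", "setting": "online"},
    {"peer_reviewed": True, "language": "English", "year": 2019, "type": "empirical article",
     "design": "intervention", "level": "higher education", "setting": "blended"},
]
included = [record for record in candidates if meets_criteria(record)]
print(len(included))  # -> 1; only the fully online higher-education intervention study is retained
```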

3.3 Identification of relevant publications

The database search identified a total of 2221 papers (WOS: N=639, Scopus: N=1582). After an initial screening, 368 articles were removed as duplicates. A further 1362 publications did not meet the secondary inclusion criteria, which left 491 papers for full-text screening. Full-text screening led to 386 publications being dropped because they were not conducted either in higher education contexts or in online learning settings. Finally, 105 studies were left for quality appraisal.
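
The screening flow can be read as simple arithmetic; the sketch below merely restates the counts reported in this section (and the quality-appraisal step from Section 3.4) as a sanity check.

```python
# Sanity check on the reported screening counts (values taken from Sections 3.3 and 3.4).
identified = 639 + 1582          # WOS + Scopus records        -> 2221
after_dedup = identified - 368   # duplicates removed          -> 1853
full_text = after_dedup - 1362   # secondary criteria not met  -> 491
appraised = full_text - 386      # wrong context or setting    -> 105
included = appraised - 22        # failed quality appraisal    -> 83

assert (identified, full_text, appraised, included) == (2221, 491, 105, 83)
print(identified, after_dedup, full_text, appraised, included)
```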

3.4 Quality appraisal

We used the quality appraisal framework proposed by Theelen et al. (2019), which includes a checklist for critical appraisal of both quantitative and qualitative studies. Each publication received a score for each checklist question, ranging from zero (no mention) to three (extensive mention); if the mean score was two or higher, the article met the required quality for inclusion. We found that 22 studies did not meet the minimum criteria to be included in the final analysis, so 83 studies remained (Table 1). The stages of our screening and selection process are illustrated in Fig. 1.
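
A minimal sketch of the appraisal decision rule described above is given below; the checklist item scores in the example are invented for illustration.

```python
# Minimal sketch of the quality-appraisal rule: items are scored from 0 (no mention)
# to 3 (extensive mention), and a publication is retained if its mean score is >= 2.
from statistics import mean

def passes_appraisal(item_scores: list[int]) -> bool:
    """Return True if the publication meets the minimum quality threshold."""
    return mean(item_scores) >= 2.0

print(passes_appraisal([3, 2, 2, 1, 3]))  # mean 2.2 -> True (retained)
print(passes_appraisal([1, 2, 1, 2, 1]))  # mean 1.4 -> False (excluded)
```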

Table 1 Quality appraisal criteria
Fig. 1 The flowchart of the screening and selection process

3.5 Included publications

Out of the final pool of 83 publications selected for analysis (Table 2), most were published in recent years, with 2020 alone accounting for the largest number in a single year (N=12, 15%).

Table 2 Characteristics of the included publications

These papers were published in a wide range of scholarly journals, from writing research to information technology. We found 8 publications in Interactive Learning Environments (10%), 7 publications (8%) each in Computers and Education and in Assessment and Evaluation in Higher Education, 5 publications in the Internet and Higher Education (6%), and 3 publications in Computers in Human Behavior (4%). The selected publications were geographically diverse, with 30 publications from Taiwan (36%), followed by 11 publications from the Netherlands (13%), 9 publications from the United States (11%), 6 publications from China (7%), 4 publications from Iran (5%), and 3 publications from Spain (4%).

The most common research design among the selected publications was experimental design (N=49, 59%), followed by quasi-experimental design (N=25, 30%). The study context varied from medicine to statistics, but studies in education science contexts were found to be dominant (N=21, 25%). In terms of online platforms, the selected publications used Wiki (N=5, 6%), Blackboard (N=4, 5%), Brightspace (N=3, 4%), Facebook (N=3, 4%), KnowCat (N=2, 3%), and mobile apps (N=2, 3%). This diversity in online platforms suggests that online peer feedback has been implemented in various online learning environments, and researchers have investigated the impact of different platforms on the peer feedback process and outcomes.

3.6 Analytic strategy

A coding scheme was developed based on Biggs' (2003) model to thematically analyze the included publications and address the research questions (Table 3). The coding scheme consisted of four dimensions: students' characteristics, learning environment conditions, learning processes and activities, and learning outcomes. All 83 publications were analyzed and coded using the coding scheme in ATLAS.ti 9 (Friese, 2019), and the inter-rater reliability between the two coders was examined by randomly selecting and coding a sample of papers. Cohen's kappa showed strong agreement between the two coders (κ=0.84, p<0.001), indicating high consistency and reliability of the coding. We followed a deductive approach to group the identified codes: we started from a theoretical lens to categorize basic codes and observations, and during coding the propositions derived from this lens were confirmed or rejected, allowing for a structured analysis (Moradian et al., 2014; Rauss & Pourtois, 2013).
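
For readers less familiar with the statistic, the sketch below shows how Cohen's kappa is typically computed for two coders over a shared sample of coded segments; the category labels and toy data are illustrative and do not reproduce the review's reported value of κ = 0.84.

```python
# Illustrative computation of Cohen's kappa for two coders on a shared sample of segments.
# Labels are invented; scikit-learn's cohen_kappa_score gives the chance-corrected agreement.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["cognitive", "affective", "cognitive", "behavioral", "cognitive", "affective"]
coder_2 = ["cognitive", "affective", "cognitive", "cognitive",  "cognitive", "affective"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(round(kappa, 2))  # ~0.70 for this toy sample; the review reports kappa = 0.84 on its own sample
```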

Table 3 Coding scheme

4 Results

4.1 RQ1. What are the students’ characteristics that influence online peer feedback in higher education?

We identified 36 codes for students’ characteristics that influence online peer feedback in higher education. We categorized the codes into three main categories: demographic characteristics, academic background, and personality and psychological features (Table 4).

Table 4 Students’ characteristics that influence online peer feedback in higher education

Demographic characteristics

Eight of the reviewed publications explored the role of student demographic characteristics, including gender (Noroozi et al., 2020, 2022), language (Culver et al., 2022), parental education (Culver et al., 2022), and race/ethnicity (Culver et al., 2022). Culver et al. (2022) found that language, parental education, and race/ethnicity did not predict students' performance in a peer-reviewed lab activity. Among the demographic characteristics, gender was reported to have the most pronounced influence on online peer feedback activities. In particular, there were significant differences between females and males in terms of negative sentiment comments (Lane et al., 2018) and peer feedback quality (Slee & Jacobs, 2017). Female students tended to produce higher-quality feedback (Noroozi et al., 2020, 2022; Slee & Jacobs, 2017) while providing negative comments with more caution (Lane et al., 2018). However, male students produced higher-quality argumentative essays than females based on peer feedback (Noroozi et al., 2020). One possible explanation is that females tend to be more collaborative and communicative, which may translate into more detailed and constructive feedback; they may also be more likely to engage in social comparison and evaluation processes, which may enhance their ability to provide feedback that is sensitive to the needs and perspectives of others. Males, on the other hand, may be more competitive and goal-oriented, which may motivate them to improve their writing and argumentation skills in response to feedback; they may also be more confident in their writing abilities and more willing to take risks, which could lead to greater creativity and effectiveness in their writing. As a result, while the impact of demographic variables on peer feedback may vary depending on the specific characteristic being considered, gender appears to be a consistently significant factor.

Academic background

Seven of the reviewed publications explored the role of students' academic backgrounds, including their online education experience, the type of high school they graduated from (Altinay, 2016), education level (Slee & Jacobs, 2017), field of study, feedback experience (Cheng & Hou, 2015), presentation ability (Day et al., 2021), and writing proficiency (Jiang & Yu, 2014; Yang & Meng, 2013). There were no significant differences in the mean grades of students at different educational levels (Slee & Jacobs, 2017). Additionally, there were no significant differences in the argumentation ability and conceptual understanding of students in different fields of study. However, there were meaningful differences between higher education students depending on the type of high school (Science High School, Vocational High School, Social Science High School, Anatolian High School, or Regular High School) they graduated from and their experience with distance education, specifically in collaborative learning or peer learning. Graduates of science high schools and participants with distance education experience reported more positive perceptions and experiences of online peer learning and assessment during collaborative learning (Altinay, 2016). Overall, the reviewed studies suggest that students' academic backgrounds can influence their peer feedback processes and outcomes in online learning settings, although the impact may vary depending on the specific aspect of the academic background being considered.

Personality and psychological features

Seven of the reviewed publications explored the role of personality and psychological features, including emotions (Cheng et al., 2014), epistemic beliefs (Noroozi & Hatami, 2019; Tsai & Liang, 2007), motivation (Tseng & Tsai, 2010), perceptions (Day et al., 2021; Jiang & Yu, 2014), and self-efficacy (Day et al., 2021; Tseng & Tsai, 2010). Different studies have reported different results regarding the impact of these features on peer feedback processes and outcomes. For example, Cheng et al. (2014) found that students' participation in the peer assessment activity was influenced by their emotional responses, with students who experienced positive emotions being more likely to participate actively and provide high-quality feedback, whereas students who experienced negative emotions were more likely to avoid the activity or provide superficial feedback. Tsai and Liang (2007) showed that students with more constructivist-oriented epistemic beliefs might benefit more from peer feedback. Tseng and Tsai (2010) found that students with higher intrinsic motivation tended to have greater confidence in evaluating peers' work, receiving peers' opinions, and reacting to peers' feedback. Day et al. (2021) acknowledged that students' perceptions of peer feedback can impact their engagement and motivation to improve their presentation skills. Overall, the reviewed studies suggest that students with different personality and psychological features behave in different ways when receiving peer feedback and thus achieve different outcomes.

4.2 RQ2. What are the learning environment conditions that influence online peer feedback in higher education?

In total, we identified 39 codes representing learning environment conditions that influence online peer feedback in higher education. We grouped these codes into two main categories including learning platform and learning setting (Table 5).

Table 5 Learning environment conditions that influence online peer feedback in higher education

Learning platform

The reviewed publications on online peer feedback have explored the use of a variety of learning platforms to implement peer feedback, including Expertiza (Hoffman, 2019), Blackboard (Ismaeel, 2020), Wiki (Al Abri et al., 2021; Xiao & Lucking, 2008), the Adobe Connect program (Altinay, 2016), Google Apps (Slee & Jacobs, 2017), Google Docs, Sakai VLE, and Sakai Wiki (Canham, 2018), and Calibrated Peer Review software (Culver et al., 2022). Peer feedback on wikis, for example, was shown to facilitate the improvement of essay writing and of peer feedback content quality (Al Abri et al., 2021; Gielen & De Wever, 2015; Xiao & Lucking, 2008). Calibrated Peer Review software was developed to offer a student-centered approach to process-based writing while minimizing the role of instructors in providing feedback (Culver et al., 2022). Moreover, the quality of the online learning environment, the collaborative and socially constructive effort of peers, and the assessment of the resulting progress were found to be important for enhancing the motivation and involvement of students in learning and skills development (Pifarré et al., 2014). Additionally, the visualization of group awareness information in the KnowCat platform positively influenced students' collaborative behavior (Pifarré et al., 2014). Overall, the type of learning platform used to implement online peer feedback is an important component of peer feedback processes and outcomes in online learning settings.

Learning setting

The reviewed publications on online peer feedback have explored the features of the environment in which peer feedback is implemented, including context, team, and learning culture. Altinay (2016) found meaningful differences between contexts in terms of peer learning: students in the arts and sciences context perceived a collaborative peer learning task more positively than students in the communication, engineering, and technology contexts, due to differences in task complexity, disciplinary culture, and prior experience. Moreover, learners' experiences and perceptions of the online learning culture were essential in creating quality education through peer feedback (Donia et al., 2021; Pham et al., 2020). Furthermore, Agrawal and Rajapakse (2018) found that diverse teams with members from different academic disciplines provided more valuable feedback. The effectiveness of peer feedback in mixed academic teams may be influenced by a variety of factors, including the communication skills of team members and their ability to take each other's perspectives and engage in critical reflection. Overall, the features of the environment, including context, team composition, and learning culture, are important factors for the quality of peer feedback processes and outcomes in online learning settings.

4.3 RQ3. What are the learning processes and activities that constitute online peer feedback in higher education?

We identified a total of 107 codes that represented the learning processes and activities of online peer feedback implementation. We grouped the codes into three main categories: content, feedback activity design, and technology. These categories are at the heart of learning and teaching with technology (Koehler & Mishra, 2009) (Table 6).

Table 6 Learning processes and activities that constitute online peer feedback in higher education

Content

The reviewed publications on online peer feedback have explored the type and quality of feedback that influence the peer feedback processes. In terms of feedback type, Çevik (2015) found that both assessors and assessees improved their problem-solving skills. Regarding feedback quality, Tsai et al. (2002) found a positive relationship between the quality of peer feedback received and assessee students' performance. Students who perceived peer feedback as accurate and useful were more likely to utilize the feedback comments from peers to improve their reviewed work (Wang et al., 2019). Therefore, the quality of the feedback is crucial in influencing the peer feedback processes. The feedback should be rich in content and include good features such as being affective, constructive, timely, and detailed, and containing problem identification and justification (Taghizadeh Kerman et al., 2022a). The better the quality of the feedback, the more likely students are to take it seriously and take it up to improve their work.

Feedback activity design

The reviewed publications on online peer feedback have explored various peer feedback design considerations related to the peer feedback processes in higher education. These considerations include whether peer feedback should be voluntary or obligatory (Liu et al., 2019), whether it should be given anonymously or not (Lane et al., 2018; Lin, 2018a), the number of rounds of peer feedback (Chen et al., 2020; Lai et al., 2020), and the role taken in peer feedback (Day et al., 2021; Çevik, 2015). For example, findings indicate that voluntary peer feedback can lead to more accurate scores (peer rater accuracy) for the final task (Liu et al., 2019), and that a collaborative team of reviewers can produce higher-quality feedback than individual reviewers (Mandala et al., 2018). Peer feedback can also improve problem-solving skills and reasoning abilities for both assessors and assessees (Çevik, 2015; Patchan et al., 2018). Furthermore, more rounds of peer assessment can lead to improved writing performance and greater validity of peer scores (Liang & Tsai, 2010). Peer feedback training has also been found to have positive effects on writing improvement (Jiang & Yu, 2014) and text revisions (Yang & Meng, 2013), although no significant increases were observed in students' assessment knowledge when participating in peer assessment training (Hoffman, 2019). Peer scoring and commenting tasks as part of a peer feedback activity can improve students' performance (Chen et al., 2020; Hsia et al., 2016; Xiao & Lucking, 2008). When provided anonymously, online peer feedback has demonstrated the potential to enhance students' essay writing performance, as evidenced in the context of EFL learning (Al Abri et al., 2021), and to elicit high-quality cognitive feedback (Liu et al., 2019) and constructive feedback (Basheti et al., 2010). Anonymity in online peer feedback can be useful because it encourages honesty and openness, reduces bias and social pressure, and promotes constructive feedback that is focused on helping the recipient improve. Female peer reviewers were found to be more influenced by anonymity than male peer reviewers, as they produced more negative comments in their feedback (Lane et al., 2018). However, Liu and Zhang (2017) found no significant differences between anonymous and identified discussion groups in terms of writing quality. Moreover, the use of worked examples, including a typical answer model of a high-quality argumentative essay, has been found to improve the quality of argumentative essay writing and facilitate the acquisition of domain-specific knowledge (Latifi et al., 2020; Valero Haro et al., 2019). Overall, the feedback activity design considerations related to the peer feedback processes can significantly influence the outcomes of online peer feedback in higher education.

Technology

The reviewed publications on online peer feedback have explored various technological innovations that can facilitate peer feedback processes, including synchronous or asynchronous online discussions (e.g., Liu et al., 2017; Zheng et al., 2018), video peer assessment (Ge, 2019), and video annotation (Lai et al., 2020; Lai, 2016). Synchronous peer assessment discussions were found to elicit interaction between basic and advanced cognitive dimensions, which may be valuable in developing cognitive abilities, improving writing (Liu et al., 2017; Zheng et al., 2018), and promoting affective and meta-cognitive feedback quality, meta-cognitive awareness, and self-efficacy (Zheng et al., 2018). Additionally, asynchronous discussion environments were shown to improve students' argumentation performance and conceptual understanding. The use of video feedback and video annotation was found to be effective in improving e-learners' translation performance and the effectiveness of online peer assessment (Ge, 2019; Lai et al., 2020; Lai, 2016). Furthermore, reviewed publications have shown that technology-supported learning strategies can lead to improved learning. These strategies include mobile-supported peer assessment (Chang & Lin, 2020; Kuo et al., 2017), blog-supported peer feedback (Rahmany et al., 2013; Yeh et al., 2019), Facebook-based online peer assessment with micro-teaching (Lin, 2016), and web-based alternatives (Ismaeel, 2020). For instance, the use of mobile phones in peer assessment can promote students' learning interest, motivation, and self-efficacy (Kuo et al., 2017). Blog-supported peer feedback can improve students' speaking and writing skills (Yeh et al., 2019; Rahmany et al., 2013). The use of argumentative peer feedback scripts and text-based digital learning modules can enhance the quality of students' written argumentative essays (Noroozi & Hatami, 2019; Noroozi et al., 2016). Additionally, feedback and feedforward support in the form of prompts can improve peer learning processes, argumentative essay quality, and domain-specific learning (Latifi et al., 2021). Overall, the use of technological affordances in online peer feedback can enhance the effectiveness and efficiency of the peer feedback processes, leading to improved learning outcomes.

4.4 RQ4. What are the learning outcomes of online peer feedback in higher education?

We identified a total of 165 codes that represented learning outcomes of online peer feedback implementation in higher education. We categorized the learning outcomes into three categories based on Bloom’s Taxonomy (1956) including cognitive outcomes (number of codes = 104) (see Table 7), behavioral outcomes (number of codes = 16) (see Table 8), and affective outcomes (number of codes = 45) (see Table 9).

Table 7 Cognitive outcomes of online peer feedback implementation in higher education
Table 8 Behavioral outcomes of online peer feedback implementation in higher education
Table 9 Affective outcomes of online peer feedback implementation in higher education

Cognitive outcomes

Cognitive outcomes are related to the acquisition of knowledge, comprehension, application, analysis, synthesis, and evaluation (Bloom, 1956), and the identified cognitive outcomes are categorized accordingly into these six levels.

Knowledge

Twenty studies explored knowledge outcomes such as domain-specific or domain-general knowledge (e.g., Latifi et al., 2020, 2021; Noroozi & Hatami, 2019) and assessment knowledge (Hoffman, 2019). These studies found that various approaches, such as a combination of worked examples and scripting (Latifi et al., 2021; Valero Haro et al., 2019), guided peer feedback (Noroozi & Mulder, 2017), feedback and peer feedforward support (Latifi et al., 2021), mobile-supported peer assessment (Chang & Lin, 2020), the use of awareness tools in KnowCat (Pifarré et al., 2014), and the rating-plus-qualitative-feedback mode (Hsia et al., 2016), can facilitate the acquisition of domain-specific or domain-general knowledge.

Comprehension

Five studies investigated comprehension outcomes in online peer feedback (e.g., Gielen & De Wever, 2015; Zhan, 2020). The selected studies on online peer feedback have identified comprehension outcomes such as conceptual understanding, elaboration (Gielen & De Wever, 2015), and ability to justify (Zhan, 2020). Peer feedback activities in asynchronous discussion environments were found to promote students' conceptual understanding, while structured peer assessment was shown to improve the quality and focus of peer feedback elaborations (Gielen & De Wever, 2015) and students' ability to justify their arguments with credible evidence (Zhan, 2020).

Application

Fifty studies have explored application outcomes in online peer feedback, including writing (e.g., Culver et al., 2022; Latifi et al., 2021), feedback performance (e.g., Chen et al., 2020; Day et al., 2021), problem-solving (e.g., Chang et al., 2015; Çevik, 2015), and dance performance (Hsia et al., 2016). The reviewed publications have demonstrated that various approaches, such as structured peer assessment (Tsai & Chuang, 2013), argumentative peer feedback script (e.g., Noroozi & Hatami, 2019; Noroozi et al., 2020), and online discourse community (Luhach, 2020), can improve argumentative essay writing. Additionally, online peer feedback with Total Quality Management (TQM) (Chang et al., 2015), the role of peer feedback (assessors and assessees) (Çevik, 2015), and peer learning experiences (Altinay, 2016) have been shown to facilitate problem-solving within an active, social process.

Analysis

Six studies investigated analysis outcomes in online peer feedback. These studies identified various analysis outcomes, such as argumentation skills, reflective thinking (Chen et al., 2009; Pham et al., 2020), and critical thinking (e.g., Altinay, 2016; Zhan, 2020). Chen et al. (2009) and Pham et al. (2020) showed that online peer feedback can enhance students' reflective thinking skills. Additionally, Liu et al. (2001) and Zhan (2020) demonstrated that online peer feedback can promote students' critical thinking abilities.

Synthesis

Three studies explored synthesis outcomes in online peer feedback (Chang et al., 2015; Liu et al., 2001). The reviewed publications on online peer feedback have identified various synthesis outcomes, such as design skills (Chang et al., 2015) and planning skills (Liu et al., 2001). Chang et al. (2015) found that using online peer feedback with TQM can enhance design skills, while Liu et al. (2001) demonstrated that web-based peer assessment can promote planning skills among students.

Evaluation

Three studies explored evaluation outcomes in online peer feedback (e.g., Hoffman, 2019; Liu et al., 2019). The reviewed publications have evaluated outcomes such as assessment skills (Liu et al., 2019) and meta-cognitive awareness (Liu et al., 2001; Zheng et al., 2018). Liu et al. (2019) found that students who participated in voluntary group feedback provided more accurate scores (i.e., peer rater accuracy) than those in the compulsory group. Zheng et al. (2018) showed that synchronous discussion had a significant positive impact on improving meta-cognitive awareness. Liu et al. (2001) also found that structured peer assessment can enhance students' monitoring and regulation.

Behavioral outcomes

Behavioral outcomes refer to the level of student engagement, communication, and teamwork in learning activities that are caused by involvement in peer feedback activities. These outcomes are categorized into engagement, communication, and teamwork (Table 8).

Engagement

Seven studies explored learners’ engagement in online peer feedback (e.g., Lin, 2019; Yuan & Kim, 2017). Research suggests that using the rating-plus-qualitative-feedback mode (Hsia et al., 2016) and collaborative review (Mandala et al., 2018) can enhance students' participation in online learning activities. Additionally, Cheng et al. (2014) found that students who responded more frequently tended to participate more actively and express more positive emotions in response to their peers' positive comments or neutral questions. Su et al. (2022) also showed that using group awareness tools can enhance student engagement with online peer feedback in collaborative language learning activities.

Communication

Three studies, including Altinay (2016), Lai (2016), and Lai et al. (2020), examined the impact of online peer feedback on learners' communication skills. These studies found that using multiple feedback rounds and video annotation (Lai, 2016; Lai et al., 2020) and collaborative learning (Altinay, 2016) was particularly effective in promoting the development of communication skills.

Teamwork

Three studies, including Chang et al. (2015), Donia et al. (2021), and Altinay (2016), examined the impact of online peer feedback on learners' teamwork skills. While online peer feedback with TQM and collaborative learning were found to improve teamwork skills according to Chang et al. (2015) and Altinay (2016), respectively, Donia et al. (2021) found no significant direct effect on teamwork.

Affective outcomes

Affective outcomes refer to students' feelings about and perceptions of their learning resulting from online peer feedback implementation. To identify the aspects of affective learning outcomes in online peer feedback, students' satisfaction, motivation, attitude, self-efficacy, sense of autonomy, and confidence have been examined (Table 9).

Satisfaction

Satisfaction was measured in five studies and revealed participants' positive evaluation of online peer feedback implementation (e.g., Donia et al., 2021; Noroozi & Mulder, 2017). For example, some studies measured students' satisfaction with the digital learning module with guided peer feedback (Noroozi & Mulder, 2017), anonymity (Liu et al., 2017), and the rating-plus-qualitative-feedback mode (Xiao & Lucking, 2008). These studies suggest that students generally report high satisfaction with online peer feedback implementation when provided with certain conditions, such as guided feedback, anonymity, and the rating-plus-qualitative-feedback mode.

Perception

Twelve studies explored students' perceptions and experiences of online peer feedback, including perceived collaborative task (Mandala et al., 2018), perceived fairness (Lin, 2018a), perceived usefulness (Kuo et al., 2017), perceived learning outcomes (e.g., Lin, 2016, 2018a; Noroozi & Mulder, 2017), and perceived ease of use (Kuo et al., 2017; Ge, 2019). For example, some studies showed that students in the anonymous group (Lin, 2016, 2018a; Basheti et al., 2010), with guided peer feedback (Noroozi & Mulder, 2017), or with video peer assessment (Ge, 2019) perceived that they had learned more from peer feedback activities compared to other groups.

Motivation

Eight studies explored students’ motivation in online peer feedback settings (e.g., Chen et al., 2020; Noroozi & Mulder, 2017). For example, some studies (Chen et al., 2020; Hsia et al., 2016) measured students' motivation with the rating-plus-qualitative-feedback mode of peer feedback and found that students expressed higher motivation when provided with this type of feedback. Kuo et al. (2017) found that mobile-supported peer feedback also increased student motivation, while Noroozi and Mulder (2017) found that the digital learning module with guided peer feedback improved student motivation. Overall, these studies suggest that certain conditions in online peer feedback settings can increase student motivation.

Attitude

Fourteen studies explored students' attitudes toward online peer feedback (e.g., Noroozi & Hatami, 2019; Wang et al., 2019). For example, reviewed publications showed that online peer feedback with TQM (Chang et al., 2015), mobile-supported peer assessment (Kuo et al., 2017), scripting (Noroozi & Hatami, 2019), guided peer feedback (Noroozi & Mulder, 2017), blog-supported peer feedback (Rahmany et al., 2013), and structured peer assessment (Wang et al., 2019) led to attitudinal changes towards online peer feedback.

Self-efficacy

Five studies measured students' self-efficacy after online peer feedback (e.g., Ismaeel, 2020; Zheng et al., 2018). For example, reviewed publications showed that synchronous discussion (Zheng et al., 2018), mobile-supported peer assessment (Kuo et al., 2017), structured peer assessment (Wang & Wu, 2008), and web-based alternatives (Ismaeel, 2020) have positive effects on students' academic self-efficacy.

Confidence

Confidence has been rarely examined in relation to peer feedback. Altinay (2016) found that online peer feedback programs increase students' confidence by empowering them to take ownership of their learning.

5 Discussion

In this section, the main elements, including students' characteristics, learning environments, learning processes and activities, and learning outcomes, are discussed, along with the more detailed dimensions of each element.

Researchers have explored various aspects of students' characteristics in relation to online peer feedback, including personality traits, emotions, epistemic beliefs, motivation, perceptions, and self-efficacy. These factors play a role in how students give and receive feedback, as well as in their engagement and learning outcomes in online peer feedback activities. Reviewed publications showed how demographic characteristics, such as age and gender, can influence students' engagement and outcomes in online peer feedback activities. While gender has been a primary focus in many studies examining the relationship between demographic factors and online peer feedback, the effects of other factors, such as age, nationality, and language, have been explored far less extensively. The studies that have examined these demographic factors nevertheless suggest that factors beyond gender can play a role in shaping students' behaviors and outcomes in online peer feedback activities. Numerous studies have explored the relationship between academic backgrounds and outcomes in online peer feedback activities. For instance, Cho and Schunn (2007) found that students' prior experience with peer feedback was related to their feedback quality and learning outcomes in an online writing task. Similarly, Li et al. (2021) found that students' educational level and prior knowledge were related to their perceptions and use of peer feedback in online learning environments. These findings suggest that academic backgrounds, including factors such as the type of high school attended, educational level, prior experience, and prior knowledge, are critical considerations that scholars, educators, and educational designers must take into account when implementing peer feedback in online learning environments.

Our study aligns with previous research that emphasizes the significance of students' characteristics in the online peer feedback processes (Banihashem et al., 2023; Noroozi et al., 2022). Given the significant impact of students' characteristics, including their academic backgrounds, on their engagement with online peer feedback activities, educators, scholars, and instructional designers must recognize and address these factors (Li et al., 2021). Previous research has shown that students' prior experience, educational level, and knowledge are critical factors when implementing online peer feedback activities (Cho & Schunn, 2007; Li et al., 2021). Therefore, designing feedback activities that are tailored to students' specific needs and backgrounds may enhance their engagement, motivation, and learning outcomes. Additionally, recognizing the diversity of students' academic backgrounds and providing opportunities for peer feedback in different formats and languages may help create a more inclusive and equitable learning environment (Li et al., 2021).

Our review revealed that various platforms have been used to implement online peer feedback settings, with Wiki, Blackboard, KnowCat, Facebook, and Mobile apps being the most commonly utilized (Li et al., 2021). This finding is consistent with prior research that emphasizes the significance of learning technologies in the implementation of online peer feedback (e.g., Chang & Lin, 2020; Gielen & De Wever, 2015). The choice of technology can have a significant impact on the effectiveness of online peer feedback activities, as different platforms may have different features, functionalities, and affordances that influence students' engagement and learning outcomes (Noroozi & Hatami, 2019; Shang, 2019). As such, it is essential to consider the characteristics and affordances of the technology when designing and implementing online peer feedback activities to ensure optimal outcomes for students' learning and performance (Li et al., 2021). For example, a Wiki platform may be more suitable for collaborative writing tasks, while a mobile app may be more effective for providing feedback on multimedia projects.

The learning environment in online peer feedback is not limited to the platform used but also includes other factors such as culture, faculty, and teamwork, which can influence learning outcomes (Donia et al., 2021; Pham et al., 2020). Cultural factors such as language proficiency and communication styles can impact the effectiveness of online peer feedback activities, highlighting the importance of ensuring that feedback prompts and instructions are clear and easily understood by all students. Additionally, faculty support, including training and guidance on how to provide and receive feedback, can enhance students' engagement and the quality of their feedback. Students may receive different levels of support in different learning communities or settings, which can affect their actions and reactions during the online peer feedback process, ultimately leading to varying learning outcomes (Kuo et al., 2017). For example, students in a supportive and collaborative learning community may be more likely to engage actively in the feedback process and provide constructive feedback to their peers. In contrast, students in a competitive and individualistic learning community may be more likely to focus on their own performance and provide less constructive feedback to their peers. Additionally, factors such as the level of guidance and scaffolding provided by the instructor, the type of feedback prompts, and the overall design of the online peer feedback activities can also impact students' actions and reactions during peer feedback processes. Therefore, it is essential to consider the broader learning context when designing and implementing online peer feedback activities to ensure that they are effective in different cultural and institutional settings (Donia et al., 2021; Kuo et al., 2017).

Online peer feedback activities should provide flexibility, support, and guidance to address these cultural factors (Li et al., 2019a, b). Anonymity, instructor modeling, cooperative environment, guidelines, and examples are strategies to encourage cross-cultural peer feedback (Li et al., 2019a, b). Overall, cultural sensitivity is key to designing effective peer feedback for diverse learners as culture profoundly impacts students’ expectations and engagement in such activities (Hofstede, 2001; Li et al., 2019a, b). Educators can use strategies such as clear guidelines, cross-cultural communication, supportive learning environments, culturally responsive pedagogy, forming diverse peer feedback groups, cultural competence promotion, and critical reflection to overcome cultural hurdles and promote participation in online peer feedback (Golonka & Lance, 2020).

In addition, we analyzed online peer feedback from three dimensions: content, feedback activity design, and technology. Within the content dimension, the type of feedback provided by peers has received particular attention from scholars (Van Zundert et al., 2010). Studies have explored the impact of different types of feedback, such as corrective, elaborative, and directive feedback, on learning outcomes and student motivation. Understanding the impact of feedback type on learning can help educators design effective online peer feedback activities that promote student learning and engagement. Previous research has also found that the type, features, and quality of feedback provided in online peer feedback activities can predict students' success. For example, a study by Taghizadeh Kerman et al. (2022b) found that the quality of feedback provided by peers was positively associated with students' writing performance. Similarly, a study by Patchan et al. (2016) found that the quality of feedback, including its specificity, clarity, and detail, was a significant predictor of students' writing improvement in online peer feedback activities.

Within the feedback activity design dimension of online peer feedback, scholars have explored the effectiveness of various strategies and methods for implementing peer feedback activities. Some of the strategies that have been studied include the number of peer feedback rounds, reviewer characteristics, and training. For example, a study by Topping (2017) found that increasing the number of peer feedback rounds improved the quality and quantity of feedback provided by peers. Other studies have explored the impact of reviewer characteristics, such as experience and expertise, on the effectiveness of online peer feedback activities. Additionally, studies have shown that providing training for students on how to give and receive feedback can improve the quality of feedback provided in online peer feedback activities. Previous studies have also emphasized the importance of peer feedback rounds, reviewer characteristics, and training in the effectiveness of online peer feedback activities (Latifi et al., 2020; Min, 2006; Noroozi et al., 2019).

In the field of educational technology, various methods have been explored for implementing educational strategies with the aid of technology. These include video annotation, video peer assessment, different types of discussions and support, as well as various scaffolding techniques (Noroozi & Hatami, 2019). Among these, peer feedback processes and activities are considered to be of significant importance because they help students express their opinions, write more effectively, reflect on their knowledge, and achieve deeper learning (Noroozi & Hatami, 2019). Through this student-led approach, students may also develop higher-order thinking skills by taking on the tasks and responsibilities of assessors. Despite the potential benefits of peer feedback, empirical research has identified several problems related to the reluctance to include peer feedback in instructional practices and the learning process (Zhu & Carless, 2018). To address these issues, it is important to establish a safe environment by clearly communicating the goals of peer assessment and training assessors to provide constructive feedback and scaffolding (Topping, 1998). Educators should encourage thorough discussion of evaluation criteria before peer evaluation occurs, and they should intervene if feedback or marking is deemed unsatisfactory (Topping, 1998). The activities and processes discussed above can help achieve these goals (Noroozi & Hatami, 2019).

Our analysis reveals that online peer feedback is utilized for different learning purposes, including cognitive, behavioral, and affective outcomes (e.g., Latifi et al., 2020; Lin, 2018a; Noroozi & Hatami, 2019). Based on Bloom's classification, the primary cognitive outcomes resulting from the implementation of online peer feedback were in the application category, such as writing performance, feedback performance, and problem-solving (e.g., Hsia et al., 2016; Latifi et al., 2020). Researchers focused on the engagement of students in the peer feedback process as the primary behavioral outcome, which was influenced by various learning mechanisms and strategies, such as guided peer feedback, mode, and anonymous condition (e.g., Latifi et al., 2021; Noroozi & Mulder, 2017). In terms of affective outcomes, researchers mainly investigated perception and attitude towards peer feedback (e.g., Chang et al., 2015; Lin, 2018a; Noroozi & Hatami, 2019).

To achieve the desired goal, it is crucial to adopt appropriate educational strategies. For instance, to acquire skills in argumentative essay writing, structured peer assessment, a combination of worked examples and scripting, argumentative peer feedback scripts, mixed feedback and peer feedforward support, and online discourse communities can be effective (e.g., Noroozi et al., 2020; Tsai & Chuang, 2013; Valero Haro et al., 2019). These educational approaches create opportunities for students to prepare and learn more, and to discuss, think, and reflect on the criteria of argumentative writing by providing formulae, procedures, and examples of desirable work. To increase student participation in the online peer feedback process, educational approaches such as rating-plus-qualitative-feedback and collaborative review are useful because they motivate students to take the peer feedback process more seriously and get involved in it (Hsia et al., 2016; Mandala et al., 2018). To improve students' attitudes towards peer feedback, instructional approaches such as argumentative peer feedback, mobile peer assessment, online peer feedback with TQM, anonymity, guided peer feedback, blogging, and accurate and specific feedback can be effective (e.g., Chang et al., 2015; Lin, 2018a).

In summary, in higher education, it is essential for educators and educational designers to choose appropriate educational design principles and keep educational goals in mind while designing and implementing online peer feedback. Ignoring other aspects of educational goals and their effects may diminish the effectiveness of the educational technique. Therefore, it is crucial to consider the different learning purposes, cognitive, behavioral, and affective, and adopt appropriate educational strategies to achieve the desired result (e.g., Noroozi et al., 2011, 2016, 2020; Rahmany et al., 2013; Valero Haro et al., 2019).

6 A conceptual framework to guide the use of online peer feedback

Developing a conceptual framework to guide the integration of online peer feedback within higher education can help ensure that instructors deploy strategic approaches that align with specific learning objectives. Drawing on our findings concerning the fundamental dimensions of online peer feedback, we present a proposed evidence-grounded conceptual framework, as illustrated in Fig. 2.

Fig. 2 A conceptual framework to guide the use of online peer feedback

Assessing students' characteristics represents the crucial first step in the incorporation of online peer feedback within higher education. Gaining insights into students' distinctive qualities, encompassing their pre-existing knowledge, skill sets, and attitudes toward peer feedback, serves as a compass for educators to implement online peer feedback that is more tailored to students' needs, preferences, and abilities. For example, knowing that students have limited experience with online peer feedback may convince educators to provide more guidance and support during the peer feedback process. On the other hand, if students have a high level of experience with online peer feedback, a more independent and self-directed approach may be found appropriate by educators. Similarly, if students have negative attitudes towards peer feedback, it may be necessary to use instructional approaches that focus on building trust and promoting a positive feedback culture. Such approaches provide students with more control and autonomy in the feedback process, as well as opportunities for collaboration and peer support.

In the second step, successful implementation of online peer feedback requires a good understanding of learning environment conditions, such as the learning setting (context, team, and culture) and the learning platform. Studies have shown that the context of learning plays a role in online peer feedback. For example, students within the arts and sciences context perceived online peer feedback more positively compared to students in the communication, engineering, and technology contexts, which is related to differences in task complexity, disciplinary culture, and prior experience (Altinay, 2016). In addition, the type of learning platform should be considered in the implementation of online peer feedback in higher education, as different learning platforms offer distinct arrays of functionalities for facilitating online peer feedback. It is important for educators and designers to regularly and critically reflect on the most appropriate online platform for peer feedback, especially as technologies continue to change and develop rapidly. While selecting an appropriate platform is important, it should not be the primary consideration. Instead, educators and designers should prioritize defining clear learning objectives and determining the specific needs and characteristics of their students. This will enable them to select a platform that is most appropriate for achieving their goals. In addition, it is important to stay informed about new and innovative technologies, such as AI, that may have the potential to enhance the peer feedback process. By regularly reflecting on and evaluating the effectiveness of different online platforms and technologies, educators and designers can make informed decisions about which tools and approaches are most appropriate for their students and learning objectives. This can help to ensure that the peer feedback process remains current, effective, and engaging for students.

In the final step, activities and processes should be determined according to the students' characteristics and learning objectives. This will ensure that the peer feedback process is tailored to the specific needs of the students and is designed to promote positive learning outcomes. By taking into account the students' characteristics and learning objectives, educators and designers can select appropriate activities and processes that will engage and motivate their students, promote effective feedback, and facilitate learning.

A foundational understanding of whom the peer feedback system is intended to serve, and for what purpose, is critical to ensuring that the peer feedback process is meaningful and relevant. By taking a student-centered approach and considering students' characteristics and learning objectives, educators and designers can establish peer feedback settings tailored to the specific needs of their students and guide them towards achieving the learning outcomes. When students are involved in peer feedback processes, their perspectives and experiences should also be considered: students should have a voice in the development of the peer feedback process and be involved in selecting the activities and processes that best support their learning. This helps to promote engagement and motivation and to ensure that the peer feedback process is effective, relevant, and ethical. As Noroozi et al. (2011, 2016) suggest, objectives play a key role in determining what types of activities and strategies are needed to collect feedback effectively. The ethical perspective is equally important when designing and implementing peer feedback processes. When students take part in online peer feedback, they need to know what happens with the feedback they provide and receive. Human values such as privacy, equality, and responsibility are crucial in online feedback situations. Educators and designers should therefore ensure that appropriate measures are in place to protect students' privacy, promote equality in feedback provision and reception, and foster a responsible and constructive feedback culture. By integrating the student-centered approach with the ethical perspective, educators and designers can create effective and relevant peer feedback processes that promote positive learning outcomes while remaining responsible and ethical.

The implementation of peer feedback in higher education should be tailored to students' needs and aligned with the educational objectives. For instance, projects with different goals, such as promoting cognitive, behavioral, or affective learning outcomes, may require different activities and methods at different stages of the feedback process. The steps in our conceptual framework should therefore be followed in a hierarchical manner, taking into account the specific learning goals and the needs of the students. This approach is supported by previous research, such as Noroozi et al. (2012, 2016), which emphasizes the importance of aligning the goals of the feedback process with the desired learning outcomes. Doing so makes it possible to design and implement peer feedback activities that effectively promote learning and development among students in higher education.

7 Conclusions, limitations, and suggestions for future research and practice

This systematic review drew on Biggs' (2003) model to guide the analysis, focusing on the four dimensions of effective online peer feedback. It provides a comprehensive overview of the current state of online peer feedback implementation in technology-mediated learning environments and identifies gaps and areas for future research. The review emphasizes the importance of considering cultural differences, learner characteristics, and appropriate technologies when designing effective online peer feedback practices, and it highlights the need for future research to focus on specific dimensions of online peer feedback to gain a more nuanced understanding of how each dimension affects learning outcomes. Overall, this review contributes to the field of online peer feedback and helps educators and researchers develop more effective approaches to enhance learning outcomes.

Several limitations of this review should be acknowledged. First, the review included only empirical studies to ensure the reporting of authentic findings, which may have excluded some noteworthy reviews and conceptual papers. Second, while the selected literature databases cover the most relevant publications, studies not indexed in these databases may have been missed. Third, there may be a publication bias, in which studies with null findings go unpublished, which could affect the generalizability of our findings; caution should therefore be exercised when interpreting our results. Fourth, our study investigated online peer feedback only in higher education and did not examine its use and impact in K-12 contexts, so our findings may not generalize to all educational contexts. Future research could explore how the use and impact of online peer feedback in higher education differ from those in K-12 environments. Fifth, the review focused only on studies published in English, which may have excluded relevant studies published in other languages. Finally, the review included only articles published between 2000 and 2023, which may have excluded relevant studies published before 2000.

Future research should explore several areas to enhance our understanding of online peer feedback and optimize its implementation in higher education. First, investigating the impact of different types of feedback, such as written text-based comments and verbal feedback delivered as audio or video, could provide insights into which types are most effective in promoting learning. Second, exploring the use of peer feedback as a formative assessment tool could help students identify areas for improvement and make progress toward learning goals. Third, examining the effects of emotions and of different delivery methods on learning outcomes could help identify factors that influence the effectiveness of online peer feedback. Fourth, exploring gamification and other motivational techniques to enhance engagement, and developing best practices for effective and efficient feedback processes, could improve the quality of feedback. Fifth, incorporating virtual and augmented reality technologies to create immersive feedback experiences could enhance engagement and provide more effective feedback. Sixth, using blockchain technology to enhance the credibility and transparency of feedback could help ensure that feedback is fair and accurate.

Seventh, the emergence of new technologies such as ChatGPT holds great potential to support online peer feedback and essay writing (Farrokhnia et al., 2023; Banihashem et al., 2022a, 2022b). Future research could investigate the potential role of AI and machine learning in enhancing the quality and relevance of peer feedback, for example by exploring how AI-powered tools can support students in providing personalized and constructive feedback to their peers. The ethical implications of using AI-powered tools in online peer feedback should also be investigated to ensure that these tools are used responsibly. Furthermore, research could focus on integrating human values such as privacy, equality, and responsibility into the design and implementation of online peer feedback processes, for instance by developing guidelines and best practices that address the ethical dimension of feedback provision and reception. Research could also investigate how to promote a responsible and constructive feedback culture in online settings and how to prepare students to provide and receive feedback in a responsible and ethical manner. By integrating this ethical perspective, educators and designers can help ensure that online peer feedback processes are not only effective and relevant but also responsible and ethical, contributing to the advancement of Responsible AI and AI ethics in education.

Eighth, exploring the use of online peer feedback in interdisciplinary and cross-cultural contexts could further optimize its implementation. Finally, this study concentrated on offering a comprehensive overview of the current state of online peer feedback implementation in higher education through a systematic review of empirical studies with robust methodologies, ensuring reliable and valid results. As a next step, we propose a meta-analysis to examine the effect size of implementing online peer feedback in higher education. Together, these areas of research could lead to more personalized, effective, and innovative approaches to online peer feedback.