
1 Introduction

Within presentation research, presenting is frequently defined as “a combination of knowledge, skills, and attitudes needed to speak in public in order to inform, self-express, to relate and to persuade” (De Grez, 2009, p. 5). Following this definition, an important notion is the interrelatedness of the cognitive, behavioural and affective domains within the concept of oral presentation competence, since students’ public speaking performance can be enhanced or inhibited by any or all of these domains (Van Ginkel et al., 2015). Further, this competence is regarded as crucial for working in varying professional environments, for career success and for effective participation in democratic society (e.g. De Grez et al., 2009; Van Ginkel et al., 2015; Van Konsky & Oliver, 2012). Therefore, teaching this competence is considered a crucial objective in higher education (Van Ginkel et al., 2015).

Although the provision of curricula for developing presentation skills remains crucial in higher education, several challenges arise for curriculum designers and teachers. First, developing presentation competence is widely regarded as a time-consuming activity (Van Ginkel et al., 2015). This perspective does not correspond to the current trend in education in which student numbers rise while possibilities for teacher-student interaction diminish. Consequently, there is pressure on curricula to integrate both effective and efficient evidence-based approaches, including instructions, learning activities and feedback strategies, for teaching oral presentation competence. Second, this challenge is compounded by the fact that students must also develop several other academic, communication and domain-specific competencies within limited time frames during their educational lives, which further increases the pressure on presentation curricula (Pittenger et al., 2004).

One of the crucial educational design principles for effective learning environments fostering students’ oral presentation competence is peer learning (Van Ginkel et al., 2015). Although previous studies addressed both the effectiveness of peer feedback for encouraging public speaking performance in higher education and the efficiency of involving peers in formative assessment processes, teachers outperformed peers in terms of the impact on students’ development of presentation competence. Follow-up studies experimented with VR technologies as an alternative feedback source in presentation courses and revealed significant effects on student learning comparable to teacher feedback. Recent developments in this innovative technology could potentially support peer and self-learning, since VR systems can nowadays produce feedback messages on non-verbal communication aspects, such as eye contact and use of voice, that directly relate to the standards of high-quality feedback (Van Ginkel et al., 2019). However, it remains unclear how such messages should be formulated, how they are perceived by students in higher education and to what extent these messages, produced by VR systems, can be considered effective for peer and self-learning.

This chapter synthesizes previous studies in presentation research with the aim of constructing a research agenda on computer-mediated feedback in VR for peer learning fostering students’ oral presentation competence. The first three sections give an overview of research on the role, the effectiveness and the quality of peer feedback in presentation education. The following two sections discuss the potential of VR, AI and computer-mediated feedback in such educational trajectories. Finally, a research agenda is constructed focusing on computer-mediated feedback in VR and AI for improving peer learning in presentation education.

2 The Role of Peer Feedback in Presentation Research

A systematic review on the development of learning environments fostering oral presentation competence in higher education revealed that, besides principles relating to instruction, presentation tasks, behaviour modelling and the opportunity to practice, three out of the seven crucial educational design principles address the essence of feedback. Moreover, peer feedback can be considered one of these seven crucial educational design principles (Van Ginkel et al., 2015). Specifically, based on empirical research and arguments grounded in theory, it is concluded that feedback should be explicit, contextual, adequately timed and of suitable intensity in order to improve students’ oral presentation competence (Mitchell & Bakewell, 1995). Moreover, it has been highlighted that involving peers in formative assessment processes supports students’ development of presentation competence and attitudes towards presenting.

Feedback provided by peers is frequently positioned within the process of formative assessment (e.g. Baker & Thompson, 2004; Carroll, 2006; Hattie & Timperley, 2007; Noroozi & Hatami, 2019; Noroozi & Mulder, 2017; Shaw, 2001). According to Falchikov (2005), formative assessment is intended to monitor and improve student learning by providing students with feedback. Regarding publications in presentation research, several scholars claim the need to triangulate multiple feedback sources, such as teachers, peers and the self, to guarantee that reflective learning takes place (e.g. Carroll, 2006). Additionally, others emphasize that the adoption of peers encourages a higher sense of feedback sensitivity (e.g. Econopouly et al., 2010), increases active learning (Shaw, 2001) and fosters collaborative learning (Kolber, 2011). Another argument for peer feedback within formative assessment is that assessing other students’ presentations helps students become more aware of the presentation criteria, which in turn encourages their own public speaking performance (De Grez et al., 2012). Finally, the responsibility peers perceive in giving and receiving feedback enhances their willingness to speak in public, which in turn impacts their presentation performance (Mitchell & Bakewell, 1995).

Moving from conceptual arguments for peer feedback to empirical evidence for the effectiveness of this feedback source for developing presentation competence, several researchers claim an impact of feedback from peers on students’ speaking skills (e.g. Cheng & Warren, 2005). However, only a few studies based their claims on experimental study designs (Van Ginkel et al., 2015). One example of such an experimental study demonstrated the superiority of peer feedback combined with tutor feedback over a condition with solely tutor feedback (Mitchell & Bakewell, 1995). Nevertheless, it remains questionable what the impact of the peer as a feedback source actually was, since the quantity of the feedback was not taken into consideration. Further, empirical results showed a fragmented picture regarding the impact of peer feedback on students’ attitudes towards feedback. Although some studies report positive perceptions of students towards peer evaluations, other studies highlight that certain students do not prefer peer feedback if they feel incompetent regarding the predefined assessment criteria for presenting (e.g. Cheng & Warren, 2005). This is an important reason why peers should be trained in providing and receiving feedback by making use of feedback instruments, such as rubrics, prior to formative assessment processes in classroom settings.

In conclusion, conceptual arguments embedded in theory, encompassing reflective, active and collaborative learning, support the involvement of peers in feedback processes in presentation education (Hattie & Timperley, 2007; Van Ginkel et al., 2015). Empirical evidence is found in peer learning studies for increasing students’ oral presentation competence and students’ attitudes towards presenting (De Grez et al., 2009). However, high-quality evidence for the effectiveness of peer feedback in presentation research, and for the conditions under which this feedback source is successful, remains ambiguous. Therefore, more empirical and, more importantly, experimental study designs are needed to verify the effectiveness and the quality of peer feedback in presentation contexts. The following section focuses on the potential differential effectiveness of peer feedback and other commonly used feedback sources in higher education and their impact on students’ oral presentation competence.

3 The Differential Effectiveness of Feedback Sources

Over the last decades, the impact of peer feedback on students’ development of competence has received much attention in higher education research (see Latifi et al., 2020, 2021; Noroozi et al., 2012, 2018; Taghizadeh et al., 2022). These studies tended to focus solely on peer feedback or on the combination of peer feedback with other feedback sources, such as the teacher or the self. To illustrate, research has demonstrated that students’ knowledge about psychological concepts was improved when peer feedback was involved in the learning process (Kelly et al., 2010). Additionally, peer feedback improved the language skills and transferable skills of students (Tsaushu et al., 2012). Moreover, in regard to the combination of peer feedback with other feedback sources, studies demonstrated a positive effect on the development of scientific writing skills (Clarke et al., 2013).

While studies revealed a positive impact of peer feedback, as an individual feedback source or combined with other sources (such as the teacher), on the development of students’ cognition, skills and attitudes, it has been reported that different feedback sources, such as the peer or the teacher, potentially have a differential impact on learning (Hattie & Timperley, 2007). Moreover, empirical findings addressing this potential differential effect were lacking. Therefore, Van Ginkel et al. (2017a) aimed to investigate the impact of different feedback sources, that is the teacher, the peer, the peer guided by a tutor and the self, on the development of students’ oral presentation competence. In this study, a pre-test post-test quasi-experimental design was adopted and students’ presentation performances, in terms of cognition, behaviour and attitude towards presenting, were assessed using multiple-choice tests and a rubric. Results of this study showed a substantial overall progression in each of these components of students’ oral presentation competence. Interestingly, with respect to presentation behaviour, the impact of teacher feedback was significantly higher than the instructional conditions that involved the peer or the self. Moreover, the effect of self-assessment on students’ progression of presentation behaviour and attitude towards presenting was smaller compared to the other feedback sources.

The findings of this experimental study highlight the superiority of feedback provided by the teacher over peer feedback and peer feedback guided by a tutor, which supports the idea of a differential impact of these feedback sources on students’ learning. Results of this study are in line with literature that emphasizes the essence of the teacher, and their function as a role model, for students’ learning within the context of higher education (Van Haaren & Van der Rijst, 2014). Moreover, it has been stated in research focusing on constructing educational design principles for peer feedback that the teacher fulfills an essential role as designer and facilitator within the peer feedback process (Van den Berg et al., 2006). Taken together, although various studies revealed a positive impact of peer feedback on students’ development of competence, it is recommended to optimize the feedback of this source to make it as effective for learning as teacher feedback. However, this requires in-depth knowledge about the underlying feedback processes, including the quality of feedback and differences in quality between the teacher and the peer.

4 Quality Criteria for Developing Effective Feedback Messages

Although the experimental field study focused on the impact of peer feedback, in comparison to other commonly used feedback sources such as the teacher, on students’ presentation performance, the underlying feedback processes remain unclear. As such, it is questionable to what extent the quality of feedback differs between teachers and peers. Given these gaps in the feedback and presentation literature, more knowledge is needed on how teachers, peers and peers guided by tutors deliver their feedback. Additionally, more research needs to be carried out to determine which aspects of feedback they focus on and how feedback processes relate to theoretical and empirical insights on feedback quality criteria (Boud & Molloy, 2013; Price et al., 2010). Therefore, a follow-up study focused on analyzing these feedback processes, since they are considered essential in student learning (Asghar, 2010; Falchikov, 2005) and may influence students’ oral presentation performance. Specifically, the empirical study examined the feedback processes initiated directly after five-minute pitches of 95 undergraduate students in realistic university presentation courses.

In order to analyze the feedback processes of teachers and peers, a coding scheme was composed that included crucial feedback quality criteria based on the literature. Earlier studies addressed both content- and form-related characteristics of feedback that influence students’ learning and performance. To start with, feedback should be specifically related to predefined assessment criteria (Moreno, 2004). In the context of presentation skills development, the content of the presentation, the structure of the presentation, the interaction with the audience and the presentation delivery (i.e. use of voice, eye contact, and posture and gestures) should be included in the feedback. Moreover, feedback should also include content-related arguments that directly relate to the assessment criteria (Topping, 1998). Further, three criteria relate directly to the directions of feedback emphasized by Hattie and Timperley (2007): feedback should incorporate information about students’ actual performance, about the ideal or desired level of performance and about opportunities to bridge the gap between the actual and desired performance. Besides content-related characteristics, form-related criteria are especially essential in the delivery of feedback messages from the teacher or peer to the individual student. In line with this, feedback should be delivered in manageable units in order to prevent cognitive overload (Mayer & Moreno, 2002). Subsequently, these messages should be formulated in a positive and constructive manner to increase the likelihood that students take up their feedback and persist in learning (Kluger & DeNisi, 1996).
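The seven criteria above can be read as a simple checklist. The following minimal sketch is purely illustrative: the criterion labels, function name and example message are ours, not taken from the coding scheme of the cited studies. It shows how a feedback message could be reviewed against the criteria:

```python
# Illustrative checklist of the seven feedback quality criteria
# (five content-related, two form-related), as described above.
CONTENT_CRITERIA = [
    "relates to predefined assessment criteria",
    "includes content-related arguments",
    "describes the actual performance",
    "describes the ideal or desired performance",
    "suggests how to bridge the gap",
]
FORM_CRITERIA = [
    "delivered in manageable units",
    "formulated positively and constructively",
]

def review_feedback(message: str, met: set[str]) -> dict:
    """Return, per quality criterion, whether the feedback message meets it."""
    all_criteria = CONTENT_CRITERIA + FORM_CRITERIA
    return {criterion: criterion in met for criterion in all_criteria}

# Example: a message that names the assessment criterion and the actual
# performance, but offers no improvement strategy.
result = review_feedback(
    "Your eye contact met the criterion for the first half, then dropped.",
    met={"relates to predefined assessment criteria",
         "describes the actual performance"},
)
print(sum(result.values()), "of", len(result), "criteria met")  # 2 of 7 criteria met
```

Such a checklist mirrors how the coding scheme was used in the study: each feedback utterance was scored against each criterion, allowing comparisons between feedback sources.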

The analyses revealed that on all seven feedback quality criteria significant differences existed between the teacher, peers and peers guided by a tutor (Van Ginkel et al., 2017b). The teacher scored higher than peers on all quality criteria of feedback, and the teacher performed better than peers guided by a tutor on six of the seven quality criteria. Further, peers guided by a tutor scored higher than peers only on the content-related criteria. Relating these results to the earlier experimental study on feedback sources, it can be concluded that feedback quality is a plausible explanation for the previously identified differences in impact between the teacher and the peer in presentation education. Both feedback quality and teachers as experts are highly emphasized as valuable in formative assessment processes in the literature (e.g. Shute, 2008).

Taking a closer look at the results of this empirical study, it should be noted that significant differences between peers and peers guided by tutors existed purely on the content-related characteristics of feedback. This might be caused by the fact that a tutor (a student assistant) was present to guide the feedback processes by questioning and intervening. However, it remains remarkable that the previous experimental study did not reveal any significant difference between the peer and the tutor-guided peer conditions regarding their impact on students’ oral presentation performance. This might be explained by the crucial role of form-related characteristics, such as the stepwise manner in which the feedback is presented and formulated, as being conditional for delivering a message effectively. Although other factors, for example the authority of the feedback provider, were not taken into consideration in this study, the quality of the feedback can be considered crucial for student learning in presentation education. However, peers should be explicitly trained before entering feedback processes in classrooms. And, as addressed in this chapter, innovative technologies might also be valuable in feedback processes. Regarding the delivery of computer-mediated feedback messages in the presentation context, both content- and form-related criteria should be critically incorporated in the construction and composition of these messages.

5 Virtual Reality as an Alternative Feedback Source for Peer Learning

Previous experimental studies revealed that peer feedback, when adopted as an individual feedback source, had a limited impact on students’ development of presentation competence. Moreover, a lack of quality in peer feedback has been established. Subsequently, it has been recommended that students should be educated in providing peer feedback. Additionally, the triangulation of feedback sources was suggested to be potentially effective in enhancing reflective learning. Concerning the latter, it remained questionable whether innovative technologies, such as VR, might contribute to peer feedback processes by delivering computer-mediated feedback aimed at fostering students’ presentation skills. Recent studies in closely related fields revealed the potential of integrating peer learning in VR-based technologies (e.g. Chang et al., 2020; Chien et al., 2020). In this chapter, we specifically focus on the field of presentation research.

As addressed in several domains, such as the medical, engineering, leisure and flight industry sectors, virtual learning environments are increasingly being adopted for practicing delicate surgeries with medical students, educating engineering students in spatial thinking skills, providing images of destinations for travelers and training pilots for real-life flying tasks (e.g. Coller & Scott, 2009; Hawkins, 1995; Merchant et al., 2014; Van Ginkel et al., 2019). However, it remained unclear whether learning environments adopting VR-based technologies could also be applied to developing academic and communication skills. These systems are potentially relevant, since they are able to imitate real-life situations and can deliver computer-mediated feedback from the VR system to the user (e.g. Boetje & Van Ginkel, 2021; LaViola et al., 2017; Van Ginkel et al., 2019).

Given the potential of VR technology, an experimental field study was conducted to examine to what extent there are significant differences in students’ presentation development between a VR and a traditional face-to-face condition. Additionally, this study intended to learn from students’ perceptions of working with such an innovative tool as a potential replacement for a face-to-face presentation rehearsal in terms of practicing and receiving feedback (Van Ginkel et al., 2019). Therefore, in a realistic university presentation skills course, students were randomly assigned to one of the following conditions. In the first condition, students had to present a five-minute pitch to a VR audience and received quantitative feedback on eye contact, use of voice, and posture and gestures, traced by the VR system and explained by an expert. In the second condition, students had to present face-to-face and received feedback from a presentation teacher.

Within this experiment, instruments comparable to those in the earlier described study in this chapter were adopted for measuring students’ presentation skills, knowledge and attitude towards presenting. Results showed that students developed these components of oral presentation competence significantly from pre-test to post-test, without a difference between the VR and face-to-face conditions. Further, the self-evaluation tests revealed that students in both conditions highly appreciated the feedback they received. However, the arguments they provided differed between the two groups. Students in the traditional setting who received feedback from the presentation trainer valued this feedback for its positive and constructive comments, while students who presented in VR appreciated the quantitative computer-mediated feedback, interpreted by experts, for its detailed and analytical character. More specifically, students who pitched in VR emphasized that they had never received such detailed feedback on their skills in previous educational programs. Moreover, the objective character of the feedback, as perceived by students, was also highlighted as a valuable component for developing their presentation skills in a VR environment (Van Ginkel et al., 2019).

The lack of difference in impact between the conditions on developing students’ presentation competence might be explained by the opinions of students with regard to their rehearsal and feedback experiences in this experiment. Although the arguments for their perceptions differed between students in the VR and face-to-face conditions, no differences in scores were found for two crucial educational design principles fostering presentation skills, relating to both practicing and receiving feedback. The findings of this study, therefore, suggest that incorporating a VR-based presentation task, including computer-mediated feedback, in presentation education is effective for students’ development of presentation competence. However, based on this experiment, VR is not necessarily more efficient, since experts had to be involved to translate the quantitative feedback reports provided by the VR system to the students, and it remained questionable to what extent this alternative feedback source could contribute to peer learning. On the other hand, following technological developments, VR technologies also facilitate the delivery of immediate feedback during presentation performances, in which the presence of an expert is not required. Moreover, even computer-mediated feedback delivered after students’ presentations is on the agenda of rapid transitions in educational technology (Van Ginkel et al., 2020).

6 Two Recent VR Experiments: Students’ Perceptions on Computer-Mediated Feedback

In order to verify to what extent VR feedback could be valuable for peer learning, two additional VR field experiments were conducted focusing on (1) the effects of immediate feedback in VR on presentation skills development (Van Ginkel et al., 2020) and (2) the perceptions of students regarding the value of qualitative computer-mediated delayed feedback in a VR presentation environment (Sichterman et al., 2021). The first study focused explicitly on the role of immediate feedback, since VR offers the opportunity to deliver feedback directly during students’ presentations on aspects such as eye contact and use of voice. The second study explored the value of qualitative computer-mediated delayed feedback messages from students’ perspectives, since this factor can be considered a crucial intermediate variable for encouraging or inhibiting students’ presentation competence development (Van Ginkel et al., 2015). Based on these insights, follow-up studies should be formulated focusing on the role of VR feedback for peer learning, which will be used to construct a future research agenda on peer learning in the field of presentation education.

Regarding the first field experiment, the effects of immediate computer-mediated feedback in VR were tested by comparing the impact of immediate feedback on students’ presentation development with a control group receiving delayed expert-mediated feedback in a realistic presentation course setting. The target aspects were eye contact and speech pace, since these components of non-verbal communication are frequently selected by students when formulating personal learning goals in secondary and higher education presentation curricula. Immediate feedback for eye contact was provided by means of time icons, displayed by the VR system, that appeared if the eye contact of the speaker began to linger. For example, if the presenter focused for more than five seconds on their slides, the icon, projected in VR, turned red, advising the student to re-focus their eye contact and re-engage their audience. For speech pace, a comparable icon was used to inform the speaker to slow down if their speech rate exceeded 160 words per minute. These thresholds are based on the validation of a presentation rubric in the scientific literature (Van Ginkel et al., 2017c). The results of the experiment revealed no difference in impact on presentation performance between the immediate feedback and expert feedback conditions. Further, students characterized the VR environment as an effective and motivating platform for practicing presentation skills. Findings from this study expand the opportunities for students to use immediate feedback as an alternative form of feedback, for example in peer feedback, for their presentation skills development. Moreover, adopting such a type of feedback in education, without making use of experts, could result in less pressure on resources, including time and staffing (Van Ginkel et al., 2020).
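The immediate-feedback rules described above amount to two simple threshold checks. The following hedged sketch restates only what the study reports (the five-second gaze limit and the 160 words-per-minute pace limit); the function names and icon texts are ours, as the VR system’s actual implementation is not published:

```python
# Reported thresholds from the study; only these two numbers are
# taken from the source, the rest of this sketch is illustrative.
SLIDE_GAZE_LIMIT_S = 5       # red icon after 5 s of looking at the slides
SPEECH_PACE_LIMIT_WPM = 160  # slow-down icon above 160 words per minute

def eye_contact_icon(seconds_on_slides: float) -> str:
    """Return the state of the eye-contact cue shown to the presenter."""
    if seconds_on_slides > SLIDE_GAZE_LIMIT_S:
        return "red: re-engage audience"
    return "ok"

def pace_icon(words_per_minute: float) -> str:
    """Return the state of the speech-pace cue shown to the presenter."""
    if words_per_minute > SPEECH_PACE_LIMIT_WPM:
        return "slow down"
    return "ok"

print(eye_contact_icon(6.2))  # gaze lingered on the slides too long
print(pace_icon(148))         # pace within the advised range
```

The point of the sketch is that such cues require no expert in the loop: the system itself evaluates the behavioural signal against a validated threshold and renders the cue in real time.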

Besides insights considering the value of immediate feedback in VR for students’ learning, recent technological and pedagogical developments allow for composing qualitative delayed feedback messages based on the quantitative feedback reports previously produced by the VR system in presentation education (see Van Ginkel et al., 2019). The conversion of quantitative feedback, which had to be interpreted by an expert, into qualitative feedback messages suggests that expert intervention may no longer be needed and that students could interpret the feedback messages individually or with their peers. Consequently, a preliminary study involving 27 university students explored the perceived value of automated, qualitative feedback messages in a VR system for developing students’ presentation skills (Sichterman et al., 2021). In this experimental study, students’ perceptions of the qualitative automated feedback messages (i.e. the experimental condition) were compared with a situation in which quantitative feedback reports were produced by the VR system and interpreted by an expert (i.e. the control condition). The feedback messages in the experimental condition were formulated by adopting (1) the seven feedback quality criteria explained earlier in this chapter (Van Ginkel et al., 2017b) and (2) two crucial presentation criteria for non-verbal behaviour, relating to eye contact and use of voice, as emphasized in a previously validated rubric for oral presentation skills (Van Ginkel et al., 2017c).

Considering students’ perceptions of feedback within this VR experiment, the following groups of items were selected: (1) aspects regarding the value of feedback (such as the perceived relevance of feedback, sensitivity of feedback and quality of the feedback messages) and (2) aspects regarding students’ development of presentation skills after receiving computer-mediated delayed feedback (such as perception of competence, presentation anxiety and attitude towards presenting). Starting with the perception of feedback, students highly appreciated the relevance of the feedback they received in both the experimental group (M = 4.01, SD = 0.79) and the control group (M = 4.00, SD = 0.80); no differences between the conditions were found (t(25) = 0.05, p = 0.96). Further, students also perceived the feedback they received as constructive and non-confrontational, encompassing students’ feedback sensitivity, in both the experimental (M = 4.03, SD = 0.58) and the control condition (M = 4.02, SD = 0.59). Again, no significant differences were determined between the two groups (t(25) = 0.06, p = 0.96). Further, the quality of feedback was highly appreciated on six out of the seven quality criteria of feedback in both conditions, without significant differences (see Table 7.1). However, the feedback criterion relating to ‘opportunities to bridge the gap between the actual and desired performance’ was scored lower than ‘4.0’ in both conditions and can therefore not be considered ‘sufficient’ (Van Ginkel et al., 2017c). Despite the lack of differences between the conditions, the scores on this feedback criterion were relatively lower both in the qualitative VR feedback condition (M = 3.00, SD = 1.23) and in the quantitative VR feedback condition (M = 3.64, SD = 1.12).
This might suggest that in follow-up experiments specific attention should be devoted not only to how feedback on the actual presentation behaviour is provided, but especially to how feedback messages can be constructed in such a manner that they support strategies for developing presentation performance towards the ideal or desired presentation behaviour.

Table 7.1 Mean scores, SDs and N related to closed questions (5-point Likert scale) about perceptions of the feedback quality for students within the control condition (intervention expert) and the experimental condition (no intervention expert)

Subsequently, students perceived their own development of presentation skills as more than sufficient, as revealed by the scores in the qualitative VR feedback condition (M = 6.57, SD = 1.27) and the quantitative VR condition (M = 5.75, SD = 1.36). Although no significant differences between these conditions were found on this perception of presentation skills (t(25) = 1.61, p = 0.12), interestingly, differences exist between the two groups for the component of presentation anxiety. Within the no-expert-intervention condition with qualitative feedback messages, students scored significantly lower on their perceived presentation anxiety (t(25) = −2.24, p = 0.034) after training in VR (M = 2.37, SD = 0.69) in comparison to the expert intervention condition with quantitative feedback reports (M = 3.08, SD = 0.92). This could be explained by the notion that students experience more pressure and stage fright after receiving feedback from a teacher. These findings might therefore suggest that training in VR, while receiving automated feedback without the intervention of an expert, can be considered an effective strategy for reducing presentation anxiety in the stage of rehearsing speeches before presenting in front of real audiences and receiving feedback from experts. However, it remains questionable whether students experience similar levels of anxiety when peers are involved in the feedback process. Therefore, future studies will be undertaken in this area.

Another significant difference in this preliminary research was found between students of different domains regarding their attitude towards presenting (F(3, 23) = 3.86, p = 0.022), which includes students’ perception of the relevance of acquiring presentation skills and their motivation to train these skills. A Tukey post hoc test revealed that attitudes towards presenting were significantly lower for students within the ICT domain (M = 3.56, SD = 0.92) compared to students within the educational and pedagogy domains (M = 3.73, SD = 0.60). This difference between the domains (see also Table 7.2) might reflect that technical curricula focus more on teaching domain-specific skills than on integrating soft skills, such as presentation competencies, in their educational programs (e.g. Belboukhaddaoui & Van Ginkel, 2019). However, several recent studies in presentation research describe developing presentation skills in technical curricula (e.g. Mitrovic et al., 2017; Mohamed et al., 2015). Another argument for the lower attitudes amongst technical students might relate to the idea that technical students naturally possess fewer communication competencies in comparison to students from non-technical curricula. Since there is a lack of evidence in empirical presentation studies regarding this issue, more research is needed on (1) the integration of presentation environments in technical curricula and (2) the role of students’ traits, prior competencies and perceptions towards presenting in relation to presentation performance (see also Van Ginkel et al., 2015).

Table 7.2 Mean scores, SDs and N related to closed questions about students’ attitudes towards presenting for different educational domains

In retrospect, besides students’ varying perceptions of presentation anxiety and attitude towards presenting depending on conditions and/or domains, students appreciated the value and relevance of the feedback they received in both the non-expert and the expert intervention condition. In follow-up projects, insights from these studies are being used to compose and construct feedback messages for analyzing and evaluating ‘posture and gestures’ in presentation education, since this is regarded as another essential component of non-verbal communication in presentations (Van Ginkel et al., 2015). VR technologies can support the provision of feedback on eye contact and use of voice; for monitoring body language, however, Artificial Intelligence (AI) technologies are better suited to capture presenters’ detailed postures and gestures. Therefore, a current project focuses on constructing a smartphone application that supports students’ development of posture and gestures independently of time and place. By using AI technology, data about body language are converted into automatically generated feedback messages that support students in their presentation development (see Fig. 7.1). Moreover, this application, entitled Honest Mirror, aims to meet design criteria regarding scalability, mobility, effectiveness and adoption in education. In order to guarantee the effectiveness of the app, the lessons learned from the earlier discussed VR studies are used for composing automatically generated feedback messages. Therefore, validated effective and ineffective postures and gestures were selected from the presentation literature (Schneider et al., 2017). Further, criteria for effective feedback in presentation research were adopted for constructing effective feedback messages (Van Ginkel et al., 2017a, 2017b). An example of such a message is: “You used your hands during your presentation. If used effectively, this can reinforce the message. Still, in a subsequent presentation you could try not to put your hands in your pockets. This can come across to the audience as casual and uninterested. Therefore, try to keep your hands relaxed next to the body or use supporting gestures to convey a message more powerfully. In that case, make sure you have open hands to make those gestures possible.” In order to encourage the adoption of this app in education as an alternative feedback source in peer learning, it will be published open source and connected to the previously constructed VR system, which is already adopted in higher education presentation curricula.

Fig. 7.1
The feedback model of the AI-driven app fostering students’ body language during presentation rehearsals: a student’s presentation is recorded with a smartphone, a neural network analyzes the video and compares the detected postures and gestures against effective and ineffective examples, and the application provides personalized, automated feedback on the delivered postures and gestures
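The final step of this pipeline, mapping detected body language to a feedback message, can be sketched as a simple rule table. The snippet below is purely illustrative: the keypoint format (normalized x, y coordinates per joint, as produced by typical pose-estimation networks), the distance threshold and the rules are our assumptions, not the Honest Mirror app’s actual implementation.

```python
def posture_feedback(keypoints):
    """Map pose keypoints to a feedback message (illustrative rules only).

    keypoints: dict of joint name -> (x, y), normalized to the frame.
    """
    lw, rw = keypoints["left_wrist"], keypoints["right_wrist"]
    lh, rh = keypoints["left_hip"], keypoints["right_hip"]

    def near(a, b, tol=0.05):
        # Two points count as 'near' if both coordinates differ by < tol.
        return abs(a[0] - b[0]) < tol and abs(a[1] - b[1]) < tol

    # Heuristic: both wrists resting at the hips suggests hands in pockets.
    if near(lw, lh) and near(rw, rh):
        return ("Your hands stayed close to your hips or pockets. "
                "Try keeping them relaxed next to your body, or use "
                "open-handed gestures to reinforce your message.")
    return ("You used supporting gestures; if used effectively, "
            "this can reinforce the message.")

# Example frame: both wrists resting at the hips ('hands in pockets').
frame = {
    "left_wrist": (0.40, 0.55), "right_wrist": (0.60, 0.55),
    "left_hip": (0.41, 0.56), "right_hip": (0.59, 0.54),
}
print(posture_feedback(frame))
```

In the actual app, the rule conditions would be derived from the validated effective and ineffective postures selected from the presentation literature, and the messages would follow the adopted feedback quality criteria.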

7 A Future Research Agenda on Computer-Mediated Feedback for Peer Learning in Presentation Research

After synthesizing various review and empirical publications in the field of presentation competence development, it can be concluded that peer learning is considered one of the crucial educational design principles for developing students’ public speaking performance in higher education. However, peer feedback is not yet as effective as teacher feedback, due to a lack of feedback quality in peer learning within current presentation curricula. From an educational technology point of view, VR technologies are regarded as valuable alternative feedback sources, since they can provide effective feedback comparable to teacher or expert feedback. However, when adopting VR technologies in presentation education, the role of teachers in guiding students and guaranteeing high levels of feedback quality should not be underestimated. Nevertheless, recent VR studies reveal that immediate feedback, without any support from teachers, is as effective as delayed feedback explained by teachers. Further, other studies revealed that computer-mediated delayed feedback messages, provided within VR systems without the support of teachers, are perceived as constructive and valuable by higher education students.

From a scientific perspective, based on synthesizing the literature in the field of presenting and feedback, the insights from this chapter might further refine the educational design principle regarding peer learning for developing students’ oral presentation competence, since empirical evidence from recent studies emphasized the value, in students’ perceptions, of computer-mediated delayed feedback messages within VR. However, it remains questionable to what extent combining different forms of feedback, such as immediate and delayed feedback, and combining different technologies, such as VR and AI, could further optimize the effectiveness of peer learning for developing varying aspects of oral presentation competence. Especially combining VR and AI could support the provision of such feedback messages on the most crucial non-verbal communication aspects: eye contact and use of voice (both supported by VR) and posture and gestures (supported by AI).

From an educational practice perspective, developing, testing and optimizing computer-mediated feedback messages with innovative technologies in presentation education might relieve the pressure on teachers in providing effective and efficient presentation courses, since such feedback opportunities might increase the value of peer feedback while reserving teacher feedback for those stages of the learning process, or those specific learning objectives of students, where it is needed most. In line with supporting students’ learning processes, and even with educating teachers, UNESCO emphasized the adoption of VR and AI technologies as crucial in the light of the global teacher shortage (Adubra et al., 2019; Parmigiani et al., 2020). If learners are able to individually interpret feedback messages without the intervention of a teacher, this could enrich the quality of feedback in peer and self-learning and further support students’ development of a wide range of academic, communication, digital literacy and domain-specific competencies.

However, several limitations of the earlier discussed studies remain, which should be critically taken into consideration when constructing a future research agenda on computer-mediated feedback in VR for improving peer learning in presentation research. First, although recent studies revealed positive perceptions of computer-mediated feedback messages with regard to the relevance and value of feedback for students’ learning processes, it is questionable to what extent these feedback messages are also effective for developing presentation performance. Second, although previous studies revealed effects of innovative feedback technologies, such as VR or AI, on developing public speaking competencies, the sample sizes of these studies are relatively small. Experimental follow-up studies should therefore include larger numbers of students in order to detect significant effects on presentation development or potential differences between VR and teacher intervention conditions. Third, most publications on feedback in VR contexts fostering presentation competencies report on relatively short-term experiments. In line with this, it remains questionable what the effects of peer or self-learning in VR contexts are in the long term, when students have the opportunity to rehearse their presentations several times in VR and to develop themselves based on computer-mediated feedback messages on multiple occasions.

A future research agenda on computer-mediated feedback for peer learning in presentation research should incorporate the following studies. First, an experimental study should be conducted on the effects of computer-mediated delayed feedback on developing students’ oral presentation competence. In such a study, the experimental condition should consist of students who individually interpret feedback messages in VR without the support of teachers, while in the control condition students learn from feedback messages that are interpreted and provided by teachers. Such a study should reveal not only whether students positively interpret the earlier constructed feedback messages, as suggested in previous empirical studies, but also to what extent these messages are effective for developing their presentation competencies. Second, a follow-up study should concentrate on the effects of adopting computer-mediated feedback messages in peer learning, in order to verify whether peer feedback can be optimized in terms of its effects on developing students’ presentation competencies. Previous studies revealed that the quality of peer feedback falls short of teacher feedback and of established feedback quality standards. However, it remains questionable whether peer feedback, supported by VR and AI technologies, could help to optimize this learning environment characteristic in presentation education. Such a study should also incorporate procedures of peer assessment that take into account the complexity of peer feedback processes, by integrating specific feedback stages that combine face-to-face and computer-mediated feedback in formative assessment (e.g. Baartman & Gulikers, 2017).
Third, another follow-up empirical study should follow students’ learning processes from a longitudinal perspective while they rehearse presentations in VR and/or with the support of AI, learn from interpreting feedback messages and formulate new learning objectives for presenting. As such, the results might reveal not only the possibilities of such technologies for peer and self-learning, but also provide insights into the sustainability of adopting AI technologies in higher education curricula in times when education is under pressure due to teacher shortages and in times of pandemics that force learners to optimize their learning processes by embracing online education.