
Technology and Feedback Design

  • Phillip Dawson
  • Michael Henderson
  • Tracii Ryan
  • Paige Mahoney
  • David Boud
  • Michael Phillips
  • Elizabeth Molloy

Abstract

This chapter provides a synthesis of recent research into how technology can support effective feedback. It begins by adopting a definition of feedback in line with recent advances in feedback research. Rather than viewing feedback as mere information provision, feedback is treated as an active process that students undertake using information from a variety of sources. The results of a systematic literature search into technology and feedback are then presented, structured around the parties involved in feedback: students, their peers, educators, and computers. The specific feedback technologies examined include digital recordings, bug in ear technologies, automated feedback, and intelligent tutoring systems. Based on this synthesis of the literature, the benefits, challenges, and design implications of key feedback technologies are presented. The chapter concludes with a discussion of improved feedback approaches that are likely to be enabled by technology in the future.

Keywords

Feedback · Educational technology · Modality · Learning

Introduction

Feedback about student learning is important, often misunderstood, and complex. Technology can enable current practices and offer new opportunities, but it can also complicate and challenge feedback. This chapter reviews the literature on the use of digital technology in student feedback practices and highlights established and emerging trends, as well as the diversity in approaches. These approaches are thematically organized according to the source of feedback comments, namely: educator, computer, peer, and self. Within these categories, however, there is a wide range of technology mediated feedback practices, from digital multimedia recordings and text annotations to intelligent tutors and student response systems. Overall, these approaches are reported to lead to positive student perceptions or other positive outcomes. However, this chapter also highlights a number of challenges for educators and educational designers who seek to implement these designs, and it concludes with consideration of future practices.

What Is Feedback?

Feedback is such a commonly used term in educational contexts that we might imagine it is clearly understood and used well. Unfortunately, this is not the case. This is particularly problematic in the context of feedback and digital technology, where the term is commonly, and unnecessarily, used quite differently in the fields of education and technology. From the point of view of education, feedback commonly refers to information provided to learners about their work by teachers or other agents. It is seen as an input into an educational process, which is then left in the hands of the learner to do with as they wish. Teachers may hope that the information provided is productively used, but there is little follow-through to track whether this happens. In the technology discourse, feedback is a process, not an input: it regulates a system, necessarily influencing the output of that system. Feedback has not occurred if the system is not influenced. Input without effect is not feedback; it is merely input.

This gap in how feedback is understood may partly explain why feedback in educational contexts has been subject to such relentless criticism by students. In higher education, feedback is often revealed as the number one concern of students across institutions, across disciplines, and over time. Students complain that they do not get enough information about their work, that what they do get is not useful, and that they do not receive it in a timely fashion (see Li & De Luca, 2014, for a review of assessment feedback).

Is there, then, some way of bridging this divide: a way of understanding feedback that is consistent with its longstanding use in technology and that offers useful directions for education? We suggest, firstly, that there is and, secondly, that we can build on this conception to establish ways of thinking about feedback in digital contexts that respect the fact that learners are humans with their own volition. An educational view of feedback must fit with this fact, rather than with a more technical view of the learner as merely one component of a technical system.

Consequently, we argue that many of the current feedback traditions in education should be challenged. We should critically consider dimensions such as the agency of the student, the ability to measure effects, feedback’s location in a learning sequence, feedback’s goals, and how information flows. Such a framework is offered by Boud and Molloy (2013) who described three ways of thinking about feedback which they labeled Feedback Mark 0, Feedback Mark 1, and Feedback Mark 2. Table 1 provides a succinct comparison of these conceptions. They called the first of these conceptions Mark 0 because they regarded it as having so little of the characteristics of feedback used in other disciplines that calling it feedback at all was a problem. Unfortunately, Mark 0 reflects the most common feedback practices in education. Feedback in such a view is initiated by teachers, it normally occurs at the end of a sequence of teaching following an occasion of assessment, there is no process to detect whether information provided has any effect, and student involvement in feedback as such is minimal. Students may independently choose to do something as a result of the information available, but that is not in this conception an integral and necessary part of the process called feedback.
Table 1

Comparison of feedback conceptions

Approach
  Feedback Mark 0: Conventional – teachers provide comments without monitoring effects
  Feedback Mark 1: Agentic – teachers monitor the effects of comments/inputs; the students’ role is to respond to teachers’ input
  Feedback Mark 2: Participatory – both students and teachers have the role of monitoring and responding to effects

Locus
  Feedback Mark 0: Teacher
  Feedback Mark 1: Teacher
  Feedback Mark 2: Teacher and learner

Features
  Feedback Mark 0: Taken-for-granted act of teacher/assessor
  Feedback Mark 1: Closed system (e.g., teacher and student)
  Feedback Mark 2: Open system (multiple sources of input); adaptive/responsive

Location
  Feedback Mark 0: At end of teaching sequence
  Feedback Mark 1: During learning
  Feedback Mark 2: During learning and beyond

Effects
  Feedback Mark 0: Effects not detected directly
  Feedback Mark 1: Effects monitored by teachers
  Feedback Mark 2: Effects monitored by teachers and learners

Learner involvement
  Feedback Mark 0: No student involvement needed
  Feedback Mark 1: Students respond to input from teachers
  Feedback Mark 2: Students respond, question, seek, and evaluate input

Information provided
  Feedback Mark 0: Information provided not influenced by effects
  Feedback Mark 1: Information provided changes in response to immediate effects
  Feedback Mark 2: Information provided changes in response to immediate and long-term effects

Goal
  Feedback Mark 0: Study improvement
  Feedback Mark 1: Task performance improvement
  Feedback Mark 2: Judgment performance improvement

The second of Boud and Molloy’s (2013) conceptions, called Feedback Mark 1, took key ideas of feedback as used in science and technology and applied them to educational contexts. In this conception, feedback is still driven by teachers or embedded in the learning management system, but it incorporates the fundamental idea of feedback as a process which necessarily leads to effects. In the case of learners, an effect would be some detectable change in their practices or learning outcomes. Feedback is not seen as an add-on at the end of a process of teaching and learning but as intrinsic to the learning process, leading to changes in what students do as they progress. These effects are monitored and the inputs varied in light of the effects. There is always a feedback loop in which the information provided to learners is designed to lead to some change in learning behavior, which is then monitored, and the inputs are changed to produce the desired effects. Feedback Mark 1 characterizes what is, or should be, commonplace in instructional design: system performance is referenced to effects on learners. The important move from Mark 0 to Mark 1 is the emphasis on effects and the necessary actions which learners must take if the process is to be identified as feedback.

Such a conception of feedback is not enough, however, to provide a robust basis on which to ground educational activity. The most important limitation is that it positions learners in a contingent relationship in which they have little volition: the system is modified to maximize outputs regardless of the desires of learners. They are supposed to learn despite themselves! How then could learners have a more active role in assessment without being reduced to just one element in a physical system? This concern led to Boud and Molloy’s third conception of feedback, Feedback Mark 2. The importance of effects and the feedback loop is retained from Mark 1, but the learner is placed in a more agentic position. A key element is that feedback in this conception is dialogic: information is exchanged as a two-way process between learner and teacher, students express a view about what they want to know, and information moves to and fro throughout the learning process. Feedback is not associated only with acts of assessment but is a key feature of the entire learning system. Effects are monitored by both teachers and learners through a learning management system. Student engagement is not an add-on but an intrinsic feature of the feedback process.

Consideration of Feedback Mark 2 led Boud and Molloy to define feedback in a more learner-centered way as:

a process whereby learners obtain information about their work in order to appreciate the similarities and differences between the appropriate standards for any given work, and the qualities of the work itself, in order to generate improved work. (Boud & Molloy, 2013, p. 6)

Boud and Molloy’s definition raises several interesting questions for educators, technologists, and educational designers: What information is most useful? How can learners best obtain the information, and from whom or what? How do learners come to know the applicable standards? What is quality and how is it best communicated? And finally, how can learners action the information for improvement? In addition to these questions, Boud and Molloy’s conceptions of Mark 0, 1, and 2 should also challenge us to question the role of the teachers and students in monitoring effects. These questions continue to be applicable when we consider the role of digital technologies. For instance, how might technologies enable access, change roles, mediate delivery, offer new ways of creating, manipulating, or experiencing the input, and enable the tracking of effects? Unfortunately, it is common for researchers and practitioners to take for granted what feedback is and to report their work accordingly without revealing their assumptions. Certainly, in the context of the literature review in this chapter, there was a predominance of studies that treated feedback as if it were solely an input and failed to track effects. Nevertheless, the review of technology enabled feedback practices can inform our designs, but at the same time should be critically appraised in light of how they help achieve feedback as defined above.

Literature Review

What is the current state of feedback with technology? What promising new feedback technologies are there, and how are they being incorporated into feedback approaches, especially those described above? To explore such questions, a structured review of the literature was conducted. The process involved in the literature review is described below, as are the key findings relating to the most commonly researched types of technology used in feedback design and delivery. We have focused on feedback about student learning, rather than feedback about teaching or curricula; however, we note overlaps in some sections where the two are closely related.

Method

Systematic literature searches were performed by two experienced researchers (TR and PM) between December 2016 and February 2017. The searches were conducted in three stages: the first stage aimed to establish the scope of the field, the second aimed to ascertain the validity of search terms, and the third stage refined the search results and identified likely articles.

Stage one searches were guided by themes proposed by the members of the research team experienced in feedback design and the use of technology (PD, MH, DB, MP, LM). The chosen themes were media effects and outcomes, issues of timing, artificial intelligence, automated feedback, peer feedback, peer assessment, systems managing feedback flow, self-feedback, self-assessment, academic integrity, and stealth feedback. Stage one searches were performed by one author (PM) using three databases which provide access to a large number of scientific academic articles from education researchers: ProQuest Education, ERIC, and PsycINFO. These initial searches were kept deliberately broad in order to get a sense of the data before more exhaustive and targeted secondary searches were conducted. All primary and secondary search terms were recorded in a spreadsheet, along with the number of results.

Prior to the second stage searches, four members of the research team (PD, MH, TR, DB) examined the spreadsheet of search results and identified viable topics for further searches. These topics centered on the use of technological tools in the creation, mediation, tracking, or experience of feedback “inputs,” or performance related information. The research team felt that these topics were most likely to result in a breadth of feedback designs in which technologies played a variety of roles in the creation or mediation of performance information. These topics were then inductively organized into clusters based on the source of the feedback “inputs” (i.e., educator-to-student, computer-to-student, peer-to-student, and self-feedback) and were used as the basis of the search terms for the secondary searches (see column 2 of Table 2).
Table 2

Search terms, number of results returned, and number of papers selected and reviewed

Educator-to-student feedback
  Stage 2 search terms (to determine main technology types): ab(feedback) AND ab(teacher OR educator) AND ab(technology OR online)
  Stage 2 results: 234

  Digital recordings
    Stage 3 search terms: ab(feedback) AND ab(audio OR video OR screencast OR multimodal OR podcast) NOT ab(peer) NOT ab(self) NOT ab(automated)
    Results: 521; abstracts selected: 30*; papers reviewed: 27

  Digital text
    Stage 3 search terms: ab(“electronic feedback” OR “online feedback” OR “digital feedback”) NOT ab(peer) NOT ab(self) NOT ab(automated)
    Results: 46; abstracts selected: 12; papers reviewed: 11

  Collaborative writing tools
    Stage 3 search terms: ab(feedback) AND (ab(“collaborative writing”) OR ab(wiki) OR ab(“google doc”)) NOT ab(peer) NOT ab(self) NOT ab(automated)
    Results: 39; abstracts selected: 8; papers reviewed: 6

  Bug in ear technology
    Stage 3 search terms: ab(feedback) AND ab(“in ear”) NOT ab(peer) NOT ab(self) NOT ab(automated)
    Results: 7; abstracts selected: 4; papers reviewed: 3

Computer-to-student feedback
  Stage 2 search terms: ab(feedback) AND ab(automated OR automation OR automatic)
  Stage 2 results: 81

  Computer assisted language learning
    Stage 3 search terms: ab(feedback) AND ab(“computer assisted language learning”)
    Results: 21; abstracts selected: 12; papers reviewed: 5

  Student response systems
    Stage 3 search terms: ab(feedback) AND ab(“response system” OR “feedback device”)
    Results: 17; abstracts selected: 8; papers reviewed: 3

  Automated feedback on MCQs
    Stage 3 search terms: ab(“automated feedback” OR “automatic feedback” OR “online feedback”) AND ab(quiz OR test OR exam)
    Results: 10; abstracts selected: 6; papers reviewed: 5

  Automated writing evaluation tools
    Stage 3 search terms: ab(feedback) AND ab(“automated writing”)
    Results: 9; abstracts selected: 8; papers reviewed: 4

  Intelligent tutors
    Stage 3 search terms: ab(feedback) AND ab(“intelligent tutor” OR “cognitive tutor”)
    Results: 5; abstracts selected: 4; papers reviewed: 3

Peer-to-student feedback
  Stage 2 search terms: ab(feedback) AND ab(peer) AND ab(technology OR online)
  Stage 2 results: 140

  Blogs and discussion boards
    Stage 3 search terms: ab(“peer feedback” OR “feedback by peer”) AND (ab(blog) OR (ab(journal) OR ab(“discussion board”) OR ab(forum)))
    Results: 29; abstracts selected: 9; papers reviewed: 7

  Collaborative writing software
    Stage 3 search terms: ab(“peer feedback” OR “feedback by peer”) AND (ab(“collaborative writing”) OR ab(wiki) OR ab(“google doc”))
    Results: 15; abstracts selected: 8; papers reviewed: 4

  Peer feedback software and tools
    Stage 3 search terms: ab(“peer feedback” OR “feedback by peer”) AND ab(software OR tool OR program OR application)
    Results: 62; abstracts selected: 9; papers reviewed: 4

Self-feedback
  Stage 2 search terms: ab((“self feedback” OR “self evaluation” OR “self assessment”) AND (technology OR online))
  Stage 2 results: 103

  Digital recordings
    Stage 3 search terms: ab(“self feedback” OR “self evaluation” OR “self assessment”) AND ab(audio OR video OR screencast OR multimodal OR podcast)
    Results: 55; abstracts selected: 13; papers reviewed: 9

  e-portfolios
    Stage 3 search terms: ab(“self feedback” OR “self evaluation” OR “self assessment”) AND ab(“e-portfolio” OR “web-based portfolio” OR “online portfolio” OR “digital portfolio”)
    Results: 5; abstracts selected: 5; papers reviewed: 4

NOTE: Stage one searches were performed using the ProQuest Education, ERIC, and PsycINFO databases; however, the results of these searches were extensive and are not provided here. Stage two searches were performed using the ProQuest Education database, while stage three searches were performed using the ProQuest Education and ERIC databases. Items marked with * indicate where the number of abstracts reviewed was limited because it was deemed that a saturation point had been reached.

Stage two searches were conducted by one author (TR) and were limited to peer-reviewed scholarly journal articles that were (a) written in English and (b) published between 1st January 2012 and 1st January 2017. These searches were guided by the need to establish which types of technology were used to provide feedback input by each of the four identified sources. To assist in the return of highly relevant research, searches were also restricted to articles that featured the search terms in the abstract, rather than anywhere in the entire document. In an effort to reduce the labor of sorting through the potential hundreds of abstracts that could have been returned for each topic, a pragmatic decision was made to limit the more focused secondary searches to one database. The ProQuest Education database was selected for this purpose, as it includes a vast catalogue of research focused on primary, secondary, and higher education. Abstracts were sorted by relevance and reviewed to assess (a) their relevance to the topic of interest and (b) the type of technology used in the feedback design. Search terms were then refined as necessary and recorded (see column 4 of Table 2). Search terms that did not result in the return of at least four articles published within the last 5 years were abandoned.

Stage three searches were conducted by two authors (TR and PM). These searches were performed on two databases simultaneously (ERIC and ProQuest Education); however, duplicate search records were omitted. The same search settings used in the stage two searches were applied again during stage three. Abstracts were sorted by relevance and read by at least one author to ascertain their relevance to the specific topic of interest. Although the search terms were designed to be as targeted as possible, not all of the search results were found to be relevant. As such, a decision was made to omit articles from further consideration if their abstracts did not fit within the scope of the search; namely, technology mediated feedback practices. The final column in the table provides the number of papers that met the search criteria, and actually informed the literature review. It should be noted that the search for educator generated digital recordings resulted in a particularly large number of papers. In this case (as indicated by an asterisk in Table 2), the papers were filtered as above, but only the first 30 papers that met the criteria were analyzed at the abstract level. It was deemed at the point of 30 abstracts that a saturation point had been achieved with regards to the main benefits, challenges, and design implications.

The papers that were selected for review were read in full. Some of these papers were then discarded as their findings were not empirically based. The key findings, particularly relating to the reported effectiveness (or not) of the design, and implications for future design were summarized. The following sections present a synthesis of those results, organized according to the source of feedback input and the form of technology practice as indicated in Table 2.

Educator to Student Feedback

The results of the initial searches indicated that digital recordings, digital text, collaborative writing tools, and bug in ear technology were the most commonly researched forms of technology used in feedback design.

Digital Recordings

As shown in Table 2, digital recordings were the most prevalent form of technology mediated feedback design to emerge from the literature review. The bulk of this research centers on the use of audio (e.g., Bourgault, Mundy, & Joshua, 2013; Carruthers et al., 2015; Cavanaugh & Song, 2014), video (e.g., Borup, West, & Thomas, 2015; Hawkins, Osborne, Schofield, Pournaras, & Chester, 2012), and screencast (e.g., Elola & Oskoz, 2016; Jones, Georghiades, & Gunson, 2012) recordings to provide asynchronous performance-related comments to students after submission of written assessment tasks.

Through using audio-visual media to deliver performance information to students, educators can provide detailed comments to students in a relatively short recording. It is generally argued that it is faster to communicate orally than it is through typing or writing (e.g., Denton, 2014; Orlando, 2016). Due to this affordance, educators tended to positively appraise the use of audio-visual media to provide performance information. For example, in a study comparing the use of text, audio, or screencast recordings to provide comments to students, Orlando (2016) discovered that four out of the six educators preferred using screencasts, two preferred audio, and none preferred using text. Other studies have reported that educators appreciate the increased efficiency afforded by recorded comments, indicating that the practice may be relatively sustainable compared to marking up electronic documents or writing handwritten comments on assessment tasks (Borup et al., 2015; Jonsson, 2013; Knauf, 2016; Morris & Chikwa, 2016; Portolese Dias & Trumpy, 2014). Interestingly, several studies have also shown that the content of recorded comments is more often focused on providing holistic suggestions for improvement, rather than the targeted and specific comments often seen in digital text-based comments (Borup et al., 2015; Cavanaugh & Song, 2014; Elola & Oskoz, 2016).

Performance information created using audio-visual digital recordings has been associated with enhanced student engagement (Hung, 2016; Morris & Chikwa, 2016; West & Turner, 2016) and performance (Denton, 2014; Elola & Oskoz, 2016). The majority of research confirms that students feel positively toward receiving audio-visual recordings from educators, finding the content to be individualized (Carruthers et al., 2015; Knauf, 2016) and detailed (Gould & Day, 2013; Jonsson, 2013; Morris & Chikwa, 2016). In studies that have directly compared audio-visual recorded comments with text, students generally have a strong preference for the former (Chew, 2014; Johnson & Cooke, 2016; McCarthy, 2015; Moore & Wallace, 2012; West & Turner, 2016). They also perceive recorded comments to be more supportive (Borup et al., 2015; Gould & Day, 2013), personal (Gould & Day, 2013; Knauf, 2016; Mathieson, 2012; West & Turner, 2016), and easy to understand (Bourgault et al., 2013; Turner & West, 2013) than text. On the other hand, some students are initially skeptical about receiving performance information in this way (Fawcett & Oldfield, 2016; Henderson & Phillips, 2015), while others note that text-based comments can be more efficient to scan through than digital recordings (Borup et al., 2015; Morris & Chikwa, 2016). This is because it is often necessary to listen to or watch a full recording to find the relevant information.

Overwhelmingly, students recognize that audio-visual recordings of performance information are personal and supportive; therefore, this modality of feedback can be highly effective in educational contexts in which the affective relationship between students and educators needs bolstering. Audio-visual media facilitate the communication of rich cues like tone and expression (Cavanaugh & Song, 2014), which allows educators to provide more enriched performance information than they can with text (West & Turner, 2016). Educators also tend to communicate in their recordings using a more personal and informal style, and students appreciate their teachers relating to them in a relaxed manner (Borup et al., 2015). Furthermore, many students hold the opinion that audio-visual feedback recordings reflect a greater investment of time and effort by the educator than text comments (Anson, 2015; Chew, 2014), even though the opposite is generally true (Knauf, 2016). For example, in a study of 99 students, Portolese Dias and Trumpy (2014) found that those who received screencast feedback were more likely to believe that their instructor had genuine concern for their learning than those who only received text comments. This may be because students interpret the increased level of detail as reflecting a deeper level of care from educators. As such, this modality may be particularly advantageous when students and educators are presented with limited opportunities for face-to-face dialogue, such as at the beginning of the year or in courses that involve online instruction (Anson, 2015; Borup et al., 2015).

Accessibility is one of the key design considerations when creating audio-visual recordings of performance information (Orlando, 2016). It is therefore recommended that educators create recordings that are of a manageable size for students to receive and download, and in a format that students can open without having to install additional applications. McCarthy (2015) recommends using programs that offer the ability to export to mp3 for audio and mp4 for video. Assuming the recordings are not excessively long (3–5 min is recommended), these formats compress files to a sharable size without significant loss of quality. They can also be opened automatically by native applications on most computers, smartphones, and tablets. Small files, such as audio recordings, can be sent to the student via email (Bourgault et al., 2013; Munro & Hollingworth, 2014) or returned within an electronic copy of students’ assignments (Orlando, 2016). Richer forms of audio-visual media (e.g., video and screencasts) can be shared using a video hosting website (Mathieson, 2012) or a virtual learning environment (Carruthers et al., 2015; Henderson & Phillips, 2014; Jones et al., 2012; Knauf, 2016; McCarthy, 2015).
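For educators who wish to automate the compression step described above, the following sketch shows one way it might be scripted. It is illustrative only and is not drawn from the reviewed literature: it assumes the open-source ffmpeg command-line tool is installed, and the folder names, file formats, and encoding settings are hypothetical examples of producing shareable mp4 and mp3 files.

```python
# Illustrative sketch only: batch-compress raw feedback recordings into
# shareable mp4/mp3 files. Assumes the open-source ffmpeg command-line tool is
# installed; folder names and encoding settings are hypothetical examples.
import subprocess
from pathlib import Path

RAW_DIR = Path("raw_feedback")
OUT_DIR = Path("to_return")

def compress_video(source: Path, target: Path) -> None:
    """Re-encode a screencast or video recording to H.264/AAC mp4."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(source),
         "-c:v", "libx264", "-crf", "28",   # higher CRF = smaller file
         "-c:a", "aac", "-b:a", "96k",
         str(target)],
        check=True,
    )

def compress_audio(source: Path, target: Path) -> None:
    """Re-encode an audio comment to mp3 at a voice-appropriate bitrate."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(source),
         "-c:a", "libmp3lame", "-b:a", "96k", str(target)],
        check=True,
    )

if __name__ == "__main__":
    OUT_DIR.mkdir(exist_ok=True)
    for raw in RAW_DIR.glob("*.mov"):
        compress_video(raw, OUT_DIR / (raw.stem + ".mp4"))
    for raw in RAW_DIR.glob("*.wav"):
        compress_audio(raw, OUT_DIR / (raw.stem + ".mp3"))
```

A batch step of this kind keeps recording lengths and file sizes within the limits recommended above before the files are uploaded to the virtual learning environment.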

The method used to return the recordings is also an important factor. As Orlando (2016) notes, embedding the recordings directly into the relevant section of the assessment task has the benefit of allowing students to easily connect comments to the specific section of the work to which they refer. Of course, this is only possible with smaller files, such as audio. For larger files, it may be most beneficial to upload to the virtual learning environment, as this allows students to store their feedback together with other course-related learning materials (Parkin, Hepplestone, Holden, Irwin, & Thorpe, 2012). It also avoids issues associated with using video hosting websites, such as potential breaches of privacy and security (Henderson & Phillips, 2014).

Digital Text

With an increasing number of written assessment tasks being submitted electronically, digital text has unsurprisingly become a common modality of technology mediated feedback used by educators (Chang et al., 2012). This is likely due to its simplicity and convenience; educators can employ online tools such as discussion boards and email to provide generalized comments to wider groups of students, or create digital text comments directly on a student’s electronic assessment tasks using easily accessible and user-friendly software such as word processing and PDF annotation programs. Furthermore, by utilizing simple tools such as tracked changes, sticky notes, comment boxes, or annotations, educators can link performance information directly to the applicable section on students’ assessment tasks (Beach, 2012). This leads to targeted and specific comments (Borup et al., 2015), which may aid in comprehension and enable students to take the information on board more readily.

Research suggests that most students are comfortable receiving digital text-based comments on their written work, as it aligns with their prior experiences and expectations of feedback (McCarthy, 2015). To compare student preferences for handwritten or digital text comments, Chang et al. (2012) recruited 250 undergraduate students to complete an online survey. The majority of students preferred digital text over handwritten comments, citing reasons such as timeliness, enhanced accessibility, and legibility in their open-ended responses. However, in a similar comparison study, Sopina and McNeill (2015) surveyed 335 first-year students who received performance information on subsequent assessment tasks via handwritten comments and digital text. Their results indicated that students were more satisfied with digital text than handwritten comments when it came to timeliness of return, but there were no significant differences in satisfaction with quality or format.

The use of digital text comments can be a timely and highly accessible method of providing performance information to students, particularly in comparison to handwritten comments. Digital text offers students the convenience of being able to access performance related comments on any personal computing device quickly and easily, no matter where they are located (Borup et al., 2015). Students can then store the comments permanently on their own devices, or on the university’s learning management systems (Parkin et al., 2012). Educators also appreciate the benefits of digital text-based comments; in a study by Borup et al. (2015), teaching staff noted that digital text provides the ability to easily review and edit comments, as well as the flexibility to complete assessment duties off site (assuming they have a portable computing device). However, despite the convenience of digital text-based comments, it is perhaps not the most efficient means of providing technology mediated performance information (see the audio-visual media subsection for more information on this topic).

To increase efficiency when creating digital text comments, some educators utilize electronic rubrics (Gabaudan, 2013) or statement banks (Borup et al., 2015; Denton & Rowe, 2015), as these modalities avoid the need to type similar comments repeatedly. Statement banks can be created by educators themselves using word-processing software (Leibold & Schwarz, 2015) or with the help of digital mark-up tools such as GradeMark® (Watkins et al., 2014). However, students tend to prefer feedback that offers a high level of detail and personalization, and this is not always possible when providing “one-size-fits-all” comments (Denton & Rowe, 2015). In addition, statement banks and electronic rubrics may be most appropriate for tasks in which there is a clear or model answer, rather than more complex and open-ended forms of assessment, especially those where the criteria involve considerable tacit knowledge. For those types of assessment tasks, audio-visual media may offer a better alternative, providing rich and detailed information in a relatively short timeframe.
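As a purely hypothetical illustration of how a statement bank works in practice (not a depiction of GradeMark® or any other tool mentioned above), the sketch below stores reusable comments under short codes and combines them with an individualized remark, echoing the tension between efficiency and personalization noted by Denton and Rowe (2015). The codes and comment wording are invented.

```python
# Minimal illustration of a statement bank: reusable comments keyed by short
# codes, combined with a personalized remark for each student. The codes and
# comment wording are hypothetical.
STATEMENT_BANK = {
    "REF1": "Check the formatting of your reference list against the style guide.",
    "ARG2": "Your argument would be stronger with evidence from the literature.",
    "STR3": "Consider using topic sentences to signpost each paragraph.",
}

def build_feedback(codes, personal_note):
    """Expand statement-bank codes and append an individualized comment."""
    comments = [STATEMENT_BANK[code] for code in codes if code in STATEMENT_BANK]
    comments.append(personal_note)
    return "\n".join(f"- {c}" for c in comments)

print(build_feedback(
    ["REF1", "ARG2"],
    "Your case study section shows real insight into the client's needs.",
))
```

The final personalized line is the part students most value, so a bank of this kind is best treated as a time-saver for routine points rather than a replacement for individualized comments.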

Collaborative Writing Tools

Collaborative writing tasks are commonly used for educational purposes (Mauri, Ginesta, & Rochera, 2014), often with the goal of students constructing knowledge by engaging in the mutual exchange of opinions, concepts, and thoughts (Alvarez, Espasa, & Guasch, 2012; Zheng, Lawrence, Warschauer, & Lin, 2014). It has been argued that this process is valuable, as it can enhance the role of students as active learners, such as through reflection on their own ideas and abilities (Zheng et al., 2014). While various technological tools support the act of collaborative writing, the literature in this review relating to educator-provided performance information has primarily focused on the use of wikis (Eddy & Lawrence, 2012; Rott & Weber, 2013).

Wikis are web-based platforms that allow multiple users to author and edit written content, share files, and post multimedia content, either synchronously or asynchronously. In general, wikis allow various levels of privacy so that the content can either be viewable to the public or restricted to a specified group of users (Eddy & Lawrence, 2012). As Israel and Moshirnia (2012) point out, wikis are highly appropriate for use in educational assessment as they are designed to be user-friendly and flexible, and they allow students to act as both author and reviewer. In particular, wikis may be beneficial for engaging students in the process of authentic assessment, such as building informational resources for the public or clients (Eddy & Lawrence, 2012). Students generally perceive wikis to be easy to use and agree that they are a useful way to learn information (Israel & Moshirnia, 2012).

Due to their collaborative design, wikis are most commonly used in tasks incorporating peer feedback. As such, their utility as a tool for feedback design will be discussed in more detail in a later subsection. However, when it comes to educator to student feedback, it should be noted that wikis provide a potentially valuable platform for effective feedback processes. For example, the built-in editing tools allow educators to offer formative comments directly onto the relevant sections of the wiki, both during and after completion (Eddy & Lawrence, 2012). Students can then reflect on and respond to the performance information they receive from educators. Furthermore, wikis allow educators to view the entire history of author modifications, making it possible to monitor student progression and improvements over time. For group-based work, educators can also view each individual student’s degree of contribution over time. When using wikis to provide performance information to students, Eddy and Lawrence (2012) recommend that educators design checkpoints at which they monitor student progress and provide formative comments throughout the creation of the wiki, as this can enhance the feedback process.
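The monitoring that Eddy and Lawrence (2012) describe relies on the revision history that wiki platforms record. The sketch below is a hypothetical illustration of how such a history might be summarized per student; the record format and field names are invented and do not reflect any particular wiki's export or API.

```python
# Hypothetical illustration: summarize per-student contributions from a wiki's
# revision history. The revision records and field names are invented for the
# example; real platforms expose this information in their own formats.
from collections import defaultdict
from datetime import date

revisions = [
    {"author": "student_a", "date": date(2017, 3, 1), "chars_added": 420},
    {"author": "student_b", "date": date(2017, 3, 2), "chars_added": 150},
    {"author": "student_a", "date": date(2017, 3, 5), "chars_added": 310},
]

totals = defaultdict(int)   # characters added per author
edits = defaultdict(int)    # number of edits per author
for rev in revisions:
    totals[rev["author"]] += rev["chars_added"]
    edits[rev["author"]] += 1

for author in sorted(totals):
    print(f"{author}: {edits[author]} edits, {totals[author]} characters added")
```

A summary like this could be generated at each checkpoint so that formative comments address both the quality of the wiki and the balance of contributions within the group.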

Bug in Ear Technology

This form of technology has historically involved the use of a two-way communication device, such as a radio transmitter or a Bluetooth communication device, placed in the ear of the student. This is coupled with a microphone used by an instructor who is observing the student, either from a distance within the same room or via webcam from a remote location (Gibson & Musti-Rao, 2015; Rock et al., 2012).

While the feedback modalities discussed above have primarily focused on technology that aids in the delivery of asynchronous performance information on written assessment tasks, the use of bug in ear technology is more appropriate for the provision of real-time comments on certain performance or skill-based assessments. Much of the literature relating to the use of bug in ear technology appears to be focused on training preservice teachers as they work in the classroom (Gibson & Musti-Rao, 2015; Kelly, O’Neil, & Kwon, 2014; Rock et al., 2012). As Gibson and Musti-Rao (2015) note, the provision of real-time performance information encourages preservice teachers to immediately correct erroneous teaching behaviors. This is a useful means of preventing errors from becoming a routine part of the teaching practice, which is a valuable component of teacher training (Gibson & Musti-Rao, 2015).

One interesting study of bug in ear technology was performed by Rock et al. (2012). These scholars designed an assessment process whereby supervisors used a webcam and microphone to provide real-time feedback to 13 Master of Teaching students as they were working in the classroom. Supervisors viewed the performance of students in real time from a remote location (e.g., their offices) using the Skype videoconferencing program. The teachers-in-training ran Skype on a webcam-enabled computer within their classrooms and wore a Bluetooth-enabled earpiece, which gave them the freedom to walk around while still being able to hear their supervisor’s comments. Qualitative analysis of the student teachers’ written reflections revealed that they all highly valued the method by which they had obtained real-time performance information and were able to articulate how the comments had helped them to reflect on and improve their in-classroom strategies and academic delivery. However, almost half of the teaching students also mentioned having technical troubles with the bug in ear technology during the process.

Based on the research, it seems that one of the main affordances of bug in ear technology is that it allows educators to provide real-time performance information as students work on tasks in situ. Students receive performance information in real time without the flow of the class being disrupted. The timing of these comments has been described as very powerful, as it helps students immediately understand how they can improve their performance. Bug in ear technology is also relatively cost effective, as educators can provide supervision from their own offices using readily available technology (Kelly et al., 2014). However, one of the drawbacks of bug in ear technology is that it is unlikely to be sustainable, especially in large classes, due to the amount of time required by educators to provide real-time feedback to multiple individual students. Furthermore, due to the multiple pieces of hardware and software needed to run such activities, there is a high risk of technical failure. On this topic, Gibson and Musti-Rao (2015) note that these types of technology are rapidly improving, which may make bug in ear technology a viable option for certain types of assessment in the future.

Computer to Student Feedback

Initial searches revealed five commonly researched forms of technology used to provide feedback from computers to students: computer assisted language learning software, student response systems, automated feedback on multiple choice quizzes, automated writing evaluation tools, and intelligent tutors. The subsections below expound on each of these topics.

Computer-Assisted Language Learning (CALL)

It is commonly agreed that language students need regular feedback on their written and spoken proficiencies (Ghahri, Hashamdar, & Mohamadi, 2015; Penning de Vries, Cucchiarini, Bodnar, Strik, & van Hout, 2015). However, opportunities to practice speaking and writing can be limited due to large class sizes and time constraints, and feedback may focus on meaning rather than accuracy, particularly when addressing spoken proficiency (Lee, Cheung, Wong, & Lee, 2013; Penning de Vries et al., 2015). Within this context, digital tools and software for language learning – collectively known as computer-assisted language learning (CALL) – have emerged as a possible means of improving students’ access to language practice opportunities.

CALL is a broad field of research and practice that encompasses a diverse range of digital technologies from email, and simple audio systems, to more complex voice recognition, digital games, and immersive learning environments such as virtual worlds. However, much of the current research has focused on web-hosted software which provides students with automated feedback on their language skills (Choi, 2016; Lee et al., 2013; Penning de Vries et al., 2015). This software uses automatic text analysis or speech recognition to offer students immediate feedback on areas such as content, structure, and grammar (Lee et al., 2013; Penning de Vries et al., 2015). CALL is often described as a convenient and flexible tool with which to practice written or spoken language skills, and its automated nature means students may practice as frequently and as intensively as they choose (Lee et al., 2013; Penning de Vries et al., 2015). It has also been reported that students feel less anxious about making language errors when using a CALL system than in a classroom or face-to-face context (Penning de Vries et al., 2015).

CALL is typically used to offer students formative feedback, rather than to conduct summative assessment (Lee et al., 2013). Text-based CALL systems can facilitate a range of language tasks and often provide automated feedback on drafts to allow students to revise their work prior to submission (Lee et al., 2013). CALL systems for early language learners allow students to complete short translation tasks by filling gaps in dialogue or building sentences (Choi, 2016). CALL systems providing feedback on spoken language proficiency may require students to respond to onscreen prompts, such as pronouncing a word, or to answer a question by assembling an assortment of sentence components (Penning de Vries et al., 2015; Wang & Young, 2015). Penning de Vries et al. (2015) note that for more complex spoken language tasks, limiting students’ possible responses can improve the accuracy of automated speech recognition. However, this also limits the potential for practicing complex language tasks. Depending on the sophistication of the software, the performance information provided by CALL systems varies from limited, corrective feedback to suggestions on content, structure, and elaboration (Lee et al., 2013; Penning de Vries et al., 2015).
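To illustrate the point made by Penning de Vries et al. (2015) about constraining learners' possible responses, the following sketch shows, in simplified and hypothetical form, how a CALL exercise might match a transcribed answer against a small set of expected responses. The prompts, answers, and feedback messages are invented, and real systems use far more sophisticated speech recognition and linguistic analysis than this string matching.

```python
# Illustrative sketch of the "constrained response" idea: once speech has been
# transcribed, the system only has to decide which of a small set of expected
# answers the learner attempted. Prompts, answers, and feedback are hypothetical.
import difflib

EXPECTED = {
    "je voudrais un cafe": "Correct! 'Je voudrais un cafe' politely orders a coffee.",
    "je veux un cafe": "Understandable, but 'je voudrais' is the more polite form.",
}

def evaluate(transcribed):
    """Match a transcription to the closest expected answer and return feedback."""
    match = difflib.get_close_matches(transcribed.lower(), EXPECTED.keys(),
                                      n=1, cutoff=0.6)
    if not match:
        return "I could not recognize that response. Try the model sentence again."
    return EXPECTED[match[0]]

print(evaluate("je voudrai un cafe"))
```

Restricting the answer space in this way is what makes immediate, automated feedback feasible, but, as noted above, it also limits the complexity of the language tasks that can be practiced.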

Students generally feel that CALL systems are beneficial to their language proficiency, perceiving them as helpful, easy to use, and motivating (Lee et al., 2013; Penning de Vries et al., 2015). However, evidence of the effectiveness of CALL systems on student learning outcomes is less conclusive. Studies assessing text-based CALL systems typically report significant improvements in student writing and language acquisition after using CALL (Choi, 2016; Lee et al., 2013). By contrast, preliminary findings on the efficacy of speech-based CALL systems suggest that CALL-facilitated speech practice may assist students’ pronunciation, but is no more beneficial to grammar development than students self-monitoring their spoken practice (Penning de Vries et al., 2015; Wang & Young, 2015).

CALL systems may also present users with technical and practical challenges. Insufficient or unstable Internet connections, or poor recording technology, can be particularly detrimental to the optimal operation of speech-based CALL systems (Penning de Vries et al., 2015). In addition, the automated nature of CALL systems means that feedback is typically targeted to specific areas – for instance, content rather than grammar – which can limit the usefulness of CALL (Lee et al., 2013). Lee et al. (2013) conclude that CALL systems should be used to supplement rather than supplant educator guidance and feedback. It is also recommended that students should be trained in how to use CALL systems and implement the resultant feedback; for example, students may be provided with revision strategies, examples, and opportunities to practice using the CALL system under educator supervision.

Student Response Systems (SRS)

Interactive student response systems (SRS) are commonly used in educational settings to obtain and collate students’ responses to a question or topic in real time (Klein & Kientz, 2013; Voelkel & Bennett, 2014). SRS can be used for a range of tasks, including recording attendance and tracking students’ participation frequency; however, SRS are most frequently used as a dual feedback mechanism (Chui, Martin, & Pike, 2013). Student responses provided in class via a SRS provide educators with a snapshot of students’ levels of knowledge and understanding of content, which allows them to instantly alter their teaching to address gaps in understanding (Chui et al., 2013; Klein & Kientz, 2013). In turn, students receive immediate feedback on their own understanding of content, enabling them to reflect on their own learning and identify areas for revision (Chui et al., 2013). This approach is beneficial, as it creates a real-time feedback loop between educator and student (Voelkel & Bennett, 2014).

While SRS are available in a variety of formats, perhaps the most common is the traditional “clicker.” This in-class response system involves the use of handheld devices, which students use to select their answer to a multiple-choice or open-ended question posed by their educator (Chui et al., 2013; Klein & Kientz, 2013; Voelkel & Bennett, 2014). Aggregated responses are then sent to the educator’s receiving device, and the educator may choose to display and discuss the distribution of the results with the class (Chui et al., 2013; Klein & Kientz, 2013). Educators’ questions and aggregated student results may be displayed on a webpage or embedded into common programs such as PowerPoint (Klein & Kientz, 2013; Voelkel & Bennett, 2014); however, the increasing prevalence of smartphones and wireless Internet access across educational contexts has seen alternatives to clicker devices emerge (Chui et al., 2013; Voelkel & Bennett, 2014). Web-based SRS such as Poll Everywhere allow students to respond via SMS, through online voting via a smartphone or laptop, or even via Twitter (Voelkel & Bennett, 2014). Such web-based SRS are low-cost and easy to use, requiring minimal training and setup time (Voelkel & Bennett, 2014).
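The aggregation step at the heart of an SRS can be illustrated with a minimal sketch. The example below is hypothetical (the question and responses are invented) and simply tallies submitted answers and prints the distribution an educator might display to the class.

```python
# Minimal sketch of the aggregation step in a student response system: collect
# each student's answer to a multiple-choice question and display the
# distribution to the class. The question and responses are hypothetical.
from collections import Counter

responses = ["A", "C", "C", "B", "C", "A", "D", "C", "B", "C"]
distribution = Counter(responses)
total = len(responses)

print("Which factor most limits enzyme activity at 60 degrees C?")
for option in "ABCD":
    count = distribution.get(option, 0)
    bar = "#" * count
    print(f"  {option}: {count:2d} ({count / total:4.0%}) {bar}")
```

Seeing the spread of answers in this form is what lets the educator decide, on the spot, whether to move on or to revisit the concept, closing the real-time feedback loop described above.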

Research suggests that students generally feel positively toward the use of SRS and consider it to be a valuable means of receiving feedback input. For example, students report that SRS are an engaging and thought-provoking learning tool, which allow them to learn more compared with non-SRS lectures (Klein & Kientz, 2013; Voelkel & Bennett, 2014). Students also feel more confident in their understanding after using SRS (Chui et al., 2013). While educator perceptions of SRS have received limited attention, one study found that web-based SRS were simple and quick to set up, easy for students to use, and offered good opportunities for student engagement and feedback (Voelkel & Bennett, 2014).

Findings relating to the effectiveness of SRS in improving student learning outcomes remain unclear. While some studies report that the use of SRS may improve student understanding and performance (for example, Klein & Kientz, 2013; Voelkel & Bennett, 2014) others suggest such gains may be temporary. For example, Chui et al. (2013) found that students who completed in-class SRS quizzes performed better than students who completed quizzes at the end of class and received feedback in the following lesson; however, overall course performance for both student cohorts remained similar. In addition, student rates of participation in SRS appear erratic; response rates may vary from 20% to 75%, with an average of 50% participation (Voelkel & Bennett, 2014).

It has also been noted that SMS voting may discourage students from participating due to the cost involved. Although researchers theorize that the increasing prevalence of mobile phone plans offering unlimited SMS may alleviate this difficulty, it is recommended that free SRS options such as online voting are prioritized (Voelkel & Bennett, 2014). Students also report that they may not always carry their smartphone or laptop, which may inhibit participation where web-based SRS are used (Voelkel & Bennett, 2014). Researchers also caution that students can potentially answer questions correctly without fully understanding the content, which may inculcate a false sense of confidence and lead to reduced study and effort (Chui et al., 2013).

Automated Feedback on Online Multiple Choice Questions

Educators are increasingly using online multiple choice questions (MCQs) to provide formative assessment in educational contexts (Marden, Ulman, Wilson, & Velan, 2013). Online MCQs are typically made available to students via learning management systems, such as Moodle or Blackboard, which offer simple inbuilt templates (Bälter, Enström, & Klingenberg, 2013; DePaolo & Wilkinson, 2014; Sancho-Vinuesa, Escudero-Viladoms, & Masià, 2013). MCQs are typically completed by students outside of class, without restrictions on the use of study aids such as lecture notes or textbooks, although some studies have explored the use of invigilated, closed-book online MCQs (Marden et al., 2013).

The online delivery of MCQs offers students flexible and convenient access (Bälter et al., 2013; Marden et al., 2013), while allowing them to receive immediate performance information through the automated marking and feedback process (Bälter et al., 2013). This may improve the efficiency and feasibility of formative feedback for educators, particularly for large cohorts (Marden et al., 2013). Online MCQs also offer a degree of flexibility with regard to feedback. For example, they allow educators to provide feedback of different types, including basic corrective indicators (i.e., correct/incorrect) (Bälter et al., 2013), generic comments that indicate possible errors (Sancho-Vinuesa & Viladoms, 2012), or longer and more detailed explanations or clarifications (DePaolo & Wilkinson, 2014). Online MCQs may also be designed to alert educators to students who have made repeated errors or numerous failed attempts to complete a quiz, helping them identify and support students who may be having difficulties with content (Bälter et al., 2013; Sancho-Vinuesa & Viladoms, 2012). Formative online MCQs generally allow students to test their knowledge of a topic by repeatedly retaking a quiz. Questions may be constructed around a set of variables to allow repeated attempts by students and to limit students sharing answers among themselves (Bälter et al., 2013).
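The idea of constructing questions around a set of variables can be illustrated with a short, hypothetical sketch: each attempt draws fresh values, so repeated attempts and shared answers are of limited use, while the automated corrective feedback remains consistent. The physics item below is invented for illustration and is not drawn from the studies cited above.

```python
# Hypothetical sketch of a parameterized quiz item: each attempt draws new
# values, so repeat attempts (and answer sharing between students) see
# different numbers while the automated feedback stays the same.
import random

def make_item(seed=None):
    rng = random.Random(seed)
    mass = rng.randint(2, 10)    # kg
    accel = rng.randint(2, 10)   # m/s^2
    return {
        "question": (f"A {mass} kg object accelerates at {accel} m/s^2. "
                     "What net force acts on it (in N)?"),
        "answer": mass * accel,
        "feedback_correct": "Correct: F = m x a.",
        "feedback_incorrect": (f"Not quite. Apply F = m x a with m = {mass} kg "
                               f"and a = {accel} m/s^2."),
    }

item = make_item()
print(item["question"])
response = float(input("Your answer: "))
print(item["feedback_correct"] if response == item["answer"]
      else item["feedback_incorrect"])
```

Learning management system quiz engines typically offer equivalent "calculated" question types, so a scripted approach like this is only needed where such templates are unavailable or insufficient.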

Students are generally positive about the use of online MCQs for feedback purposes and consider them to be challenging, motivating, and valuable study tools (Bälter et al., 2013; DePaolo & Wilkinson, 2014; Marden et al., 2013). Students particularly appreciate that MCQs can be completed multiple times to test their knowledge, and report using this function to revise for summative assessments such as exams (Bälter et al., 2013; Marden et al., 2013). Moreover, the regular use of online MCQs can positively impact students’ study habits, allowing them to gain confidence and insight into their own learning (Bälter et al., 2013).

Research suggests that online MCQs can potentially impact student approaches to learning. Sancho-Vinuesa and Viladoms (2012) found that students using online MCQs with generic automated feedback tended to adjust their use of MCQs in accordance with how difficult they had found a topic of study and that students who had made regular use of formative MCQs tended to pass the corresponding summative MCQs. There is emerging evidence that the regular use of formative, online MCQs can lead to improved learning outcomes among students and a reduced rate of students failing or dropping out of a course (Marden et al., 2013; Sancho-Vinuesa et al., 2013; Sancho-Vinuesa & Viladoms, 2012). Improved learning outcomes, such as end-of-semester exam results, are particularly associated with online MCQs which offer students unlimited attempts and are completed outside of class (Marden et al., 2013).

It is recommended that the formative nature of online MCQs is clearly communicated to students, to increase the likelihood that students will use MCQs to test their own knowledge. Marden et al. (2013) suggest that students should be advised to first attempt quizzes under exam conditions, in order to provide a realistic indication of their knowledge, as completing MCQs using study resources may lead to quiz scores which do not accurately reflect a student’s understanding of content. It is also recommended that students make a note of which questions they answer incorrectly so as to revise these topics later (Marden et al., 2013). In addition, educators may consider incentivizing participation in online MCQs by allocating a small percentage of credit for undertaking the quizzes (DePaolo & Wilkinson, 2014; Marden et al., 2013).

Automated Writing Evaluation Tools

First developed in the 1960s, automated systems for assessing student writing have primarily been used to score student work (Link, Dursun, Karakaya, & Hegelheimer, 2014). The last decade has seen the emergence of automated writing evaluation (AWE) tools which not only assess writing, but provide students with formative feedback on language components such as grammar and structure (Chapelle, Cotos, & Lee, 2015; Link et al., 2014; Ranalli, Link, & Chukharev-Hudilainen, 2017). Feedback generated by AWE systems is instant and specific to individual student submissions, and generally focuses on diagnosing sentence-level errors in language mechanisms. However, AWE tools aimed at providing feedback on discourse characteristics, such as components of an introduction, have also been developed (Chapelle et al., 2015). Recent research relating to AWE tools has largely emerged from language disciplines, particularly English as a second or foreign language, and indeed marketing of AWE tools has increasingly targeted language disciplines (Bai & Hu, 2017; Ranalli et al., 2017). It is suggested that AWE tools can support educators by providing feedback on sentence mechanics, enabling educators to address higher-level writing components such as content and audience awareness (Ranalli et al., 2017).

AWE tools are typically used to aid students in drafting and revising their written work (Chapelle et al., 2015). In particular, AWE tools may be used to assist students in developing a multi-stage writing process, as the automated system means students can receive feedback comments on multiple drafts before submitting their work for final assessment by their educator (Chapelle et al., 2015). AWE tools are usually web-based platforms which offer students flexible access and multiple opportunities to receive feedback on their work (Bai & Hu, 2017; Chapelle et al., 2015). Feedback comments provided by AWE tools can be in a number of forms, including a score, and can highlight errors in a generic formulation (e.g., “You may be using the wrong preposition”) or locate feedback specifically within a student’s work (e.g., “You have used quiet in this sentence. You may need to use quite instead”) (Link et al., 2014; Ranalli et al., 2017). Some AWE tools can also assess and provide numeric indicators for the presence of content such as relevance, vocabulary, and structure (Bai & Hu, 2017). Common AWE systems include Criterion and Pigai (Bai & Hu, 2017; Chapelle et al., 2015).
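The difference between generic and located feedback can be made concrete with a deliberately simple, hypothetical sketch. The single rule below (flagging "quiet" where "quite" may have been intended) is only a stand-in for the statistical and linguistic models that systems such as Criterion or Pigai actually use; the rule, messages, and sample text are invented.

```python
# Toy illustration of how an AWE tool can locate a comment within a student's
# text rather than give a generic message. The single rule below is a
# hypothetical stand-in for the models real systems use.
import re

RULES = [
    (re.compile(r"\b(quiet)\s+(good|difficult|interesting|clear)\b", re.IGNORECASE),
     "You have used 'quiet' in this sentence. You may need to use 'quite' instead."),
]

def located_feedback(text):
    """Return feedback comments anchored to positions in the student's text."""
    comments = []
    for pattern, message in RULES:
        for match in pattern.finditer(text):
            comments.append(f"Position {match.start()}: {message}")
    return comments

sample = ("The results were quiet interesting, although the method was "
          "quiet difficult to follow.")
for comment in located_feedback(sample):
    print(comment)
```

Even this toy example hints at why accuracy varies by error type: rule-based checks catch only the patterns they encode, which is one reason the literature above urges students to evaluate AWE suggestions critically.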

Research relating to AWE tools has primarily sought to establish the accuracy of feedback. AWE tools are typically found to offer acceptable overall levels of feedback accuracy (between 71% and 77%), although there are significant variations between error types, which raise concerns as to their usefulness (Bai & Hu, 2017; Ranalli et al., 2017). In particular, AWE feedback may fail to recognize common second language written errors, significantly undermining claims of AWE’s usefulness in language learning (Ranalli et al., 2017). In addition, students may have difficulty in correctly applying AWE feedback to their work and have been shown to disregard up to 50% of the feedback (Chapelle et al., 2015; Ranalli et al., 2017). However, Bai and Hu (2017) found that student uptake of AWE feedback generally corresponds with the accuracy of AWE corrections, suggesting that students critically evaluate automated feedback and apply it as they consider appropriate. Ranalli et al. (2017) contend that inaccuracies in AWE feedback may damage students’ confidence in AWE tools.

While research into student perceptions of AWE tools is limited, students generally consider AWE feedback on sentence mechanics and grammar to be helpful (Bai & Hu, 2017), while AWE feedback on discourse components is largely considered by students to be somewhat or mostly helpful in identifying discrepancies between intended meaning and written output (Chapelle et al., 2015). Educator perceptions of AWE tools are similarly mixed. While educators typically agree that AWE tools promote student autonomy by offering flexible access to feedback, they also consider the tools to be largely ineffective for providing sufficient levels of high-quality, reliable feedback on student writing (Link et al., 2014). Educators are particularly concerned that inaccurate AWE feedback can be confusing and even misleading for students (Link et al., 2014); however, they do recognize the utility of AWE tools as an out-of-class assistant and grammar checker and note that such tools may help reduce workload in some instances (Link et al., 2014).

It has been argued that comprehensive training in AWE systems and features is essential for both educators and students (Chapelle et al., 2015; Link et al., 2014). In-class learning activities are recommended to ensure students can receive assistance if they encounter difficulties with the AWE system (Link et al., 2014). In addition, it is recommended that students are advised of the potential limitations of AWE-generated feedback and encouraged to critically evaluate all feedback recommendations made by the system (Bai & Hu, 2017; Link et al., 2014). Educators seeking to integrate AWE feedback into their teaching also need to maintain a degree of caution, since accuracy varies with the context and complexity of the text. Consequently, AWE tools are recommended as a complement to educator or peer feedback, rather than a primary feedback source (Link et al., 2014). While these technologies will continue to improve, the current value of AWE lies in providing students with diagnostic feedback on language mechanics at a basic, sentence level (Ranalli et al., 2017).

Intelligent Tutoring Systems

Many educational tasks require direct attention from an educator, from marking written tests to providing one-to-one support to students. However, large class sizes, coupled with increasing time pressures and staffing costs, can limit the ability of educators to provide this personal support (Chu, Yang, Tseng, & Yang, 2014; Hung, Smith, & Smith, 2015). As computer technology develops, intelligent tutoring systems have emerged as a means of providing students with interactive, flexible, and focused personal learning support (Hung et al., 2015; Steif, Fu, & Kara, 2016). Such support is particularly valuable as one-to-one tutoring from an educator has been shown to improve student achievement (Chu et al., 2014).

Intelligent tutoring systems may guide students through a learning exercise or seek to diagnose learning difficulties and provide corrective feedback (Chu et al., 2014; Hung et al., 2015). While intelligent tutoring systems may be built around a number of approaches, recent research has focused on cognitive tutoring (Hung et al., 2015; Steif et al., 2016). Cognitive tutoring mechanisms use a model of cognitive behavior to interpret and evaluate student learning behaviors which take place within the tutoring system, typically centering on a problem-solving exercise (Chu et al., 2014; Hung et al., 2015). A significant criticism of cognitive tutoring systems has emerged from disciplines such as engineering and mathematics, which require students to undertake problem-solving tasks (Chu et al., 2014; Steif et al., 2016). While students may take any number of reasoning pathways to arrive at an answer (whether correct or incorrect), cognitive tutoring systems typically restrict students’ reasoning strategies by offering limited methods of solving a problem – for instance, offering pre-mapped intermediate steps in a calculation (Chu et al., 2014; Steif et al., 2016). However, some recent studies have investigated intelligent tutoring systems that reduce this limitation by relaxing restrictions on reasoning pathways, allowing students to integrate various strategies and even commit pathway errors (Chu et al., 2014; Steif et al., 2016).

The design of cognitive tutoring systems varies significantly between disciplines. Students may complete a series of calculations in a mathematics or engineering context or work through a set of interactive, problem-based scenarios (Chu et al., 2014; Hung et al., 2015; Steif et al., 2016). Cognitive tutoring systems typically provide students with immediate feedback, either when students submit an answer or at a series of preselected points in the system (Chu et al., 2014; Steif et al., 2016). Feedback may take a range of forms, from a diagnosis which highlights the cause of an operational error in a mathematical problem to corrective feedback and suggestions in a dialogic, scenario-based system (Chu et al., 2014; Hung et al., 2015). Cognitive tutoring systems may also prevent students from continuing in a program until errors are corrected (Steif et al., 2016).
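As a purely illustrative sketch, and not any of the systems examined in the cited studies, the following shows the general shape of the design described above: pre-mapped intermediate steps, immediate corrective feedback at each checkpoint, and blocked progression until the step is corrected.

```python
from dataclasses import dataclass

@dataclass
class Step:
    prompt: str
    expected: float
    hint: str  # corrective feedback shown when the answer is wrong

# Hypothetical tutor content: solve 2x + 6 = 10 via pre-mapped intermediate steps.
STEPS = [
    Step("Subtract 6 from both sides. What is the new right-hand side?", 4.0,
         "Check the subtraction: 10 - 6."),
    Step("Divide both sides by 2. What is x?", 2.0,
         "Check the division: 4 / 2."),
]

def check_step(step: Step, answer: float) -> tuple[bool, str]:
    """Return whether the learner may proceed, plus immediate feedback."""
    if abs(answer - step.expected) < 1e-9:
        return True, "Correct - continue to the next step."
    return False, step.hint  # the learner must retry before moving on

ok, feedback = check_step(STEPS[0], 5.0)
print(ok, feedback)  # False Check the subtraction: 10 - 6.
```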

Students generally enjoy the interactivity of intelligent tutoring systems, which they feel positions them as active participants in their own learning (Hung et al., 2015). Students also consider that cognitive feedback systems provide sufficient feedback to benefit their learning (Hung et al., 2015). Research into the effect of intelligent tutoring systems on student learning outcomes is limited, but initial findings suggest students using intelligent tutoring systems may achieve higher learning outcomes than students undertaking simple web-based quizzes (Chu et al., 2014). It is recommended that students are trained in the use of intelligent tutoring systems before commencing any learning activity (Chu et al., 2014).

Peer to Student Feedback

Peer feedback is commonly considered to be beneficial to student learning; receiving feedback from peers allows students opportunities to consider their work from alternate perspectives, while providing feedback to peers can challenge students’ understanding of their own work and develop their critical thinking, improving their self-regulatory skills (Ciftci & Kocoglu, 2012; Ekahitanond, 2013; Wu, Petit, & Chen, 2015). The subsections below present research relating to the use of blogs and discussion boards, collaborative writing software, and specialized peer feedback software.

Blogs and Discussion Boards

As online learning has become more common, online text platforms such as blogs and discussion boards have become increasingly popular means of facilitating the peer feedback process. Blogs and discussion boards afford a collaborative, interactive, and flexible environment for students to share their work and provide and receive peer feedback (Ciftci & Kocoglu, 2012; Novakovich, 2016). As they are hosted online, blogs and discussion boards are accessible to students wherever an Internet connection is available, while their asynchronous nature allows students to provide and reflect on peer feedback in their own time (Ekahitanond, 2013; Lee & Markey, 2014; Yoo, 2016). Such social media can also facilitate the sharing of information, including feedback comments, in the form of multimedia such as images, audio, and videos, while the asynchronous nature of the exchange (comments on the blog or posts in the discussion forums) offers the chance for students to engage in a peer feedback dialogue.

Peer feedback via blogs and discussion boards follows a similar process to traditional, in-class peer feedback methods. Students providing peer feedback reflect on and comment on their peers’ work, which may take the form of a blog or discussion board post (Ekahitanond, 2013; Novakovich, 2016). In these activities, students are often assigned partners or a number of peers to ensure all students receive feedback (Lee & Markey, 2014). The feedback process may take place in class, such as during peer workshopping sessions, or in students’ own time (Ciftci & Kocoglu, 2012; Novakovich, 2016; Wu et al., 2015). Options for blog hosting include established blogging sites, such as Blogger and Qzone (Lee & Markey, 2014; Xianwei, Samuel, & Asmawi, 2016), while discussion forums are typically hosted on native applications within learning management systems such as Moodle (Ekahitanond, 2013).

Students have been reported to consider the receipt of peer feedback through blogs and discussion forums to be enjoyable, motivating, and beneficial to their overall learning (Ciftci & Kocoglu, 2012; Ekahitanond, 2013). With regard to blog-mediated feedback in particular, students recognize that providing peer feedback helps to improve their writing skills, critical appraisal skills, and learning outcomes, while receiving peer feedback impacts positively on their own work (Ciftci & Kocoglu, 2012; Yoo, 2016). Blog-mediated peer feedback is also considered by students to be convenient and easy to use (Ciftci & Kocoglu, 2012; Xianwei et al., 2016). Students report similar benefits for peer feedback through discussion forums, including improved confidence and teamwork; however, students may also consider discussion forums to be a time-consuming and impersonal means of providing peer feedback (Ekahitanond, 2013).

The effectiveness of blog and discussion forum peer feedback in improving student learning outcomes remains underexplored in the literature. However, findings suggest that students who receive blog-mediated peer feedback often receive higher marks than students who receive peer feedback in person or via mark-up (Ciftci & Kocoglu, 2012; Novakovich, 2016). Studies have also found that blog-mediated peer feedback comments compare favorably with traditional in-class or electronic mark-up formats; blog-mediated peer feedback tends to be of higher overall quality, with students offering more substantive, critical, and accurate suggestions (Novakovich, 2016; Yoo, 2016). It has also been suggested that students may feel more comfortable providing critical comments through a blog or discussion forum than in person (Ekahitanond, 2013; Yoo, 2016). However, some studies have found that while students appreciate feedback from their peers, they can be reluctant to integrate peer comments when revising their work, instead preferring comments from their educators, which they perceive to be more accurate, credible, and “expert” (Ciftci & Kocoglu, 2012; Wu et al., 2015).

Blog-mediated and discussion forum peer feedback can present educators with a number of challenges. As with any digital feedback, blogs and discussion forums may be affected by technical difficulties such as inadequate or unreliable Internet access (Ciftci & Kocoglu, 2012; Ekahitanond, 2013). Students may also have difficulty adapting to blog or discussion forum interfaces (Ciftci & Kocoglu, 2012). It is recommended that educators familiarize students with the peer feedback platform through in-class training and provide detailed guidelines to ensure peer feedback is sensitive, constructive, and useful for all recipients (Ciftci & Kocoglu, 2012; Ekahitanond, 2013). In addition to training, it is suggested that educators consider providing student peer feedback exemplars and examples of their own experiences of peer feedback (Ciftci & Kocoglu, 2012). It has also been recommended that, to avoid students disengaging from peer feedback participation, peer feedback should be integrated into the curriculum rather than designated an “optional” task; participation may also be incentivized with a small number of marks (Wu et al., 2015).

Collaborative Writing Software

Collaborative writing tasks can be a valuable approach to fostering discussion, reflection, and feedback interactions among students (Boldrini & Cattaneo, 2014). During the process of writing and reviewing, students exchange formative feedback with their peers and consider new perspectives and approaches (Boldrini & Cattaneo, 2014; Strobl, 2014). A number of online technologies have emerged as potential platforms for collaborative writing and peer feedback. Wikis and cloud-based text editors such as Google Docs offer simple, accessible, and flexible interfaces for students to draft, edit, and comment on collaborative writing tasks (Andrichuk, 2016; Strobl, 2014; Woo, Chu, & Li, 2013). Collaborative writing software can also be used to facilitate the peer feedback process on non-collaborative tasks; for instance, students may upload their work for review and receive comments or mark-up (Andrichuk, 2016; Boldrini & Cattaneo, 2014).

Student perceptions of collaborative writing software for peer feedback vary. It has been reported that students generally enjoy using collaborative writing platforms for peer feedback and agree that the process helps improve their text as well as their writing skills (Andrichuk, 2016; Strobl, 2014). However, two studies indicated that students gained more from receiving than providing peer feedback: Andrichuk (2016) reported that providing peer review did not enhance students’ writing ability as much as the students themselves expected, while Strobl (2014) found that students were more likely to agree that they learned from receiving peer feedback than from providing it. It is worth noting that this finding is in contrast to the general assessment literature, which states that students tend to benefit more from providing than receiving peer feedback (Ertmer et al., 2007; van Popta, Kral, Camp, Martens, & Simons, 2017). This difference may reflect the particular type of peer feedback that is being provided using collaborative writing software (i.e., written comments on a written task).

Students typically appreciate the flexible access of online collaborative writing tools, which allows them to work on their writing and provide feedback from any Internet-connected location (Woo et al., 2013). Some students also report feeling more comfortable sharing their work in an online environment (Andrichuk, 2016). However, this accessibility may also contribute to student perceptions that providing peer feedback via collaborative writing media is burdensome and time-consuming (Strobl, 2014). Staff perceptions of the use of collaborative writing platforms for peer feedback remain largely underexplored; however, one study reported that staff found collaborative writing platforms to be an efficient and convenient means for students to co-construct texts and provide peer feedback (Woo et al., 2013).

Research has yet to clearly establish the effect of peer feedback via collaborative writing tools on learning outcomes. While studies have found little difference in peer feedback outcomes between paper-based and collaborative writing software, it has been reported that peer feedback through online collaborative platforms can lead to increased revisions at the content level (Boldrini & Cattaneo, 2014; Woo et al., 2013). In addition, collaborative writing software prompts a higher number of peer comments than traditional paper-based peer feedback (Boldrini & Cattaneo, 2014). Comments are also more likely to be at a meaningful, content level than surface-level corrections (Woo et al., 2013). However, a small but significant number of students remain concerned about the possibility of plagiarism in online collaborative writing and peer feedback processes (Andrichuk, 2016). As with most technology enabled learning, it is recommended that students receive appropriate guidance and scaffolding, including technical instructions, clarification of roles and responsibilities (such as avoiding plagiarism), and, especially, guidance on how to provide peer feedback comments (Andrichuk, 2016; Strobl, 2014; Woo et al., 2013).

Peer Feedback Software and Tools

The literature search for peer feedback software and tools returned almost exclusively online software and revealed two main themes: the re-purposing of existing social networking sites, such as Facebook (including specially designed Facebook add-ons), and purpose-built learning platforms designed to support peer feedback (Demirbilek, 2015; Ho, 2015; Jiang & Yu, 2014; McCarthy, 2016). As with other online feedback systems, it is considered an advantage that students can access peers’ work and comments at a time and place convenient to them, thus avoiding the logistical challenges of traditional, paper-based peer review processes (Demirbilek, 2015; Ho, 2015). In addition, it is argued that social networking sites such as Facebook are familiar to most students and contain features which facilitate the provision of a variety of forms of peer feedback input, such as commenting and “liking” posts (Demirbilek, 2015). Many social media and purpose-built peer feedback tools also allow multimedia, such as images and video, to be posted, extending the scope of online peer feedback beyond text (Demirbilek, 2015; McCarthy, 2016). This visual element makes online peer feedback tools particularly suited to creative disciplines such as art and design; not only do students produce visual works, but creative work is subjective and typically shaped by multiple perspectives (McCarthy, 2016).

It has been reported in several studies that Facebook peer feedback tools make use of students’ existing accounts and capitalize on Facebook’s accessibility on a broad range of Internet-connected devices, including computers and mobile phones (Demirbilek, 2015; McCarthy, 2016). Purpose-built online peer feedback interfaces, by contrast, have the potential advantage of a purposefully designed range of functions, such as split screens simultaneously showing the work, the comments, and an instant chat allowing synchronous peer feedback (Ho, 2015; Jiang & Yu, 2014). Regardless of the platform, it is argued that these online interfaces encourage more of a dialogue, where drafts of student work (whether textual or visual) are uploaded, peers provide comments on the work, and students review and respond to feedback on their work (Ho, 2015; McCarthy, 2016).

It has been reported that students generally feel that they benefit from receiving peer feedback via these tools and prefer online peer feedback to handwritten, paper-based peer feedback (Demirbilek, 2015; Ho, 2015; McCarthy, 2016). Indeed, it has been noted that students find it easier and more efficient to type comments, rather than handwriting them in a document’s margins (Ho, 2015). Students respond positively to the accessibility and familiar appearance of Facebook peer feedback tools, which also makes them easy to use (Demirbilek, 2015; McCarthy, 2016). Facebook peer feedback tools have also been shown to facilitate increased social connectivity between students, particularly in out-of-class contexts, and students report enjoying the opportunity to view and comment on their peers’ work (Demirbilek, 2015; McCarthy, 2016).

Peer feedback generated in purpose-built online platforms is generally oriented to revision, rather than surface-level comments or generalized praise that are frequently found in face-to-face contexts; however, students are more likely to incorporate peer feedback received in a face-to-face context than online peer feedback (Ho, 2015). Nevertheless, online peer feedback tools have been found to have positive effects on students’ learning outcomes, with low-performing students typically making greater improvements than higher-performing peers (Jiang & Yu, 2014). It is also interesting to note that strong correlations have been found between high levels of online activity and strong academic performance (Demirbilek, 2015; McCarthy, 2016).

Online peer feedback platforms also come with a number of challenges for educators, many of which are also relevant to offline methods. Of significant concern is that high-performing students may not accept that lower-performing peers are able to provide useful or accurate feedback (Jiang & Yu, 2014). Students also report anxiety around providing and receiving online peer feedback, particularly when providing critical comments, and it is suggested that offering students anonymity through the use of pseudonyms may alleviate these concerns (Demirbilek, 2015). Training students in using online peer feedback interfaces, and also in providing peer feedback, is recommended as essential to ensuring the success of online peer feedback for all students; exemplars and practice tasks may also be beneficial to increase students’ understanding of the peer feedback process (Demirbilek, 2015; Ho, 2015). McCarthy (2015) also suggests that educators consider providing appropriate assessment weighting for participation in online peer feedback, to ensure that students receive the benefits which attend strong, consistent participation in the online environment.

Self-Feedback

As S.-C. Huang (2016) observes, the distinction between self-feedback and self-assessment is often blurred and difficult to draw. Self-assessment is recognized as an important means of developing students’ learning skills and self-regulatory abilities (Boud, 1995; Huang, 2016), and as Hattie and Timperley (2007) argue, the act of questioning and judging oneself necessarily entails “selecting and interpreting information in ways that provide feedback” (p. 94). Thus, self-assessment and self-feedback function as linked and interdependent processes and are often categorized under the single banner of self-assessment.

Initial literature searches revealed that digital recordings and e-portfolios are the most commonly researched forms of technology used to facilitate students’ self-feedback. The subsections below aim to discuss these technological approaches to self-feedback, but at times reflect the indistinct and messy characterizations of self-feedback and self-assessment.

Digital Recordings

While self-assessment is recognized as an important component of students’ development as lifelong learners (Hawkins et al., 2012), it has also been shown that self-perceptions can be inaccurate when compared with expert or educator assessment (Hawkins et al., 2012; LeFebvre, LeFebvre, Blackburn, & Boyd, 2015). However, digital recordings have emerged as a means of facilitating and improving students’ self-assessment, a trend that has been facilitated by audio and video recording technologies becoming simpler, cheaper, and more readily available in educational contexts (O’Loughlin, Ní Chróinín, & O’Grady, 2013).

Digital recordings allow students to review and critically assess a recording of their own performance (LeFebvre et al., 2015), an affordance of particular value in disciplines which require the development of practical skills, and for transitory assessments such as oral presentations (Barry, 2012; O’Loughlin et al., 2013). Video and audio recordings are the most common digital recording formats and may be used in a range of circumstances. Video is prevalent in many practice-based disciplines, including medicine and physical education, while audio is used in disciplines where visual components of performance are less critical, such as language studies (Barry, 2012; Huang, Chen, Wu, & Chen, 2015; O’Loughlin et al., 2013). Recordings can be hosted on an online platform such as a learning management system or media sharing site, or simply viewed on the recording device itself.

Using digital recordings in the self-assessment process is an opportunity for students to identify discrepancies between their perceived and actual performance (LeFebvre et al., 2015). An iterative assessment design is often implemented, whereby a student is recorded undertaking a task, following which they complete a self-assessment; the recording is then reviewed, and a revised self-assessment takes place (Hawkins et al., 2012; Plant, Corden, Mourad, O’Brien, & van Schaik, 2013). The initial self-assessment stage prior to reviewing the recording may also be omitted (Barry, 2012; O’Loughlin et al., 2013), while semi-structured interviews, during which a student’s recording is viewed and discussed, offer an alternative format to written self-assessments (Plant et al., 2013). Explicitly encouraging students to undertake self-feedback is most common in language and communications disciplines; self-feedback may be directed through prompts such as open- and closed-ended questions, detailed instructions, and asking students to write reflexively on their own recorded performance (Huang, 2016; LeFebvre et al., 2015).

Digital recordings are often proposed as a means of improving the accuracy of students’ self-assessments; however, the degree to which digital recordings reduce inaccurate self-assessments remains unclear. While Kachingwe, Phillips, and Beling (2015) and Wittler, Hartman, Manthey, Hiestand, and Askew (2016) found limited improvements to accuracy following the introduction of video recordings for review, Hawkins et al. (2012) reported a significant improvement in self-assessment accuracy when video recordings were introduced in concert with a video-recorded exemplar performance. Indeed, it has been reported in a number of studies that the incorporation of video into the self-assessment process was more likely to improve student accuracy when appropriately scaffolded, whether through exemplars, detailed rubrics, or prompts (Barry, 2012; Hawkins et al., 2012; O’Loughlin et al., 2013; Yoo, 2016). S.-C. Huang (2016) also found that carefully guiding students to produce self-feedback resulted in instances of Hattie and Timperley’s (2007) conceptions of feedback and feedforward, at both task and process levels, along with increased reflection on self-regulation. Overall, it has been reported that students generally consider the use of digital recordings to be beneficial for improving both their performance and their self-assessment skills (Barry, 2012; Hawkins et al., 2012; O’Loughlin et al., 2013). In particular, language students reviewing audio recordings of themselves speaking valued the opportunity to detect discrepancies between their perceived and actual performance, including in pronunciation and fluency (Huang, 2016).

While self-assessment is necessarily self-driven, student engagement can be problematic. For example, students may provide vague or generic self-feedback rather than invest the time and effort required to make the process beneficial, regardless of the use of technology (Huang, 2016). Students may lack the insight to compare their own performance to a standard, may not have the language to express views on their own performance, or may be reticent to reveal deficits in their own performance to an assessor (Boud & Molloy, 2013). Students may also be unable to use digital recordings effectively to self-assess if it is not clear to them how to judge their own performance and against what standard (Hawkins et al., 2012). LeFebvre et al. (2015) therefore recommend the use of exemplars to enable students to recognize effective (or ineffective) practices when reviewing their own recordings. Clear educator guidance, along with structured rubrics, is also recommended to support students in developing their own self-assessment capacity (O’Loughlin et al., 2013).

e-Portfolios

Learning portfolios, in this context, are digital repositories of students’ work, including learning products, assessments, and feedback. A key feature of a digital portfolio is the ability for the artifacts within it to be organized, curated, annotated, and portrayed in different ways for different purposes and audiences (see Clarke & Boud, 2016). The literature review reflected this diverse application, with frequent examples of digital learning portfolios being used to provide students with valuable opportunities to review, reflect on, and curate their own learning, and to serve as a reference that can be consulted at a later date (Aguaded Gómez, López Meneses, & Jaén Martínez, 2013; Kabilan & Khan, 2012). It is recognized that learning portfolios can assist students in developing skills in self-regulation and self-assessment; however, traditional, paper-based portfolios have been criticized as impractical and difficult to manage, submit, and assess (Beckers, Dolmans, & van Merriënboer, 2016; Chang, Liang, & Chen, 2013). Increasingly, across the disciplines, educators have turned to online solutions to allow the creation, management, and sharing of students’ learning portfolios, known as “e-portfolios.”

e-Portfolios resemble their paper-based counterparts, but their digital format offers a number of advantages over traditional portfolios. e-Portfolios enable students to collate and manage their portfolios over time, stored in a central location that is generally accessible from any Internet-connected device (Beckers et al., 2016; Chang et al., 2013). As the e-portfolio is hosted on a digital platform, a range of multimedia and file formats can be accommodated, including images and video (Kabilan & Khan, 2012). e-Portfolios are also more easily shared with peers and educators than their paper equivalents; for instance, students may share their portfolios by e-mail or display them on websites or social media (Kabilan & Khan, 2012). Like traditional portfolios, e-portfolios facilitate self-assessment through the act of curation. Students must review their progress when collating their e-portfolio and reflect on their own work and performance when considering the e-portfolio’s contents (Aguaded Gómez et al., 2013; Chang et al., 2013).

Options for e-portfolio management vary and include blogs, online discussion platforms such as Google Groups, and specialized online portfolio assessment systems (Aguaded Gómez et al., 2013; Chang et al., 2013; Kabilan & Khan, 2012). Along with promoting self-assessment, e-portfolios also allow students to share their reflections and progress with their peers, and to interact with one another’s e-portfolios; this is identified in the research as a significant advantage of the e-portfolio format (Aguaded Gómez et al., 2013; Chang et al., 2013; Kabilan & Khan, 2012). Educators using e-portfolios typically provide students with detailed instructions on the construction of the e-portfolio but also offer questions that can facilitate reflection and self-assessment. This guidance may be a brief list of discussion points or a 27-point list of questions (Chang et al., 2013; Kabilan & Khan, 2012). Aguaded Gómez et al. (2013) also recommend providing training for students who are unfamiliar with the online platform or software used for the e-portfolios.

It has been reported that self-assessment used in concert with e-portfolios yields highly consistent results between student self-assessment and educator assessment; furthermore, students’ self-assessment results were also accurately reflected by end-of-year exam results (Chang et al., 2013). However, researchers emphasize that self-assessment through e-portfolios is a skill rather than an automatic process for students, and as such must be fostered (Kabilan & Khan, 2012). C.-C. Chang et al. (2013) suggest that ensuring students have a clear understanding of portfolio assessment improves the reliability and validity of e-portfolios. Creating and maintaining e-portfolios is typically concluded to positively promote self-assessment and self-regulation in students (Kabilan & Khan, 2012).

Students generally consider e-portfolios to be an effective means of facilitating self-assessment by engaging them in their learning and allowing them to progressively monitor their progress and identify areas for improvement (Aguaded Gómez et al., 2013; Kabilan & Khan, 2012). However, some students found maintaining their e-portfolios to be too time-consuming and disliked the higher level of autonomy or independence required to produce the e-portfolio (Aguaded Gómez et al., 2013; Kabilan & Khan, 2012). As with self-assessment facilitated by digital recordings, concerns have been noted about students who are reluctant to engage in the e-portfolio process, marked by passivity or generalized responses. Time pressures are also cited as a contributing factor in low-quality self-assessment reflections (Kabilan & Khan, 2012). Technical problems and difficulties with Internet access were also noted as possible barriers to students engaging in the e-portfolio process (Aguaded Gómez et al., 2013; Kabilan & Khan, 2012).

Benefits, Challenges, and Implications

Overall, the literature review has revealed various benefits, challenges, and design implications that may shape educators’ decisions about the technology practices they choose to incorporate into their designs. Table 3 summarizes key findings, organized according to the source of feedback. However, there are also several general observations that apply across sources. First, technology enabled feedback is largely reported to have positive impacts on student perceptions and outcomes and is generally thought to be more engaging. However, these results need to be balanced against the fact that many of the studies were single interventions, often small in scale, and focused on a limited array of outcomes. This caveat is further discussed later in this chapter. Second, successful designs are often linked to technologies that are user-friendly, easy to access, and well supported. This reflects recent research in assessment design and in technology enabled learning more broadly (Bennett, Dawson, Bearman, Boud, & Molloy, 2017; Henderson, Selwyn, & Aston, 2015). Third, effective technology enabled feedback practices often fit within well-established traditions of feedback design (e.g., peer feedback software applied to contexts in which educational designs already use peer feedback).
Table 3 Benefits, challenges, and design implications for the use of technology in feedback design

Educator-to-student

Digital recordings (audio recordings, video recordings, screencasts)
Benefits: detailed, clear, and personalized comments; contain rich cues, such as tone and expression; efficient to create; can enhance relationships between educator and student; may increase student engagement and performance.
Challenges: recordings can be difficult to scan quickly; large file sizes are difficult to share and view; some students may be initially sceptical.
Design implications: useful for providing post-assessment performance information; small files can be embedded directly into the assessment task; ensure that file sizes are manageable and file formats widely compatible; a good medium for providing holistic performance information; consider issues of privacy and security; the rich cues afforded by this medium necessitate thought with regard to the tone of content delivery.

Digital text (annotations/tracked changes, sticky notes/comment boxes, discussion boards, email, electronic rubrics, statement banks)
Benefits: simple and convenient for educators to use; allows for specific comments corresponding to sections of work; students are comfortable with this medium; more legible, accessible, and timely than handwritten comments.
Challenges: providing detailed and personalized comments is not as efficient as with audio-visual recordings; electronic rubrics and statement banks are not as detailed as other forms of feedback; learners may equate generic statements (e.g., statement banks) with a lack of educator investment.
Design implications: best for comments provided by educators; comments can be linked to specific parts of the assessment task; limited suitability for performance or skill-based assessments; electronic rubrics and statement banks are suitable for problem-based assessments; best used out of class.

Collaborative writing tools (wikis, Google Docs)
Benefits: user friendly and flexible; offer various privacy levels; educators can view the history of modifications and contributions; support formative feedback during and after assessment.
Challenges: monitoring student progress through drafts may take extra time and labor.
Design implications: useful for authentic assessment; foster dialogical feedback processes; purposeful checkpoint design is recommended; can be used in class or out of class.

Bug in ear technology (two-way radio transmitter; Bluetooth earpiece and videoconferencing software)
Benefits: allows real-time performance information and modification; helps students reflect and improve; avoids public loss of face or humiliation, as the audience (for example, clients in industry or pupils in class) is not privy to the real-time comments; cost effective.
Challenges: laborious process for educators; high risk of technological issues; may not be user friendly.
Design implications: suitable for performance and skill-based assessments; observation can be on-site or off-site.

Computer-to-student

Computer assisted language learning (web-hosted software)
Benefits: offers a range of feedback on content, structure, and grammar; convenient and flexible; students can use it as many times as necessary; may reduce students’ performance anxiety; easy to use.
Challenges: reliant on Internet connections and recording technology; limited to language learning subjects.
Design implications: can be useful to limit the number of possible responses for complex tasks; best used to supplement educator feedback; students may need training to get the most out of the activities and feedback.

Student response systems (hand-held “clickers”; web-based systems such as SMS, smartphone/laptop voting, and Twitter)
Benefits: instantaneous dual feedback to student and educator; allow educators to adjust teaching based on results; low-cost options available; easy to use; enhance student engagement.
Challenges: negligible impact on learning outcomes; SMS voting can be costly for students; correct answers do not necessarily reflect understanding; web-based SRS necessitates that students bring a digital device to class.
Design implications: best used for content that has a clear answer (e.g., problem-based learning); only useful in the in-class context.

Automated feedback on MCQs (Moodle, Blackboard)
Benefits: convenient delivery; simple for educators to use; flexibility of feedback types; students can test knowledge repeatedly at many points across a course.
Challenges: students may share answers with others if feedback design is not performed carefully.
Design implications: useful for summative assessment revision; most suitable for content in which there is a clear answer (e.g., problem-based learning); questions may be constructed around variables to allow repeated attempts without risk of students sharing answers.

Automated writing evaluation tools (Criterion, Pigai)
Benefits: enable instant and specific feedback; may enable educators to focus their feedback on higher-level components, such as content.
Challenges: lack of accuracy in feedback can lower student engagement with their use; students may not apply feedback if they are not able to interrogate and understand it.
Design implications: recommended for language and writing-based subjects; useful for drafting and revising written work; students and educators require training before use; best used as a complement to educator feedback; useful for out-of-class feedback.

Intelligent tutors (cognitive tutoring mechanisms)
Benefits: interactive and flexible feedback; corrective feedback; provide students with immediate performance information.
Challenges: only offer limited methods of problem solving, which can restrict students’ reasoning strategies.
Design implications: most suitable for problem-based disciplines; students need training before use.

Peer-to-student

Blogs and discussion boards (Blogger, Qzone, native applications in VLEs)
Benefits: collaborative, flexible, interactive, and web-based; foster high quality peer interactions; dialogic peer feedback; engaging for students.
Challenges: discussion boards can be impersonal; require Internet access.
Design implications: can be used for educator or peer feedback; commonly used in language-based disciplines; useful to assign partners in peer-based scenarios; provide in-class training and exemplars.

Collaborative writing software (wikis, Google Docs)
Benefits: foster collective knowledge building for students.
Challenges: risk of plagiarism when used for peer feedback on drafts.
Design implications: purposeful checkpoint design is recommended; can be used in class or out of class.

Peer feedback software and tools (Facebook and other social media)
Benefits: convenient; user friendly; support the inclusion of multimedia; can be used on multiple devices; support recursive drafting; facilitate social connectivity between students.
Challenges: students are less likely to incorporate peer comments received online than in a face-to-face context.
Design implications: useful for creative disciplines; best to match students of similar abilities when used for peer feedback, due to credibility judgments; students need training in peer feedback, including exemplars.

Self-feedback

Digital recordings (audio recordings, video recordings)
Benefits: can support iterative feedback processes; enable almost instantaneous reflection on performance.
Challenges: it can be difficult to engender student engagement and depth of reflection.
Design implications: most appropriate for performance-based or language-related disciplines; depth of reflection can be enhanced through exemplars; educators should guide students through the process of self-evaluation; consider issues of privacy and security.

e-Portfolios (blogs, Google Groups, specialized systems)
Benefits: assist students to develop skills in self-regulation and assessment; students can collate them over time; easy to store; incorporate a range of file formats; easy to share with educators.
Challenges: students may consider them to be time-consuming; lack of student engagement.
Design implications: useful for creative disciplines; students require training to maximize efficiency and effectiveness.

In addition to the above benefits, challenges, and design implications that have been identified in the review, there are three important observations regarding the silences within the literature. First, the literature has revealed a haphazard approach to being explicit about the particular conceptualization of feedback being adopted (e.g., Mark 0, 1 or 2). Implied within many of the papers is the assumption that feedback is simply something that is done to students after an assessment submission. In most cases, there is no clear indication of how the feedback inputs (e.g., comments on the assessment performance) are designed to impact on subsequent assessment or how the impact is to be measured. This calls into question the overall validity and comparability of many studies into technology and feedback; without knowing if a technology was used within a high-quality feedback design or not, it is difficult to conclude if the benefits of an approach are actually related to the technology. In addition, the composition or nature of the comments is sometimes less than clear in the feedback designs. Arguably, the impact of the feedback process is heavily dependent on the nature of the information being provided, such as a focus on providing actionable comments and the clarification and use of clear performance standards. These details are frequently unclear despite being critical to the design. In adopting any of the designs identified in this chapter, it is highly recommended to first identify the purpose of feedback, which will in turn help identify what information needs to be conveyed, by whom, for what purpose, and what effects should be monitored. This “output” component, which is so often ignored in the feedback literature, necessitates that the learner has an opportunity to undertake a subsequent task that shares some properties with the task just performed.

Second, the research in this review was often focused on the intervention or tool, measuring immediate effects such as student satisfaction or use, without also building into the data collection process a focus on the broader implications or context including the social, cultural, pedagogical, and instructional milieu. Such a limited focus may help explain the invariably positive results of technology enabled feedback reported in the literature. However, it is worth noting that this limited focus is a recognized perennial concern of the broader field of educational technology research. In contrast, it is argued that a more nuanced approach to educational design and research recognizes that the selection of a technology, or an educational design, does not guarantee results from one context to another. Instead, technology mediated feedback designs are dependent on a range of conditions, including variations across student cohorts, disciplinary cultures, and, importantly, the careful orchestration by those involved. Although most papers have not set out to engage in this kind of analysis, it is telling that most include concluding statements, suggesting that educators need to:
  • Support students and staff in their technical skills

  • Guide staff on how feedback can be best produced or engaged with

  • Increase motivation and engagement (often with assumptions that this can be done via awarding marks for student participation)

  • Be cautious of technology failure, costs, access, and Internet dependency

All of these conclusions are implicit acknowledgments of the fact that educational technologies are just one component in a complex and interdependent system. Moreover, the implementation of a technology practice causes ripples within that system. For example, an educator may choose to use a student response system as a way of increasing the frequency of in-class feedback loops; however, in doing so, new issues arise, such as the technical proficiency of both students and teachers, the impact on the rhythm and sequencing of classroom activity, and the need for preparation time as well as a deeper understanding of how to create effective and useful in-class questions.

Third, the papers more often than not report on isolated single interventions, which we argue could be more usefully framed within a design approach in which the design need, iterative development, and measures of success are explicated from the outset. The technology enabled feedback practices reviewed in this chapter are potentially valuable approaches for educators and educational designers to investigate and iteratively build upon. We argue that iterative design is a useful perspective to adopt. Inherent in the concept of design is that it is a response to a human need and that it needs to be iteratively improved through a variety of feedback loops. As a consequence, there needs to be a clear idea of the criteria or measures of success which can guide focused iterative design improvements.

The Future of Feedback and Digital Technology

This chapter has shown some recent advances in feedback and technology. In this somewhat speculative section, the authors conclude by considering what the future holds for feedback and technology.

The feedback approaches discussed in this chapter have largely been micro-level: they have focused on individual feedback interactions around a single student performance. Comparatively little research has focused on technology enabling higher-level feedback designs. Over the next few years we anticipate a focus on technology that enables feedback designs at the module or program level. In addition to feedback about student performance against standards, this may also include ipsative feedback (Hughes, 2011), which is feedback based on students’ previous performance.

One approach to enable longer-term feedback designs could be adapting portfolio tools so they become repositories of not just student work, but also the feedback information related to that work. In such a portfolio, whenever feedback is provided to a student on their work by a teacher, peer, or even themselves, it would be stored in the portfolio. Then, when students undertake a task that is similar, perhaps because it addresses similar learning outcomes or because it is a similar genre of task, relevant feedback would be re-presented to the learner when they commence the new task. Such a design would assist in closing feedback loops that may have otherwise been left open, particularly when feedback is given on major summative tasks at the end of a course unit without immediate action required of the learner. Storing feedback within a portfolio in this way would require appropriate metadata, which would include, at a minimum, the particular learning outcomes the feedback addresses.
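As a speculative sketch of this design, assuming a hypothetical data model rather than any existing portfolio product or API, stored feedback items could carry learning-outcome metadata and be retrieved whenever a new task addresses overlapping outcomes:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackItem:
    task: str
    source: str         # "educator", "peer", or "self"
    comment: str
    outcomes: set[str]  # learning outcomes the comment addresses (the metadata)

@dataclass
class FeedbackPortfolio:
    items: list[FeedbackItem] = field(default_factory=list)

    def store(self, item: FeedbackItem) -> None:
        self.items.append(item)

    def relevant_to(self, task_outcomes: set[str]) -> list[FeedbackItem]:
        """Return stored feedback addressing any outcome of the new task."""
        return [i for i in self.items if i.outcomes & task_outcomes]

portfolio = FeedbackPortfolio()
portfolio.store(FeedbackItem("Essay 1", "educator",
                             "Signpost your argument in the introduction.",
                             {"LO2_argumentation"}))
# When the learner starts a new essay addressing LO2, earlier comments resurface.
print(portfolio.relevant_to({"LO2_argumentation", "LO4_referencing"}))
```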

When educators know their students, they are able to give different types of feedback, such as the ipsative feedback described earlier. However, when educators know the students they are marking, they tend to give biased grades (Malouff & Thorsteinsson, 2016). This has historically led to an either-or decision: blind marking for more accurate marks or non-blind marking for better feedback. However, marking and feedback need not be considered the same process. It would be relatively simple to implement a system that splits the marking process from the feedback process, such that marking could be done anonymously to reduce bias, and then, once marks were determined, the student’s identity could be revealed and comments made with the knowledge of who the student is. This would provide the best of both worlds: robust anonymous marking, and feedback from somebody who knows who you are, where you have come from, and the sort of information that helps (or does not help) in the production of your work.
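A minimal sketch of such a split workflow, under an entirely hypothetical data model, might anonymize submissions for the marking stage and reveal identities only at the separate feedback stage:

```python
import uuid

submissions = [
    {"student": "Alex", "work": "essay text ..."},
    {"student": "Sam", "work": "essay text ..."},
]

# Stage 1: anonymous marking - the marker only ever sees an opaque ID.
anon_key = {str(uuid.uuid4()): s for s in submissions}
marks = {anon_id: len(s["work"]) % 10   # placeholder for a real marking judgment
         for anon_id, s in anon_key.items()}

# Stage 2: identity is revealed only after marks are locked in, so comments
# can draw on what the educator knows about the individual student.
for anon_id, mark in marks.items():
    student = anon_key[anon_id]["student"]
    comment = f"{student}, compared with your last essay, focus next on structure."
    print(student, mark, comment)
```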

With the growth in technology tools for feedback, it is likely in the coming years that feedback may become less staged and more continuous: rather than completing a piece of work and waiting days or weeks for feedback comments, feedback will be a continuous real-time part of undertaking the work. Just as automated writing evaluation tools allow real-time feedback on writing tasks, other types of work may become targets for real-time feedback tools. These may be incorporated into sophisticated feedback designs such that students have access to staged feedback from experts, which tends to be expensive, as well as cheap feedback from technology tools whenever the student desires it.

Meta-analyses of feedback suggest that feedback which is focused on self-regulation has the greatest effect on student learning (Hattie & Timperley, 2007). A challenge historically with providing this sort of feedback is that self-regulation is much more difficult to observe than student task performance. However, recent developments within the field of learning analytics have focused on observing self-regulation (Gasevic, Kovanovic, Joksimovic, & Siemens, 2014) and on providing feedback about self-regulation. The relationship between the fields of learning analytics and feedback is yet to be fully established; however, it is possible that in years to come student-facing analytics dashboards will be commonly used in feedback designs.

In addition to automatic feedback from technology systems, future feedback approaches are likely to include semi-intelligent recommender and aggregation technologies that may connect students with people or systems that can support their ability to judge performance and discover strategies for improvement. For instance, work is currently under way on instant messaging systems that divert student feedback requests to peers within a class who are deemed likely to be able to provide correct and useful comments, based on profiles built from online performance data. However, such recommendations need not be limited to students’ immediate peers and class educators. There is a range of potential human feedback sources beyond the education context that can be engaged through online communities, review sites, collaborative projects, and social media. As an example, when students contribute to Wikipedia as part of their studies they can engage in a structured feedback conversation with other Wikipedia editors as they edit a page (Di Lauro & Johinke, 2017). However, these feedback conversations are currently dispersed across the web; future technological approaches may seek to aggregate them into the feedback portfolios proposed earlier.
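As a speculative illustration of the recommender idea, and not a description of any system under development, a request for feedback on a topic could be routed to classmates whose hypothetical performance profiles suggest they can comment usefully on that topic:

```python
# Hypothetical per-student profiles (topic -> estimated strength, 0 to 1).
profiles = {
    "Priya":  {"statistics": 0.9, "essay_writing": 0.4},
    "Jordan": {"statistics": 0.5, "essay_writing": 0.8},
    "Mei":    {"statistics": 0.7, "essay_writing": 0.7},
}

def recommend_reviewers(topic: str, requester: str, k: int = 2) -> list[str]:
    """Rank classmates (excluding the requester) by profiled strength on the topic."""
    candidates = [(scores.get(topic, 0.0), name)
                  for name, scores in profiles.items() if name != requester]
    return [name for _, name in sorted(candidates, reverse=True)[:k]]

print(recommend_reviewers("statistics", requester="Jordan"))  # ['Priya', 'Mei']
```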

Returning to the conceptualization of feedback raised at the start of this chapter, feedback is only feedback where it leads to change. This chapter has demonstrated that emerging tools – from bug in ear technology to automated writing evaluation systems – are having effects on student learning. However, they remain largely isolated within individual tasks. The next frontiers for feedback with technology involve the marriage of sophisticated feedback technologies with sophisticated long-term feedback designs.

References

  1. Aguaded Gómez, J. I., López Meneses, E., & Jaén Martínez, A. (2013). University e-portfolios as a new higher education teaching method. The development of a multimedia educational material (MEM). RUSC, 10(1), 188–209.Google Scholar
  2. Alvarez, I., Espasa, A., & Guasch, T. (2012). The value of feedback in improving collaborative writing assignments in an online learning environment. Studies in Higher Education, 37(4), 387–400.  https://doi.org/10.1080/03075079.2010.510182.
  3. Andrichuk, G. (2016). Perceptions of peer review using cloud-based software. Journal of Educational Multimedia and Hypermedia, 25(2), 109.Google Scholar
  4. Anson, I. G. (2015). Assessment feedback using Screencapture Technology in Political Science. Journal of Political Science Education, 11(4), 375–390.  https://doi.org/10.1080/15512169.2015.1063433.
  5. Bai, L., & Hu, G. (2017). In the face of fallible AWE feedback: How do students respond? Educational Psychology, 37(1), 67–81.  https://doi.org/10.1080/01443410.2016.1223275.
  6. Bälter, O., Enström, E., & Klingenberg, B. (2013). The effect of short formative diagnostic web quizzes with minimal feedback. Computers & Education, 60(1), 234–242.CrossRefGoogle Scholar
  7. Barry, S. (2012). A video recording and viewing protocol for student group presentations: Assisting self-assessment through a wiki environment. Computers & Education, 59(3), 855–860.CrossRefGoogle Scholar
  8. Beach, R. (2012). Uses of digital tools and literacies in the English language arts classroom. Research in the Schools, 19(1), 45–59.Google Scholar
  9. Beckers, J., Dolmans, D., & van Merriënboer, J. (2016). e-pPortfolios enhancing Students’ self-directed learning: A systematic review of influencing factors. Australasian Journal of Educational Technology, 32(2), 32–46.Google Scholar
  10. Bennett, S., Dawson, P., Bearman, M., Boud, D., & Molloy, E. K. (2017). How technology shapes assessment design: Findings from a study of university teachers. British Journal of Educational Technology, 48, 672–682.  https://doi.org/10.1111/bjet.12439.
  11. Boldrini, E., & Cattaneo, A. (2014). Scaffolding collaborative reflective writing in a VET curriculum. Vocations and Learning, 7(2), 145–165.CrossRefGoogle Scholar
  12. Borup, J., West, R. E., & Thomas, R. (2015). The impact of text versus video communication on instructor feedback in blended courses. Educational Technology Research and Development, 63(2), 161–184.CrossRefGoogle Scholar
  13. Boud, D. (1995). Enhancing learning through self assessment. Philadelphia, PA: Kogan Page.Google Scholar
  14. Boud, D., & Molloy, E. K. (2013). Feedback in higher and professional education. Oxon, UK: Routledge.Google Scholar
  15. Bourgault, A. M., Mundy, C., & Joshua, T. (2013). Comparison of audio vs. written feedback on clinical assignments of nursing students. Nursing Education Perspectives, 34(1), 43–46.  https://doi.org/10.5480/1536-5026-34.1.43.
  16. Carruthers, C., McCarron, B., Bolan, P., Devine, A., McMahon-Beattie, U., & Burns, A. (2015). ‘I like the sound of that’ – An evaluation of providing audio feedback via the virtual learning environment for summative assessment. Assessment & Evaluation in Higher Education, 40(3), 352–370.  https://doi.org/10.1080/02602938.2014.917145.
  17. Cavanaugh, A. J., & Song, L. (2014). Audio feedback versus written feedback: Instructors’ and Students’ perspectives. Journal of Online Learning and Teaching, 10(1), 122.Google Scholar
  18. Chang, C.-C., Liang, C., & Chen, Y.-H. (2013). Is learner self-assessment reliable and valid in a web-based portfolio environment for high school students? Computers & Education, 60(1), 325–334.CrossRefGoogle Scholar
  19. Chang, N., Watson, A. B., Bakerson, M. A., Williams, E. E., McGoron, F. X., & Spitzer, B. (2012). Electronic feedback or handwritten feedback: What do undergraduate students prefer and why. Journal of Teaching and Learning with Technology, 1(1), 1–23.Google Scholar
  20. Chapelle, C. A., Cotos, E., & Lee, J. (2015). Validity arguments for diagnostic assessment using automated writing evaluation. Language Testing, 32(3), 385–405.  https://doi.org/10.1177/0265532214565386.
  21. Chew, E. (2014). “To listen or to read?” audio or written assessment feedback for international students in the UK. On the Horizon, 22(2), 127–135.  https://doi.org/10.1108/oth-07-2013-0026.
  22. Choi, I.-C. (2016). Efficacy of an ICALL tutoring system and process-oriented corrective feedback. Computer Assisted Language Learning, 29(2), 334–364.CrossRefGoogle Scholar
  23. Chu, Y.-S., Yang, H.-C., Tseng, S.-S., & Yang, C.-C. (2014). Implementation of a model-tracing-based learning diagnosis system to promote elementary students’ learning in mathematics. Educational Technology & Society, 17(2), 347–357.Google Scholar
  24. Chui, L., Martin, K., & Pike, B. (2013). A quasi-experimental assessment of interactive student response systems on student confidence, effort, and course performance. Journal of Accounting Education, 31(1), 17.CrossRefGoogle Scholar
  25. Ciftci, H., & Kocoglu, Z. (2012). Effects of peer E-feedback on Turkish EFL students’ writing performance. Journal of Educational Computing Research, 46(1), 61–84.CrossRefGoogle Scholar
  26. Clarke, J. L., & Boud, D. (2016). Refocusing portfolio assessment: Curating for feedback and portrayal. Innovations in Education and Teaching International, 1–8.  https://doi.org/10.1080/14703297.2016.1250664.
  27. Demirbilek, M. (2015). Social media and peer feedback: What do students really think about using wiki and Facebook as platforms for peer feedback? Active Learning in Higher Education, 16(3), 211–224.
  28. Denton, D. W. (2014). Using screen capture feedback to improve academic performance. TechTrends, 58(6), 51–56.  https://doi.org/10.1007/s11528-014-0803-0.
  29. Denton, P., & Rowe, P. (2015). Using statement banks to return online feedback: Limitations of the transmission approach in a credit-bearing assessment. Assessment & Evaluation in Higher Education, 40(8), 1095–1103.  https://doi.org/10.1080/02602938.2014.970124.
  30. DePaolo, C. A., & Wilkinson, K. (2014). Recurrent online quizzes: Ubiquitous tools for promoting student presence, participation and performance. Interdisciplinary Journal of E-Learning and Learning Objects, 10, 75–91.
  31. Di Lauro, F., & Johinke, R. (2017). Employing Wikipedia for good not evil: Innovative approaches to collaborative writing assessment. Assessment & Evaluation in Higher Education, 42(3), 478–491.  https://doi.org/10.1080/02602938.2015.1127322.
  32. Eddy, P. L., & Lawrence, A. (2012). Wikis as platforms for authentic assessment. Innovative Higher Education, 38(4), 253–265.  https://doi.org/10.1007/s10755-012-9239-7.
  33. Ekahitanond, V. (2013). Promoting university students’ critical thinking skills through peer feedback activity in an online discussion forum. Alberta Journal of Educational Research, 59(2), 247–265.
  34. Elola, I., & Oskoz, A. (2016). Supporting second language writing using multimodal feedback. Foreign Language Annals, 49(1), 58–74.  https://doi.org/10.1111/flan.12183.
  35. Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., … Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(2), 412–433.
  36. Fawcett, H., & Oldfield, J. (2016). Investigating expectations and experiences of audio and written assignment feedback in first-year undergraduate students. Teaching in Higher Education, 21(1), 79–93.
  37. Gabaudan, O. (2013). E-xperience Erasmus: Online journaling as a tool to enhance students’ learning experience of their study visit abroad (p. 5). Voillans, France: Research-publishing.net.
  38. Gasevic, D., Kovanovic, V., Joksimovic, S., & Siemens, G. (2014). Where is research on massive open online courses headed? A data analysis of the MOOC research initiative. The International Review of Research in Open and Distributed Learning, 15(5).  https://doi.org/10.19173/irrodl.v15i5.1954.
  39. Ghahri, F., Hashamdar, M., & Mohamadi, Z. (2015). Technology: A better teacher in writing skill. Theory and Practice in Language Studies, 5(7), 1495–1500.
  40. Gibson, L., & Musti-Rao, S. (2015). Using technology to enhance feedback to student teachers. Intervention in School and Clinic, 51(5), 307–311.  https://doi.org/10.1177/1053451215606694.
  41. Gould, J., & Day, P. (2013). Hearing you loud and clear: Student perspectives of audio feedback in higher education. Assessment & Evaluation in Higher Education, 38(5), 554–566.  https://doi.org/10.1080/02602938.2012.660131.
  42. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.  https://doi.org/10.3102/003465430298487.
  43. Hawkins, S. C., Osborne, A., Schofield, S. J., Pournaras, D. J., & Chester, J. F. (2012). Improving the accuracy of self-assessment of practical clinical skills using video feedback – The importance of including benchmarks. Medical Teacher, 34(4), 279–284.  https://doi.org/10.3109/0142159X.2012.658897.
  44. Henderson, M., & Phillips, M. (2014). Technology enhanced feedback on assessment. Paper presented at the Australian Computers in Education Conference 2013, Adelaide, SA. http://acec2014.acce.edu.au
  45. Henderson, M., & Phillips, M. (2015). Video-based feedback on student assessment: Scarily personal. Australasian Journal of Educational Technology, 31(1), 51–66.  https://doi.org/10.14742/ajet.1878.
  46. Henderson, M., Selwyn, N., & Aston, R. (2015). What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning. Studies in Higher Education, 42, 1–13.  https://doi.org/10.1080/03075079.2015.1007946.
  47. Ho, M.-C. (2015). The effects of face-to-face and computer-mediated peer review on EFL writers’ comments and revisions. Australasian Journal of Educational Technology, 31(1), 1–15.
  48. Huang, K., Chen, C.-H., Wu, W.-S., & Chen, W.-Y. (2015). Interactivity of question prompts and feedback on secondary students’ science knowledge acquisition and cognitive load. Educational Technology & Society, 18(4), 159–171.
  49. Huang, S.-C. (2016). Understanding learners’ self-assessment and self-feedback on their foreign language speaking performance. Assessment & Evaluation in Higher Education, 41(6), 803–820.
  50. Hughes, G. (2011). Towards a personal best: A case for introducing ipsative assessment in higher education. Studies in Higher Education, 36(3), 353–367.  https://doi.org/10.1080/03075079.2010.486859.
  51. Hung, S.-T. A. (2016). Enhancing feedback provision through multimodal video technology. Computers & Education, 98, 90–101.
  52. Hung, W.-C., Smith, T. J., & Smith, C. M. (2015). Design and usability assessment of a dialogue-based cognitive tutoring system to model expert problem solving in research design. British Journal of Educational Technology, 46(1), 82–97.
  53. Israel, M., & Moshirnia, A. (2012). Interacting and learning together: Factors influencing preservice teachers’ perceptions of academic wiki use. Journal of Technology and Teacher Education, 20(2), 151.
  54. Jiang, J., & Yu, Y. (2014). The effectiveness of internet-based peer feedback training on Chinese EFL college students’ writing proficiency. International Journal of Information and Communication Technology Education, 10(3), 34.
  55. Johnson, G. M., & Cooke, A. (2016). Self-regulation of learning and preference for written versus audio-recorded feedback by distance education students. Distance Education, 37(1), 107–120.  https://doi.org/10.1080/01587919.2015.1081737.
  56. Jones, N., Georghiades, P., & Gunson, J. (2012). Student feedback via screen capture digital video: Stimulating student’s modified action. Higher Education, 64(5), 593–607.  https://doi.org/10.1007/s10734-012-9514-7.
  57. Jonsson, A. (2013). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14(1), 63–76.
  58. Kabilan, M. K., & Khan, M. A. (2012). Assessing pre-service English language teachers’ learning using E-portfolios: Benefits, challenges and competencies gained. Computers & Education, 58(4), 1007–1020.
  59. Kachingwe, A. F., Phillips, B., & Beling, J. (2015). Videotaping practical examinations in physical therapist education: Does it foster student performance, self-assessment, professionalism, and improve instructor grading? Journal of Physical Therapy Education, 29(1), 25–33.
  60. Kelly, L., O’Neil, K., & Kwon, E. H. (2014). Comparative analysis: On-site versus remote supervision for APE preservice teachers. Research Quarterly for Exercise and Sport, 85(S1), 140–141.
  61. Klein, K., & Kientz, M. (2013). A model for successful use of student response systems. Nursing Education Perspectives, 34(5), 334–338.
  62. Knauf, H. (2016). Reading, listening and feeling: Audio feedback as a component of an inclusive learning culture at universities. Assessment & Evaluation in Higher Education, 41(3), 442–449.  https://doi.org/10.1080/02602938.2015.1021664.
  63. Lee, C., Cheung, W. K. W., Wong, K. C. K., & Lee, F. S. L. (2013). Immediate web-based essay critiquing system feedback and teacher follow-up feedback on young second language learners’ writings: An experimental study in a Hong Kong secondary school. Computer Assisted Language Learning, 26(1), 39–60.  https://doi.org/10.1080/09588221.2011.630672.
  64. Lee, L., & Markey, A. (2014). A study of learners’ perceptions of online intercultural exchange through web 2.0 technologies. ReCALL, 26(3), 281–297.
  65. LeFebvre, L., LeFebvre, L., Blackburn, K., & Boyd, R. (2015). Student estimates of public speaking competency: The meaning extraction helper and video self-evaluation. Communication Education, 64(3), 261–279.
  66. Leibold, N., & Schwarz, L. M. (2015). The art of giving online feedback. Journal of Effective Teaching, 15(1), 34–46.
  67. Li, J., & De Luca, R. (2014). Review of assessment feedback. Studies in Higher Education, 39(2), 378–393.  https://doi.org/10.1080/03075079.2012.709494.
  68. Link, S., Dursun, A., Karakaya, K., & Hegelheimer, V. (2014). Towards better ESL practices for implementing automated writing evaluation. CALICO Journal, 31(3), 323.
  69. Malouff, J. M., & Thorsteinsson, E. B. (2016). Bias in grading: A meta-analysis of experimental research findings. Australian Journal of Education, 60(3), 245–256.  https://doi.org/10.1177/0004944116664618.
  70. Marden, N. Y., Ulman, L. G., Wilson, F. S., & Velan, G. M. (2013). Online feedback assessments in physiology: Effects on students’ learning experiences and outcomes. Advances in Physiology Education, 37(2), 192–200.  https://doi.org/10.1152/advan.00092.2012.
  71. Mathieson, K. (2012). Exploring student perceptions of audiovisual feedback via screencasting in online courses. American Journal of Distance Education, 26(3), 143–156.  https://doi.org/10.1080/08923647.2012.689166.
  72. Mauri, T., Ginesta, A., & Rochera, M.-J. (2014). The use of feedback systems to improve collaborative text writing: A proposal for the higher education context. Innovations in Education and Teaching International, 53(4), 411–423.  https://doi.org/10.1080/14703297.2014.961503.
  73. McCarthy, J. (2015). Evaluating written, audio and video feedback in higher education summative assessment tasks. Issues in Educational Research, 25(2), 153–169.
  74. McCarthy, J. (2016). Global learning partnerships in the Café: Peer feedback as a formative assessment tool for animation students. Interactive Learning Environments, 24(6), 1298–1318.  https://doi.org/10.1080/10494820.2014.994532.
  75. Moore, C., & Wallace, I. P. H. (2012). Personalizing feedback for feed-forward opportunities utilizing audio feedback technologies for online students. International Journal of e-Education, e-Business, e-Management and e-Learning, 2(1), 6.  https://doi.org/10.7763/IJEEEE.2012.V2.72.
  76. Morris, C., & Chikwa, G. (2016). Audio versus written feedback: Exploring learners’ preference and the impact of feedback format on students’ academic performance. Active Learning in Higher Education, 17, 125–137.
  77. Munro, W., & Hollingworth, L. (2014). Audio feedback to physiotherapy students for viva voce: How effective is ‘the living voice’? Assessment & Evaluation in Higher Education, 39(7), 865–878.  https://doi.org/10.1080/02602938.2013.873387.
  78. Novakovich, J. (2016). Fostering critical thinking and reflection through blog-mediated peer feedback. Journal of Computer Assisted Learning, 32(1), 16–30.
  79. O’Loughlin, J., Ní Chróinín, D., & O’Grady, D. (2013). Digital video: The impact on children’s learning experiences in primary physical education. European Physical Education Review, 19(2), 165–182.
  80. Orlando, J. (2016). A comparison of text, voice, and screencasting feedback to online students. American Journal of Distance Education, 30(3), 156–166.  https://doi.org/10.1080/08923647.2016.1187472.
  81. Parkin, H. J., Hepplestone, S., Holden, G., Irwin, B., & Thorpe, L. (2012). A role for technology in enhancing students’ engagement with feedback. Assessment & Evaluation in Higher Education, 37(8), 963–973.  https://doi.org/10.1080/02602938.2011.592934.
  82. Penning de Vries, B., Cucchiarini, C., Bodnar, S., Strik, H., & van Hout, R. (2015). Spoken grammar practice and feedback in an ASR-based CALL system. Computer Assisted Language Learning, 28(6), 550–576.
  83. Plant, J. L., Corden, M., Mourad, M., O’Brien, B. C., & van Schaik, S. M. (2013). Understanding self-assessment as an informed process: Residents’ use of external information for self-assessment of performance in simulated resuscitations. Advances in Health Sciences Education, 18(2), 181–192.
  84. Portolese Dias, L., & Trumpy, R. (2014). Online instructor’s use of audio feedback to increase social presence and student satisfaction. Journal of Educators Online, 11(2), 19.
  85. Ranalli, J., Link, S., & Chukharev-Hudilainen, E. (2017). Automated writing evaluation for formative assessment of second language writing: Investigating the accuracy and usefulness of feedback as part of argument-based validation. Educational Psychology, 37(1), 8–25.  https://doi.org/10.1080/01443410.2015.1136407.
  86. Rock, M., Gregg, M., Gable, R., Zigmond, N., Blanks, B., Howard, P., & Bullock, L. (2012). Time after time online: An extended study of virtual coaching during distant clinical practice. Journal of Technology and Teacher Education, 20(3), 277.
  87. Rott, S., & Weber, E. D. (2013). Preparing students to use wiki software as a collaborative learning tool. CALICO Journal, 30(2), 179–203.
  88. Sancho-Vinuesa, T., Escudero-Viladoms, N., & Masià, R. (2013). Continuous activity with immediate feedback: A good strategy to guarantee student engagement with the course. Open Learning, 28(1), 51–66.
  89. Sancho-Vinuesa, T., & Viladoms, N. E. (2012). A proposal for formative assessment with automatic feedback on an online mathematics subject. RUSC, 9(2), 240–259.
  90. Sopina, E., & McNeill, R. (2015). Investigating the relationship between quality, format and delivery of feedback for written assignments in higher education. Assessment & Evaluation in Higher Education, 40(5), 666–680.  https://doi.org/10.1080/02602938.2014.945072.
  91. Steif, P. S., Fu, L., & Kara, L. B. (2016). Providing formative assessment to students solving multipath engineering problems with complex arrangements of interacting parts: An intelligent tutor approach. Interactive Learning Environments, 24(8), 1864–1880.  https://doi.org/10.1080/10494820.2015.1057745.
  92. Strobl, C. (2014). Affordances of web 2.0 technologies for collaborative advanced writing in a foreign language. CALICO Journal, 31(1), 1–18.
  93. Turner, W., & West, J. (2013). Assessment for “digital first language” speakers: Online video assessment and feedback in higher education. International Journal of Teaching and Learning in Higher Education, 25(3), 288–296.
  94. van Popta, E., Kral, M., Camp, G., Martens, R. L., & Simons, P. R.-J. (2017). Exploring the value of peer feedback in online learning for the provider. Educational Research Review, 20, 24–34.  https://doi.org/10.1016/j.edurev.2016.10.003.
  95. Voelkel, S., & Bennett, D. (2014). New uses for a familiar technology: Introducing mobile phone polling in large classes. Innovations in Education and Teaching International, 51(1), 46.
  96. Wang, Y. H., & Young, S. C. S. (2015). Effectiveness of feedback for enhancing English pronunciation in an ASR-based CALL system. Journal of Computer Assisted Learning, 31(6), 493–504.
  97. Watkins, D., Dummer, P., Hawthorne, K., Cousins, J., Emmett, C., & Johnson, M. (2014). Healthcare students’ perceptions of electronic feedback through GradeMark®. Journal of Information Technology Education: Research, 13, 27–47.
  98. West, J., & Turner, W. (2016). Enhancing the assessment experience: Improving student perceptions, engagement and understanding using online video feedback. Innovations in Education and Teaching International, 53(4), 400–410.
  99. Wittler, M., Hartman, N., Manthey, D., Hiestand, B., & Askew, K. (2016). Video-augmented feedback for procedural performance. Medical Teacher, 38(6), 607.
  100. Woo, M. M., Chu, S. K. W., & Li, X. (2013). Peer-feedback and revision process in a wiki mediated collaborative writing. Educational Technology Research and Development, 61(2), 279–309.
  101. Wu, W.-C. V., Petit, E., & Chen, C.-H. (2015). EFL writing revision with blind expert and peer review using a CMC open forum. Computer Assisted Language Learning, 28(1), 58–80.
  102. Xianwei, G., Samuel, M., & Asmawi, A. (2016). Qzone weblog for critical peer feedback to improve business English writing: A case of Chinese undergraduates. Turkish Online Journal of Educational Technology – TOJET, 15(3), 131–140.
  103. Yoo, H. (2016). A web-based environment for facilitating reflective self assessment of choral conducting students. Contributions to Music Education, 41, 113–130.
  104. Zheng, B., Lawrence, J., Warschauer, M., & Lin, C.-H. (2014). Middle school students’ writing and feedback in a cloud-based classroom environment. Technology, Knowledge and Learning, 20(2), 201–229.  https://doi.org/10.1007/s10758-014-9239-z.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Phillip Dawson (email author), Centre for Research in Assessment and Digital Learning, Deakin University, Geelong, Australia
  • Michael Henderson, Faculty of Education, Monash University, Melbourne, Australia
  • Tracii Ryan, Faculty of Education, Monash University, Melbourne, Australia
  • Paige Mahoney, Centre for Research in Assessment and Digital Learning, Deakin University, Geelong, Australia
  • David Boud, Centre for Research in Assessment and Digital Learning, Deakin University, Geelong, Australia
  • Michael Phillips, Faculty of Education, Monash University, Melbourne, Australia
  • Elizabeth Molloy, Department of Medical Education, University of Melbourne, Melbourne, Australia

Section editors and affiliations

  • Dirk Ifenthaler, Learning, Design and Technology, University of Mannheim, Mannheim, Germany; Curtin University, Perth, Australia
