COVID-19 forced colleges to move face-to-face courses into remote or online learning formats (Hodges, Moore, Lockee, Trust, & Bond, 2020). While most are hopeful that colleges will return to “normal” once the pandemic ends, nobody is quite sure when that might be or whether it will look the same. Regardless of what happens this next academic year, COVID-19 has already forced colleges, in varying ways, to shift to digital, a shift that will have long-term effects on how instructors and students teach and learn moving forward. Educational technology professionals have an opportunity to help with this transition. In 2015, Borup et al. published an article that can help educators with one aspect of this shift: providing more personal, detailed, and potentially more useful and effective feedback. In the following, I describe the value of their research, how it can be applied, its limitations, and areas of future research.

Value

Borup et al. (2015) begin by highlighting the importance of feedback in education and then provide a thorough review of past literature on video feedback. The literature suggests that while the majority of students desire individual feedback, many students (especially in online or high-enrollment courses) are not provided personalized, detailed, or useful feedback. Borup et al. cite the limitations of text-based feedback, when it is used, and conclude that audio/video feedback could improve feedback, especially in blended and online courses. However, they point out that video feedback is under-researched, especially in terms of instructor perceptions, which led them to conduct a mixed-methods study examining students’ and instructors’ perceptions of video and text-based feedback in a blended learning environment.

Online educators have experimented with video feedback for years (see Lowenthal & Mulder, 2017), but Borup et al. were among the first to conduct a comprehensive investigation of instructor and student perceptions of video feedback across multiple courses, instructors, and students. It is easy to get enamored with video. However, Borup et al. sought to compare perceptions of text and video feedback, recognizing the benefits of both. While they found no significant differences between the two formats in perceptions of feedback quality or delivery, they did find that students and instructors thought text-based feedback was more efficient and provided more specific critiques than video, but that video feedback provided more supportive and conversational communication. In the end, both students and instructors valued the efficiency of text feedback over the affective benefits of video feedback.

Borup et al. highlight that while feedback is important, not all assignments require the same format, amount, or depth of feedback. They also point out that providing video feedback can take more time (for both the instructor and students) and might not always be appreciated by students. However, participants in their study, who were taking blended courses, acknowledged the affective benefits of video feedback, which might be even more pronounced for students taking online courses in fully online programs, especially during an unusually stressful time such as a global pandemic, when students might need and benefit from added affective support.

Application

Based on this study, one could conclude that video feedback is not worth the time or effort. However, this would miss some key points Borup et al. make throughout their article. Students taking online courses often report feeling isolated or alone (Kaufmann & Vallade, 2020), feelings that are likely to increase in the coming months due to COVID-19. However, as Borup et al. found in this study, and as previous research suggests, asynchronous video can increase affective communication, which can help establish and strengthen social presence (see Borup, West, & Graham, 2012; Lowenthal, 2014; Lowenthal & Dunlap, 2018), which in turn, research suggests, can decrease loneliness and increase retention in online courses (see Boston et al., 2009; Liu, Gomez, & Yen, 2009). Therefore, even if providing video feedback takes instructors more time and some students might find it inconvenient to watch, I contend that the affective benefits alone may make it well worth the effort for all parties involved.

Feedback offered in a video format, though, is not automatically effective or useful simply because it is delivered as video. There are things instructors can do to improve their use of video feedback. First, be strategic about when and whether to use video feedback by identifying the assignments for which students might benefit most from video rather than text feedback. These might be assignments that are visual in nature (e.g., a multimedia presentation or website), dense or nonlinear (e.g., a spreadsheet), and/or assignments where formative feedback can be provided earlier in a course (e.g., on a rough draft of a paper) to help students make improvements for a final version of the assignment turned in later in the course. Second, identify the type of video feedback to provide. Learning management systems, like Canvas and Blackboard, enable instructors to give video feedback that records the instructor talking via a webcam. But as Borup et al. acknowledge, screencast video feedback might enable instructors to be more specific by showing students exactly what they are commenting on, which in turn can provide more detailed and richer feedback than text-based feedback. Screencast video feedback, depending on how it is shared with students, can also be saved and referenced by a student even after a course is over, unlike video feedback recorded and stored in an LMS. [Note: Over time, though, I suspect and hope that educational technology companies will create new products that combine the benefits of webcam and screencast video feedback and even add further benefits, such as the ability to comment on or discuss the feedback.] Third, instructors should practice using video feedback to increase their comfort level, decrease technical issues, and ultimately improve their use of it. For instance, it is helpful to first review the assignment, then take notes on what to talk about and focus on, and then do a sample recording to verify that everything is working correctly before recording the final feedback. Finally, instructors should keep their feedback relatively short (e.g., 3–5 minutes) to help offset efficiency issues instructors or students might encounter with video feedback; writing notes before hitting record can help with this, as can keeping a list of recurring issues noticed in previous students’ work to address.

Limitations and constraints

All research has limitations and constraints. Borup et al. do a good job of recognizing most limitations of their research. Readers should also keep in mind that this is just one study; further research is needed to better understand video feedback. The most notable limitations and constraints of this study, from my perspective, stem from its context. The study was conducted in three 1-credit blended teacher education courses taught at a residential university. Instructors and students in fully online courses, and especially fully online programs, might well have different perceptions of video feedback when this form of interaction is one of the only one-on-one instructor-student interactions during a course. Also, older students completing a professional graduate program for possible career advancement might value video feedback (especially video feedback on relevant and authentic career-related assignments) more than traditional college-age students. Further, students might not be as invested in a 1-credit course as in a standard 3-credit course. In addition, the majority of students in this study were female; male students might value the affective benefits of video feedback even less. Different results might also emerge in a different subject area where video feedback can be provided formatively earlier in a course (whether on a rough draft of a paper, a website, or a piece of artwork) on work that is later revised and turned in as a final project. The instructors in this study also appeared, for the most part, to be new to video feedback (some were graduate students and likely inexperienced at teaching as well); instructors with previous experience giving video feedback, who see its value, might respond differently and use it more efficiently and effectively.

Finally, video feedback might not be right for all instructors or all courses. Effective video feedback requires strong communication skills, a comfort level with being recorded, a quiet place to record (which can be challenging when quarantined), technical skills, and the time to do it (which can be challenging for high-enrollment courses). Regardless of the context, the appeal of video feedback, as this study showed, might always be lost on some students who would prefer to quickly read text-based feedback rather than listen to recorded feedback, or who might only care about their final grade.

Future suggestions

Additional research in similar and different settings is needed to support the findings of this study. Further research also needs to investigate not only what instructors and students think about video feedback but also how best to provide video feedback in fully online courses and whether video feedback can improve student outcomes (see Mahoney, Macfarlane, & Ajjawi, 2019). As colleges shift to digital, students in particular must learn to interact, communicate, and provide feedback in multimodal ways to help them in school and later in the workplace (Istenič Starčič & Lebeničnik, 2020); therefore, there are opportunities for instructors to give their students experience providing video peer feedback to other students in their courses. Finally, researchers need to investigate alternative ways to provide video feedback in high-enrollment courses, whether by providing general video feedback to the entire course based on overall findings from grading an assignment, having students work on group projects that result in fewer assignments requiring video feedback, or randomly selecting student work to provide feedback on as general examples for the entire course.