Journal of Computing in Higher Education, Volume 29, Issue 1, pp 160–177

Investigating students’ use and adoption of with-video assignments: lessons learnt for video-based open educational resources

  • Ilias O. Pappas
  • Michail N. Giannakos
  • Patrick Mikalef

Abstract

The use of video-based open educational resources is widespread, and includes multiple approaches to implementation. In this paper, the term “with-video assignments” is introduced to portray video learning resources enhanced with assignments. The goal of this study is to examine the factors that influence students’ intention to adopt with-video assignments. Extending the technology acceptance model by incorporating students’ emotions, we applied partial least squares structural equation modeling based on a sample of 73 students who systematically experienced with-video assignments in their studies. In addition, students’ activity was analyzed using aggregated time series visualizations based on video analytics. Learning analytics indicate that students make varying use of with-video assignments, depending on when they access them. Students are more likely to watch a greater proportion of the video when they use with-video assignments during the semester, as opposed to during the exams. Further, the findings highlight the important role of students’ emotions in adopting with-video assignments. In addition, perceived usefulness of with-video assignments increases their positive emotions and intention to adopt this medium, while perceived ease of use increases only their intentions. Together, these constructs explain 68% of the variance in students’ intention to adopt with-video assignments.

Keywords

With-video assignments · Open educational resource · Students’ adoption · Video-based learning

Introduction

In the past few years, there has been a sharp increase in the employment of video for learning. Video-based learning techniques and practices have been assimilated in numerous settings, including “flipped” (or “inverted”) classrooms, Small Private Online Courses (SPOCs), and extended Massive Open Online Courses (xMOOCs). Traditional lectures might not be able to serve the purpose of disseminating information, since such information may be easily retrieved from online video lecture repositories at any time. Video lectures have given rise to flipped classrooms and may even assist SPOCs (Fox 2013). This particular type of blended-learning classroom uses technology (e.g., videos) to take lectures outside the classroom, thereby giving students and teachers more time for active learning inside the classroom (Roehl et al. 2013). By utilizing learning materials as a supplement to classroom teaching, instead of being viewed as a replacement for it, the aim is for these techniques to increase instructor leverage, and student throughput, mastery, and engagement (Fox 2013).

Video resources have emerged as one of the premier forms of learning materials. In our paper we use the term “with-video assignments” to refer to a video enhanced with assignments for the purpose of achieving defined competences and making video resources more attractive. With the widespread adoption of online video lecture communities—such as Khan Academy, Lynda.com, and VideoLectures.net, to name a few—conducting research on how students learn via video lectures, as well as on the value of enhancing videos with other teaching approaches (e.g., assignments), has become critical. Despite the significant existing body of related research on the impact of video lectures (Giannakos 2013; Mikalef et al. 2016), as well as the rise of analytics for video-based learning systems (Giannakos et al. 2016b; Pappas et al. 2016c), the majority of earlier efforts have mainly focused on the sporadic or single use of video lectures in an educational context (Evans 2008) and/or the investigation of only one factor, such as student performance (Kazlauskas and Robinson 2012). During recent years, an increasing number of universities and educational technology providers (e.g., Udacity, edX) have begun to offer with-video assignments; however, this promising open educational resource is yet to be empirically explored. Investigating students’ adoption and use of with-video assignments via learning analytics, as well as students’ attitudes thereof, will allow us to better understand the importance of enhancing video materials with assessment affordances.

To address the abovementioned critical issues, this study provides a first step towards understanding students’ multifaceted attitudes in relation to with-video assignments. In particular, the study investigates students’ adoption and use of with-video assignments. The student sample for this research used with-video assignments to supplement their studies; based on the captured learning analytics, as well as a post-attitudinal survey, we provide information on students’ use and adoption. Therefore, this study aims to address the following research question:

RQ. Do students’ perceptions and attitudes regarding with-video assignments lead them to higher adoption?

The remainder of the paper is organized as follows. Section “Background” outlines related work in this field and the focus of this research, while section “Model development and hypotheses” presents the research model along with its hypotheses. Section “Methodology” describes the research methodology that was applied, and section “Findings” presents the empirical findings. Section “Discussion” discusses the findings, and the paper concludes with section “Conclusions”, which summarizes the work conducted in this study.

Background

Video lecturing has been constantly gaining momentum over the past few years as one of the core open educational resources (McGreal et al. 2012). Modern video repository systems (e.g., Khan Academy, Lynda.com, PBS Teachers, Moma’s Modern Teachers) have harnessed the power of social software tools to enhance the videos they host, leading to the creation of big data sets that increase the importance of learning analytics. The tendency to infuse videos with social software tools (such as wikis, weblogs, Facebook, Twitter, MySpace, and e-portfolios) is based on the affordances these tools provide, which can potentially enhance the value of video lectures.

Video lectures are gaining prominence, with instructors deploying them in various ways, including broadcasting lectures for distance education (Maag 2006), delivering recordings of in-class lectures with face-to-face meetings for review purposes (Brotherton and Abowd 2004), and delivering lecture recordings before class to conserve class time and provide more opportunity for hands-on activities (Day and Foley 2006). However, videos are not restricted to the abovementioned applications; other areas in which they have been used include presenting course topics (Jadin et al. 2009) and providing supplementary learning material for self-study (Dhonau and McAlpine 2002). Adding to the importance of learning analytics, several studies have focused on the potential educational advantages and disadvantages of video lectures (Ljubojevic et al. 2014; Traphagan et al. 2010). Previous studies have also examined different production styles, such as PowerPoint slide presentations with voiceover, full-screen videos of an instructor drawing (Khan-style), video captured from a live classroom lecture, an instructor recorded in a studio with no audience and close-up shots of the instructor’s head, and so on (Guo et al. 2014).

In general, previous studies have found that students make particular use of video lectures when these lectures are related to graded assignments and exams (Giannakos et al. 2016a). Students also enjoy having freedom with respect to when and where they learn, selecting the content they want to learn, and managing the pace of their learning (Heilesen 2010). For students using video lectures, several aspects have been found to improve, such as perceived sense of independence (Jarvis and Dickie 2009), increased self-reflection (Leijen et al. 2009), higher efficacy in test preparation (McCombs and Liu 2007), and more frequent review of material (O’brien and Hegelheimer 2007). The control over the medium that well-designed video lectures afford learners has also been noted to affect their perceptions of convenience and their supplemental practice (Hannafin 1984). Students use video lectures in a range of ways (Ullrich et al. 2013) and for a number of different reasons (Donkor 2011; Traphagan et al. 2010). However, Van Zanten et al. (2012) argued that the most common reasons for use are revision and review during examination periods.

During recent years, several research studies have been conducted on interactive and novel features that have now become standard in state-of-the-art systems. These include slide-video separation, annotation, social categorization and navigation, advanced search, and questions on interactive videos (Kim et al. 2014; Kleftodimos and Evangelidis 2016; Wachtler and Ebner 2015; Wachtler et al. 2016). However, using video lectures is fundamentally different from working with conventional means of learning, such as textbooks or even digital textbooks. Video lectures are convenient in terms of setting the pace of learning through their extra navigation and multimedia affordances, thereby enhancing the learning experience (Giannakos and Vlamos 2013). Although video lectures lack the typography of textbooks, which lets learners and instructors emphasize key passages, they provide extra information conveyed via the video’s pace and via social cues such as voice tone, expression of emotions, visual cues, and so on.

Research on video technologies for learning has traditionally explained how students’ extrinsic motivators, utilitarian attributes, and cognitive perceptions influence their behavior towards adopting a technology that may enhance their learning process (Giannakos and Vlamos 2013). However, instructors should also take into account students’ intrinsic motivations (e.g., emotions), which may help to increase students’ engagement and commitment (Khalil et al. 2016). The literature on technology adoption has suggested that emotions should be considered as an input in the formation of behavioral intentions (De Guinea and Markus 2009). Research on emotions has flourished in various fields (e.g., Barclay and Kiefer 2014; De Guinea and Markus 2009; Hibbeln et al. 2016; Pappas et al. 2014; Scherer 2005). In addition, Rienties and Rivers (2014) presented a literature review on the role of emotions in learning environments, though further research is needed in the area. Especially in the case of video-learning analytics, the effect of emotions might be more intense due to the nonverbal signals conveyed. Kay and Loverock (2008) identified an emotion scale that includes the basic computer-related emotions, and indicated the great importance of positive emotions. Positive emotions in our context can be interpreted as the extent to which students feel that it is enjoyable and exciting to use with-video assignments. Emotions may arise in users unconsciously and their effects may be multifold: they may positively influence users’ behavioral intentions, undermine users’ intentions to adopt a technology, or override users’ intentions to stop using a technology (De Guinea and Markus 2009). Therefore, this study investigates the role of emotions in video-based learning materials.

As mentioned above, when video-based learning resources are available, students typically use them. For instance, Harley et al. (2003) found that almost all students (95–97%) in their study viewed the available video lectures at least once. This might be one of the reasons for the increased research interest in video lectures. As presented above, contemporary research on video technologies for learning has focused on various aspects, such as video production, pedagogies, and interactive and innovative affordances, to name a few. However, previous studies in the area have mainly ignored the role of assessment affordances in video technologies for learning, in terms of how these affordances may influence students’ use and adoption, and how learning analytics from videos may explain students’ behavior.

Model development and hypotheses

The theoretical grounding for this study is derived from the technology acceptance model (TAM), based on which users’ behavior is influenced by their perceived ease of use and perceived usefulness regarding a technology (Davis 1989). The majority of studies on technology adoption have focused on students’ rational evaluations. Nonetheless, the role of non-rational evaluations, such as emotions or affective perceptions, is evident in the formation of users’ behavioral intentions (e.g., Barclay and Kiefer 2014; Kay and Loverock 2008; Pappas et al. 2014; Pappas et al. 2016b), and is very important in learning analytics environments (Rienties and Rivers 2014). Users’ traits affect both cognitive and affective perceptions, which in turn may influence their intention to adopt a technology. These two types of perceptions may affect each other, either at different stages or at the same time (De Guinea and Markus 2009). For example, students’ perceived enjoyment has been integrated into TAM in order to examine students’ acceptance of an Internet-based learning medium (Lee et al. 2009). Specifically, both perceived usefulness and perceived enjoyment influence both students’ attitudes and behavioral intentions, while no effect has been found for perceived ease of use on attitude.

To this end, our study integrates students’ emotions with perceived ease of use and perceived usefulness from TAM, and proposes a research model for explaining students’ intention to adopt with-video assignments. Figure 1 presents the proposed research model.
Fig. 1

Theoretical model of with-video assignment adoption

Perceived ease of use

In our study, perceived ease of use refers to the degree to which students believe that using with-video assignments is easy and free of effort. Previous studies have identified the significant effect of perceived ease of use on perceived usefulness and enjoyment in the context of e-learning (Agudo-Peregrina et al. 2014; Lee et al. 2005). When a task is considered easy by students, it requires less cognitive effort, thus allowing them to concentrate on other learning issues (Saadé and Bahli 2005). Further, when students do not have to spend a significant amount of time and effort on using with-video assignments, they are quite likely to adopt this medium. Perceived ease of use has been found to directly increase attitude towards e-learning, but only indirectly impact intention to use e-learning (Lee 2010). In addition, the perception of not encountering any difficulties when interacting with with-video assignments is likely to make students experience pleasure from, and feel intrigued by, this medium. Hence, we formulate the following hypotheses:

H1a

Students’ perceived ease of use of with-video assignments will have a positive effect on their perceived usefulness of this medium.

H1b

Students’ perceived ease of use of with-video assignments will have a positive effect on their emotions when using this medium.

H1c

Students’ perceived ease of use of with-video assignments will have a positive effect on their intention to adopt this medium.

Perceived usefulness

Perceived usefulness in our context refers to the degree to which students believe that using with-video assignments will enhance their performance, and is a critical factor influencing the students’ attitudes and behavioral intentions (Lee 2010; Lee et al. 2005). Nonetheless, it has been reported as one of the main barriers in adopting learning technologies in higher education (Buchanan et al. 2013), indicating that there is a need to further examine perceived usefulness in the context of video-based learning. Using with-video assignments as a learning method may offer students important benefits. They are able to access the learning materials any time, regardless of where they are, and study at their own pace. Further, students may benefit from the self-studying, reflection, and interactive affordances of video technologies in learning (e.g., integrated assessments, intuitive interfaces/players resulting in good control over the content, etc.). Such benefits are expected to increase users’ intention to use with-video assignments, as well as evoke positive emotions such as enjoyment and excitement. When students are able to estimate the positive consequences of using with-video assignments, it is likely that they will experience pleasure from, and feel intrigued by them. Thus, we propose the following hypotheses:

H2a

Students’ perceived usefulness of with-video assignments will have a positive effect on their emotions when using this medium.

H2b

Students’ perceived usefulness of with-video assignments will have a positive effect on their intention to adopt this medium.

Emotions

Emotions are an important dimension of technology acceptance, and may influence users’ behavioral intentions (Beaudry and Pinsonneault 2010). Here, emotions are defined as the extent to which students feel that using with-video assignments is enjoyable and exciting. Different results have been found regarding the relation of emotions with behavioral intentions (Rienties and Rivers 2014), and previous findings have identified positive emotions to be more important than negative ones when examining user experience in online environments (Pappas et al. 2014; Pappas et al. 2016b). For example, enjoyment has been found to have a direct effect on attitude towards e-learning adoption, but an indirect effect on intention to use e-learning (Lee 2010), while a direct effect of playfulness on intention to continue using e-learning has been also identified (Roca and Gagné 2008). However, it is expected that when students experience positive emotions when using with-video assignments, their intention to use them in the future will increase. Consequently, we pose the following hypothesis:

H3

Students’ emotions when using with-video assignments will have a positive effect on their intention to adopt this medium.

Methodology

Context

The present study follows an established framework for using videos to support students’ learning (Giannakos et al. 2016a). The study was conducted in an introductory programming course, and the focus of the course was on the World Wide Web as a platform for interactive applications, content publishing, and social services. By the end of the course, students were expected to be able to design and develop Web pages and Web applications using markup (e.g., HTML), design (e.g., CSS), and client-side (e.g., JavaScript) programming languages. The students had to deliver specific assignments, work on a self-selected group project, and take a written examination. The course materials, digital communication, assignments, and project work were delivered through a Learning Management System called “its-learning”. Following the video-assisted framework proposed by Giannakos et al. (2016a), we implemented with-video assignments to scaffold students’ self-regulated learning. This is typical of many active learning approaches (e.g., the flipped classroom), in which students engage with the learning materials in order to obtain the initial fundamental knowledge. This basic knowledge was made available through with-video assignments throughout the semester.

Sampling

In order to perform this study, an introductory programming course with 510 students was chosen. The course was offered by a Norwegian university and lasted for 12 weeks. During this time the students were given 10 videos in total—i.e., a video almost every week, except for the introductory lecture and the last lecture before the exams. Each video was accompanied by an assignment comprising seven questions that could be answered based on the content of the video. The learning material included in the videos supplemented the main lectures; its use was not mandatory for the students, nor was any reward offered in return. However, the students were told that material from the with-video assignments was good practice for the final exams. The entire procedure was performed twice, with students from two different class years.

During the third week of the course, a survey was given to the students along with the video learning material. By the third week the students had enough experience with the course and the with-video assignments to respond to the respective questions; however, they could not be considered highly experienced at this time, thus reducing the bias of the sampling process. The participants were briefly informed about the purpose of the study and that their participation was voluntary. All responses were anonymous and the survey was presented in English. A total of 73 students (14.31%) volunteered to participate in the survey; the sample comprised 14 females and 59 males, with a mean age of 22.38 years [standard deviation (SD) = 2.50].

Measures

In order to understand students’ attitudes and use of the with-video assignment, we captured both learning analytics and students’ attitudes. The students had to follow the video in order to answer the assignment questions (Fig. 2). The questions were such that it was highly unlikely the students would have known the answers beforehand. In addition, watching the video was the most efficient way to find the answers, since a different method would have been more time consuming. The students had to watch at least 40% of the video (approximately) in order to find the answers. The video had 73 views—exactly the number of our sample—which indicates that all students watched the video to answer the assignments. The videos were available to the students for the whole semester.
Fig. 2

An example of a with-video assignment

After answering the assignment that measured their performance, students were presented with a post-attitudinal survey, which included constructs regarding their perceptions on the ease of use and usefulness when using the video assignments. Further, the students were asked about how they felt when using the video assignments, followed by a question regarding their intention to adopt this medium in the future. Table 1 lists the operational definitions of the constructs in the theoretical model, as well as the studies from which the measures were adopted. In all cases, items were rated using a seven-point Likert scale anchored from 1 (“not at all”) to 7 (“very much”). All questionnaire items used, along with their descriptive statistics and loadings, are presented in the “Appendix”.
Table 1

Construct definitions

Construct | Definition | Source
Perceived ease of use | The extent to which students believed that using video assignments was easy for them | Ngai et al. (2007)
Perceived usefulness | The extent to which students perceived that using video assignments was useful and increased their performance | Ngai et al. (2007)
Emotions | The degree to which students believed that using video assignments was enjoyable, exciting, and made them feel good | Liu and Forsythe (2011)
Intention to adopt | The degree of students’ intention to adopt video assignments in the future | Lee et al. (2009)

Data analysis

The data collected from the online survey were analyzed using structural equation modeling (SEM) via SmartPLS 3.0 (Ringle et al. 2015). SEM was chosen because it can combine various statistical procedures (e.g., multiple regression, factor analysis) and simultaneously examine a system of regression equations, as opposed to traditional regression analysis. The one-way arrows in the model represent the hypothesized effects between the variables, and single-headed arrows correspond to regression coefficients (Hox and Bechger 1998). Partial least squares (PLS) is a component-based approach that maximizes the explained variance of the examined constructs in order to increase the predictive value of the proposed model. Thus, since our goal is to maximize the predictive value of the model and the sample size is relatively small (Atif et al. 2015; Gefen et al. 2000), PLS is appropriate for testing our hypotheses.
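To make the idea of a system of regression equations concrete, the hypothesized path structure (perceived ease of use → perceived usefulness, perceived usefulness → emotions, and all three → intention) can be sketched as chained least-squares regressions. This is a simplified stand-in for PLS, which additionally estimates latent construct scores from their indicators; all scores and coefficients below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 73  # sample size matching the study

# Synthetic standardized construct scores (illustration only, not real data)
peou = rng.normal(size=n)                                  # perceived ease of use
pu = 0.67 * peou + rng.normal(scale=0.7, size=n)           # H1a: PEOU -> PU
emo = 0.78 * pu + rng.normal(scale=0.6, size=n)            # H2a: PU -> emotions
intent = 0.3 * peou + 0.3 * pu + 0.3 * emo + rng.normal(scale=0.5, size=n)

def path_coefficients(y, predictors):
    """Least-squares path estimates (a simplified stand-in for PLS)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

b_pu = path_coefficients(pu, [peou])              # PEOU -> PU
b_emo = path_coefficients(emo, [peou, pu])        # PEOU, PU -> emotions
b_int = path_coefficients(intent, [peou, pu, emo])
print("PEOU -> PU path:", round(b_pu[0], 2))
```

Each hypothesis then corresponds to the sign and significance of one estimated coefficient; actual PLS additionally bootstraps these estimates to obtain significance levels.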

The video analytics were analyzed using aggregated time series visualizations in order to identify the students’ activity throughout the semester. Information including the average percentage of the video viewed and watching time were obtained.
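Such aggregation can be sketched in a few lines of pandas. The per-view log format, column names, and numbers below are hypothetical, not the actual export of the video platform used in the study:

```python
import pandas as pd

VIDEO_LENGTH_S = 600  # assumed 10-minute video

# Hypothetical per-view log (illustration only)
views = pd.DataFrame({
    "date": ["2015-09-14", "2015-09-14", "2015-09-15", "2015-12-01"],
    "seconds_watched": [480, 510, 200, 240],
})
views["pct_viewed"] = 100 * views["seconds_watched"] / VIDEO_LENGTH_S

# Aggregated time series: total watch time and average percentage viewed per day
daily = views.groupby("date").agg(
    watch_time_s=("seconds_watched", "sum"),
    avg_pct_viewed=("pct_viewed", "mean"),
    n_views=("pct_viewed", "size"),
)
print(daily)
```

Plotting `daily` over the semester gives exactly the kind of time series visualization referred to above.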

Reliability and validity of the measures

The constructs used in this study were evaluated for reliability and validity. Reliability testing based on Cronbach’s alpha and composite reliability (CR) showed acceptable internal consistency, as all constructs exceeded the cut-off threshold of 0.70. Establishing validity requires that the average variance extracted (AVE) exceed 0.50, that the correlations between the variables in the confirmatory models not exceed 0.80 (higher correlations suggest poor discrimination), and that the square root of each factor’s AVE be larger than its correlations with the other factors (Fornell and Larcker 1981). The AVEs for all constructs ranged between 0.629 and 0.896, all correlations were lower than 0.80, and the square-root AVEs for all constructs were larger than their correlations. Table 2 displays the findings.
Table 2

Descriptive statistics and correlations of latent variables

Construct | Mean | SD | CR | AVE | 1 | 2 | 3 | 4
1. Perceived ease of use | 5.84 | 0.91 | 0.835 | 0.629 | 0.793 | | |
2. Perceived usefulness | 5.59 | 1.12 | 0.963 | 0.896 | 0.669 | 0.947 | |
3. Emotions | 5.29 | 1.29 | 0.919 | 0.851 | 0.469 | 0.781 | 0.922 |
4. Intention to adopt | 5.91 | 1.06 | 0.967 | 0.896 | 0.63 | 0.75 | 0.661 | 0.939

Diagonal elements (in bold) are the square root of the average variance extracted (AVE). Off-diagonal elements are the correlations among constructs (correlations of 0.1 or higher are significant, p < 0.01). For discriminant validity, diagonal elements should be larger than the off-diagonal elements
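The CR and AVE figures reported above follow standard formulas over the standardized item loadings. A minimal sketch, using hypothetical loadings rather than the study's actual items:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances],
    where the error variance of a standardized item is 1 - loading^2."""
    lam = np.asarray(loadings)
    s = lam.sum() ** 2
    return s / (s + (1 - lam**2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam**2).mean()

# Hypothetical standardized loadings for a three-item construct
lam = [0.90, 0.92, 0.88]
cr, ave = composite_reliability(lam), average_variance_extracted(lam)
print(round(cr, 3), round(ave, 3))  # both clear the 0.70 and 0.50 thresholds

# Fornell-Larcker check: sqrt(AVE) must exceed the construct's correlations
assert np.sqrt(ave) > 0.80
```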

Next, we tested for multicollinearity (O’brien 2007) and for potential common method bias using Harman’s single-factor test (Podsakoff et al. 2003). Since the variance inflation factor for each variable was below 4, multicollinearity was not an issue in this study. Finally, the first factor did not account for the majority of the variance, nor did a single factor emerge from the factor analysis, indicating an absence of common method bias.
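The variance inflation factor can be computed directly from its definition, VIF_j = 1 / (1 − R²_j), where R²_j comes from regressing variable j on the remaining variables. The construct scores below are synthetic, for illustration only:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X."""
    X = np.asarray(X, dtype=float)
    factors = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # intercept + other columns
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        factors.append(1 / (1 - r2))
    return factors

# Synthetic, moderately correlated construct scores (illustration only)
rng = np.random.default_rng(1)
x1 = rng.normal(size=73)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=73)
print([round(v, 2) for v in vif(np.column_stack([x1, x2]))])  # both well below 4
```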

Findings

Results from video analytics

Figure 3 presents the use of the video related to the quiz given to the students in the third week. We chose this video as an example of how students used the with-video assignment in order to answer the quiz. The visualization of the video’s use throughout the semester (Fig. 3) indicates that the students used the with-video assignments mainly at two distinct points. The first was in the initial days following the video’s release and the assignment period, since the video was given in the third week of the course; in this case, the students likely needed information or knowledge from the with-video assignment. The second was right before the exams, which is expected, since students study more actively during that period. This pattern was observed during both times the course was offered (Fig. 3).
Fig. 3

Students’ use of a with-video assignment throughout the semester, for both the first and the second years. This example provides a good idea of how students used the with-video assignments throughout the semester

Further, comparing the watch time of the video (i.e., how many minutes of the video were watched in total) with the average percentage viewed (i.e., how much of the whole video was watched) in Fig. 3 reveals different usage trends throughout the semester. Specifically, there is a notable difference between the two measures. This difference suggests that students who chose to view the video close to the date of the relevant lecture watched over 80% of it, while those who chose to view it right before the exams watched up to 40%. The majority of students chose to watch the video right before the exams, and the 40% watched was the minimum needed to answer the assignment. On the other hand, a smaller number of students associated the video with the lecture and watched a larger part of it, even if the additional material was not necessary to answer the quiz. The average percentage viewed was in some cases over 100% (i.e., more than the whole duration of the video), since it represents how much of the video someone watched; when a viewer paused and rewound the video to watch a certain part again (e.g., when the answer to a question was given), this increased the total viewing time. This behavior was observed both times the course was run (Fig. 3).
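The over-100% values follow directly from how the percentage viewed is computed: rewatched segments are counted again. A minimal sketch with hypothetical numbers:

```python
VIDEO_LENGTH_S = 600  # assumed 10-minute video

# (start, end) second marks actually played: one full pass, then a rewatch of 100-250 s
segments = [(0, 600), (100, 250)]

total_watched_s = sum(end - start for start, end in segments)
pct_viewed = 100 * total_watched_s / VIDEO_LENGTH_S
print(f"{pct_viewed:.0f}%")  # the 150 s rewatch pushes the figure past 100%
```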

Results from structural model

With respect to the students’ responses to the attitudinal survey, the estimated path coefficients of the structural equation model were examined in order to evaluate the proposed hypotheses. Figure 4 presents the results from the analysis of the research model. In detail, perceived ease of use had a significant positive effect on both perceived usefulness and intention to adopt, thus supporting H1a and H1c. However, perceived ease of use had no significant effect on students’ emotions, leading to the rejection of H1b. Further, perceived usefulness had a significant effect on both students’ emotions and their intention to adopt, supporting H2a and H2b. Next, students’ emotions positively influenced their intention to adopt, supporting H3. Finally, we controlled for students’ age and gender. As can be seen from Fig. 4, age had a positive effect on students’ intention to adopt, while gender had no significant effect. Figure 4 also presents the squared multiple correlations (R2), which indicate the extent to which a variable may be predicted by its antecedents. How large an R2 must be to indicate a strong effect depends on the research discipline; a value larger than 0.26 indicates a high effect (Cohen 1988). Here, the R2 was 0.45 for perceived usefulness, 0.62 for emotions, and 0.68 for intention to adopt, indicating that 45% of the variance in perceived usefulness is explained by perceived ease of use, 62% of the variance in emotions is explained by perceived usefulness, and 68% of the variance in intention to adopt with-video assignments is explained by perceived ease of use, perceived usefulness, emotions, and age.
Fig. 4

SEM analysis of the research model

Discussion

The present study is one of the first to examine the adoption and use of with-video assignments; this is done by focusing on emotions, along with the two basic constructs of TAM (i.e., perceived ease of use and perceived usefulness). The contribution of the study is twofold, since it first examines students’ use of with-video assignments through learning analytics (e.g., use throughout the semester, watch time behavior), and second includes a post-survey to investigate students’ adoption of with-video assignments.

In particular, findings from the video learning analytics suggest that students use the with-video assignments differently depending on when they access them. Furthermore, the post-survey identifies the importance of positive emotions in increasing students’ intention to adopt with-video assignments. Our research finds that positive emotions are affected only by perceived usefulness and not by perceived ease of use; this means that students need to understand the purpose of using with-video assignments in order to feel positively towards them, while ease of use alone is not enough to evoke such feelings. Further, the study verifies the positive effect of both perceived ease of use and perceived usefulness on students’ intention to adopt with-video assignments. In view of this, the study offers evidence that the perceived usefulness of with-video assignments may induce positive emotions. In addition, consistent with TAM, the study verifies the critical role of perceived ease of use and usefulness in the adoption of with-video assignments.

The present study contributes to the literature by proposing and testing a research model for the adoption of with-video assignments. In detail, the research model examined in this paper accounts for 68% of the explained variance in students’ adoption of with-video assignments. This suggests that the chosen factors are strong predictors, and that the research model may be used as a conceptual framework by researchers seeking to understand students’ behavior in the emerging area of video-based open educational resources.

At a practical level, the results of the present study are of potential value to various stakeholders. In particular, for universities, schools, and educational technology companies that develop video learning materials and couple them with assignments, it is imperative that their clients, users, and students actually adopt their productions. Therefore, knowing which personal characteristics influence the adoption of video materials may yield savings in both money and time. For instance, a university that wants to introduce a video system to enhance or complement student learning will benefit from understanding the role of emotions, students’ age, and the ease with which the system can be used. Following, and even extending, our initial model will enable universities and educational-technology providers to increase the rate at which students adopt video materials.

Although the outcomes of this study may provide several theoretical and practical implications, they must be viewed in the light of the study’s limitations. First, the study examines with-video assignments in a particular setting and context. Other tools and pedagogies using with-video assignments may result in different adoption and usage patterns, although we believe that our case study follows commonly accepted procedures and offers a first step towards understanding the adoption and use of with-video assignments. Second, the demographics of our respondents are not representative of the general population, and further examination is required of how different demographics and levels of technology experience might influence our model. Third, the content of the case study (web development) might also have affected our results. In addition, we include only positive emotions here, since they have been found to be more important than negative ones. However, students may experience both types of emotions at the same time (Pappas et al. 2016b); thus, future studies should include negative emotions as well. Finally, it should be noted that the sample of this study was relatively small for applying SEM. However, we used PLS-SEM, which can provide reliable results even for small samples (Atif et al. 2015; Gefen et al. 2000). Future studies may also employ different methodologies that examine asymmetric relations between variables and are likewise appropriate for small samples, such as fuzzy-set qualitative comparative analysis (fsQCA) (Pappas et al. 2016a, c).

Despite these limitations, the present study presents a novel view on the issue of with-video assignment adoption. The proposed model, although used here to examine with-video assignment adoption, might be applicable to other video-based open educational resources. Of particular importance are the outcomes regarding MOOCs, since it is critical to define which beliefs and attitudes prompt individuals to engage with video-based materials (Khalil et al. 2016). We hope that the outcomes of this study serve to open a discussion on rethinking adoption models for open educational resources and spark a wave of empirical investigations in this direction.

Conclusions

This study offers evidence on the use and adoption of with-video assignments for learning, and provides empirical support regarding how adoption of this medium may be increased. To this end, we extend TAM by including students’ emotions, and test a research model that explains 68% of the variance in students’ intention to adopt with-video assignments in their learning. Students’ age, perceived ease of use, perceived usefulness, and emotions are significant determinants of their behavior towards this medium. Our research thus extends previous studies, which have focused primarily on cognitive perceptions, by including emotions as prime determinants of students’ behavior. Finally, this study identifies the different behavior of students towards with-video assignments throughout the semester (i.e., during the lecture period versus the exam period) by analyzing learning analytics with aggregated time-series visualizations. Future research should therefore account for students’ emotions towards a newly adopted medium, as well as their varying behavior over the course of an academic year.

Acknowledgements

This work was carried out during the tenure of an ERCIM “Alain Bensoussan” Fellowship Programme. This study was funded by The Research Council of Norway, project FUTURE LEARNING (Grant Number 255129/H20). This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No 704110.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

References

  1. Agudo-Peregrina, Á. F., Hernández-García, Á., & Pascual-Miguel, F. J. (2014). Behavioral intention, use behavior and the acceptance of electronic learning systems: Differences between higher education and lifelong learning. Computers in Human Behavior, 34, 301–314.
  2. Atif, A., Richards, D., Busch, P., & Bilgin, A. (2015). Assuring graduate competency: A technology acceptance model for course guide tools. Journal of Computing in Higher Education, 27(2), 94–113.
  3. Barclay, L. J., & Kiefer, T. (2014). Approach or avoid? Exploring overall justice and the differential effects of positive and negative emotions. Journal of Management, 40(7), 1857–1898.
  4. Beaudry, A., & Pinsonneault, A. (2010). The other side of acceptance: Studying the direct and indirect effects of emotions on information technology use. MIS Quarterly, 34(4), 689–710.
  5. Brotherton, J. A., & Abowd, G. D. (2004). Lessons learned from eClass: Assessing automated capture and access in the classroom. ACM Transactions on Computer-Human Interaction (TOCHI), 11(2), 121–155.
  6. Buchanan, T., Sainter, P., & Saunders, G. (2013). Factors affecting faculty use of learning technologies: Implications for models of technology adoption. Journal of Computing in Higher Education, 25(1), 1–11.
  7. Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale: Lawrence Erlbaum Associates.
  8. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
  9. Day, J. A., & Foley, J. D. (2006). Evaluating a web lecture intervention in a human–computer interaction course. IEEE Transactions on Education, 49(4), 420–431.
  10. De Guinea, A. O., & Markus, M. L. (2009). Why break the habit of a lifetime? Rethinking the roles of intention, habit, and emotion in continuing information technology use. MIS Quarterly, 33(3), 433–444.
  11. Dhonau, S., & McAlpine, D. (2002). “Streaming” best practices: Using digital video-teaching segments in the FL/ESL methods course. Foreign Language Annals, 35(6), 632–636.
  12. Donkor, F. (2011). Assessment of learner acceptance and satisfaction with video-based instructional materials for teaching practical skills at a distance. The International Review of Research in Open and Distributed Learning, 12(5), 74–92.
  13. Evans, C. (2008). The effectiveness of m-learning in the form of podcast revision lectures in higher education. Computers and Education, 50(2), 491–498.
  14. Fornell, C., & Larcker, D. F. (1981). Structural equation models with unobservable variables and measurement error: Algebra and statistics. Journal of Marketing Research, 18(3), 382–388.
  15. Fox, A. (2013). From MOOCs to SPOCs. Communications of the ACM, 56(12), 38–40.
  16. Gefen, D., Straub, D., & Boudreau, M.-C. (2000). Structural equation modeling and regression: Guidelines for research practice. Communications of the Association for Information Systems, 4(1), 7.
  17. Giannakos, M. N. (2013). Exploring the video-based learning research: A review of the literature. British Journal of Educational Technology, 44(6), E191–E195.
  18. Giannakos, M. N., Krogstie, J., & Aalberg, T. (2016a). Video-based learning ecosystem to support active learning: Application to an introductory computer science course. Smart Learning Environments, 3, 11.
  19. Giannakos, M. N., Sampson, D. G., Kidziński, Ł., & Pardo, A. (2016b). Enhancing video-based learning experience through smart environments and analytics. Paper presented at the workshop on Smart Environments and Analytics in Video-Based Learning (SE@VBL).
  20. Giannakos, M. N., & Vlamos, P. (2013). Educational webcasts’ acceptance: Empirical examination and the role of experience. British Journal of Educational Technology, 44(1), 125–143.
  21. Guo, P. J., Kim, J., & Rubin, R. (2014). How video production affects student engagement: An empirical study of MOOC videos. Paper presented at the proceedings of the first ACM conference on Learning@Scale.
  22. Hannafin, M. J. (1984). Guidelines for using locus of instructional control in the design of computer-assisted instruction. Journal of Instructional Development, 7(3), 6–10.
  23. Harley, D., Henke, J., Lawrence, S., McMartin, F., Maher, M., Gawlik, M., et al. (2003). Costs, culture, and complexity: An analysis of technology enhancements in a large lecture course at UC Berkeley. Center for Studies in Higher Education. http://escholarship.org/uc/item/68d9t1rm. Accessed 14 Jan 2017.
  24. Heilesen, S. B. (2010). What is the academic efficacy of podcasting? Computers and Education, 55(3), 1063–1068.
  25. Hibbeln, M. T., Jenkins, J. L., Schneider, C., Valacich, J., & Weinmann, M. (2016). Inferring negative emotion from mouse cursor movements. MIS Quarterly. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2708108. Accessed 14 Jan 2017.
  26. Hox, J. J., & Bechger, T. M. (1998). An introduction to structural equation modeling. Family Science Review, 11, 354–373.
  27. Jadin, T., Gruber, A., & Batinic, B. (2009). Learning with E-lectures: The meaning of learning strategies. Educational Technology and Society, 12(3), 282–288.
  28. Jarvis, C., & Dickie, J. (2009). Acknowledging the ‘forgotten’ and the ‘unknown’: The role of video podcasts for supporting field-based learning. Planet, 22(1), 61–63.
  29. Kay, R. H., & Loverock, S. (2008). Assessing emotions related to learning new software: The computer emotion scale. Computers in Human Behavior, 24(4), 1605–1623.
  30. Kazlauskas, A., & Robinson, K. (2012). Podcasts are not for everyone. British Journal of Educational Technology, 43(2), 321–330.
  31. Khalil, M., Kastl, C., & Ebner, M. (2016). Portraying MOOCs learners: A clustering experience using learning analytics. Research Track, 265.
  32. Kim, J., Guo, P. J., Cai, C. J., Li, S.-W. D., Gajos, K. Z., & Miller, R. C. (2014). Data-driven interaction techniques for improving navigation of educational videos. Paper presented at the proceedings of the 27th annual ACM symposium on user interface software and technology.
  33. Kleftodimos, A., & Evangelidis, G. (2016). An interactive video-based learning environment supporting learning analytics: Insights obtained from analyzing learner activity data. In Y. Li, M. Chang, M. Kravcik, E. Popescu, R. Huang, Kinshuk, et al. (Eds.), State-of-the-art and future directions of smart learning. Lecture notes in educational technology (pp. 471–481). Singapore: Springer.
  34. Lee, M.-C. (2010). Explaining and predicting users’ continuance intention toward e-learning: An extension of the expectation–confirmation model. Computers and Education, 54(2), 506–516.
  35. Lee, M. K., Cheung, C. M., & Chen, Z. (2005). Acceptance of internet-based learning medium: The role of extrinsic and intrinsic motivation. Information and Management, 42(8), 1095–1104.
  36. Lee, B.-C., Yoon, J.-O., & Lee, I. (2009). Learners’ acceptance of e-learning in South Korea: Theories and results. Computers and Education, 53(4), 1320–1329.
  37. Leijen, Ä., Lam, I., Wildschut, L., Simons, P. R.-J., & Admiraal, W. (2009). Streaming video to enhance students’ reflection in dance education. Computers and Education, 52(1), 169–176.
  38. Liu, C., & Forsythe, S. (2011). Examining drivers of online purchase intensity: Moderating role of adoption duration in sustaining post-adoption online shopping. Journal of Retailing and Consumer Services, 18(1), 101–109.
  39. Ljubojevic, M., Vaskovic, V., Stankovic, S., & Vaskovic, J. (2014). Using supplementary video in multimedia instruction as a teaching tool to increase efficiency of learning and quality of experience. The International Review of Research in Open and Distributed Learning, 15(3), 275–291.
  40. Maag, M. (2006). Podcasting and MP3 players: Emerging education technologies. Computers Informatics Nursing, 24(1), 9–13.
  41. McCombs, S., & Liu, Y. (2007). The efficacy of podcasting technology in instructional delivery. International Journal of Technology in Teaching and Learning, 3(2), 123–134.
  42. McGreal, R., Sampson, D. G., Chen, N.-S., Krishnan, M. S., & Huang, R. (2012). The open educational resources (OER) movement: Free learning for all students. Paper presented at the 2012 IEEE 12th international conference on advanced learning technologies.
  43. Mikalef, P., Pappas, I. O., & Giannakos, M. (2016). An integrative adoption model of video-based learning. The International Journal of Information and Learning Technology, 33(4), 219–235.
  44. Ngai, E. W., Poon, J., & Chan, Y. (2007). Empirical examination of the adoption of WebCT using TAM. Computers and Education, 48(2), 250–267.
  45. O’Brien, R. M. (2007). A caution regarding rules of thumb for variance inflation factors. Quality and Quantity, 41(5), 673–690.
  46. O’Brien, A., & Hegelheimer, V. (2007). Integrating CALL into the classroom: The role of podcasting in an ESL listening strategies course. ReCALL, 19(2), 162–180.
  47. Pappas, I. O., Giannakos, M. N., & Sampson, D. G. (2016a). Making sense of learning analytics with a configurational approach. Paper presented at the proceedings of the workshop on Smart Environments and Analytics in Video-Based Learning (SE@VBL), LAK2016.
  48. Pappas, I. O., Kourouthanassis, P. E., Giannakos, M. N., & Chrissikopoulos, V. (2014). Shiny happy people buying: The role of emotions on personalized e-shopping. Electronic Markets, 24(3), 193–206.
  49. Pappas, I. O., Kourouthanassis, P. E., Giannakos, M. N., & Chrissikopoulos, V. (2016b). Explaining online shopping behavior with fsQCA: The role of cognitive and affective perceptions. Journal of Business Research, 69(2), 794–803.
  50. Pappas, I. O., Mikalef, P., & Giannakos, M. N. (2016c). Video-based learning adoption: A typology of learners. Paper presented at the proceedings of the workshop on Smart Environments and Analytics in Video-Based Learning (SE@VBL), LAK2016.
  51. Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879.
  52. Rienties, B., & Rivers, B. A. (2014). Measuring and understanding learner emotions: Evidence and prospects. Learning Analytics Review, 1, 1–28.
  53. Ringle, C. M., Wende, S., & Becker, J.-M. (2015). SmartPLS 3. Bönningstedt: SmartPLS.
  54. Roca, J. C., & Gagné, M. (2008). Understanding e-learning continuance intention in the workplace: A self-determination theory perspective. Computers in Human Behavior, 24(4), 1585–1604.
  55. Roehl, A., Reddy, S. L., & Shannon, G. J. (2013). The flipped classroom: An opportunity to engage millennial students through active learning. Journal of Family and Consumer Sciences, 105(2), 44.
  56. Saadé, R., & Bahli, B. (2005). The impact of cognitive absorption on perceived usefulness and perceived ease of use in on-line learning: An extension of the technology acceptance model. Information and Management, 42(2), 317–327.
  57. Scherer, K. R. (2005). What are emotions? And how can they be measured? Social Science Information, 44(4), 695–729.
  58. Traphagan, T., Kucsera, J. V., & Kishi, K. (2010). Impact of class lecture webcasting on attendance and learning. Educational Technology Research and Development, 58(1), 19–37.
  59. Ullrich, C., Shen, R., & Xie, W. (2013). Analyzing student viewing patterns in lecture videos. Paper presented at the 2013 IEEE 13th international conference on advanced learning technologies (ICALT).
  60. Van Zanten, R., Somogyi, S., & Curro, G. (2012). Purpose and preference in educational podcasting. British Journal of Educational Technology, 43(1), 130–138.
  61. Wachtler, J., & Ebner, M. (2015). Impacts of interactions in learning-videos: A subjective and objective analysis. Paper presented at the EdMedia: World conference on educational media and technology.
  62. Wachtler, J., Hubmann, M., Zöhrer, H., & Ebner, M. (2016). An analysis of the use and effect of questions in interactive learning-videos. Smart Learning Environments, 3(1), 13.

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  • Ilias O. Pappas 1
  • Michail N. Giannakos 1
  • Patrick Mikalef 1

  1. Department of Computer and Information Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
