1 Introduction

The world is going through tough times with the Covid-19 pandemic. Inevitably, there have been severe impacts on education systems around the globe. Schools and universities were closed, and millions of children, adolescents and young adults were kept out of their institutions. Nichols (2003) pointed out that the Internet could be seen as (i) another delivery medium, (ii) a medium to add value to the existing educational transaction or (iii) a way to transform the teaching and learning process. The research and discourse surrounding the quality of online learning provisions, student engagement and satisfaction have been ongoing among both proponents and opponents of online learning (Biner et al. 1994; Rienties et al. 2018). With the abrupt shift to and uptake of online learning due to the Covid-19 pandemic, such discourse finds its relevance well beyond classic academic research and debate: it is linked to the future of teaching and learning in technology-enabled learning environments. Arguably, the adoption of technology has disrupted traditional teaching practices, as teachers often find it difficult to adjust and connect their existing pedagogy with technology (Sulisworo 2013). Similarly, if informed policy decisions are not taken, this can affect knowledge transfer processes as well as reduce the efficiency of teaching and learning (Ezugwu et al. 2016).

One of the challenges of online learning relates to students’ learning experiences and achievement. Sampson et al. (2010) stated that students’ satisfaction and outcomes are good indicators for assessing the quality and effectiveness of online programs. It is of concern for institutions to know whether their students, in general, are satisfied with their learning experience (Kember and Ginns 2012). Another essential element for quality online education is learner engagement. Learner engagement refers to the effort the learner makes to promote his or her psychological commitment to stay engaged in the learning process, to acquire knowledge and to build his or her critical thinking (Dixson 2015). While there are different conceptualisations of student engagement (Zepke and Leach 2010), advocates of learning analytics tend to emphasise the analysis of platform access logs, including clicks on learning resources, when it comes to student engagement in online learning (Rienties et al. 2018). The proposition is that being active online through logins, active sessions and clicks reflects actual engagement in an online course and results in better student performance. However, this model mainly works in classic online modules, and there is limited literature measuring students’ engagement in activity-based hybrid learning environments where there is a mix of online and offline activities (Rajabalee et al. 2020).

In this research, the aim was to investigate the relationships between students’ reported engagement, their satisfaction levels and their overall performances in an online module offered to first-year university students of different disciplines (Science, Engineering, Agriculture, Humanities, Management). The learning design followed an activity-based learning approach, with a total of nine learning activities to complete over two semesters. The focus was on skills development and competency-based learning via a learning-by-doing approach. There were 844 students enrolled on the module, and they were supported by a group of seven tutors. The end-of-module feedback, comprising mainly open-ended questions aligned with the Online Student Engagement (OSE) model and the Online Learning Consortium model of student satisfaction, was coded and analysed accordingly. Furthermore, the correlations between student satisfaction, engagement and performance were established.

The findings of this research contribute to existing knowledge through new insights into determining students’ engagement in online courses that follow an activity-based learning design approach. It is observed in this study, in line with other research, that learning dispositions such as reported engagement, perceived satisfaction and student feedback elements could be useful dimensions to add to a learning design ecosystem to improve student learning experiences, with the objective of moving towards a competency- and outcomes-based learning model. Based on the results and findings, the implications for institutional e-learning policy are discussed.

2 Literature review

2.1 Learner satisfaction and experiences in e-learning environments

Learner satisfaction and experiences are crucial elements that contribute to the quality and acceptance of e-learning in higher education institutions (Sampson et al. 2010). Dziuban et al. (2015) reported that the Online Learning Consortium (formerly known as the Sloan Consortium) considered student satisfaction with online learning in higher education an essential element in measuring the quality of online courses. Different factors influence learner satisfaction, such as digital literacy levels, social and professional engagements, the learner support system including appropriate academic guidance, and the course learning design (Allen et al. 2002).

According to Moore (2009), factors such as the use of learning strategies, learning difficulties, peer-tutor support, the ability to apply knowledge and the achievement of learning outcomes are among the elements that impact the overall satisfaction of students in online learning. A learning strategy is a set of tasks through which learners plan and organize their engagement in a way that facilitates knowledge acquisition and understanding (Ismail 2018). Enhancing the learning process with appropriate learning strategies may contribute to better outcomes and performances (Thanh and Viet 2016). Aung and Ye (2016) reported that students’ success and achievement were positively related to student satisfaction.

Students in online courses often experience learning difficulties, which encompass a range of factors such as digital literacy, conceptual understanding, technical issues and ease of access (Gillett-Swan 2017). These difficulties, if not overcome in time, tend to reduce learning effectiveness and motivation, and may also affect overall satisfaction (Ni 2013). Learner support may include instructional or technical support, where tutors and other student peers engage collectively to help students tackle issues they encounter during the course. Such support, especially when students face technical difficulties, is vital to overcoming challenges and impacts overall student satisfaction (Markova et al. 2017). The ability of students to apply knowledge and to achieve the intended learning outcomes also impacts student satisfaction and the quality of the learning experience (Mihanović et al. 2016).

Student perception is the way students view a situation from their personal perspective and experience. When students have a positive perception, they are more likely to feel satisfied with the course (Lee 2010). It is, therefore, crucial to understand how students think about a course in order to determine its implications for their academic experiences. A negative feeling is an emotion that students sometimes express concerning their learning experience. It could take the form of anxiety, uneasiness, demotivation and apprehension, or relate to their readiness to use technology (Yunus et al. 2016). Negative feelings tend to affect students’ online learning experience and their satisfaction (Abdous 2019). Learner autonomy relates to students’ independence in learning. It indicates how students take responsibility and initiative for self-directed learning and organize their schedules. Cho and Heron (2015) argued that learner autonomy in online courses influences student experiences and satisfaction.

2.2 Measuring student satisfaction in online courses

Student satisfaction is an essential indicator of students’ overall academic experiences and achievement (Virtanen et al. 2017). There are different instruments to measure student satisfaction in an online environment. Using survey questionnaires is generally standard practice for measuring learner satisfaction. Over the years, a variety of tools, such as the Course Experience Questionnaire (Ramsden 1991), the National Student Survey (Ashby et al. 2011) and Students’ Evaluations of Educational Quality (Marsh 1982), were developed and used to measure student satisfaction. The Satisfaction of Online Learning (SOL) is an instrument established to measure students’ satisfaction in online mathematics courses (Davis 2017). It consisted of eight specific components, comprising effectiveness and timeliness of feedback, use of discussion boards in the classroom, dialogue between instructors and students, perception of online experiences, instructor characteristics, the feeling of a learning community, and computer-mediated communication. The Research Initiative for Teaching Effectiveness (RITE) developed an instrument that focussed on the dynamics of student satisfaction with online learning (Dziuban et al. 2015). RITE assessed two main components, namely learning engagement and interaction value, and encompassed items such as student satisfaction, success, retention, reactive behaviour patterns, demographic profiles and strategies for success. Zerihun et al. (2012) further argued that most assessments of student satisfaction are based on teacher performance rather than on how student learning occurred. Li et al. (2016) used the Student Experience on a Module (SEaM) questionnaire, in which questions were categorized under three themes, to explore the construct of student satisfaction. The three themes contain inquiries related to (1) the overall module, (2) teaching, learning and assessment and (3) tutor feedback.

2.3 Learner engagement in online courses

One of the critical elements affecting the quality of online education is the need to ensure that learners are effectively and adequately engaged in the educational process (Robinson and Hullinger 2008; Sinclair et al. 2017). Learner engagement refers to the effort the learner makes to promote his or her psychological commitment to stay engaged in the learning process, to acquire knowledge and to build his or her critical thinking (Dixson 2015). It is also associated with the learner’s feeling of personal motivation in the course to interact with the course contents, tutors and peers (Czerkawski and Lyman 2016). There are different models to measure learner engagement in learning contexts. Lauría et al. (2012) argued that the number of submitted assignments, posts in forums and completed online quizzes can quantify learners’ regularity in MOOCs. Studies using descriptive statistics reported that consistency and persistence in learning activities are related to learner engagement and successful performance (Greller et al. 2017). Learner engagement is also about those activities that require online or platform presence (Anderson et al. 2014). Such online activities can take the form of participation in discussion forums, wikis, blogs, collaborative assignments and online quizzes, all of which require a level of involvement from the learner. Lee et al. (2019) reported that indicators of student engagement, such as psychological motivation, peer collaboration, cognitive problem solving, and interaction with tutors and peers, can help to improve student engagement and ultimately assist tutors in effective curriculum design.

2.4 Measuring learner engagement in online courses

Kuh (2003) developed the National Survey of Student Engagement (NSSE) benchmarks to evaluate students’ engagement through their skills, emotion, interaction and performance, applicable mainly to traditional classroom settings. Another model relevant to the classroom environment is the Classroom Survey of Student Engagement (CLASSE) developed by Smallwood (2006). The Student Course Engagement Questionnaire (SCEQ) proposed by Handelsman et al. (2005) uses a psychometric procedure to obtain information from the students’ perspective to quantify students’ engagement in an individual course.

Roblyer and Wiencke (2004) proposed the Rubric for Assessing Interactive Qualities of Distance Courses (RAIQDC), designed as an instructive tool to determine the degree of tutor-learner interactivity in a distance learning environment. Dixson (2010) developed the Online Student Engagement (OSE) scale using the SCEQ model of Handelsman et al. (2005) as the base model. It aimed at measuring students’ engagement through their learning experiences, skills, participation, performance and emotion in an online context. Dixson (2015) validated the OSE using the concept of behavioural engagement, comprising what was earlier described as observational and application learning behaviours, and reported a significant correlation between application learning behaviours and the OSE scale and a non-significant association between observation learning behaviours and the OSE. Kahu (2013) critically examined student engagement models from different perspectives, namely the behavioural, psychological, socio-cultural and holistic perspectives. While the framework proposed is promising for a holistic approach to student engagement in the broader context of schooling, the OSE model as proposed by Dixson (2015) aligns quite well with the conceptual arguments of Kahu (2013) in the context of students’ engagement in online courses.

Gelan et al. (2018) measured online engagement by the number of times students logged in to the VLE to follow a learning session. They also found that students who showed higher regularity in their online interactions and attended more learning sessions tended to be successful, compared to non-successful students. Ma et al. (2015) used learning analytics to track data related to teaching and learning activities in order to build an interaction-activity model demonstrating how the instructor’s role impacts students’ engagement activities. An analysis of student emotions through their participation in forums and their performance in online courses can serve as the basis for modelling student engagement (Kagklis et al. 2015). They further observed that students’ participation in forums was not directly associated with their performances. The reason was that most students preferred to concentrate on their coursework, as they would be given access to their exam upon completing a cumulative number of assignments and obtaining their grades. Therefore, although students tended to slow down their participation, they were still considered engaged in the online course.

Activity-based learning is an approach in which the learner plays an active role in his or her learning through participation, experimentation and exploration of different learning activities. It involves learning-by-doing, learning-by-questioning and learning-by-solving-problems, where learners consolidate their acquired knowledge by applying the skills learnt in a relevant learning situation (Biswas et al. 2018). These activities can take the form of concept mapping, written submissions and brainstorming discussions (Fallon et al. 2013). The study of Fallon et al. (2013) used the NSSE questionnaire to measure and report on students’ engagement with learning materials and activities. They found encouraging results, establishing that students responded positively to the activity-based learning approach and that there was an enhancement in students’ participation and engagement. In line with this, Kugamoorthy (2017) postulated that the activity-based learning approach motivated and increased student participation in learning activities as well as improved self-learning practices and higher cognitive skills. Therefore, student participation in an activity-based learning model encourages students to think critically and develop their practical skills when they learn actively and comprehensively by involving the cognitive, affective and psychomotor domains.

2.5 Student performances, satisfaction and their engagement in online courses

Research has demonstrated that activities that encourage online and social presence, build learner confidence and increase performance are critical factors in engagement (Anderson et al. 2014; Dixson 2015). Furthermore, Strang (2017) found that when students are encouraged to complete online activities such as self-assessment quizzes, this promotes their learning and engagement and hence results in higher grades. Tempelaar et al. (2017) postulated that factors such as cultural differences, learning styles, learning motivations and emotions might impact learner performances. Smith et al. (2012) deduced that students’ pace of learning and engagement with learning materials are indicators of their performance and determinants of their learning experience and satisfaction. Macfadyen and Dawson (2012) found that variables such as discussion forum posts and completed assignments can be used as practical predictors of learner performance, and thus can help in learner retention and in improving learning experiences. Pardo et al. (2017) utilized self-reported and observed data to investigate whether they could predict academic performance and understand why some students tend to perform better. They used a blended learning module; the self-reported data related to students’ motivational, affective and cognitive aspects, while the observed data related to students’ engagement captured from activities and interactions on the learning management system. They deduced that students adopting a positive self-regulated strategy participated more frequently in online events, which could explain why some students perform better than others.

3 The research context

The module selected for this study was a first-year online module offered to students across disciplines. The module used an activity-based learning design consisting of nine learning activities. There were no written exams; the first eight learning activities counted as continuous assessment, and the ninth activity counted as the end-of-module assessment. The module focused on a learning-by-doing approach, through authentic assignments such as developing a website, using an authoring tool, engaging in critical reflection through blogging and YouTube video posts, and participating in general forum discussions, as well as drill-and-practice questions such as online MCQs. The learning design principle that guided the pedagogical approach was the knowledge acquisition, application and construction cycle through sharing and reflective practice (Rajabalee et al. 2020). Although the module was fully online, it is necessary to point out that not all of the learning activities necessitated persistent online presence for completion. For instance, students could download specific instructions from the e-learning platform, carry out the learning activities on their laptops, and then upload the final product for marking. The students further completed an end-of-module feedback activity using an instrument designed by the learning designers. The questions in the feedback activity were mainly open-ended and were in line with the OSE questionnaire and the Sloan instrument for measuring student satisfaction in online courses. The approach was not survey-based but mainly took a more in-depth qualitative approach, as proposed by Kahu (2013). In this study, the research questions are set as follows:

  • To what extent do the performances and engagement of students impact on their satisfaction in the online module?

  • How did students feel concerning the delivery of the module, their learning outcomes and their overall experience?

4 Methodology

The approach was to engage in an exploratory research study. The aim was to retrieve and analyze the data collected and accessible for an online module through the application of descriptive learning analytics. Such data relate to student satisfaction, students’ reported engagement in the online module and their overall performances. This study was based on the actual population of students who enrolled on the module; consequently, there was no sampling. Enrolment was optional, as the module was offered as a ‘General Education’ course to first-year students, and it was open to students in all disciplines. All the students came from the national education system of Mauritius, having completed the Higher School Certificate, and were aged between 19 and 21. The student feedback was an integral component of the module and counted as part of a learning activity. Students who followed this module had initially agreed that information related to their participation and contributions in the course could be used for research purposes in an anonymous manner. All student records were completely anonymized prior to classification and analysis of the data.

4.1 Profile of participants

The students came from different disciplines, as highlighted in Table 1. All participants had the required digital literacy skills, having followed the introductory Information Technology course as well as the national IC3 (Internet and Core Computing Certification) course at secondary level. Seven tutors facilitated the module, with student groups ranging from 100 to 130 per group. The role of the tutors was mainly to facilitate the learning process, mark learning activities and provide feedback to the students. Table 1 below contains information about the participants across disciplines and gender for the module. Table 2 provides information about the 179 students who did not complete the student feedback activity of the module.

Table 1 Student enrolments in the online module per Discipline and Gender
Table 2 Distribution of students who completed the Feedback activity

4.2 Methods

This research used a mixed-methods approach, given the nature of the research questions. The primary method was quantitative data-gathering and analysis through measures of the degree of association between variables. It was also essential to process the qualitative data available through student feedback. The qualitative research studied student satisfaction and perceptions concerning online engagement via the end-of-module feedback questions. The quantitative part mainly focused on applied statistics, such as t-tests and correlation testing, to find the relationships between variables such as learner engagement, performance and level of participation. The quantitative aspects of the analysis were used in conjunction with the findings from the qualitative analysis to better understand the underlying issues and theoretical underpinnings related to the learning environment and the learning processes of the students.

4.3 Student performance model

The Student Performance Model in this research has been initially conceptualized in line with the literature as a function of engagement, satisfaction and continuous assessment marks. The continuous assessment consisted of eight learning activities as follows:

Activity   Activity Type                  Marks weight
1          Online discussion forum        25% of total continuous assessment mark
2          Drill and Practice activity    (Activities 1–4)
3          Self-reflection
4          MCQs
5          Blog post analysis             75% of total continuous assessment mark
6          Concept Mapping                (Activities 5–8)
7          Video Analysis
8          Use of software

The final assessment (Activity 9) was a mini project in which students were expected to apply the knowledge acquired through the continuous learning activities (1–8). Each student’s mark was independently moderated by another tutor, as per the regulations of the University.

4.4 Defining and measuring student engagement

The literature reports several ways to measure students’ engagement in classroom settings as well as in online learning environments. The Online Student Engagement (OSE) questionnaire is one such instrument. However, it is a self-report of students’ perceived engagement administered in survey style using a Likert scale. For the current module, there were two constraints to applying the OSE to determine students’ perceived engagement. The first was that the module was not running in an experimental context; at the time of conception and delivery, it was not predetermined that student engagement would be a variable to be measured. The second was that the module followed the activity-based learning design model, whereas the OSE mainly seeks feedback from students in the classic e-learning model, where content is at the heart of the learning process. For the current module, the researchers therefore had to adopt a different approach to extract reported student engagement data from the end-of-module feedback activity.

4.5 Measuring student satisfaction through the end of module feedback activity

The course designers therefore wanted a different way for students to give constructive feedback, through the elaboration of a set of open-ended guided questions. The students had to report on their experiences in the course from (i) the learning outcomes achievement perspective, (ii) the learner support processes, including tutor and peer support, (iii) their learning strategies and ways of tackling the different learning activities, and (iv) the learning difficulties encountered and how they engaged in resolving and overcoming such challenges, in line with the Sloan Consortium Quality in Online Education Framework (Moore 2009). From such feedback, the tutoring team and the course designers would be able to better understand how the students engaged in the course from a qualitative perspective, and what their satisfaction levels were after completing the module.

Such information was therefore obtained mainly in qualitative form, as students would mostly narrate their learning experiences in the course. There is a series of approaches for qualitative data analysis that process data sets through a systematic review. For this particular research, there were three possibilities for analysing the qualitative data gathered through the feedback questions, namely grounded theory, phenomenology and content analysis. After careful consideration of the research questions and the literature, content analysis was deemed most appropriate, as it is a method of analyzing data obtained from open-ended questions, surveys, interviews and observations (Creswell 2009) and uses a systematic approach to analyzing contents and documents.

For this study, deductive content analysis was used as the research approach to explore the learners’ feedback and experiences and to make meaning of the data. Firstly, concerning the engagement of students, the aim was to extract relevant meaning from the data to form codes related to the Online Student Engagement (OSE) scale as defined in the literature. Secondly, codes were obtained from the responses received concerning students’ satisfaction, as described in the paragraph above. Finally, there was a need to move to quantitative content analysis in order to conduct descriptive statistical analyses to answer the relevant research question. Table 3 below highlights the themes used to group the codes for both perceived students’ engagement and satisfaction.

Table 3 Relevant themes for Online Student Engagement and Student Satisfaction

4.6 Classification of student satisfaction and engagement levels

Based on the questions in the reflective activity that guided learners’ reflection for their feedback, the researchers established a classification for the perceived level of engagement and the perceived satisfaction level of the students. The instrument used is provided as an annex. As regards student engagement, overall engagement is defined as a combination of (i) the learning strategies employed by the student to complete the learning activities; (ii) the involvement of the student in peer and tutor communication; (iii) the achievement of the learning outcomes reflected in their performances; and (iv) their ability to apply the knowledge acquired to demonstrate skills and competencies. These dimensions were used to study the perceptions of learners in the online course and to describe their level of satisfaction, as shown in Tables 4 and 5 below. The researchers adopted single coding as the coding approach; however, where uncertainties or ambiguities occurred, the tutor group validated these elements, including the themes that emerged from the coding process. In this process, therefore, inter-coder reliability could not be calculated. Potter and Levine-Donnerstein (1999, p. 265) argued that a single-coder approach is reliable when the complexity of the task is low, whereas a multiple-coder approach is more reliable for high-complexity tasks. In this research, the coding process was not complicated, as it mainly related to codes and themes established from two well-defined instruments from the literature, the OSE and the Sloan Quality Guidelines, covering student satisfaction and engagement. The single-coder approach was therefore justified in this case.

Table 4 Classification of Level of Satisfaction
Table 5 Classification of Level of Engagement

The classification and explanatory rubrics in Tables 4 and 5 below were established through consensus with the tutor team, taking into consideration the relevant literature on student satisfaction and engagement. To classify the level of each student, the codes extracted from each student entry were used as a guideline. Each theme described in Table 3 above carries a maximum of 2 points and has been further classified as follows (an illustrative scoring sketch is given after the list):

  • A score of 0 is set if the theme is not relevant (i.e. there are no codes) to the feedback of the student.

  • A score of 1 is set if the theme is partly relevant (i.e. not more than half of the codes appear) in the feedback of the student.

  • A score of 2 is set if the theme is fully relevant (i.e. more than half of the codes appear) in the feedback of the student.
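To make the rubric concrete, the following minimal Python sketch illustrates the scoring logic under stated assumptions. The function and variable names are hypothetical; the actual coding in this study was carried out manually by the researchers and validated by the tutor group.

```python
# Minimal sketch of the theme-scoring rubric (assumed, not the authors' actual tooling).
# For each theme, the fraction of that theme's codes found in a student's feedback
# is mapped to a 0/1/2 score; theme scores are then summed per student.

def theme_score(matched_codes: int, total_codes: int) -> int:
    """Score one theme: 0 = not relevant, 1 = partly relevant (<= half), 2 = fully relevant (> half)."""
    if matched_codes == 0:
        return 0
    if matched_codes * 2 <= total_codes:   # not more than half of the codes
        return 1
    return 2                               # more than half of the codes

def cumulative_score(per_theme: dict[str, tuple[int, int]]) -> int:
    """Sum the 0-2 scores over all themes (max 8 for 4 engagement themes, 16 for 8 satisfaction themes)."""
    return sum(theme_score(matched, total) for matched, total in per_theme.values())

# Hypothetical example: four engagement themes for one student.
engagement_themes = {
    "learning strategies":   (3, 4),  # (codes matched in feedback, codes defined for theme)
    "peer/tutor support":    (1, 4),
    "outcomes achievement":  (0, 3),
    "knowledge application": (2, 2),
}
print(cumulative_score(engagement_themes))  # 2 + 1 + 0 + 2 = 5 out of a maximum of 8
```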

4.7 Limitations of the research

In this research, the available data were retrieved and analyzed for a module that was not designed to be offered in an experimental context. Student feedback was therefore a classic process of questions elaborated by the learning design team to gather information on the learning experience of participants. While a self-reporting tool like the Online Student Engagement (OSE) questionnaire would have been helpful, the fact that the course had already taken place meant that it could not be administered beforehand. This deficiency was addressed through the student feedback data collection, which was designed during the courseware development process, was aligned with established models of student satisfaction, and gathered information from the students with respect to their own perceived engagement in the course. Through a qualitative analysis obtained by coding the responses of the students, the issue of student engagement has been adequately addressed from that perspective. Another limitation relates to the number of students who completed the feedback activity. As the exercise was not compulsory, not all students completed the feedback, so the coding and analysis are limited to those who effectively responded. Sampling was not a significant concern here, as the research subjects were not selected through a sampling technique; rather, responses were sought from the whole cohort. Nevertheless, the results for the questions where student feedback is available cannot be generalized as representative of the whole cohort and have to be interpreted with this constraint in mind. Finally, the findings of this research relate to a course designed to suit a diverse set of student profiles, and specific findings cannot be considered applicable to other modules in different contexts following different pedagogical designs.

5 Findings & Results

5.1 Descriptive statistics

Out of the 844 students, 179 did not participate in the feedback process; consequently, there are no related data from which to compute their perceived satisfaction and reported engagement levels. As can be seen from the tables below, the majority of students reported being moderately satisfied (44.7%). On the reported engagement level, 32.2% reported being moderately engaged, compared to 29.4% and 17.2% who reported high and low engagement, respectively. The missing cases were classified as ‘Not Reported’ and were excluded from further analysis.

The coding for perceived satisfaction and reported engagement was done as per the themes in Table 3. For each theme, the student’s feedback was rated on a [0, 1, 2] scale following the rubric in Table 5: a value of 0 relates to a low-score feedback, 1 to an average rating, and 2 to a high-score feedback. The component scores were summed to obtain a cumulative score for each set of themes under Engagement and Satisfaction. Given that Engagement had only four themes, its maximum possible score was eight, while for Satisfaction the eight themes cumulate to a maximum possible total of 16. The skewness values (near zero) and the kurtosis values (−1, −0.9) for both variables indicate that the distributions can reasonably be assumed to be normal (Tables 6 and 7).
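As an illustration, this normality screening can be reproduced with standard tools. The sketch below uses synthetic stand-in score vectors (the study’s real data are not reproduced here); note that scipy’s kurtosis() reports excess kurtosis, which is zero for a normal distribution.

```python
# Sketch of the skewness/kurtosis screening described above (synthetic data only;
# the paper reports skewness near zero and kurtosis around -1 to -0.9).
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
engagement = rng.integers(0, 9, size=665)     # stand-in cumulative engagement scores, 0-8
satisfaction = rng.integers(0, 17, size=665)  # stand-in cumulative satisfaction scores, 0-16

for name, scores in (("engagement", engagement), ("satisfaction", satisfaction)):
    # Near-zero skew with moderate negative excess kurtosis is consistent with
    # an approximately normal, slightly flat distribution, as reported in the paper.
    print(f"{name}: skew={skew(scores):.2f}, excess kurtosis={kurtosis(scores):.2f}")
```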

Table 6 Level of perceived satisfaction
Table 7 Level of Reported Engagement

The box plots below illustrate the distributions of reported engagement and perceived satisfaction for this group with respect to Gender and Discipline. In both plots, the median for males is lower.

The box plot below represents the distributions of reported engagement and perceived satisfaction for students of this cohort. The reported satisfaction levels appear to be lower than the reported engagement levels.

5.2 RQ1: To what extent do the performances and engagement of students impact on students’ satisfaction in the online module?

We test three hypotheses for this research question.

5.2.1 Hypothesis #1: There is a significant difference between the mean satisfaction and engagement levels of students across disciplines and gender.

A one-way ANOVA was conducted to compare the mean satisfaction levels of students from different disciplines. Normality checks and Levene’s test were carried out, and the assumptions were met. There was no significant difference in the perceived satisfaction of students across disciplines [F(4,660) = 0.098, p = 0.983]. Similarly, there were no significant differences between the reported engagement levels of students across disciplines [F(4,660) = 0.355, p = 0.840]. Furthermore, there were no significant differences with respect to gender for either the perceived satisfaction or the reported engagement level of the students in this cohort, as per the ANOVA results in Table 8 below.

Table 8 Mean differences of perceived satisfaction and reported engagement w.r.t Gender
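A sketch of how such a one-way ANOVA with a homogeneity-of-variance check can be run is given below. The group sizes and score distributions are synthetic assumptions for illustration, not the study’s data.

```python
# Sketch of a one-way ANOVA with Levene's test for homogeneity of variances.
# Synthetic placeholders; the study reported, for example, F(4,660) = 0.098,
# p = 0.983 for satisfaction across the five disciplines.
import numpy as np
from scipy.stats import f_oneway, levene

rng = np.random.default_rng(1)
# Hypothetical satisfaction scores for five discipline groups (sizes sum to 665).
groups = [rng.normal(10, 3, size=n) for n in (150, 140, 130, 125, 120)]

print("Levene:", levene(*groups))   # p > 0.05 supports the equal-variance assumption
print("ANOVA:", f_oneway(*groups))  # tests mean differences across disciplines
```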

5.2.2 Hypothesis #2: There is a correlation between students’ satisfaction level and reported engagement level for the current cohort.

Correlation analysis was used to measure the degree of association between students’ perceived satisfaction level and their reported engagement in the module. Since the reported engagement and the perceived satisfaction were inferred from the same feedback questionnaire, through different codes and themes, a strong positive correlation was observed between the two variables. Variance inflation factor (VIF) values near 1 suggested that collinearity was not a problem, as per Table 9 below.

Table 9 Calculation of VIF values to test collinearity effects on the Reported Engagement variable
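The correlation and collinearity checks can be sketched as follows. The data and variable names are synthetic placeholders; pearsonr and variance_inflation_factor are standard scipy/statsmodels calls.

```python
# Sketch of the correlation test and VIF check for Hypothesis #2 (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
engagement = rng.normal(5, 1.5, 665)                        # hypothetical cumulative scores (0-8)
satisfaction = 1.6 * engagement + rng.normal(0, 2.0, 665)   # built to correlate positively

print("Pearson:", pearsonr(engagement, satisfaction))       # strong positive correlation

# VIF near 1 for a predictor indicates little collinearity with the other predictors.
X = sm.add_constant(pd.DataFrame({"engagement": engagement,
                                  "final_mark": rng.normal(6, 1.0, 665)}))
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(X.values, i), 2))
```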

The scatter plot below illustrates the spread of values for the reported engagement and the perceived satisfaction of students.

From the figure, it can be deduced that the perceived satisfaction of a student in a module depends on his or her reported engagement level: the more engaged a student feels in the course, the more satisfied he or she is likely to be. However, this deduction emanates from self-report instruments used by the students to report on their learning experiences.

5.2.3 Hypothesis #3: There is a correlation between students’ satisfaction level and their performances.

The scatter plot below illustrates the mark distribution for both the continuous learning activities and the final learning activity with respect to the satisfaction of the students.

The final performance marks and the reported satisfaction could be assumed to follow a normal distribution, whereas the continuous learning marks followed an asymmetric distribution; two separate correlation tests were therefore carried out. The Pearson correlation coefficient was calculated for the final performance and reported satisfaction, and the non-parametric Kendall Tau test for the continuous assessment and reported satisfaction. The correlations of both the cumulative assessment and the final mark with the reported satisfaction are significant (p < 0.01), as shown in Tables 10 and 11 below.

Table 10 Correlation between Final Assessment and reported satisfaction
Table 11 Correlation between Cumulative Assessment and reported satisfaction
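The two tests can be sketched as follows. The data are synthetic stand-ins chosen only to mirror the distributional assumptions stated above (roughly normal final marks, asymmetric continuous marks); none of the values reproduce the study’s data.

```python
# Sketch of the parametric and non-parametric correlation tests described above.
import numpy as np
from scipy.stats import pearsonr, kendalltau

rng = np.random.default_rng(3)
satisfaction = rng.normal(10, 3, 665)                       # ~normal cumulative satisfaction score
final_mark = 0.3 * satisfaction + rng.normal(3, 1.0, 665)   # ~normal final assessment mark
continuous_mark = rng.beta(5, 2, 665) * 10                  # skewed continuous assessment mark

# Pearson assumes roughly normal variables; Kendall's tau is rank-based and
# suitable for the asymmetric continuous-assessment distribution.
print("Final vs satisfaction (Pearson):", pearsonr(final_mark, satisfaction))
print("Continuous vs satisfaction (Kendall tau):", kendalltau(continuous_mark, satisfaction))
```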

5.3 RQ 2: How did students feel concerning the delivery of the module, their learning outcomes and their overall experience?

Only 665 students provided their feedback in narrative form as per the questionnaire provided to them. The rationale of this qualitative part of the study was to examine the relationships between students’ perceptions of their learning experience in this module and their performance levels. The overall performance in the final assessment showed that high performers made up 22.4% (n = 149), average performers 63.8% (n = 424), and low performers 13.8% (n = 92) of the students. In terms of gender, 35.6% (n = 237) of the students who provided feedback were male, and 64.4% (n = 428) were female. The feedback data gathered were then organized and coded. Overall, a total of 2366 codes were recorded. High performers in the final assessment contributed an average of 3.9 codes, average performers an average of 3.5, and low performers an average of 3.3 codes. Table 12 contains a descriptive summary of each code.

Table 12 Summary and definition of codes

Table 13 shows how students at each performance level in the final assessment reported their feedback under the different codes devised. The coded statements were compared with the students’ performances at each level (High, Average, Low). For example, out of 208 codes categorized as ‘IT skills acquired’, 25% were reported by high performers in the final assessment, 62.02% by average performers, and 12.98% by low performers.

Table 13 Number of codes per category of performers in the final assessment
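The percentage breakdowns in Tables 13, 14 and 15 are row-normalised tallies of codes against performer level (or discipline). A minimal pandas sketch, assuming a hypothetical long-format table with one row per recorded code:

```python
# Sketch of the tally behind Tables 13-15: one row per recorded code, labelled
# with the student's performer level; percentages are normalised per code.
import pandas as pd

codes = pd.DataFrame({
    "code":  ["IT skills acquired", "IT skills acquired", "IT skills acquired",
              "Developed learner autonomy", "Developed learner autonomy"],
    "level": ["High", "Average", "Low", "High", "Average"],
})
# e.g. the share of 'IT skills acquired' codes reported by each performer level.
print(pd.crosstab(codes["code"], codes["level"], normalize="index") * 100)
```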

Table 14 shows how students at each performance level in the cumulative assessment activities reported their feedback under the different codes devised. The coded statements were compared with the students’ performances in Activities 1 to 8 at each level (High, Average, Low). For example, out of a total of 130 codes categorized as ‘Developed learner autonomy’, 76.15% were reported by high performers in the cumulative assessment, 23.08% by average performers, and 0.77% by low performers.

Table 14 Number of codes per category of performers in cumulative assessment

Table 15 shows how students from each discipline reported their feedback under the different codes devised. The coded statements were compared across disciplines. For example, out of a total of 192 codes categorized as ‘had a negative feeling about the course’, the Engineering and Science disciplines each reported 28.13% of the codes, Law and Management reported 23.44%, Humanities 16.67%, and Agriculture 3.65%.

Table 15 Number of codes per discipline

The pie chart below illustrates the code distribution with respect to the percentage of occurrences in the feedback.

20.4% of the reported codes demonstrated that students had built an overall positive perception of the module, and 18.9% related to having attained positive achievement. Most of the themes (except ‘negative feelings about the course’, ‘mixed feelings and experiences’ and ‘encountered technical difficulty’) contribute to a positive indication of perceived satisfaction in the course.

“…The experience, skills and knowledge that I have acquired in this module will no doubt be of great help to me in the future. I am already applying some of the things I have learned here in my studies, for example, concept mapping. I learned from the you-tubing activity that I can actually create simple animations to convey information in a more interesting manner... There is so much more to learn about educational technologies, but so far this module has been a very enriching experience …”

(Student B4157, female, Science discipline, High performer category in Cumulative Assessment, Final Assessment = 6.5, Cumulative Assessment = 8.7)

“…This is one of the modules I have mostly appreciated during my 1st year in the university… During the course, I have been able to learn numerous things … However, this has not just been a module, it has been a self-development course as far as I am concerned; Through this coursework, I have gained the experience needed to efficiently and effectively use technology, multimedia tools and employ modern ICT in education. As an end note, I would like to congratulate the members of the department for their excellent support, guidance and having offered us such a pleasant module to work on…”

(Student B7772, male, Engineering discipline, Average performer category in Final Assessment, Final Assessment = 6, Cumulative Assessment = 8.125)

“…This module helps in widening our knowledge. It helps in making practical use of new assets that was once unused and unknown. E.g. the cartoon maker, multimedia assignments. Also, it is an interactive module where different people share their views. In this way students widen their knowledge as well as share their knowledge… Personally, I really learn a lot from this module. I got to explore my own hidden talents and discover new applications. I think this module will be a real help in the future…”

(Student ID B2842, female, Science discipline, Low performer category in Final Assessment, Final Assessment = 4, Cumulative Assessment = 7.9)

8.8% of the 2366 codes related to the different ICT skills that students acquired in the module. As many of these related to the use of social media, forums and software, as well as computer-mediated communication, the code was named “IT skills acquired”.

“…my idea of this module was plainly that I will get to learn new IT software… There are too many benefits I obtained from this module. I have also been able to use the software, apply IT to education, and it is fun as well as fruitful…”

(Student B2480, female, Law & Management discipline, High performer category in Final Assessment, Final Assessment = 8, Cumulative Assessment = 8.225)

“…I think that this module has increased my creativity level, and my technology knowledge is broader than before. Also, through constantly editing my work on Microsoft word, this has improved my writing… To be able to work out the units, I have done some research on Google and gone through the given materials thoroughly…”

(Student B6609, female, Engineering discipline, High performer category in Cumulative Assessment, Final Assessment = 7, Cumulative Assessment = 7.8125)

The above comment highlights how ICT skills, such as the repeated use of word-processing software, which seems a simple process, could result in improved writing skills, and that Google search was also a skill valued by students. In contrast, the comments below make it evident that other students valued and welcomed the development of advanced digital skills.

“…I have also developed the skills to create and manage educational technologies materials including websites and cartoon software. Through this coursework, I have gained the experience needed to efficiently and effectively use technology, multimedia tools and employ modern ICT in education…”

(Student B7772, male, Engineering discipline, Average performer category in Final Assessment, Final Assessment = 6, Cumulative Assessment = 8.125)

“…Actually, it helped me in using and managing technological processes… this was an interesting module which helped me to improve my learning skill technologically…”

(Student B4126, female, Science discipline, Average performer category in Cumulative Assessment, Final Assessment = 6.5, Cumulative Assessment = 6.3125)

“…this module was a challenge to me, but I ended up enjoying the different activities offered. It helped in improving my IT skills…”

(Student A2406, female, Humanities discipline, Low performer category in Cumulative Assessment, Final Assessment = 7, Cumulative Assessment = 4.8625)

In their feedback, students further reported critical thinking, creativity and practical skills as well as learner autonomy. Of the total codes, 8.1% were reported as ‘Developed creative/practical skills’, 2.9% as ‘Developed critical thinking/reflective ability’ and 5.5% as ‘Developed learner autonomy’. Critical thinking, creativity and the acquisition of practical skills were the core competencies to be developed in this module.

“…this module has been an aid to me in developing the skill of being able to criticize a piece of my own work or others; to be analytical about every simple details, to be able to make a constructive opinion. As benefit, I have also much appreciated the fact that all the basic knowledge/information for the different tasks were always already provided…”

(Student B9533, female, Science discipline, High performer category in both Final and Cumulative Assessment, Final Assessment = 7.5, Cumulative Assessment = 7.4)

“…With the various activities proposed, I came to learn to analyze things with a more critical eye and as far as I could, provide constructive criticism on several aspects which stood out to me. This not only helped me in this particular module but in my other classes as well with quite a few topics overlapping and which gave me an edge and a number of different viewpoints on these…”

(Student C0295, female, Law & Management discipline, Low performer category in Final Assessment, Final Assessment = 5, Cumulative Assessment = 7.5625)

The fact that learners were in an online module, practically on their own with minimal tutor interaction, required them to take charge of their learning process. Students reported how they had to solve problems on their own, including planning their time to work on the module to meet deadlines and making sufficient effort to acquire the minimum required competencies and ensure successful completion of the module.

“…Educational technology has indeed increased my knowledge as well as improved my learning skills. It indeed motivated me in my learning process as one can learn at his own pace and at any time within the day. It helped me to assume my responsibility as a student and to submit assignments within the given delay time…”

(Student B1497, female, Engineering discipline, High performer category in Cumulative Assessment, Final Assessment = 6, Cumulative Assessment = 8.475)

“…I am now definitely a fanboy of the e-learning system. The reasons are flexibility; work at your own pace, at your own time and in your own way!”

(Student B0167, male, Science discipline, High performer category in Final Assessment, Final Assessment = 8.5, Cumulative Assessment = 6.9375)

“…One benefit from this module was that I was able to do all the work at my own pace and feel free to do it whenever I had time. There was no constant pressure, there was a deadline to be respected, and I only had to manage my time to submit my work, and it was done without any pressure…”

(Student B1779, female, Humanities discipline, Average performer category in Cumulative Assessment, Final Assessment = 7, Cumulative Assessment = 7.0375)

“…Taking on the role of leader for group work get to you to mature a lot and be more responsible, but it takes a lot of hard work… I never thought I would know so much one day…developing self-discipline…”

(Student A2640, female, Law & Management discipline, Low performer category in Cumulative Assessment, Final Assessment = 7, Cumulative Assessment = 4.7625)

As can be seen from the above comments, depending on learner preferences, self-paced independent learning is often welcomed by students, and the need to assume responsibilities is an interesting value proposition that can result in greater learner autonomy and commitment. The other prevalent aspects among the codes obtained were “Learning Strategies – Personal development”, with 6.1% of the codes, and ‘Social interaction/communication’, with 14.4%. Learners reported how they tackled the different learning activities, how they overcame barriers, and how they interacted with other learners and tutors through the discussion forum for support (Figs. 1, 2, 3, 4 and 5).

Fig. 1 Distribution of perceived satisfaction and reported engagement w.r.t Gender

Fig. 2 Distribution of perceived satisfaction and reported engagement

Fig. 3 Scatter plot – reported engagement v/s perceived satisfaction

Fig. 4 Scatter plot for Cumulative Assessment and Final Assessment w.r.t students’ satisfaction

Fig. 5 Number of occurrences per code

“…This module has taught me many things, especially in terms of time management and developing a pedagogical approach to my work. This is something I never really paid attention to before working on Educational Technologies assignments. For once I could put myself in my teachers and lecturers’ places and comprehend the different approaches they have to take when explaining a certain topic!”

(Student B2456, female, Agriculture discipline, High performer category in Cumulative Assessment, Final Assessment = 7, Cumulative Assessment = 7.375)

“…the module helps us in our personal development as well as introduces us to what is necessary in education if ever, we are interested in the teaching field…”

(Student ID A4709, female, Science discipline, High performer category in Final Assessment, Final Assessment = 7.5, Cumulative Assessment = 7.5625)

“…This module has given me great experience…learnt strategies before doing any journals like I have done the outlines first in order to avoid messing the ideas and go out of subject…”

(Student ID B8920, female, Humanities discipline, Low performer category in both Final and Cumulative Assessment, Final Assessment = 2.567, Cumulative Assessment = 4.0625)

“…Each weekend, I dedicated 4 hours to do the homework… I planned my work on Saturdays and carried it out on Sundays. I gained better planning and better time management skills…”

(Student A4901, female, Law & Management discipline, Average performer category in Cumulative Assessment, Final Assessment = 5, Cumulative Assessment = 6.125)

The students described techniques that helped them to learn and achieve the outcomes. As can be seen from the above comments, a feeling of fun was apparent, given that students had to learn in different ways, such as inquiry-based learning, which gave them a degree of flexibility and a variety of learning processes. For example, there were consequential learning outcomes which resulted in a particular competency in dealing with different image formats. Furthermore, as could be seen in many comments, students understood the concept of “just-in-time” learning, where they could acquire specific skills through research on Google. They could even view tutorials on YouTube at the time of executing a particular task related to an assignment (e.g. conversion to ZIP format before uploading an assignment on the platform).

However, 8.1% of the codes mentioned some form of negative feeling and an inadequate learning experience overall. These mainly related to students who did not see the pertinence of the module, lacked digital skills, or had communication issues with peers and tutors. At the same time, another 6% of the codes highlighted technical difficulties experienced by students due to poor Internet connections or difficulty in solving technical issues, including installation and configuration of software or uploading of assignments.

“…I did encounter several difficulties. I would not understand how to use a program or as for the eXe software, I could not save my works…at times I had to do the activities again and again. It was tiring…”

(Student B2180, male, Engineering discipline, High performer category in Cumulative Assessment, Final Assessment = 6, Cumulative Assessment = 7.8)

“…it is difficult for me to complete it alone, I am not used to the different tool on the computer, sections has been more complicated, difficult to go throughout the steps without a basic knowledge of how to use the different functions on the screen of the computer…”

(Student B5681, female, Humanities discipline, Average performer category in Cumulative Assessment, Final Assessment = 6.3, Cumulative Assessment = 6.375)

“… less teacher-student interaction, less student-student interaction, in all there is a lack of communication, there were lack of feedback from our tutors about the learning activities being done. No result of how we were working…”

(Student B2107, male, Engineering discipline, Average performer category in Final Assessment, Final Assessment = 6, Cumulative Assessment = 6.6875)

“…Trouble with assignment…It was a disaster…I did the activities 1 to 4, 9 and 13 and even the feedback I am not sure what I did wrong because this site holds record of only 2 of my uploads…I think it is lacking in the communication department...I think that the forum is not effective…”

(Student B3527, female, Humanities discipline, Low performer category in Cumulative Assessment, Final Assessment = 6, Cumulative Assessment = 4.9875)

The codes representing negative feelings and the occurrence of technical difficulties can provide interesting insights into a range of pre-emptive or just-in-time measures that course developers, tutors and administrators can take to provide timely support to learners during the course itself. This may significantly improve the learning experience and overall perception of learners since, if such issues are detected early, dropouts, frustration and poor performance can be prevented. However, the positive side concerning the current module is that the codes representing negative feelings and technical difficulties represent only 14.1% of the total number of codes generated. Many of those who expressed that they had technical difficulties also highlighted what they did to overcome them. It is important to mention, however, that in this module, experiencing technical difficulties and developing the necessary skills to deal with them are part of the core learning outcomes, as many educators abandon technology or show reluctance to embrace technology-enabled teaching precisely because of a lack of confidence in their own digital skills. Finally, 0.9% of the total codes were reported as ‘Mixed feeling and experience’, where the students had neither a positive nor a negative experience in the course.

“…even if instructions were given, I used to find some activities really difficult…Overall it was a fun as well as difficult experience…” (Student A1261, female, Humanities discipline, High performer category in Cumulative Assessment, Final Assessment = 6.5, Cumulative Assessment = 7.175)

“I had difficulty to meet the deadlines as I was more stressed by my first-year core modules. I was also not very familiar with a lot of the computer directed tasks… I am quite satisfied with the work…”

(Student A2967, female, Humanities discipline, High performer category in Final Assessment, Final Assessment = 7.5, Cumulative Assessment = 6.9375)

“At the beginning of the module, I find quite interesting. Then, it was very tough… The storyboard was very interesting, yet I found quite problems on drawing the storyboard but fortunately, after many difficulties I succeeded in doing it…”

(Student B6023, female, Law & Management discipline, Average performer category in Final Assessment, Final Assessment = 7, Cumulative Assessment = 7.225)

“So, the only thing I can finally say is that educational technology’s module is neither so difficult nor easy…”

(Student B3016, female, Humanities discipline, Low performer category in Final Assessment, Final Assessment = 5, Cumulative Assessment = 7.95)

In summary, some students complained about the lack of tutor responses and interactions, while others commended the independence they were given and found tutors' support to be more than adequate. It further emerged that the majority of students, irrespective of their overall performances, reported a high level of satisfaction. The level of satisfaction was, therefore, not directly related to performance: high performers could express mixed feelings, while some low performers reported a positive sense of satisfaction.

6 Discussion

Student engagement is an important issue in higher education and has attracted interest from research, practitioner and policy-making perspectives. Different models of engagement have been studied and proposed, and the reliability of students' self-reported data, together with the lack of a holistic model incorporating multiple dimensions, has been the subject of critical analysis (Kahu 2013). Engagement has also been widely discussed in the context of online learning, and different instruments, mainly survey-based, such as the OSE model (Dixson 2015), have been developed. Based on the findings of this research as well, the challenge of defining a reliable model of student engagement in online courses remains. In terms of practical course design, learning designers need to define the student engagement model to be applied before a course starts, and then conceive their learning activities accordingly.

A positive but weak association was established between students' reported engagement and both the continuous learning marks and their performance in the final activity. If reported engagement in this context is defined as the extent to which students felt connected and committed to the module, it does not necessarily imply that their performances (by way of marks achieved) would reflect that. The findings related to the association between students' reported engagement and the different learning domains, however, contradict those of Dixson (2015), who reported a significant correlation between application learning behaviours and the OSE scale and a non-significant correlation between observation learning behaviours and the OSE scale. Regarding reported satisfaction and engagement, higher levels of reported engagement were associated with higher levels of satisfaction. However, since the same feedback instrument was used to derive the codes related to engagement and satisfaction, this might explain the relatively strong association between the two. This finding is nevertheless coherent with the claims of Hartman and Truman-Davis (2001) and Dziuban et al. (2015), who established a significant correlation between the amount and quality of learner interaction and learner satisfaction.
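As an illustration of how such an association can be computed, the sketch below uses Python's scipy library. The engagement scores and marks are hypothetical and do not reproduce the study's dataset; the study's exact correlation procedure is likewise not specified here.

```python
# Minimal sketch (hypothetical data): association between coded engagement
# and module marks. The study's actual variables and procedure may differ.
from scipy.stats import pearsonr

# Hypothetical per-student values: engagement coded on a 1 (low) to 5 (high)
# scale, alongside cumulative (continuous) assessment marks out of 10.
engagement = [3, 2, 4, 1, 5, 3, 2, 4]
cumulative_marks = [6.4, 5.9, 7.2, 4.9, 7.9, 6.7, 6.1, 7.5]

r, p = pearsonr(engagement, cumulative_marks)
print(f"r = {r:.3f}, p = {p:.4f}")  # a small positive r would mirror the weak association reported
```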

It was also observed that tutor support played an important part in shaping students' level of satisfaction, as some students expressed negative feelings when tutor support was not adequate. Proper academic guidance, as reported in the literature, is a contributing factor in learners' performances, achievement and satisfaction (Earl-Novell 2006). While it has been established in the literature that, in general, student satisfaction is not linked to performance, a significant positive correlation was observed in this study between perceived satisfaction and both the continuous learning marks and the final performance marks. However, the degree of association, as measured by the correlation coefficient (.108), was weak. Further analysis through linear regression revealed that perceived satisfaction was not a significant predictor of performance.
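A minimal sketch of such a regression check follows, assuming hypothetical satisfaction codes and marks (the study's actual variables and model specification may differ); it uses the statsmodels library's ordinary least squares estimator.

```python
# Minimal sketch (hypothetical data): does perceived satisfaction predict
# performance? Mirrors the kind of regression check reported above.
import statsmodels.api as sm

satisfaction = [4, 3, 5, 2, 5, 4, 3, 4]                  # hypothetical coded satisfaction
final_marks = [6.0, 6.3, 7.5, 5.0, 7.0, 6.5, 6.0, 7.8]   # hypothetical final marks

X = sm.add_constant(satisfaction)   # add an intercept to the predictor
model = sm.OLS(final_marks, X).fit()
print(model.summary())              # inspect the slope coefficient and its p-value
```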

If the intention behind the adoption of e-learning is to improve the teaching and learning experiences of on-campus students, as argued by Moore (2009) and Abdous (2019), institutional policies will need to focus mainly on digital learning and technology-enabled pedagogies. This is in line with the critical approach taken by Kahu (2013), who argues that student engagement should be about developing competencies in a holistic manner that goes beyond the notion of just 'getting qualifications'. In this research, the activity-based learning design was at the heart of the course offering. The Internet acted mainly as a means to transform the teaching and learning process (Nichols 2003), as skills acquisition and competency-based outcomes were critical to the learning design. The findings show that, irrespective of their overall performances, the majority of students appreciated the learning design and the educational experience, but not necessarily the fact that it was online. Institutional leaders should therefore reflect on how to design online courses around competency-based design so as to better engage students and improve their satisfaction and overall experience. In that context, there is a need to ensure that learning design guidelines are at the heart of e-learning related policies. The core idea is to engage in a paradigm shift from teacher-centred to learner-centred methods. Learner-centred approaches further imply that the right balance has to be struck between one-size-fits-all mass delivery and personalized learner support within such environments. Learner support is an essential aspect of quality assurance to be taken into account in technology-enabled learning policies (Sinclair et al. 2017).

In this research, descriptive analytics was used to analyze data related to student performances, satisfaction and reported engagement. In line with Macfadyen and Dawson (2012), a learning analytics approach helped to give constructive meaning to the data gathered on the e-learning platforms and to better understand our students' learning patterns and experiences. Learning analytics is therefore an essential element that institutional e-learning policies have to consider. Sentiment analysis, for instance, can add value to the learner support framework, as it allows tutors to focus their efforts primarily on supporting those who are experiencing difficulties, while maintaining a minimum level of interaction with independent learners. This argument has been supported in the literature by different authors (Lehmann et al. 2014; Tempelaar et al. 2015).
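By way of illustration, the sketch below shows one possible way to operationalise such sentiment-based flagging, using NLTK's VADER analyser; the student identifiers, comments and cut-off threshold are all hypothetical. In practice, flagged students could be routed to tutors for just-in-time follow-up.

```python
# Minimal sketch: flag students whose open-ended feedback reads as negative
# so tutors can prioritise follow-up. Student IDs, comments and the cut-off
# threshold are hypothetical; NLTK's VADER analyser is used for scoring.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
sia = SentimentIntensityAnalyzer()

feedback = {
    "S001": "I could not save my work and had to redo the activities. It was tiring.",
    "S002": "Overall I am quite satisfied with the work I produced.",
}

for student_id, comment in feedback.items():
    compound = sia.polarity_scores(comment)["compound"]  # -1 (negative) to +1 (positive)
    if compound < -0.05:  # illustrative threshold for "needs tutor attention"
        print(f"Flag {student_id} for tutor follow-up (compound = {compound:.2f})")
```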

The module under study relied mainly on asynchronous tutor intervention for learner support, a model that has been predominant in online distance education (Guri-Rosenblit 2009). However, with the rapid development of Internet infrastructure and video conferencing technologies, real-time synchronous tutor intervention is increasingly being adopted, giving rise to the concept of “Distributed Virtual Learning (DVL)”. DVL allows for tutor-student interaction in real time, especially where students report problems or where built-in analytics, such as sentiment analysis, flag students who are at risk. The concepts that embody DVL have to be duly taken into consideration by policymakers.

7 Conclusion

From this research, it emerged that students' satisfaction and engagement are essential elements defining their learning experiences. Analysis of the feedback revealed that technical difficulties and a lack of tutor support create a sense of frustration even when the student ultimately performs well. It is important that such emotions are captured just-in-time, while the module is running, so that timely action can be taken to address student concerns. At a time when institutions are moving to e-learning to ensure continuity of educational services, there are important policy implications for the longer-term effectiveness of such provisions in terms of learning outcomes and student experience.