1 Introduction

In engineering education, the integration of technology for instructional purposes is essential, reflecting trends in higher education as a whole (Çeven & Albayrak, 2020; Koretsky & Magana, 2019). This shift includes assessment methods, particularly with the increase in online assessment due to the COVID-19 pandemic (Gurkan & Cigdem, 2022; Koretsky & Magana, 2019; Koretsky et al., 2022; Sanchez-Lopez et al., 2023). Moving traditional classroom activities and assessments to online platforms can be challenging and requires careful planning to effectively engage students beyond the physical classroom. Without proper planning, the effectiveness of online learning and assessment for teachers may be compromised.

Online formative assessment is crucial in engineering education, especially in optimizing laboratory time. As educational practices shift to online platforms, integrating formative assessment becomes essential. Formative assessment provides continuous feedback in real-time, addressing the challenges of student motivation in online environments (Black & Wiliam, 1998). Incorporating formative assessment is a strategic response to the evolving educational landscape in engineering education. It enhances effectiveness and promotes engagement in online learning experiences (Cigdem & Oncu, 2015; Gurkan & Cigdem, 2022; Nicol & Macfarlane-Dick, 2006; Savander-Ranne et al., 2008; Whitelock, 2009). Furthermore, assessment serves as a crucial tool for educators to evaluate and refine their teaching methods (Boud, 1990; Sadler, 1998). It also provides valuable insights into student progress, allowing for timely interventions and adjustments (Alruwais, 2018; Broughton et al., 2013). By leveraging formative assessment, instructors can continuously improve their teaching practices and ensure that students receive the support they need to succeed in their learning journey.

To improve online learning outcomes, it is essential to measure learner engagement. Delivering content in an engaging and enjoyable manner fosters active participation (Wentzel et al., 2020). Additionally, introducing novel and creative activities can stimulate motivation (Camilleri & Camilleri, 2020). Educators play a vital role in promoting engagement by implementing rewards and incentives, taking into account pedagogical perspectives and learner requirements (Çakıroğlu et al., 2017).

The use of interactive multimedia in online learning environments has been shown to enhance learners’ attention, curiosity, interest, and excitement, while also improving their perseverance, knowledge construction, and critical thinking skills (Zainuddin et al., 2020). These advancements have led to the evolution of online learning across various domains and have furthered the implementation of game-based instructional strategies in diverse educational settings. Gamification is a technique that has been popularized by technology-enhanced learning. It offers opportunities to explore how rewards, incentives, and similar mechanisms can be effectively employed for instructional purposes (Domínguez et al., 2013; Hanus & Fox, 2015). Researchers have proposed ‘gamification’ as a means of motivating individuals to adopt sustainable behaviors, both privately and publicly, such as learning, exercise, and sustainable consumption (Koivisto & Hamari, 2014).

1.1 Gamification and formative quizzes

Gamification is a strategy that has gained popularity in online learning environments. According to Gamified Learning Theory (GLT), it involves using game-enabled activities to influence learning-related behaviors outside of gaming contexts (Landers, 2014). Previous studies have investigated gamified learning in various disciplines (Domínguez et al., 2013; Hanus & Fox, 2015; Mimouni, 2022; Zainuddin et al., 2022). However, its use in engineering education is still relatively unexplored. Engineering education is a unique field characterized by its multidisciplinary nature, strong technological influence, emphasis on practical skills and project-based learning, and extensive laboratory hours. Yet the literature has paid limited attention to the convergence of gamification and formative assessment within this specialized context. Articulating these research gaps is essential to highlight the novelty and significance of our study. We investigate how the integration of gamification affects learner achievement and engagement within an engineering course, bridging the identified gap and providing valuable insights into effective instructional strategies in this specialized domain.

Gamification creates dynamic and immersive learning environments by integrating elements such as competition, rewards, and interactive challenges (Landers, 2014). Gamification is defined as the intentional incorporation of game-inspired elements into educational practices with the aim of increasing learner engagement (Indriasari et al., 2020; Zainuddin et al., 2020a). However, the effectiveness of gamification depends on various contextual factors and user engagement (Hamari et al., 2014). Although gamification shows potential for enhancing learning processes and outcomes, the evidence for its sustained benefits in educational settings remains inconclusive (Oprescu et al., 2014; Sanchez et al., 2020). According to GLT, gamification indirectly affects learning-related behaviors rather than directly impacting learning outcomes (Landers, 2014). Incorporating gamified elements into online platforms has been shown to motivate learners (Chou, 2019), enhance engagement (Plass et al., 2015), and maximize learning outcomes (Dichev & Dicheva, 2017; Smiderle et al., 2020).

Online games and gamified learning activities have been shown to effectively stimulate learners’ enjoyment, focus, and accomplishment of learning objectives (Sailer et al., 2017; Sangrà et al., 2012; Wang, 2008). These strategies encourage learners to become more productive and involved in learning environments, fostering increased participation and moderate improvements in learning outcomes (Dias, 2017; Ibáñez et al., 2014). However, despite the increasing interest in gamification in education, several challenges persist. These include the need for more empirical evidence, a deeper understanding of its mechanisms, and effective implementation strategies (Dichev & Dicheva, 2017).

In a gamified instructional design, either specific parts of learning activities or the entire course can be gamified, depending on whether expectations are set at a macro or micro level. The degree and visibility of gamification in a course can range from a subtle attempt, in which learners may be unaware of the gamification initiatives, to extensive enrichment, in which the course as a whole closely resembles a typical game (Philpott & Son, 2022a).

The incorporation of gamification into assessment procedures, particularly through formative assessment, has become more prevalent in educational settings (Black & Wiliam, 1998; Boud, 1990; Sadler, 1998). Formative assessment, when seamlessly integrated into educational processes, provides valuable feedback to both students and teachers, facilitating continuous improvement in the learning experience (Black & Wiliam, 1998; Boud, 1990). Online formative assessment, which includes weekly quizzes, has emerged as an effective approach for monitoring student progress, providing immediate feedback, and fostering increased interaction between instructors and students (Alruwais, 2018; Broughton et al., 2013; Cigdem et al., 2024; Gurkan & Cigdem, 2022).

Due to the ready availability of these resources outside the traditional classroom, online formative assessment is often recommended as an effective approach (Gaspar Martins, 2016; Johnson, 2006). Despite requiring students to extend their study efforts beyond class time, online exams offer immediate feedback (Bangert-Drowns et al., 1991), and provide insight into their understanding of each topic (Gaspar Martins, 2016). Notably, advancements in learning management system (LMS) technology have empowered instructors to swiftly design, deploy, and assess various evaluation tools, such as quizzes, surpassing the efficiency of traditional paper-based tests and fostering increased interaction between instructors and students (Gurkan & Cigdem, 2022).

Incorporating gamified elements into e-learning modules offers a promising avenue for enhancing student engagement and motivation during online quizzes and assessment activities (Bolat & Taş, 2023; Cigdem et al., 2024; Zainuddin et al., 2020a). For example, weekly online quizzes are among the simplest and most effective strategies for engaging students in learning processes (Bolat & Taş, 2023; Gurkan & Cigdem, 2022). With weekly online quizzes, instructors can assess learners’ performance regularly and challenge them to participate more in online activities that include fun challenges and friendly competition (McLaughlin & Yan, 2017). Sanchez et al. (2020), who incorporated gamification into an e-quiz module within an online learning management system, found that learners tend to take preparatory exams more frequently when gamification elements are added. This finding confirms the results of Landers and Landers (2014) on learners’ tendency to engage in formative quizzes when gamification elements are added to the system.

1.2 Leaderboards as an element of gamification

There are different types of elements for gamifying online learning activities. One of the most common design elements of gamification is the leaderboard (Bai et al., 2020; Hamari et al., 2014; Landers et al., 2017), owing to its usefulness and ease of implementation (Hamari et al., 2014). Leaderboards, in the context of gamification, are visual representations that showcase the relative performance and achievements of individuals within a learning environment. Typically presented in a competitive format, leaderboards display participants’ progress, scores, or rankings, fostering a sense of competition and achievement (Chou, 2019; Christy & Fox, 2014).

The decision to investigate the impact of leaderboards underlines their influential role in shaping student behavior and engagement. As well as providing transparent feedback on individual performance, leaderboards have the potential to encourage healthy competition between students. This dynamic element of gamification is particularly noteworthy for its ability to increase motivation, encourage active participation, and contribute to a more immersive learning experience. Numerous studies indicate the advantages of using leaderboards in online learning environments. For instance, many studies conducted within the context of gamified courses show that leaderboards can positively affect learners’ performance (Bai et al., 2021; Landers et al., 2017, 2019), course engagement (Barata et al., 2017; Scales et al., 2016), and the number of course-related tasks completed by learners (Domínguez et al., 2013; Huang & Hew, 2018; Legaki et al., 2020; Mekler et al., 2017; Tan & Hew, 2016). Furthermore, leaderboards are often used to motivate users to play a game repeatedly and thereby increase the time spent on task (Bai et al., 2021; Landers & Landers, 2014). Leaderboards can also serve as external incentives to improve learners’ academic performance (Mekler et al., 2017).

Our study aims to investigate the specific effects of leaderboards as an element of gamification on student achievement and engagement within the unique context of engineering education. Through this exploration, we seek to shed light on the potential benefits and challenges associated with the use of leaderboards in enhancing learning outcomes and student engagement in engineering courses.

Although leaderboards have been linked to positive effects in the literature, it is important to acknowledge some associated concerns. Some studies suggest that the engagement facilitated by leaderboards may be short-lived (Bai et al., 2020; Koivisto & Hamari, 2014, 2019). Furthermore, an excessive focus on leaderboard rankings may not necessarily lead to effective learning outcomes and could result in lower task quality (Domínguez et al., 2013; Huang & Hew, 2018; Philpott & Son, 2022b; Tan & Hew, 2016). Competitive scoring systems may benefit high-performing students but could have negative consequences for underperforming ones (Bai et al., 2020; Çakıroğlu et al., 2017). Furthermore, the use of extrinsic rewards such as leaderboards may lead to short-term performance gains but can ultimately diminish learners’ intrinsic motivation over time (Hanus & Fox, 2015; Philpott & Son, 2022b). To address these concerns, our study aims to take a balanced approach by emphasizing the value of learning while using extrinsic motivators. In addition, we will integrate collaborative learning opportunities and feedback mechanisms to create a supportive and engaging learning environment for all students.

1.3 Understanding behavioral engagement in engineering education: Implications for LMSs

Engagement is considered a crucial factor in academic success and learning outcomes (Çiğdem & Öncü, 2023; Fredricks et al., 2004; Guo et al., 2014; Hutain & Michinov, 2022; Saqr et al., 2023). Although Fredricks et al. (2004) offer a specific perspective on the relationship between engagement and achievement, it is important to acknowledge the broader consensus in the literature that students’ prior academic performance often serves as a strong predictor of their future achievements. This prevailing view emphasizes the importance of considering diverse evidence and perspectives in the literature. Several studies have highlighted the strong link between prior achievement and subsequent academic success (Kitsantas & Zimmerman, 2009; Schneider & Preckel, 2017).

Engagement has behavioral, emotional, and cognitive aspects (Fredricks et al., 2004). Behavioral engagement is crucial for academic success and involves active participation in learning activities, such as attending classes, participating in discussions, completing assignments, and preparing for exams (Hazzam & Wilkins, 2023; Saqr et al., 2023). Teacher-student relationships, classroom environment, and instructional strategies have a significant impact on behavioral engagement. Educators can improve engagement by utilizing active learning methods, promoting learner autonomy, offering choices, and fostering a supportive classroom climate.

Previous research has consistently demonstrated a positive correlation between behavioral engagement and academic achievement across all levels of education. For example, high school students who exhibit higher levels of behavioral engagement tend to achieve better grades and are more likely to graduate on time (Finn & Rock, 1997). A meta-analysis conducted by Wang et al. (1993) supports these findings, indicating that students who are more behaviorally engaged in their studies generally perform better academically. Fredricks et al. (2004) argue that engagement predicts academic success better than IQ or prior academic performance. Additionally, Wang and Eccles (2013) found that students who are more engaged tend to earn higher grades and pursue further education.

Engagement can be assessed using a variety of methods, such as self-reports, instructional ratings, and observation techniques (Fredricks & McColskey, 2012; Lane & Harris, 2015). Learning management systems (LMSs) have become a popular tool for evaluating learner engagement with the advancement of technology (Kokoç & Altun, 2021; Redmond et al., 2018). LMS data provides real-time and non-intrusive measures of behavioral engagement (Henrie, Halverson et al., 2015; Wang, 2019), making it a valuable resource for monitoring and evaluating student engagement.

When assessing learning engagement in blended courses, it is common to analyze frequency characteristics. These characteristics include quantitative and temporal aspects derived from LMSs (Wang, 2021). Quantitative characteristics entail measuring the number of learning materials accessed (Henrie, Halverson et al., 2015; Hsiao et al., 2019; Hu et al., 2014; Lu et al., 2017), the quantity of messages posted and replied to in forums (Macfadyen & Dawson, 2010), and the completion of specific tasks (Hsiao et al., 2019; Lu et al., 2017; Macfadyen & Dawson, 2010; Saqr et al., 2023). On the other hand, temporal characteristics involve analyzing the time spent on particular activities (Saqr et al., 2023). The use of LMSs enables scalable measurement of learner engagement and provides insights into learning experiences not typically observed in traditional contexts (Henrie et al., 2015).
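The frequency-based indicators described above (counts of accesses, completions, and time on task) can be sketched as a small log-aggregation routine. The following Python snippet is an illustrative sketch only: the record format, field names, and the `engagement_metrics` function are hypothetical and not part of any particular LMS.

```python
from collections import defaultdict

def engagement_metrics(log_records):
    """Aggregate simple behavioral-engagement indicators per learner from
    hypothetical LMS log records shaped as (user, action, duration_seconds).

    Quantitative characteristics: counts of quiz attempts and completions.
    Temporal characteristic: total time spent across logged activities.
    """
    metrics = defaultdict(lambda: {"attempts": 0, "completions": 0, "seconds": 0})
    for user, action, seconds in log_records:
        m = metrics[user]
        m["seconds"] += seconds          # temporal: accumulate time on task
        if action == "quiz_attempt":
            m["attempts"] += 1           # quantitative: attempt count
        elif action == "quiz_complete":
            m["completions"] += 1        # quantitative: completed tasks
    return dict(metrics)
```

In practice, real LMS logs carry richer fields (timestamps, resource IDs, session markers), but this shape is enough to illustrate how both quantitative and temporal characteristics can be derived from the same log stream.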

Previous research has demonstrated a correlation between behavioral engagement in LMS activities and academic achievement (Ellis et al., 2017; Henrie et al., 2015a; Saqr et al., 2023; Wang, 2019). Some studies have demonstrated that learners’ behavioral engagement in LMSs is associated with self-regulated learning and academic performance (Naumann & Salmerón, 2016; Wang, 2019). However, there is limited research on the relationship between achievement and behavioral engagement based on data log analysis, specifically in engineering courses. Understanding these patterns can facilitate successful learning experiences for learners. Therefore, the objective of this study is to incorporate online formative quizzes into an engineering course to improve learners’ behavioral engagement. Additionally, the study aims to measure the level of engagement using LMS features.

1.4 Previous studies: Informing current research

In this section, we highlight insights from prior research to underscore the importance of our study within the broader academic context. Hanus and Fox (2015) examined the impact of gamification on student engagement and academic performance, uncovering a gradual decline in motivation and satisfaction among students in gamified courses compared to non-gamified ones. This suggests the need for cautious implementation of gamification in educational settings due to its potential effects on student motivation and achievement. Additionally, Landers et al. (2017) found that leaderboards effectively motivate participants to perform at high levels, aligning with the principles of goal-setting theory. Notably, our analysis focuses on objective assessments, excluding subjective evaluations.

Sanchez et al. (2020) conducted a quasi-experimental study on gamified learning. They observed that completing more quizzes improved subsequent test scores, supporting the testing effect. However, the study suggested a potential novelty effect, with gamification showing greater benefits for higher-achieving students. This indicates the need for longitudinal studies considering individual differences. Smiderle et al. (2020) emphasized the impact of individual personality traits on the effectiveness of gamification, indicating that its influence varies based on student characteristics.

Zainuddin et al. (2020) recommended the use of gamification in learning to enhance motivation, engagement, and social influence, highlighting the transformative potential of gamified learning and identifying avenues for future research. Zainuddin, Shujahat, Zainuddin et al. (2020a, b) compared learner performance and engagement between traditional and gamified instruction using various applications. The study suggests that incorporating game elements into the classroom could be an innovative approach to enhancing student engagement and learning outcomes. Bai et al. (2021) conducted a study on the effects of leaderboards on learning performance, motivation, engagement, and perceptions in online courses. The study found differences between absolute and relative leaderboards in terms of fostering comparison, competitiveness, and motivation among students.

These studies highlight the intricate and multifaceted nature of gamification and leaderboards in educational contexts. They offer valuable insights for informing our research on gamified formative quizzes in engineering education.

1.5 Purpose and rationale

Although there has been an increase in the number of experimental studies investigating and supporting the effectiveness of gamification in online learning environments (Bai et al., 2020), empirical studies conducted to examine online courses enriched with gamified formative quizzes using leaderboards are limited in number. Designed with an experimental purpose, the current study aimed to investigate the impact of gamified formative assessment on learners’ achievement and engagement in a blended higher education course in engineering education.

The previous research on the impact of gamification on learner performance and motivation exhibits varying degrees of consistency (Buckley & Doyle, 2016; Dichev & Dicheva, 2017). While some studies (Borras-Gene et al., 2016; Cigdem et al., 2024; Frost et al., 2015; Landers et al., 2017) suggest positive effects on performance and motivation, others (Attali & Arieli-Attali, 2015; de-Marcos et al., 2014; Domínguez et al., 2013; Hanus & Fox, 2015) report mixed or inconclusive results. Overall, there is a lack of consensus in the literature regarding the effectiveness of gamification strategies in educational contexts.

Although some studies on gamification tend to examine a combination of various game elements (Bai et al., 2020), it has been suggested that investigations of gamification should focus on each element individually to understand their relative effectiveness (Deterding, 2012). As Mekler et al. (2017) state, studying the effects of individual game elements on outcomes such as performance, motivation, and engagement can provide a more accurate understanding of how well the element under investigation operates in a given context. In this way, educators can gain better insights into which game elements to use in their courses, and researchers can focus more precisely on how well elements such as leaderboards work on such platforms. This argument also underscores the appropriateness of the current study, which focused on leaderboards as a single element of gamification.

Despite the growing popularity of gamification in the educational sector (Hanus & Fox, 2015), there are potential concerns about its educational benefits, especially for engineering education. In many studies, gamification has been perceived positively by users even though no real effect has been reported (Hanus & Fox, 2015). Moreover, most studies have reported short-term interventions lasting no longer than three weeks (Buckley & Doyle, 2016; Tan & Hew, 2016). The current study, whose intervention was implemented over an eight-week period, therefore complements such findings.

There has been little research to date that has attempted to empirically study the effects of an individual game element on learners’ achievement and engagement, particularly in engineering education. Considering the scarcity of studies regarding technical courses in engineering education and the conflicting results reported in previous studies, more empirical research is needed on the issue under investigation. Being conducted in an engineering education context, the current study aimed to fill an important gap in the literature.

On the basis of the points depicted above, this study investigated how leaderboards added to formative assessment as a game element would affect learners’ achievement and engagement in an engineering course. In this framework, the main research question guiding the whole design is as follows:

RQ: What are the differences in learner achievement and engagement between engineering students assessed through gamified formative quizzes (experimental group) and those assessed through non-gamified formative quizzes (control group) in the Hydraulic and Pneumatic (HP) course?

Based on this main research question, two sub-questions were set:

  1. RQ1: How does the use of online formative quizzes in the HP course affect learners’ academic achievement in the experimental group compared to the control group?

  2. RQ2: How does the use of online formative quizzes in the HP course influence learners’ engagement in the experimental group compared to the control group?

2 Method

2.1 Research context and participants

The research setting was the department of Mechatronics Technology at a public university in Türkiye. The participants of the study were 159 second-year engineering students. Given the specific research setting of a military school, it should be noted that all participants in this study were male and aged 19 to 22 years. All of them were boarding students living in the dormitories of the university.

The participants had been randomly divided into six classes of 27 to 30 students upon enrollment in the department, and they took the HP course in a blended format for four hours per week. The HP course is a 14-week course consisting of a 7-week Hydraulic Unit and a 7-week Pneumatics Unit. The content of the Hydraulic Unit includes Definition of Hydraulic System and Concepts, Hydraulic Pumps, Hydraulic Actuators, Hydraulic Oil Tanks and Fittings, Hydraulic Valves, Hydraulic Circuit Applications, and Problems in Hydraulic Systems and Possible Solutions. The study was carried out during the first half of the course and focused only on the content of the Hydraulic Unit.

2.2 Research design

The study adopted a pretest-posttest control group design, a type of quasi-experimental design. It is quasi-experimental because the participants were not assigned to the groups (control or experimental) randomly; they had already been registered in pre-established classes of the department since their enrollment. The intervention, comprising instruction and gamified formative online assessment integrated with leaderboards, spanned eight weeks in total. It commenced with a pre-test in the first week, followed by six weeks of instruction and gamified formative online assessment, and concluded with a post-test in the final week.

2.2.1 Independent variable

The independent variable of the study was ‘gamified formative assessment’, operationally defined as follows: online formative quizzes designed with the leaderboards element of gamification for the content of the HP course offered to engineering students at the Mechatronics Department.

2.2.2 Dependent variables

There were two dependent variables that were considered to be influenced by the independent variable: (a) learner achievement and (b) learner engagement.

Learner achievement was operationally defined in this study as the participants’ performance, assessed at the end of the intervention, on a practical exam covering the content of the eight-week intervention in the HP course, along with a post-test conducted as a theoretical exam.

The HP course evaluates student success through theoretical and practical examinations. The validation process, which involved four PhD researchers and two PhD candidates, further strengthened the validity and reliability of the course materials. One of the PhD researchers holds a PhD in Mechatronics Engineering, another in Electronics and Communication Engineering, while the remaining two specialize in education. The design of these examinations follows the University’s curriculum requirements, academic standards, and departmental guidelines. At the beginning of each semester, departmental meetings are convened in which faculty members, curriculum committee members, and department heads participate in discussions to finalize the course assessment process. These meetings address a range of issues including learning objectives, course content, industry requirements, and academic standards. To ensure a comprehensive assessment of students’ knowledge and skills, it was decided that both theoretical and practical (hands-on) examinations would be used. The theoretical component is conducted face-to-face in regular classrooms and includes all participants. Meanwhile, the practical examination takes place in the departmental laboratory, where experimental sets are used under the careful supervision of the course tutors. This multi-faceted approach aims to provide a well-rounded assessment of students’ skills in both theoretical knowledge and practical application.

Learner engagement was operationally defined in this study as:

the time spent by the participants on the weekly quizzes of the HP course during the eight-week intervention, the number of the attempts by the participants to take the weekly quizzes and the number of the completed quizzes.

All the quantitative data related to learner engagement were obtained from the Course Portal, the online learning platform designed for the HP course, and were calculated using the log records for each participant.

2.3 Intervention process

The whole process followed during the intervention is given in Fig. 1. Before the start of the intervention, the researchers and the instructors held several meetings to decide how to design the course and create the course materials (videos, lecture notes, worksheets, quizzes, and exams).

The development of the learning platform followed a systematic instructional design approach, employing the widely recognized ADDIE (Analysis, Design, Development, Implementation, and Evaluation) process (Peterson, 2003). This process ensures a comprehensive and structured framework for creating effective educational interventions. After the HP course was designed in the Course Portal, all the second-year engineering students were registered into the course under the six pre-established classes to which they had belonged since their enrollment at the department.

The same two instructors taught all the groups participating in the study. This ensured consistency in teaching practices and minimized the potential impact of instructor variability on the outcomes. Both of the instructors were experienced in delivering the course content and were provided with clear guidelines for the implementation of the online formative quizzes as part of the intervention.

Fig. 1 Intervention process: Quasi-experimental design

The intervention lasted a total of 8 weeks, during which both the experimental and control groups participated in the HP course. Throughout this period, the learning platform with online formative quizzes was implemented as a key component of the intervention. Each week included a structured schedule of class time dedicated to the course, with participants engaging in the online learning activities, assessments, and collaborative elements facilitated by the platform.

The breakdown of class hours for the intervention was as follows: two hours per week for instructional content delivery and two hours per week for hands-on projects. Students were able to use the portal at any time for activities (online formative quizzes) designed to increase engagement.

To begin, the researchers developed a 20-question pre-test consisting of multiple-choice and short-answer questions based on the content of the Hydraulic Unit. It was administered online during the first week of the semester to assign the classes to the groups in a balanced manner. The pre-test results were checked descriptively to gauge the participants’ existing knowledge of the course content and to determine whether there were knowledge gaps between groups. Based on the descriptive results of the pre-test (see Table 1), the existing knowledge of the pre-established classes did not differ markedly, and accordingly they were divided into two groups: three classes (n = 78), classes A, B, and C, as the Experimental Group (EG); and three classes (n = 81), classes D, E, and F, as the Control Group (CG).

A further analysis was conducted to compare the pre-test results of the Experimental Group (EG) and Control Group (CG) using an independent-samples t-test. This analysis aimed to determine whether statistically significant differences existed between the groups in terms of their baseline knowledge prior to the intervention. The results revealed no statistically significant difference between the EG and CG, t(157) = 0.174, p > .05. Both groups demonstrated similar levels of knowledge at the outset of the study, with mean scores of 37.83 for the EG and 37.39 for the CG (see Table 1).

Table 1 Differences in pre-test results by groups

In the next step, both groups (EG and CG) took the same blended course with the same title, content, and materials, which had been created and uploaded to the Course Portal (the online learning platform of the department). The only difference was the nature of the formative quizzes, which were given to the groups in different formats: gamified vs. non-gamified. Although the number of quizzes and the items were exactly the same for both groups (see Table 2 for details), the CG received the quizzes in a regular format, in which learners could only see their own scores and the correct answers. The EG, in contrast, received the quizzes with the leaderboards feature enabled, which allowed learners to see different rankings: (a) their individual ranking within the group and the department, as well as the top five scoring performances among learners; and (b) the ranking of each class, including the three highest and three lowest performing classes.

Table 2 Course content and formative assessment procedures

Each week during the intervention, online formative quizzes consisting of items selected randomly from the item pool of the course were administered to both groups. The items in the question bank were prepared by the researchers and the two instructors in accordance with the learning objectives of the course and were organized as multiple-choice and short-answer questions in the learning management system. A screenshot of the formative quizzes designed for both the EG and the CG is displayed in Fig. 2.

Fig. 2 Sample screenshot of the formative quizzes

The Course Portal was not accessible to guests; participants could access the quizzes only after logging in to the platform with their username and password. A screenshot of the interface designed for the EG is displayed in Fig. 3. The Leaderboards (Activity Results Plugin) feature of the formative quizzes was used only for the EG in two ways: (a) displaying the ranking of the five highest and five lowest performances each time; and (b) displaying the ranking and the average score of each class within the experimental group.

Fig. 3 Sample screenshot of the interface for EG

In the final stage of the intervention, post-tests, comprising both theoretical and practical exams, were administered. The post-test was identical for both groups in all respects. Practical exams took place in the laboratories during the concluding week of the intervention, while the theoretical exam was conducted in a single session at the end of that week.

The whole intervention lasted for eight weeks with one week of pre-test, six weeks of teaching and formative assessment and one week of post-test. The participants were informed, in advance, about all the procedures of the course and that taking the online formative quizzes was optional.

2.4 Data analysis

The data were analyzed with the IBM SPSS 24 package using descriptive and inferential statistics. The impact of gamified formative assessment on achievement and engagement was analyzed through independent-samples t-tests and analysis of covariance (ANCOVA). Before performing the t-tests and ANCOVA, the necessary assumptions, namely homogeneity of variance, normal distribution, and independence of observations, were checked. Levene’s tests of the assumption that the variances of the two groups (experimental and control) are equal were not significant.
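As a purely illustrative sketch of this analysis pipeline (the study itself used SPSS, not Python), the assumption check and the group comparison can be reproduced with SciPy on synthetic data. The group sizes below match the study (n = 78 and n = 81), but the scores are randomly generated, so the resulting statistics are not the study's:

```python
import numpy as np
from scipy import stats

# Synthetic scores (illustrative only): group sizes match the study's EG/CG
rng = np.random.default_rng(42)
eg = rng.normal(68.2, 11.4, 78)   # hypothetical experimental-group scores
cg = rng.normal(60.5, 13.4, 81)   # hypothetical control-group scores

# Levene's test: homogeneity-of-variance assumption for the t-test
lev_stat, lev_p = stats.levene(eg, cg)

# Independent-samples t-test comparing the two group means
t_stat, t_p = stats.ttest_ind(eg, cg)

df = len(eg) + len(cg) - 2        # degrees of freedom = n1 + n2 - 2 = 157
print(f"Levene p = {lev_p:.3f}")  # p > .05: variances can be treated as equal
print(f"t({df}) = {t_stat:.3f}, p = {t_p:.3f}")
```

With two groups of 78 and 81 participants, the degrees of freedom come out as 157, matching the t(157) reported throughout the Results section.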

3 Results

3.1 Results on learner achievement

The impact of gamified formative assessment on learners’ theoretical exam achievement was assessed by conducting an independent-samples t-test on scores from the theoretical exam. The results of the t-test, presented in Table 3, revealed a statistically significant difference between the experimental and control groups, t(157) = 3.894, p = .001, indicating that engineering students who underwent gamified formative assessment during the intervention achieved higher scores (M = 68.20) on the theoretical exam compared to those assessed using non-gamified quizzes (M = 60.51).

Additionally, an examination of the standard deviations for both groups yielded interesting insights into student achievement. The standard deviation for the experimental group was 11.43, whereas for the control group, it was 13.35. This suggests that students in the experimental group, assessed using gamified quizzes, may exhibit more homogeneous achievement patterns compared to those in the control group. These findings suggest that gamified quizzes could serve as an effective tool for promoting more consistent learning outcomes.

Table 3 Differences in theoretical exam by groups

Another independent-samples t-test was conducted to assess potential differences between the practical (hands-on activities) exam scores of the experimental and control groups. The results, displayed in Table 4, revealed no statistically significant difference between the groups in the scores obtained from the practical component of the post-test, t(157) = 0.908, p = .365. Upon closer examination of the practical exam results, the group averages were closely aligned, with a mean of 88.60 for the Experimental Group (EG) and 86.64 for the Control Group (CG) (see Table 4).

Table 4 Differences in practical (hands on activities) exams by groups

In addressing the first research question (RQ1), a one-way ANCOVA was employed to mitigate the influence of participants’ baseline knowledge, assessed via a pre-test administered at the outset of the intervention. The ANCOVA, utilizing pre-test scores as the covariate and theoretical exam scores as the dependent variable, yielded results consistent with previous analyses. Following the adjustment for pre-test scores, statistically significant differences between the experimental and control groups were observed for the theoretical exam, F(1, 155) = 11.268, p = .01, η2 = .068 (see Table 5).

Table 5 ANCOVA Results for theoretical exam scores

3.2 Results on learner engagement

The engagement data were obtained from the online learning platform and recorded as the time each learner spent on the formative quizzes, the number of attempts to take the weekly quizzes, and the number of quizzes completed. The descriptive results indicated that the number of students taking the formative quizzes tended to decrease in both the experimental and control groups as the intervention progressed, especially after the third quiz. Still, the decrease seemed larger in the control group; this slight difference could be related to the gamified feature of the quizzes given to the experimental group. A similar pattern was observed in quiz attempts, as the experimental group made more attempts (M = 25.01) than the control group (M = 18.36), suggesting that gamification probably made the participants more engaged with the quizzes. As an expected consequence of the higher number of attempts, the experimental group (M = 3384) also spent more time on the quizzes than the control group (M = 2567). The higher standard deviation of attempts in the Control Group (22.78) compared to the Experimental Group (20.50) suggests greater variability in the number of attempts among control-group students; the same applies to the time spent on quizzes. In practical terms, quiz engagement in the EG appears more homogeneous, or consistent, across students, whereas the CG shows a wider range of behavior in the number of attempts made. This information is valuable in understanding how students in the CG approached quiz participation differently from those in the EG. For example, the student who made the most quiz attempts (163) was in the CG.

Upon analyzing the students in both groups who made the highest number of online quiz attempts, the top three students in the CG made 163, 85, and 67 attempts, respectively, whereas the three students with the highest number of attempts in the EG made 107, 105, and 97. These distributions reveal a more homogeneous pattern of attempts among students in the EG, whereas the high number of attempts in the CG was heterogeneous, likely because certain students solved far more quizzes than the norm. All in all, a positive effect of the gamified quizzes was detected in the descriptive indicators of the engagement data (see Table 6).

Table 6 Descriptive indicators of engagement

When the engagement data were subjected to inferential statistics, slightly different results emerged. To assess the influence of gamified formative assessment on learners’ engagement, an independent-samples t-test was conducted on the engagement data, comparing the Experimental Group (EG) and Control Group (CG). A statistically significant difference between the experimental and control groups was found for only one dimension within the engagement data: the number of quizzes completed, t(153) = 3.674, p = .001. This finding indicated that engineering students exposed to gamified formative assessment (M = 4.86) completed more weekly quizzes than those given regular (non-gamified) quizzes (M = 4.04) during the intervention. However, no statistically significant differences were observed for the time spent on formative quizzes, t(153) = 1.264, p = .208, or the number of attempts to take the quizzes, t(153) = 1.794, p = .075 (see Table 7).

Table 7 Differences in engagement

When both the descriptive and inferential results (Tables 6 and 7) are considered together, they suggest that gamification contributed positively to the participants’ engagement with the formative assessment.

4 Discussion and conclusion

The overall results of the study showed that adding gamification to formative assessment procedures in an online learning platform can contribute to student achievement (theoretical exam scores) and to one aspect of student engagement (the number of weekly quizzes completed). Although the contribution of gamification was limited to these aspects of the course, gamifying the formative quizzes with leaderboards appears particularly effective for theoretical exam success. Upon careful evaluation of the results, it is evident that implementing a basic leaderboard in the experimental group did not increase student participation as much as initially anticipated; however, it did lead to a significant increase in the number of quizzes completed in favor of the experimental group. The number of completed quizzes, in turn, may also have contributed to success in the theoretical exams.

As for the first point, the online formative quizzes gamified with the leaderboards element tended to improve the participants’ summative achievement in the theoretical exam administered at the end of the intervention. There are some studies that support this finding in different contexts. For instance, Mekler et al. (2013) and Sanchez et al. (2020) both reported that gamification could significantly improve learners’ performance.

Gamification elements adopted in online learning environments can be utilized as an innovative, fun, and stimulating way to show learners their progress. For example, leaderboards can serve as a source of motivation for learners to embrace gamified learning environments, because learners can see their activities processed publicly and instantly and compare their progress with their classmates (Domínguez et al., 2013). Therefore, using leaderboards as external incentives is considered to improve learners’ academic performance (Mekler et al., 2017). In line with this argument, the positive effect of leaderboards on learners’ achievement is consistent with the findings of other studies (Bai et al., 2021; Ibáñez et al., 2014; Landers et al., 2017; Landers et al., 2019; Mekler et al., 2017) claiming that leaderboards could directly or indirectly improve learners’ performance. On the other hand, the studies of Hanus and Fox (2015) as well as de-Marcos et al. (2014) contradicted this finding by stating that learners’ scores could be lower even after participating in a gamified learning environment.

Regarding the second point, the online formative quizzes gamified with the leaderboards made a significant impact also on the participants’ engagement with the quizzes in terms of the number of the weekly quizzes completed by each participant throughout the intervention. This finding is consistent with the existing literature since some studies (Barata et al., 2017; Charles et al., 2011; Dichev & Dicheva, 2017; Scales et al., 2016) have already reported an increase in learner engagement with the help of gamification. It is essential to highlight that the gamification approach adopted in this study was intentionally modest, focusing solely on the incorporation of leaderboards. While the impact on engagement may not have been as pronounced as anticipated, this choice was guided by the aim to assess the effectiveness of a minimalistic gamification strategy in an online learning context.

When the descriptive data are examined, it is apparent that the number of participants who completed the weekly quizzes, the number of attempts to take the quizzes, and the time spent on the quizzes all tended to be higher in the experimental group than in the control group. Although not statistically significant for all three dimensions, this finding is consistent with results from previous studies (Domínguez et al., 2013; Huang & Hew, 2018; Legaki et al., 2020; Mekler et al., 2017). Some other studies (Bai et al., 2021; Landers & Landers, 2014) specifically stated that leaderboards could motivate learners to repeat formative quizzes and increase time on task in online learning platforms.

The finding that gamification did not have a statistically significant effect on two dimensions of the engagement data (the time spent on the quizzes and the number of attempts) can nevertheless be linked to the increase in the participants’ theoretical exam scores. The gamified formative assessment did increase the number of participants who took the quizzes and the number of quizzes completed by each participant, and such a tendency could have produced the significant difference in the post-test scores (Landers et al., 2017; Landers & Landers, 2014). In other words, gamifying the weekly online formative quizzes encouraged the participants in the experimental group to complete more quizzes, and their higher theoretical exam scores can be read as an indirect effect of completing more quizzes.

As a final point of the descriptive data, the decrease in the number of the learners who completed the quizzes after the third quiz in both groups could be due to the short interaction of the learners with the leaderboards as stated also in the literature (Bai et al., 2020; Koivisto & Hamari, 2014, 2019).

In conclusion, while the results did not reveal a substantial increase in overall student engagement with the introduction of a simple leaderboard, the discussion sheds light on the value of cost-efficient gamification strategies, providing a nuanced perspective for future considerations in online learning environments.

4.1 Limitations and future research

In the literature on formative assessment, there is a frequently discussed phenomenon: the testing effect. The testing effect, also known as ‘test-enhanced learning,’ is the impact that formative quizzes can have on learning; it essentially refers to the increased retention of information and/or abilities acquired through testing (Larsen, 2013). From this perspective, using gamified weekly online quizzes might amplify the testing effect (Sanchez et al., 2020), which could be interpreted as a limitation of the current study. A follow-up study in which the testing effect is statistically controlled would eliminate this limitation.

Another limitation is about measuring the effect of a single game element, leaderboards, and not exploring the influence of other gamification elements. A design in which different gamification elements are tested would yield more comprehensive results. Future researchers could also consider collecting qualitative data from learners about the impact of gamification in assessment procedures and thereby provide more insights into the issue under investigation. As our study is quasi-experimental, the results obtained are indicative and cannot be generalized. Finally, this study suggests that the gamified formative assessment is rather effective in the current setting, yet more research from diverse contexts and educational settings is needed on the positive effects of gamification.

4.2 Practical implications

4.2.1 Implications for Research

This study highlights the potential benefits of incorporating leaderboards as a gamification element into assessment procedures. Future research should further explore which features of other gamification elements most effectively enhance learner achievement, focusing on different educational contexts and disciplines. Additionally, it is essential to investigate the long-term effects of gamified learning on student engagement and motivation, considering individual differences and characteristics. Further studies could also examine how different gamification elements, such as badges and leaderboards, impact various types of learners, particularly in maintaining engagement over extended periods.

4.2.2 Implications for teaching

The findings suggest that adding leaderboards as a gamification element to assessment procedures can significantly enhance learners’ achievement in courses. Instructors can gamify other instructional activities in online learning environments to foster engagement and motivation. However, it is crucial to consider individual differences and characteristics when designing gamified learning activities, as some learners may be reluctant to participate in competitive environments. Instructors should also be mindful of the potential for gamification to lose its appeal over time (Koivisto & Hamari, 2014; Sanchez et al., 2020). Therefore, attention should be given to the timing and duration of gamified activities to sustain long-term engagement and avoid diminishing returns.

4.3 Conclusion

The current study found that weekly online quizzes gamified with leaderboards in an online learning environment can have an impact on learner achievement and learner engagement. However, such an impact could also be related to the testing effect: gamification of formative quizzes might increase participation in the quizzes, which could itself be explained by the testing effect. Moreover, that impact might diminish after a certain point and prove short-lived. For these reasons, more empirical research, including longitudinal studies, is needed to comprehensively test the contribution of gamified formative assessment to both learner achievement and engagement.