1 Introduction

In the rapidly evolving landscape of educational technology, blended learning (BL) has become a prominent approach, integrating face-to-face and online learning experiences (Hill & Smith, 2023; Rasheed et al., 2020). While previous research has emphasized the widespread adoption and benefits of BL, including improved academic achievement (Boelens et al., 2017; Hill & Smith, 2023), the challenges faced by students, teachers, and educational institutions in its implementation are also well recognized (Rasheed et al., 2020). Designing effective BL presents several key challenges, including diminished learner attention and declining motivation, which reduce engagement and participation in courses (Khaldi et al., 2023). Furthermore, students may struggle with preparatory tasks and quizzes prior to in-person classes, often due to inadequate motivation (Albiladi & Alshareef, 2019; Boelens et al., 2017).

Gamification, defined as the educational use of game mechanics and design principles in contexts beyond traditional games (Schell, 2008), has garnered considerable attention. Studies highlight its positive impact on learning motivation, emphasizing the mediating role of psychological needs within self-determination theory (Deci & Ryan, 2016). This positions gamification as a potential solution for addressing challenges in blended learning. Recent systematic research underscores the significance of leveraging gamification in online environments to enhance student engagement (Bizami et al., 2023; Jayawardena et al., 2021).

Gamified tests have gained increasing popularity in academia, supporting blended learning models and formal education settings (Bolat & Taş, 2023; Saleem et al., 2022). Recent findings suggest that gamified assessments yield higher process satisfaction among students than traditional assessments (Cortés-Pérez et al., 2023; Sanchez et al., 2020). The advent of machine learning algorithms has given rise to adaptive gamified assessments, offering a novel approach to personalized testing and feedback and thereby enhancing learning autonomy (Llorens-Largo et al., n.d.). Therefore, this study focuses on investigating the impact of gamified assessment on blended learning.

While existing research has explored the impact of gamification in online environments (Can & Dursun, 2019; Ramírez-Donoso et al., 2023), a noticeable gap remains in understanding the specific effects of gamified tests in online settings, particularly within the context of K-12 education. Research on adaptive gamified assessments is limited, emphasizing the need for further exploration (Bennani et al., 2022). Consequently, this study primarily focuses on investigating adaptive gamified assessments, with research objectives centered around motivation and knowledge levels in early education. The research objectives are outlined as follows:

  1. Does adaptive gamified assessment enhance learners' motivation in blended learning, and what is the effect of the adaptive gamified assessment on learners' motivation?

  2. Does adaptive gamified assessment improve learners' academic performance in blended learning?

To address the challenges present in blended learning, this research contributes to the field by providing insights into the effects of machine learning-based gamified assessments on motivation and performance, offering valuable recommendations for the improvement of blended learning. The findings could also facilitate the design and adoption of blended learning, particularly in the context of K-12 education.

The subsequent sections present a comprehensive literature review and theoretical framework, outline the chosen methodology, report the results and discussion, and conclude with implications and avenues for future research.

2 Literature review

2.1 Blended learning challenges and benefits

Blended learning has emerged as a popular educational model, distinct from traditional instructional methods. It represents a convergence of in-person and online learning, leveraging the strengths of each to enhance the educational experience (Poon, 2013). The hybrid approach combines classroom effectiveness and socialization with the technological benefits of online modules, offering a compelling alternative to conventional teaching models. Research has identified significant improvements in academic performance attributable to blended learning's efficiency, flexibility, and capacity (Hill & Smith, 2023). The approach also facilitates increased interaction between staff and students, promotes active engagement, and provides opportunities for continuous improvement (Can & Dursun, 2019).

Despite these advantages, blended learning is not without its challenges, particularly for students, teachers, and educational institutions during implementation. Boelens et al. (2017) highlight that students often face self-regulation challenges, including poor time management and procrastination. The degree of autonomy in blended courses demands heightened self-discipline, especially online, where learner isolation and the asynchronous nature of digital interactions can compound these difficulties (Hill & Smith, 2023). Isolation can be a critical issue, as students engaged in pre-class activities such as reading and assignments often do so in solitude, which can lead to a decrease in motivation and an increase in feelings of alienation (Chuang et al., 2018; Yang & Ogata, 2023).

Teachers, on the other hand, encounter obstacles in technological literacy and competency. Personalizing learning content, providing feedback, and assessing each student can demand considerable time and effort (Cuesta Medina, 2018; Bower et al., 2015). These challenges can adversely affect teachers' perceptions and attitudes towards technology (Albiladi & Alshareef, 2019). Furthermore, from a systems perspective, implementing Learning Management Systems (LMSs) that accommodate diverse learning styles is a significant hurdle. It necessitates a custom approach to effectively support differentiated learning trajectories (Albiladi & Alshareef, 2019; Boelens et al., 2017; Brown, 2016). Current research efforts are thus focused on enhancing the effectiveness of blended learning and its facilitation of independent learning practices.

2.2 Gamification in education

Gamification in education signifies the integration of game design elements into teaching activities that are not inherently game-related. This approach is distinct from game-based learning, where the primary focus is on engaging learners in gameplay to acquire knowledge. Gamification introduces game dynamics into non-gaming environments to enrich the learning experience (Alsawaier, 2018).

With the progression of technology, gamification has become increasingly prevalent within educational frameworks, aiming to amplify student engagement, motivation, and interactivity (Oliveira et al., 2023). Empirical evidence supports that gamification can effectively address issues such as the lack of motivation and frustration in educational contexts (Alt, 2023; Buckley & Doyle, 2016). Components like levels and leaderboards have been successful as external motivators, promoting a competitive spirit among learners (Mekler et al., 2017). Furthermore, research indicates that gamification can have enduring effects on student participation, fostering beneficial learning behaviors (Alsawaier, 2018).

Despite these positive aspects, some scholarly inquiries have presented a more nuanced view, suggesting that gamification does not uniformly enhance academic outcomes. These varying results invite deeper investigation into the conditions under which gamification can truly enhance the educational experience (Oliveira et al., 2023). In light of such findings, recent gamified designs have increasingly emphasized personalization, taking into account the unique characteristics, needs, and preferences of each student. Studies have explored the tailoring of gamification frameworks to align with diverse student profiles (Dehghanzadeh et al., 2023; Ghaban & Hendley, 2019), learning styles (Hassan et al., 2021), pedagogical approaches, and knowledge structures (Oliveira et al., 2023). However, the literature still presents contradictory findings, and there is a relative dearth of research focusing on learning outcomes (Oliveira et al., 2023).

2.3 Adaptive assessment in education

Adaptive learning harnesses technological advancements to create a supportive educational environment where instructional content is tailored to individual learning processes (Muñoz et al., 2022). This pedagogical approach is grounded in the principle of differentiated instruction, allowing for the personalization of educational resources to meet diverse learning requirements (Reiser & Tabak, 2014).

Adaptive assessments, stemming from the philosophy of adaptive learning, dynamically adjust the difficulty of questions based on a learner's previous answers, terminating the assessment once enough data is collected to form a judgment (Barney & Fisher Jr, 2016). In the digital age, with the proliferation of e-learning, there has been a significant shift towards adaptive computer-based assessments (Muñoz et al., 2022), utilizing AI-based modeling techniques (Coşkun, 2023), and emotion-based adaptation in e-learning environments (Boughida et al., 2024). These assessments are characterized by their ability to modify testing parameters in response to student performance, employing machine learning algorithms to ascertain a student’s proficiency level.
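
To make this mechanism concrete, the sketch below (in Python) illustrates, under stated assumptions, how an adaptive assessment might raise or lower item difficulty from a running proficiency estimate and stop once enough evidence has accumulated. It is a minimal illustration of the general principle, not Quizizz's or any platform's actual algorithm; the item bank, difficulty rule, and stopping threshold are all hypothetical.

```python
import random

# Hypothetical item bank: five difficulty levels, twenty question ids per level.
ITEM_BANK = {level: [f"q{level}_{i}" for i in range(20)] for level in range(1, 6)}

def run_adaptive_assessment(answer_fn, max_items=15, stop_se=0.15):
    """Illustrative adaptive loop: move difficulty up or down after each response
    and stop early once the proficiency estimate looks stable.
    `answer_fn(question)` should return True when the learner answers correctly."""
    level, correct, administered, history = 3, 0, 0, []

    while administered < max_items:
        question = random.choice(ITEM_BANK[level])
        is_correct = bool(answer_fn(question))
        history.append((level, is_correct))
        administered += 1
        correct += is_correct

        # Simple up/down rule: harder after a correct answer, easier after a miss.
        level = min(5, level + 1) if is_correct else max(1, level - 1)

        # Stop early once the proportion-correct estimate has settled.
        p = correct / administered
        estimate_se = (p * (1 - p) / administered) ** 0.5
        if administered >= 5 and estimate_se < stop_se:
            break

    return {"proficiency_estimate": correct / administered,
            "items_used": administered,
            "history": history}

# Example: a simulated learner who handles levels 1-3 well but struggles above them.
def simulated_learner(question):
    item_level = int(question[1])
    return random.random() < (0.8 if item_level <= 3 else 0.3)

print(run_adaptive_assessment(simulated_learner))
```

In a real computerized adaptive test, the proportion-correct estimate above would typically be replaced by an item response theory or machine learning model of proficiency, but the overall loop structure is the same.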

Prior studies on adaptive methods have revealed several advantages, such as delivering personalized feedback promptly, forecasting academic achievement, and facilitating interactive learning support. These advantages extend to potentially enhancing learner engagement and outcomes (Muñoz et al., 2022). However, adapting instruction to cater to varied skill levels remains a challenge, as does addressing the issues of demotivation and anxiety among students undergoing assessment (Akhtar et al., 2023). Consequently, current research is concentrated on boosting student motivation and engagement in adaptive assessments.

In the field of gamified education, adaptive gamification aims to merge adaptive learning principles with game elements to enrich the learning experience. This approach has been explored through the use of data mining techniques on student logs to foster motivation within adaptive gamified web-based environments (Hassan et al., 2021). Despite these innovative efforts, empirical research on gamified adaptive assessment is limited, as the field is still developing.

2.4 Integration of blended learning and gamified assessment

The combination of blended learning with gamified assessment has been recognized for its potential to increase student engagement, a critical factor often lacking in online learning compared to traditional classroom settings (Dumford & Miller, 2018; Hill & Smith, 2023). Studies investigating the role of gamification within online learning environments have found that it can enhance students’ achievement by fostering greater interaction with content (Taşkın & Kılıç Çakmak, 2023). Moreover, gamified activities that demand active participation can promote active engagement (Özhan & Kocadere, 2020).

Investigations into the efficacy of gamified assessment in online environments suggest that students may reap the benefits of its motivational potential. For instance, research has adapted motivational formative assessment tools from massively multiplayer online role-playing games (MMORPGs) for use in cMOOCs, demonstrating positive outcomes (Danka, 2020). Another study compared the effects of traditional online assessment environments to those employing gamified elements, such as point systems, observing the impact on student task completion and quality in mathematics assessments (Attali & Arieli-Attali, 2015). Collectively, these studies indicate that gamified tests can indirectly benefit learning by enhancing engagement with the instructional content.

While many studies affirm the efficacy of gamified tests as a valuable, cost-effective tool for educators in blended learning environments (Sanchez et al., 2020), there is a noted gap in research addressing individual differences within gamified testing. Particularly, empirical research on adaptive gamified assessment is scarce, with more focus on the computational aspects of system development than on the impacts on motivation and academic achievement. Furthermore, while studies suggest that gamified tests may enhance the 'testing effect'—the phenomenon where testing an individual's memory improves long-term retention—most of this research is centered in higher education (Pitoyo & Asib, 2020).

The use of gamification spans various educational levels, from primary and secondary schooling to university and lifelong learning programs. However, research focusing on the implementation of gamification in primary and secondary education tends to prioritize the perspective of educators and the application within instructional activities (Yang & Ogata, 2023), rather than the online assessment itself. Therefore, this study aims to advance the empirical understanding of the application of gamification in assessments and its potential to improve learning outcomes, particularly in early education.

3 Theoretical framework

3.1 Self-determination theory (SDT)

Self-Determination Theory (SDT) is a well-known theory of motivation that offers an in-depth understanding of human needs, motivation, and well-being within social and cultural environments (Chiu, 2021). Gamification, which applies gaming elements in non-game settings, frequently employs SDT to address educational challenges in both gamified and online learning platforms (Chapman et al., 2023). SDT distinguishes itself by its focus on autonomous versus controlled forms of motivation and the impact of intrinsic and extrinsic motivators, as characterized by Ryan and Deci (2000). Unlike intrinsic motivation, which is driven by internal desires, extrinsic motivation relies on external incentives such as rewards, points, or deadlines to elicit behaviors, as commonly seen in the reward structures of gamified learning environments. In the adaptive gamified assessment used in this study, the points, rewards, and titles awarded each time a student completes an exercise task serve to regulate extrinsic motivation.

SDT is a comprehensive theory that explores the intricacies of human motivation. A subset of SDT, Cognitive Evaluation Theory, postulates that three innate psychological needs—autonomy, competence, and relatedness—propel individuals to act (Deci & Ryan, 2012). Autonomy is experienced when individuals feel they have control over their learning journey, making choices that align with their self-identity, such as selecting specific content areas or types of questions in an adaptive gamified assessment. Competence emerges when individuals encounter optimal challenges that match their skills, where adaptive gamified assessments can adjust in difficulty and provide feedback, thereby promoting skill acquisition and mastery. Relatedness is the sense of connection with others, fostered by supportive and engaging learning environments. In gamified contexts, this can be achieved through competitive elements and parental involvement in the learning process, enhancing the learning atmosphere.

The fulfillment of these psychological needs, particularly those of autonomy and competence, is central to fostering intrinsic motivation according to SDT. Figure 1 illustrates the adaptive gamified assessment process and how it aligns with SDT.

Fig. 1 The structure of the adaptive gamified assessment

3.2 Principles of language assessment

The adaptive gamified assessment in this study utilizes Quizizz, an online educational technology platform that offers formative gamified tests to help students develop academic skills in various subjects, including English language (Göksün & Gürsoy, 2019). Drawing on the five principles of language assessment outlined by Brown and Abeywickrama (2004), this study analyzes the adaptive gamified assessment. These principles (practicality, reliability, validity, authenticity, and washback) are foundational in foreign language teaching and assessment.

Practicality refers to a test's ability to be administered within reasonable constraints of time, resources, and technical requirements. Quizizz's adaptive assessments are seamlessly integrated into blended learning environments, designed for time efficiency, and require minimal resources, making them suitable for a broad range of educational contexts. The platform's user-friendly design ensures that assessments are easily administered and completed by students, necessitating only an internet connection and a digital device (Göksün & Gürsoy, 2019).

Reliability is the extent to which an assessment consistently yields stable results over time and across different learner groups, providing dependable measures of language proficiency. Quizizz's algorithms adapt task difficulty based on learner responses, offering consistent outcomes and measuring student performance reliably over time (Munawir & Hasbi, 2021).

Validity concerns the assessment's ability to accurately measure language abilities in alignment with intended learning outcomes and real-world language application. Quizizz's assessments measure language skills that correlate directly with curriculum-defined learning outcomes, ensuring that results are valid representations of a student's language capabilities. The gamified context also mirrors competitive real-life scenarios, enhancing the authenticity of language use (Priyanti et al., 2019).

Authenticity indicates that assessments should mirror real-life language usage, providing tasks that are meaningful and indicative of actual communication situations. Quizizz's assessments incorporate tasks resembling real-world communicative scenarios, such as reading passages, interactive dialogues, and written responses that reflect authentic language use (Brown & Abeywickrama, 2004).

Washback refers to the influence of assessments on teaching and learning practices, which should be constructive and foster language learning. Quizizz's immediate feedback from adaptive assessments can positively affect teaching and learning. Instructors can utilize the results to pinpoint student strengths and areas for improvement, customizing their teaching strategies accordingly. Students benefit from being challenged at the appropriate level, bolstering motivation and facilitating the acquisition of new language skills in a gradual, supportive manner (Munawir & Hasbi, 2021).

Previous research has demonstrated that Quizizz has a significant impact on academic performance across various educational institutions (Munawir & Hasbi, 2021). As an exemplar of gamified adaptive assessment, Quizizz is designed to be practical and reliable while offering valid and authentic assessments of language proficiency. Moreover, it strives for a positive washback effect on the learning process, promoting effective language learning strategies and accommodating personalized education.

4 Methodology

4.1 Research design

This study employed a controlled experimental design within a quantitative research framework. The methodology involved several stages, as illustrated in Fig. 2. First, participants were selected based on their responses to a pre-questionnaire and a pre-assessment, ensuring comparable baseline levels of English proficiency and computer literacy among all participants. Subsequently, participants were randomly assigned to either the control or the experimental group to minimize assignment bias and keep the groups comparable. Over a period of 20 weeks, a blended language learning intervention was administered to both groups. This intervention involved accessing identical online learning resources before and after traditional classroom sessions, with equal amounts of offline instruction time. Daily assessments were conducted throughout the intervention period: the experimental group completed gamified adaptive tests via Quizizz, while the control group undertook non-gamified adaptive tests on a computer. Upon completion of the intervention, surveys were conducted to assess the motivation levels of both groups and compare their English language proficiency. Data were collected from pre- and post-assessments, as well as from the questionnaires and structured interviews.

Fig. 2 Flowchart of the experimental process for assessing the impact of gamified learning on student outcomes

4.2 Participants

Forty-five English learners from primary schools in China, aged 8 to 10 years (M = 9.40, SD = 0.62), participated in this study. The sample comprised 25 girls (55.56%) and 20 boys (44.44%). Insights into students' previous experiences, motivations for formative assessments, and attitudes toward language learning were gathered through a pre-questionnaire. Informed consent was obtained from all participants and their guardians, and confidentiality and anonymity were maintained throughout the study. Participants (see Fig. 3) were randomly divided into a control group (n = 22; 12 girls and 10 boys) and an experimental group (n = 23; 13 girls and 10 boys). The experimental group received instructions on completing and utilizing the adaptive gamified assessment, Quizizz, while the control group completed non-gamified adaptive tests on a computer. Both groups adopted the same blended learning model and were informed of identical deadlines for weekly formative assessments, requiring an accuracy rate of over 90%. Immediate feedback was provided on the accuracy rates, and participants were informed they could attempt the assessment again if the target was not met.

Fig. 3 Comparison of number and gender ratio in two groups

4.3 Instruments

The study utilized Quizizz's Adaptive Question Bank mode, which offers a range of question difficulties and allows students to progress at their own pace. The questionnaire was adapted from the Student Evaluation of Educational Quality (SEEQ), which has demonstrated a high level of reliability, with Cronbach's alpha ranging from 0.88 to 0.97. Additionally, following Pecheux and Derbaix (1999), the questionnaire was kept as concise as possible for young learners and was administered in their native language, Chinese.

The questionnaire uses a 5-point Likert scale to measure students' attitudes toward adaptive gamified tests, scored as follows: strongly agree = 9, agree = 7, neutral = 5, disagree = 3, and strongly disagree = 1. The statements cover several aspects of gamified testing: Engagement and Enjoyment (e.g., 'You enjoy learning new words through game tests. Game tests make learning grammar and vocabulary more fun for you.'); Anxiety and Confidence (e.g., 'Game tests help you feel less worried about making mistakes in learning.'); Understanding and Retention (e.g., 'Playing educational games helps you understand your lessons better.'); and preference over traditional testing methods (e.g., 'You prefer taking quizzes when they are like games compared to regular tests.'). The total score provides a cumulative measure of each student's attitude toward gamified language tests. In addition, participants expressed their overall satisfaction with the blended learning experience as a percentage, a metric instrumental in assessing the role of gamified testing within the blended learning framework. Finally, binary yes/no questions probed specific components and potential effects of gamified tests, such as the impact of leaderboards and rewards on motivation and the willingness to spend extra time on gamified tests.
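
As an illustration of how such responses could be scored, the minimal sketch below totals the Likert items, records the satisfaction percentage, and counts the yes/no answers for a single participant. The item keys and sample answers are hypothetical and do not reproduce the actual instrument.

```python
# Hypothetical response record for one participant; item keys are illustrative only.
LIKERT_VALUES = {"strongly agree": 9, "agree": 7, "neutral": 5,
                 "disagree": 3, "strongly disagree": 1}

response = {
    "likert": {                          # attitude items across the four aspects above
        "enjoy_game_tests": "agree",
        "less_worried_about_mistakes": "strongly agree",
        "understand_lessons_better": "neutral",
        "prefer_game_quizzes": "agree",
    },
    "satisfaction_percent": 85,          # overall satisfaction with blended learning
    "binary": {                          # yes/no items on specific gamification components
        "leaderboard_motivates": True,
        "rewards_motivate": True,
        "extra_time_on_game_tests": False,
    },
}

def score_participant(resp):
    """Return the cumulative attitude score, satisfaction %, and number of 'yes' answers."""
    attitude_total = sum(LIKERT_VALUES[answer] for answer in resp["likert"].values())
    yes_count = sum(resp["binary"].values())
    return attitude_total, resp["satisfaction_percent"], yes_count

print(score_participant(response))       # (28, 85, 2) for this sample record
```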

Moreover, to explore the impact of adaptive gamified assessment on motivation, structured interviews were conducted with the experimental group. The questions, adapted from Chiu (2021), primarily focused on aspects of motivation such as amotivation, external regulation, intrinsic motivation, and the psychological needs related to relatedness, autonomy, and competency, as seen in Table 1. Responses were quantified on the same Likert scale, with options ranging from 'strongly agree' to 'strongly disagree.'

Table 1 The structured interview questions

5 Results and discussion

5.1 Comparison of language learning attitude scores and satisfaction of participants

To analyze the impact of adaptive gamified assessments on learners, the trajectory of language learning attitude scores and satisfaction percentage for two groups over the course of the experiment was explored, with results depicted in Fig. 4 and Fig. 5.

In Fig. 4, the total score of language learning attitude for the control group's online assessment and the experimental group's adaptive gamified assessment demonstrates an increasing trend as the experiment progressed. After 4 weeks, the language learning attitude scores of the control and experimental groups were 10 and 47, respectively. By week 16, the experimental group's score increased to 70, and after 20 weeks, the control group's score was 50, while the experimental group's score reached 75. A paired-samples t-test conducted via SPSS indicated that the attitude scores were significantly higher in the experimental group than in the control group (t(44) = -14.47, p < 0.001, SD = 4.73), as detailed in Table 2. This significant difference in attitude scores demonstrates the effectiveness of the adaptive gamified assessment in enhancing the language learning attitude of students over the duration of the experiment.
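
For readers who wish to reproduce this type of comparison outside SPSS, the sketch below runs a paired-samples t-test with SciPy. The score arrays are placeholders standing in for matched measurement points, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder attitude scores at matched measurement points (not the study's data).
control      = np.array([10.0, 22.0, 31.0, 40.0, 50.0])   # e.g. weeks 4, 8, 12, 16, 20
experimental = np.array([47.0, 58.0, 64.0, 70.0, 75.0])

# Paired-samples t-test, mirroring the SPSS analysis reported above.
t_stat, p_value = stats.ttest_rel(control, experimental)
differences = control - experimental
print(f"t({len(differences) - 1}) = {t_stat:.2f}, "
      f"p = {p_value:.4f}, SD of differences = {differences.std(ddof=1):.2f}")
```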

Fig. 4 Change of language learning attitude scores of two groups

Table 2 Paired samples test of language learning attitude scores

Figure 5 reveals that as the experiment progressed, the students' dissatisfaction rates with gamified online learning decreased significantly in both groups. Initially, after 4 weeks, the average dissatisfaction rate for the control and experimental groups was 11% and 6%, respectively. As the experiment continued, the dissatisfaction rates declined, dropping to about 5% in the experimental group and 8% in the control group after 20 weeks. Paired t-test results further show a significant decrease in dissatisfaction (t(44) = 10.13, p < 0.001, SD = 0.87). This suggests a marked downward trend in students' dissatisfaction with gamified online learning over the duration of the study, in accordance with their attitudes towards adaptive gamified assessment.

Fig. 5 Variation curve of dissatisfaction rate of gamification in two groups

Our research found that students maintain a positive attitude towards the blended learning model of online assessment, which aligns with previous research (Abduh, 2021; Albiladi & Alshareef, 2019), indicating that e-assessment can benefit online learning and teaching. However, a deeper comparison between non-gamified and gamified adaptive testing groups in terms of satisfaction and students' subjective perceptions reveals differences. The experimental group, which incorporated gamified adaptive testing, demonstrated a more positive attitude, corroborating the positive role of gamification in education as outlined by Bolat and Taş (2023). Gamified assessment promotes student motivation in a manner consistent with previous research (Bennani et al., 2022), and our study has similarly shown that gamified assessment positively influences learners' behaviors and attitudes (Özhan & Kocadere, 2020).

This result appears to contradict the findings of Kwon and Özpolat (2021), which suggest that gamification of assessments had a significantly adverse effect on students' perceptions of satisfaction and their experience of courses in higher education. Our findings, however, indicate that adaptive gamified assessments enhance motivation and engagement, thus contributing positively to the learning process for young learners. Furthermore, the motivational levels in the experimental group remained stable, whereas motivation in the control group decreased over time. This suggests that adaptive gamified assessments may help to sustain or enhance learner motivation within a blended learning environment.

5.2 Effect of adaptive gamified assessment on learners' motivation

To further examine the effect of adaptive gamified assessments, the standard error of dissatisfaction for both groups was evaluated, together with a statistical analysis of the distribution of motivation within the experimental group. The outcomes of these analyses are depicted in Fig. 6.

Fig. 6 Change curves of the standard error of dissatisfaction in the two groups

In Fig. 6, a notable decrease in standard error scores for both the control and experimental groups is observed as the experiment progresses. Initially, after 4 weeks, the standard error scores stood at 8 for the control group and 5 for the experimental group. At the end of the 20-week study period, these scores had diminished to 5.4 and 2.8, respectively.
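
Assuming the metric reported here is the standard error of the mean, it can be read as the standard deviation of a group's dissatisfaction ratings divided by the square root of the group size; the short sketch below computes it for a set of hypothetical ratings.

```python
import math

def standard_error(values):
    """Standard error of the mean: sample standard deviation divided by sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    sample_sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sample_sd / math.sqrt(n)

# Hypothetical dissatisfaction ratings (%) for one group at one measurement point.
ratings = [6, 4, 7, 5, 8, 6, 5, 7]
print(round(standard_error(ratings), 2))   # about 0.46 for this sample
```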

This study's findings are consistent with previous research on the benefits of personalization in gamification. Rodrigues et al. (2021) reported that personalized gamification mitigated negative perceptions of common assessment activities while simultaneously motivating and supporting learners. This reinforces the pivotal role of adaptive assessment in tailoring learning experiences compared to traditional e-assessment methods. Furthermore, structured interviews conducted with the experimental group revealed the distribution of students' motivation in Table 3. For younger learners, external motivation induced by gamified testing was found to be predominant, with 73% of the children acknowledging its influence. Notably, the tests' impact on students' intrinsic motivation was also significant, especially regarding the sense of competency; 69% of students reported feeling an enhancement in their abilities. This finding presents a nuanced difference from Dahlstrøm's (2012) proposition that gamified products and services could both facilitate and undermine intrinsic motivation through supporting or neglecting the basic psychological needs for autonomy and competence. It suggests an alternate conclusion: the gamified adaptive assessment enhances intrinsic motivation and participation. Of course, the effectiveness of such interventions is significantly dependent on individual and contextual factors, thus highlighting the adaptive gamified approach's role in effectively moderating these effects.

Table 3 Motivational distribution among young learners in the experimental group

5.3 Impact of adaptive gamified assessment on academic performance

To evaluate the impact of adaptive gamified assessment on learners' academic performance, the error and score data from the system's model tests were compiled for both groups. Figure 7 depicts the variation in model test errors, while Fig. 8 shows the change in the system's average score data.

Fig. 7 Variation curves of test errors of different models in two groups

Fig. 8 Change curve of average learning scores of learners in two groups

Figure 7 demonstrates that systematic errors in model testing for both groups exhibited a decreasing trend over the course of the experiment. Initially, after 4 weeks, the model test errors were 22% for the control group and 23% for the experimental group. Following 16 weeks, both groups reached a minimum test error value of 3%. However, after 20 weeks, a rebound and increasing trend in model test errors were observed in both groups. Consequently, setting the experiment duration to 16 weeks appears to effectively improve the accuracy of the gamified assessment. A paired-samples t-test in Table 4 indicates a significant reduction in standard error (t(44) = -25.75, p < 0.001, SD = 2.09), reinforcing the effectiveness of the adaptive gamified strategy optimization in reducing learning standard errors and, consequently, improving learners' efficiency and knowledge acquisition.

Table 4 Paired samples test of standard error reduction

As shown in Fig. 8, the average learning scores of students in both groups increased as the experiment progressed. After four weeks, the average learning score was 25 for the control group and 48 for the experimental group. After 16 weeks, these scores increased to 36 and 66, respectively. By week 20, the average score for the experimental group slightly decreased to 63. This indicates that learners' average scores in different experimental groups peaked after 16 weeks. A comprehensive evaluation, which included a comparison of average learning scores and standard deviation (SD) changes, was used to assess the impact of the gamified assessment. The results are detailed in Table 5.

Table 5 Comparison of average learning score and sd of learners in two groups

These comparisons reveal that adaptive gamified assessments can enhance students' online learning experiences. This supports the findings of Attali and Arieli-Attali (2015), who demonstrated that under a points-gamification condition, participants, particularly eighth graders, showed higher likeability ratings. Additionally, the effect of gamified assessment on students' final scores was mediated by intrinsic motivation levels. This contrasts with previous studies on gamification in education, such as Alsawaier (2018), which indicated that students in gamified courses exhibited less motivation and lower final exam scores than those in non-gamified classes. Furthermore, the element of peer competition introduced by gamification was more meaningful to students with better results, aligning with the findings of Göksün and Gürsoy (2019). Adaptive gamified tests, serving as a formative assessment platform, have been found to positively influence young learners' learning outcomes. Moreover, gamified testing could reduce language anxiety, consistent with the study by Hong et al. (2022). Compared to traditional gamified assessments, adaptive assessments are better equipped to address issues of repetition and learner capability fit, and they align more closely with the principles of scaffolding in education, thereby enhancing students' academic performance.

6 Conclusion

This research explores the influence of adaptive gamified assessment within a blended learning context on young learners' motivation and academic performance. Grounded in Self-Determination Theory (SDT), this investigation categorizes student motivation and analyzes their engagement and learning capabilities in relation to non-gamified and gamified adaptive tests. The findings suggest that the gamified adaptive test can significantly help learners improve their motivation and foster enhanced language proficiency performance in a blended learning environment.

The study verifies the enhancing effect of gamified evaluation on the internalization of students' motivation (Özhan & Kocadere, 2020) and confirms the regulatory role of gamified elements in blended learning, aiding in increasing student participation and satisfaction (Jayawardena et al., 2021). Furthermore, the positive role of gamification in language learning and as a tool for reinforcing assessment is corroborated (Priyanti et al., 2019). This study extends our understanding of the motivational impacts of gamification in younger education settings, suggesting that while previous research indicated a lesser effect on intrinsic motivation for young learners (Mekler et al., 2017), the adaptive mode of gamified assessment could enhance students' sense of competency and, thereby, their intrinsic motivation. Additionally, this research integrates the relationship between motivation and academic level, suggesting that the transition from external motivations provided by rewards in adaptive gamified assessments to enjoying personalized feedback and growth can enhance satisfaction in blended learning, facilitating the internalization of motivation towards participation and language proficiency.

In terms of managerial and policy implications, the introduction of gamification into blended learning environments is advisable, not only as a teaching method but also as an assessment tool. Gamified assessment, with its interactive nature, can be used to alleviate negative impacts of language learning, such as anxiety and lack of confidence, especially for young learners who may benefit from guided external motivational factors. Educators should implement a variety of formative assessments using technology in evaluation activities, especially to promote active learning.

However, the short duration of the experiment and the limited sample size are insufficient to substantiate long-term positive effects of gamification. Future research should delve into a more nuanced examination of students' intrinsic motivation, with longitudinal tracking to observe the internalization of motivation. The inclusion of delayed tests can help study the long-term effects of gamification. Further research could also compare adaptive gamified assessments with non-adaptive gamified assessments to enhance understanding of how gamification influences the internalization of students' intrinsic motivation.