Abstract
The impact of digital technology on learning outcomes, specifically deep learning, has been a subject of considerable debate and scrutiny in educational settings. This study aims to provide clarity by conducting a meta-analysis of empirical publications that examine students' deep learning outcomes in relation to digital technology. A comprehensive search of databases and a thorough literature review yielded 60 high-quality, peer-reviewed journal articles that met the inclusion criteria. Using Review Manager 5.4.1 software, a meta-analysis was conducted to assess the overall effectiveness of digital technology. The calculated effect size indicates a positive influence of digital technology on students' deep learning outcomes. Furthermore, a moderator variable analysis revealed several significant findings: (1) different categories of digital technology tools have a favorable impact on deep learning outcomes; (2) the duration of digital technology treatment does not significantly affect deep learning outcomes; (3) digital technology demonstrates a highly positive influence on deep learning within the humanities and social sciences disciplines; (4) combining online and offline utilization of digital technology in education leads to a substantially greater enhancement in deep learning compared to relying solely on online methods; (5) the effectiveness of digital technology on deep learning is enhanced when accompanied by appropriate instructional guidance; (6) utilizing digital technology in a systematic manner produces different outcomes compared to fragmented approaches, highlighting the importance of a cohesive implementation; (7) integrating digital technology with collaborative learning has a more pronounced effect on deep learning compared to independent learning.
These findings contribute to our understanding of the impact of digital technology on deep learning outcomes and underscore the importance of thoughtful integration and instructional support in educational contexts.
1 Introduction
Deep learning entails an active comprehension and discerning utilization of knowledge (Biggs, 1979; Biggs, 1987), coupled with the ability to transfer and apply knowledge to solve real-world problems. Ultimately, it advocates for a lifelong commitment to learning (National Research Council, 2012). It is a highly immersive form of learning with the aim of developing higher-order thinking skills (Lee & Choi, 2017). Deep learning emerges as a compelling imperative in the realm of education and pedagogy within the digital era, signifying a pivotal manifestation of the evolution and advancement of educational paradigms and learning methodologies. Moreover, it serves as a significant pathway for acquiring essential 21st-century skills (Pellegrino, 2017). The research focus on deep learning effectively addresses the imperative for lifelong education (Barros et al., 2013), the paradigm shift in educational concepts (Sterling, 2004), and the transformations in learning approaches, which collectively constitute pivotal factors in educational reform and progress. For learners, the pivotal role of deep learning resides in its facilitation of higher-order learning objectives, fostering knowledge retention, and enabling the seamless transfer of knowledge from classroom settings to real-world scenarios, thereby enhancing problem-solving capabilities.
The integration of digital technology has become a prominent characteristic of contemporary education (Ng, 2015). While numerous researchers have conducted experiments to examine the impact and effectiveness of digital technology on deep learning outcomes, a consensus has yet to be reached. Some studies suggest that digital technology significantly enhances deep learning (Al-Neklawy, 2017; Cai & Gu, 2019; Yuen & Naidu, 2007), while others indicate that digital technology does not necessarily promote deep learning and may even have negative effects (Lin et al., 2019b; Manzanares et al., 2019; Salmeron et al., 2017; Zhang et al., 2023). Moreover, additional research endeavors are imperative to delve into the multifaceted factors that exert influence on the outcomes of students’ deep learning when exposed to digital technology. Meta-analysis can provide a comprehensive perspective by synthesizing the diverse results of similar studies, thereby investigating the overall effects of digital technology. This study aims to address the following questions: Does the integration of digital technology truly augment the efficacy of deep learning? Is there notable heterogeneity in effect sizes observed across diverse studies? Which factors can explain the variations among these studies? To answer these questions, the present study employs a meta-analysis under the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines (Page et al., 2021) to quantitatively integrate relevant experimental research, analyzing the influence of different moderating variables in digital technology on the improvement of deep learning. This meta-analysis seeks to contribute to the existing studies on the subject and provide valuable insights for researchers, educators, and policymakers.
2 Research overview
2.1 Deep learning
Deep learning is commonly found in the fields of machine learning and education. In earlier studies on education, researchers described deep learning as a learning approach adopted by students during the learning process. The concept originated with Marton and Säljö (1976), who, through qualitative research, compared Swedish university students’ information processing during the reading of extensive prose passages and distinguished two different levels of information processing: “deep-level processing” and “surface-level processing”. Subsequently, Australian scholar Biggs, who has been devoted to research in the domains of learning processes and the classification of learning quality, attempted to associate the learning process with learning quality based on Marton and Säljö’s concept of surface-level and deep-level processing. Biggs (1979) discovered that students typically choose learning strategies consistent with their own motivation, and the combination of learning strategies and learning motivation is referred to as a learning approach. There are three different learning approaches that students adopt during the learning process: the deep approach, the surface approach, and the strategic approach. Entwistle and Peterson (2004) posited that deep learning is a learning approach aimed at seeking meaningful understanding: throughout the learning process, learners monitor their understanding and engage deeply in it, while surface learning is a learning approach aimed merely at fulfilling course objectives.
In the contemporary era, there has been a concerted global effort by international organizations and nations to explore the transformative potential of educational structures and information technology (United Nations, 2023). In light of the paradigm shift in talent cultivation and the educational reforms spurred by information technology, the concept of deep learning has resurfaced as a topic of paramount importance. Researchers have shifted their attention from examining learning approaches within the learning process to investigating the outcomes associated with deep learning. For instance, by aligning the National Research Council’s (NRC) categorization of deep learning abilities into three domains with the Hewlett Foundation’s identification of six fundamental capacities, the American Institutes for Research (AIR) proposed a comprehensive framework for deep learning. This framework encompasses three domains and six abilities, forming a cohesive structure for evaluating deep learning capabilities (William and Flora Hewlett Foundation, 2013).
In fact, the two aforementioned definitions of deep learning are not contradictory. Researchers generally consider deep learning as a learning approach or a learning outcome. Some emphasize that deep learning is both a learning approach and a learning outcome, wherein learners achieve deep learning outcomes through the process of deep learning (Marton & Säljö, 1976). Therefore, this study considers deep learning to include both the learning approaches adopted during the learning process and the learning outcomes.
2.2 Measurement of deep learning
Numerous cost-effective and widely used measurement tools have been developed by researchers. Commonly used tools include the Study Process Questionnaire (SPQ) (Biggs, 1978), the Learning Process Questionnaire (LPQ) (Biggs, 1991), the Inventory of Learning Processes (ILP) (Schmeck et al., 1977), the Approaches to Studying Inventory (ASI) (Entwistle et al., 1979), the Approaches to Learning and Studying Inventory (Entwistle & McCune, 2004), the Approaches and Study Skills Inventory for Students (ASSIST) (Entwistle et al., 1997), as well as the revised two-factor versions of the Study Process Questionnaire (R-SPQ-2F) (Biggs et al., 2001) and the Learning Process Questionnaire (R-LPQ-2F) (Kember et al., 2004), which are adapted from the SPQ and LPQ, respectively. In the context of assessing the outcomes of deep learning, researchers predominantly employ Biggs and Collis’s (2014) Structure of the Observed Learning Outcome (SOLO) taxonomy. Extensive interdisciplinary and cross-cultural comparative studies have been conducted using the aforementioned measurement tools to validate their reliability and effectiveness in terms of their structure and dimensions (Duff, 1997; Duff & McKinstry, 2007; Watkins, 2014). In this study, these measurement tools are collectively referred to as standardized measurements; all other measurement approaches are collectively referred to as self-developed measurements.
2.3 Digital technology-enhanced learning
In the realm of digital technology-enhanced learning, various researchers have provided insightful definitions. Siemens and Tittenberger (2009) defined it as the utilization of digital tools and technologies to create, deliver, and support learning experiences and educational content. According to Wheeler (2012), technology-mediated methods encompassed a range of approaches aimed at facilitating student learning. These methods incorporated various components, such as assessment, tutoring, and instruction. The European Commission (2013) elucidated that digital technology-enhanced learning encompassed the use of technology to bolster and enrich the learning process, incorporating digital tools, resources, and platforms to facilitate interaction, collaboration, and access to information. Furthermore, Garrison and Kanuka (2004) posited that digital technology-enhanced learning entailed a pedagogical approach that seamlessly integrates technology, including multimedia, online communication, and interactive learning environments, to cultivate captivating and effective learning experiences. In summary, digital technology-enhanced learning entails the purposeful integration and application of digital technologies within educational settings, aiming to amplify the learning experience and enhance learning outcomes.
Numerous scholarly investigations have substantiated the notion that students’ engagement in deep learning is contingent upon a combination of individual factors and the surrounding learning milieu (DeLotell et al., 2010; Hall et al., 2004). In light of this understanding, leveraging digital technology presents a promising avenue for fostering deep learning among students. Building upon this premise, researchers have endeavored to amalgamate the objectives of deep learning with the integration of digital technology within learning contexts, with the aim of exploring efficacious designs for digital learning environments that accentuate the depth of learning.
According to the nomenclature used by researchers, this study classifies digital technology into four distinct categories: multimedia and interactive content, virtual assistance and tutorial tools, digital learning platform and environment, and integrated design of technology-enhanced learning. The category of multimedia and interactive content encompasses a range of approaches that leverage various digital resources. Examples include the integration of multimedia resources (Altun, 2018; Fenesi et al., 2015; Lee & List, 2019; Lee et al., 2021; Strømme & Mork, 2021) to enhance instructional materials, the development of hypertext teaching materials (Klois et al., 2013; Liu & Hmelo-Silver, 2009; Salmeron et al., 2017) that offer interactive navigation and exploration, and the utilization of digital textbooks (Sung et al., 2022) that provide dynamic and interactive learning experiences. Virtual assistant and tutorial tools provide interactive and personalized support to learners. This includes gamified learning (Aguiar-Castillo et al., 2021; Barrio et al., 2016; Chen, 2017; Erhel & Jamet, 2016; Sung et al., 2018) and other autonomous learning tools, such as concept maps (Cui & Yu, 2019). Digital platforms and environments have expanded the channels for interaction and feedback among teachers, students, and peers. This encompasses the use of Learning Management Systems (LMS) (Al-Neklawy, 2017), interactive reading systems (Chen et al., 2019b), and collaborative feedback among peers (Filius et al., 2019; Zhao & Li, 2021), which contribute to enhancing multidimensional interaction and feedback assessment. Integrated design of technology-enhanced learning refers to a comprehensive conceptual model that encompasses various elements and dimensions of the integration of technology in educational settings. 
Examples include combining multiple digital resources and instructional tool platforms in flipped classrooms (Bouwmeester et al., 2019; Jeong et al., 2019; Lin et al., 2019a; Sawras et al., 2020), and the implementation of asynchronous online teaching modes (Koszalka et al., 2021).
In addition, the pervasive use of mobile devices and the advancement of instant communication technologies have prompted researchers to explore the effects of fragmented learning and online collaborative learning within the realm of digital technology on deep learning (Xie, 2021). Thus, this study seeks to provide a comprehensive classification of digital technology usage, differentiating between fragmented and systematic approaches, and delineating learning approaches as independent and collaborative modes. By investigating these dimensions, the study aims to deepen our understanding of the influence of digital technology on deep learning outcomes.
3 Literature review and research questions
3.1 Empirical studies on the impact of digital technology on deep learning
A considerable number of researchers have undertaken numerous experiments and quasi-experimental studies to explore the influence of digital technology on deep learning among students. However, a consensus on this matter has not yet been attained. Several researchers have observed a notable positive impact of digital technology on deep learning among students. For instance, Sugden et al. (2021) conducted a study involving 63 university students to investigate the effects of online interactive live broadcasting on deep learning. The findings revealed a substantial enhancement in the deep learning abilities of participants after 14 weeks of engaging in online interactive live broadcasting learning. Similar results were reported by Aderibigbe (2021), who employed a quasi-experimental approach to explore the influence of online interactive discussions on student deep learning. The outcomes indicated that online discussions had the potential to enrich deep learning experiences.
In contrast, there exists a body of research that presents views contradicting the aforementioned research conclusions. These researchers have identified significant adverse effects of digital technology on deep learning among students. For instance, Trakhman et al. (2018) conducted an empirical study aiming to examine the impact of digital media versus traditional paper-based reading on reading comprehension. The findings indicated that traditional paper reading yielded superior outcomes in terms of deep learning when compared to reading through digital media. Vogt et al. (2022) conducted a study that specifically targeted emergency medicine students. The research aimed to compare the effects of online teaching methods with traditional offline teaching in terms of the depth of theoretical knowledge acquisition. Following one semester of instruction, it was observed that students in the traditional teaching group demonstrated superior performance in terms of deep learning when compared to the online teaching group.
As a result, it becomes evident that the experimental investigations into the influence of digital technology integration on student deep learning have not yet arrived at a consensus. This observation implies that the effects of digital technology on deep learning might be influenced by various factors and entail intricate underlying mechanisms. Therefore, a comprehensive understanding of this matter cannot be derived solely from the analysis of individual experimental findings. Furthermore, it is insufficient to determine whether the integration of digital technology fosters deep learning or identify the factors that may impact experimental outcomes.
3.2 Previous meta-analyses on digital technology-enhanced learning
The impact of technology on learning outcomes encompasses diverse dimensions that have undergone meticulous scrutiny through meta-analysis studies, rendering invaluable insights into the intricate relationship between technology and learning. A central focus of inquiry pertains to higher-order thinking skills, particularly critical thinking and reflective thinking. Within this realm, online peer assessment (OPA) has emerged as a potent catalyst for enhancing higher-order thinking, with a specific emphasis on convergent higher-order thinking skills (Yang & Tsai, 2010). Simultaneously, collaborative problem-solving facilitated by technology has demonstrated efficacy in fostering critical thinking, albeit with a more pronounced impact on attitudinal tendencies compared to cognitive skills (Xu et al., 2023).
Moreover, the realm of academic achievement has been examined within the context of technology integration. The influence of distinct log variables in online learning environments on student academic achievement exhibited nuanced relationships that are contingent upon specific courses (Wang & Mousavi, 2023). Additionally, comprehensive investigations into the phenomenon of dropout rates in Massive Open Online Courses (MOOCs) have uncovered a multitude of contributing factors, encompassing psychological, social, personal, course-related, and temporal elements, alongside the influence of motivation and interaction on dropout rates (Wang et al., 2023).
The impact of technology on creativity within STEM education has also attracted scholarly attention. Makerspaces, seamlessly integrating technology, have emerged as facilitators of creative thinking, providing an environment conducive to cultivating creativity. Furthermore, the impact of technology on learning outcomes extends to specialized fields such as nursing education, where technology-based educational tools have been observed to enhance knowledge, skills, and self-confidence among nurses and nursing students (Soomro et al., 2023).
Furthermore, the advent of personalized learning experiences has spurred investigations into the role of technology. By discerning diverse student profiles in technology-enhanced learning environments, tailored support and services can be precisely administered to address individual needs (Villalonga-Gomez & Mora-Cantallops, 2022). Moreover, the integration of online learning has engendered a positive impact on student equity, thereby ameliorating access to education for historically underrepresented groups (Stone, 2022).
While substantial research has been conducted on the impact of technology on learning outcomes in various domains, a notable gap remains in our understanding of how technology influences deep learning. Deep learning, characterized by the acquisition of profound conceptual understanding and the ability to transfer knowledge to complex real-world situations, warrants further investigation in the context of technological interventions. Exploring instructional strategies, learning environments, and interactive tools that facilitate meaningful engagement and knowledge construction holds promise for leveraging the potential of technology in fostering deep learning. A comprehensive understanding of how technology affects deep learning is crucial for educators and policymakers to harness the full potential of technology and design transformative educational experiences that cultivate deep understanding, critical thinking, and lifelong learning skills among learners.
3.3 Research questions
This study endeavors to reconcile the existing disparities pertaining to the efficacy of digital technology in facilitating deep learning among learners through a comprehensive meta-analysis, thereby shedding light on the underlying rationales for the observed variations. Specifically, it aims to address three pivotal research inquiries:
- RQ1: To what extent does digital technology yield positive outcomes in facilitating deep learning among learners?
- RQ2: Do substantial variations in effect sizes emerge across the diverse range of studies examined?
- RQ3: In the event of significant heterogeneity, what are the underlying factors that account for the divergent findings observed among the included studies?
4 Research design
The guidelines of the PRISMA 2020 statement were followed, involving four distinct phases: search and selection, evaluation of literature quality, research methodology and tools, and data analysis. The overall process was iterative and cyclical, with constant cross-checks performed by two researchers throughout the screening, extraction, appraisal, and coding procedures.
4.1 Search and selection process
To address RQ1, which investigates the extent to which digital technology yields positive outcomes in facilitating deep learning among learners, this study rigorously adhered to the PRISMA guidelines for the search and selection process. The methodology employed in this study encompassed multiple stages, as illustrated in Fig. 1, involving an initial search and screen, application of selection criteria, and a final selection and data extraction process related to relevant studies. To determine the inclusion of studies in the final review, a two-step procedure was implemented. First, a rigorous screening of titles and abstracts from the initial electronic database searches was conducted, resulting in a subset of full articles deemed potentially eligible for the final review. Subsequently, a comprehensive assessment of the full articles was performed to ascertain their relevance to the research topic. This two-stage approach was adopted to ensure the thoroughness of the study selection process and to minimize the risk of overlooking valuable and high-quality studies.
4.1.1 Initial search and screen
A systematic and thorough search was conducted to identify pertinent studies within the scope of this research. The search encompassed a comprehensive exploration of publicly available literature up until March 2023. Key concepts and search terms were carefully formulated to ensure the inclusion of literature that pertains to digital technology from diverse international perspectives. This rigorous approach to literature search aimed to capture a wide range of relevant studies, providing a comprehensive foundation for the subsequent analysis and synthesis of findings. The search strategies were tailored to align with the specific databases used, while maintaining consistency across all searches. Each search was performed using combinations of three distinct types of search terms. The first category encompassed terms related to education or training, such as digital learning, learning technology, technology-enhanced learning, distance education, remote learning, MOOC, e-learning, online learning, blended learning, flipped classroom, and distributed learning. The second category consisted of terms associated with learning approaches, including deep learning, deeper learning, deep-rooted learning, deep understanding, deep processing, deep strategy, and deep learner. Lastly, the third category encompassed study design terms, such as pretest, posttest, control group, comparison group, treatment group, and experimental.
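For illustration only, the three categories of search terms above combine naturally as OR-groups joined by AND. The following Python sketch shows the structure of such a query; the term lists are abbreviated, the helper names are our own, and real database syntax varies by provider:

```python
# Hypothetical sketch: each category is a group of synonyms joined by OR,
# and the three groups are joined by AND, as in the search strategy described.
tech_terms = ["digital learning", "technology-enhanced learning", "e-learning",
              "online learning", "blended learning", "flipped classroom"]
deep_terms = ["deep learning", "deeper learning", "deep understanding",
              "deep processing", "deep strategy"]
design_terms = ["pretest", "posttest", "control group", "experimental"]

def or_group(terms):
    """Join synonyms with OR, quoting each phrase."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_group(g) for g in (tech_terms, deep_terms, design_terms))
print(query)
```

In practice, each database's advanced-search interface requires its own field tags and truncation symbols, so a string like this serves only as the shared logical template across searches.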
Searches were conducted in the following databases: EBSCO, ERIC, Elsevier ScienceDirect, ProQuest, SpringerLink, Taylor & Francis Online, Web of Science, and CNKI. To enhance the comprehensiveness of the review, the snowballing technique was used to broaden the scope of studies included in the analysis. The process involved two researchers independently conducting database searches and identifying articles that met the predetermined criteria. In instances where discrepancies arose, a third researcher facilitated discussions to reach a consensus. This rigorous approach ensured thorough coverage of relevant literature and minimized potential biases in the study selection process.
The initial electronic searches of the databases yielded 8695 relevant articles. Titles, abstracts, and indexes were screened to exclude articles with irrelevant research topics, duplicate publications, studies reporting the same research in different forms, and non-experimental studies. This step narrowed the literature search to 508 potentially relevant full-text articles that required deeper investigation against the selection criteria.
4.1.2 Selection criteria
Table 1 provides a comprehensive set of inclusion and exclusion criteria used to identify all eligible articles. The selection process for this study involved considering only articles published in peer-reviewed journals, as this criterion serves as a valuable indicator of the quality and rigor of the research. By prioritizing peer-reviewed publications, the study ensured that the included articles had undergone a thorough evaluation process by experts in the respective fields. This rigorous selection criterion strengthened the overall quality and reliability of the evidence base, contributing to the robustness of the findings and conclusions (Shi et al., 2021). The inclusion criteria involved a thorough examination of studies to assess their compliance with the following requirements: (a) the study used a rigorous design (e.g., randomized controlled design or controlled quasi-experimental design); (b) the study reported quantitative data on student deep learning outcomes; (c) the study reported at least one comparison of a digital technology-enhanced condition with a technology-free condition; and (d) during the experiment, learners were not informed of the research purpose. Studies not meeting the established criteria were excluded from consideration. For instance, Aguiar-Castillo’s study (2021) examined the impact of gamification on deep learning within higher education. However, due to the absence of a comparison between gamification and gamification-free conditions, such as those found in experimental or quasi-experimental designs, this particular study was deemed ineligible for inclusion.
4.1.3 Final selection and data extraction
To address our research questions, namely RQ2: “Do substantial variations in effect sizes emerge across the diverse range of studies examined?” and RQ3: “In the event of significant heterogeneity, what are the underlying factors that account for the divergent findings observed among the included studies?”, we extracted the following information from the selected studies for moderator analysis: study authors and year of publication, functionality of digital technology, treatment duration, educational level, subject, sample size, measurement of deep learning outcomes, usage context, usage guidance, usage mode, and learning mode (see Table 2 for the coding schema). Two researchers independently performed data extraction, and any discrepancies between the extracted data were resolved through consensus between the two researchers. This resulted in the rejection of 448 journal articles from the initial pool of 508 potentially relevant publications, based on the criteria described above.
A total of 60 eligible articles were included in the final meta-analysis (Al-Neklawy, 2017; Altun, 2018; Artino & Stephens, 2009; Bakoush, 2022; Barrio et al., 2016; Bouwmeester et al., 2019; Bu et al., 2022; Cai & Gu, 2019; Chao et al., 2016; Chen et al., 2019a; Cui & Yu, 2019; Elbyaly & Elfeky, 2022; Ellis et al., 2016; Erhel & Jamet, 2016; Filius et al., 2019; Giannini et al., 2017; Hackett et al., 2023; Hu et al., 2021; Huang et al., 2019; Jeong et al., 2019; Jiang, 2022; Kazanidis et al., 2019; Klois et al., 2013; Koszalka et al., 2021; Lee & List, 2019; Lee & Choi, 2017; Li et al., 2023; Lin et al., 2019a; Lin & Chen, 2020; Lin et al., 2019b; List & Ballenger, 2019; Liu & Hmelo-Silver, 2009; Manzanares et al., 2019; Naaz et al., 2014; Park & Kim, 2014, 2016; Pei et al., 2020; Qin et al., 2020; Rassaei, 2021; Salmeron et al., 2017; Sawras et al., 2020; Shen et al., 2022a, b; Strømme & Mork, 2021; Sugden et al., 2021; Sung et al., 2018; Tarchi et al., 2021; Tiedt et al., 2021; Trakhman et al., 2018; Vogt et al., 2022; Wang et al., 2018a, b, 2021; Yan et al., 2022; Yang et al., 2018; Yao et al., 2022; Ye et al., 2019; Yeh, 2012; Yuen & Naidu, 2007; Zhou et al., 2021).
An essential differentiating factor in learning approaches that utilize digital tools is the strategic utilization of appropriate learning modes. This differentiation stems from the understanding that various learning modes engender distinct learning experiences by modifying the source of learning content, the nature of learner activities, and the instructional structures within the classroom (Fox & Docherty, 2019). Within the literature, two primary learning modes were commonly discussed: independent learning, characterized by individual learning activities, and collaborative learning, characterized by interdependent group learning activities. Independent learning involves active student engagement in meaningful learning activities, promoting student activity and involvement in the learning process. In contrast to traditional classroom learning, where students passively receive information, independent learning emphasizes student agency and participation. Collaborative learning, on the other hand, entails participants working together in a collective effort to solve problems. This approach emphasizes student interaction within interdependent groups, fostering the development of analytic skills, problem-solving abilities, and prosocial behaviors.
The second key distinction of digital technology in learning is the functionality. Based on the resources and functionalities offered by digital technology reported by the authors, the categories can include: (1) material and media, such as multimedia learning resources, hypertext, animation, and electronic textbooks; (2) tool and software, such as AI-based assessment tools, gamified learning, concept mapping tools, etc.; (3) platform and environment, such as interactive learning tools, VR learning environments, learning management systems, digital simulation platforms, virtual tutoring systems, etc.; (4) integrated design, which often involve the integrated use of the aforementioned three types of tools throughout the teaching process, such as blended learning, flipped classroom models, asynchronous online learning, etc.
4.2 Evaluation of literature quality
In meta-analyses, the quality of included studies can significantly impact the final outcomes. In this study, the assessment of literature quality was conducted using Review Manager 5.4.1 software, applying the Cochrane Bias Risk Assessment Tool (Collaboration Cochrane, 2020). The following domains were evaluated: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective outcome reporting, and other biases. Two researchers independently performed the literature evaluation, and any discrepancies between the evaluation results were resolved through consensus between the two researchers.
4.3 Research methodology and tools
Meta-analysis is a statistical method that involves reanalyzing multiple studies on the same topic, combining data from related experimental or quasi-experimental studies to obtain a pooled effect size and investigate the overall effect. In comparison to traditional descriptive literature reviews, meta-analysis provides a relatively scientific approach to exploring the reasons for variations in research findings, effectively resolving research controversies, and deriving comprehensive research conclusions. It has become an important research method in various disciplines, including medicine, education, psychology, economics, and more. In this study, the meta-analysis method was employed to investigate the impact of digital technology on students’ deep learning. The statistical and descriptive software package used in the meta-analysis was Review Manager 5.4.1 (Collaboration Cochrane, 2020).
4.4 Data analysis
In pursuit of a comprehensive exploration of the impact of digital technology usage on the effectiveness of deep learning and to effectively address RQ2, we employed Review Manager 5.4.1 software (Collaboration Cochrane, 2020) for heterogeneity tests and meta-analysis. Based on the sample sizes and 95% confidence intervals (CI), we calculated the standardized mean difference (SMD) of deep learning outcome data to assess the effect sizes of each study. Furthermore, variance analysis was conducted on the pooled studies. All analyses considered two-sided p-values less than 0.05 as significant.
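As an illustrative sketch of the per-study effect size computation described above, the following Python snippet computes a pooled-SD standardized mean difference with a 95% confidence interval. Review Manager performs this calculation internally; the group summary statistics shown here are hypothetical, not drawn from the included studies.

```python
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """Standardized mean difference (pooled-SD form) with a 95% confidence interval."""
    # Pooled standard deviation of the two groups
    sd_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    # Large-sample standard error of the SMD
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical summary statistics for an experimental and a control group
d, (ci_low, ci_high) = smd_with_ci(m1=80, sd1=10, n1=50, m2=75, sd2=10, n2=50)
```

Each study contributes one such (SMD, CI) pair, which the meta-analysis then pools with inverse-variance weights.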
Depending on the degree of heterogeneity observed in the dataset, we employed either a fixed-effects or a random-effects model. The choice of model was determined by evaluating the variability and consistency of the effect estimates across studies. When significant heterogeneity was detected, indicating differences between study results beyond what would be expected by chance, a random-effects model was used; when heterogeneity was minimal or non-significant, suggesting similar effect sizes across studies, a fixed-effects model was applied. The selected model aimed to provide a robust estimate of the overall effect while accounting for the inherent variability among the included studies. The I2 statistic was used to detect the presence of heterogeneity, i.e., the degree of inconsistency among study results (Higgins et al., 2003). When heterogeneity was observed, sensitivity analysis was conducted to assess whether it significantly affected the results of the meta-analysis. Publication bias was evaluated by inspecting the shape of the funnel plot and calculating the fail-safe number, which accounts for the tendency of researchers to publish favorable results (Peplow, 2014). Together, these data were used to determine whether publication bias was present. Finally, moderator analysis was conducted to assess which contextual factors could influence the results of the meta-analysis, allowing us to investigate the hypothesized variables.
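The heterogeneity statistics behind this model choice can be sketched in a few lines. This is a minimal Python illustration of Cochran's Q and the derived I2 statistic under inverse-variance weighting; the effect sizes and variances are hypothetical.

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic for a set of study effect sizes."""
    w = [1.0 / v for v in variances]  # inverse-variance weights
    pooled = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    # I^2: proportion of total variation due to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Identical hypothetical effects -> no heterogeneity, so a fixed-effects model suffices
q, i2 = heterogeneity([0.5, 0.5, 0.5], [0.1, 0.1, 0.1])
model = "random-effects" if i2 > 75 else "fixed-effects"
```

With heterogeneous inputs, I2 rises and the random-effects branch would be taken, mirroring the decision rule described above.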
4.5 Characteristics of included studies
As mentioned above, this review included 60 articles, resulting in 60 independent effect sizes. Table 3 describes the key elements of the 60 studies included in the meta-analysis. The studies were conducted in 19 different countries and regions, indicating the wide-ranging research on digital technology and deep learning. The publication years of the articles ranged from 2007 to 2023. The studies encompassed quasi-experimental or experimental designs. The sample sizes varied from 12 to 1060 participants, with the majority of studies having sample sizes below 100. The combined sample size consisted of 6185 participants in the experimental group and 7002 participants in the control group. The research participants represented all educational levels, including preschool, primary school, secondary school, undergraduate, and graduate levels. The curriculum covered humanities and social sciences, natural sciences, and other interdisciplinary subjects.
5 Research findings
5.1 Overall effect size of digital technology on deep learning (RQ1)
In response to RQ1, we conducted an analysis of the overall effect size of digital technology on deep learning. The results indicated a statistically significant difference in learning outcomes, as evidenced by the pooled effect size. Compared to traditional learning, digital technology yielded superior results (SMD = 0.68, 95% CI 0.45-0.92, p < 0.00001), as shown in Fig. 2. A sensitivity analysis was performed to assess the robustness and reliability of the results obtained in this study. This involved systematically excluding individual studies from the analysis one at a time to evaluate their influence on the overall findings. Remarkably, even after sequentially omitting each study, the pooled effect size in favor of digital technology-enhanced learning consistently maintained its magnitude and direction. This confirmed the initial observations of the original analysis and provides compelling evidence for a substantial and statistically significant positive impact of digital technology on deep learning outcomes across the board.
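The leave-one-out sensitivity analysis described above can be expressed as a short loop: re-pool the effect size with each study excluded in turn and check that the estimate keeps its direction. The following Python sketch uses hypothetical effect sizes and variances, not the actual study data.

```python
def pooled_effect(effects, variances):
    """Inverse-variance pooled effect size."""
    w = [1.0 / v for v in variances]
    return sum(wi * y for wi, y in zip(w, effects)) / sum(w)

def leave_one_out(effects, variances):
    """Re-pool the effect size with each study omitted in turn."""
    results = []
    for i in range(len(effects)):
        e = effects[:i] + effects[i + 1:]
        v = variances[:i] + variances[i + 1:]
        results.append(pooled_effect(e, v))
    return results

# Hypothetical data: the pooled estimate should stay positive without any one study
loo = leave_one_out([0.7, 0.6, 0.8, 0.5], [0.1, 0.2, 0.1, 0.2])
robust = all(est > 0 for est in loo)
```

If any single omission flipped the sign or erased the effect, that study would warrant closer scrutiny.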
5.2 Heterogeneity test (RQ2)
To address RQ2, we performed a heterogeneity test on the cohort of 60 selected studies. Out of the 60 studies incorporated in the meta-analysis, a total of 6185 participants engaged in learning activities facilitated by digital technology, while 7002 participants underwent learning experiences without the utilization of digital technology. The inclusion of these studies allowed for a comprehensive examination of the impact of digital technology on learning outcomes, encompassing a substantial number of participants across various educational contexts. Due to variations in sample size, experimental designs, and digital technology used among the included literature, heterogeneity was inevitable. In this study, the heterogeneity test primarily relied on the I2 value. According to the criteria proposed by Higgins et al. (2003), I2 < 25% indicates low heterogeneity, I2 values between 25% and 75% indicate moderate heterogeneity, and I2 > 75% indicates high heterogeneity in the study. The results of the heterogeneity test in this study showed an I2 value of 99% and reached statistical significance (p < 0.00001), indicating a high level of heterogeneity among the selected studies. Therefore, a random-effects model was used to analyze the data. Among the 60 individual comparisons between learning with digital technology and traditional learning without technology, 47 reported significant positive effects, while 13 reported negative effects.
5.3 Publication bias
Publication bias, a critical factor impacting the reliability of research findings, was assessed in this study. The funnel plot, a widely employed method for detecting publication bias, was generated using Review Manager 5.4.1 software (refer to Fig. 3). Examination of the funnel plot revealed a symmetrical distribution of effect sizes from the included study samples around the average effect size. This indicated a lack of significant asymmetry or potential publication bias.
Additionally, the Nfs statistic, which estimates the number of unretrieved studies with null results that would be needed to reduce the overall combined effect size to non-significance (Rosenthal, 1979), was calculated at a significance level of 0.01 using the following formula:

Nfs0.01 = (ΣZ / 2.33)² − k

In this equation, Z represents the Z value associated with each independent effect size, while k denotes the total number of included studies. The Nfs reflects the magnitude of potential publication bias: when it substantially surpasses the critical value of 5k + 10 (Rosenthal, 1991), the mean effect size is considered robust and there is no indication of publication bias (Hoeve et al., 2012). In this study, k = 60 and ΣZ = 279.52, giving Nfs0.01 = (279.52/2.33)² − 60 = 14331.76, which far exceeds the critical value of 310 (5 × 60 + 10). These analyses account for potential publication bias, supporting the validity and integrity of the research findings.
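The fail-safe N computation can be reproduced directly from the reported values (ΣZ = 279.52, k = 60); the following Python sketch mirrors the formula above.

```python
def fail_safe_n(sum_z, k, z_crit=2.33):
    """Rosenthal's fail-safe N at the 0.01 level (z_crit = 2.33)."""
    return (sum_z / z_crit) ** 2 - k

# Values reported in this study
nfs = fail_safe_n(sum_z=279.52, k=60)
threshold = 5 * 60 + 10  # Rosenthal's 5k + 10 tolerance criterion
# nfs (about 14331.76) far exceeds the threshold of 310
```

Because nfs greatly exceeds 5k + 10, the pooled effect would survive even a very large number of unpublished null results.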
5.4 Moderator analysis (RQ3)
In response to RQ3, we conducted a moderator analysis to explore the impact of digital technology on deep learning. The earlier heterogeneity analysis revealed substantial diversity in effect sizes among the included studies, necessitating a closer examination of the contributing factors. To elucidate the mechanisms shaping the influence of digital technology on deep learning, this study conducted a moderation analysis, considering the following variables as potential moderators: digital technology functionality, treatment duration, educational level, subject, sample size, measurement methods, usage context, usage guidance, usage mode, and learning mode. The findings indicated that the variability in effect sizes among the studies could be attributed to variations in usage context, usage guidance, usage mode, and learning mode.
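The subgroup comparisons reported in the subsections below rest on a between-subgroup heterogeneity test: does the pooled effect differ more across subgroups than chance would allow? The following Python sketch illustrates the fixed-effect form of that test with hypothetical data; Review Manager computes the analogous statistic when it reports subgroup differences.

```python
def pooled(effects, variances):
    """Inverse-variance pooled effect and its total weight."""
    w = [1.0 / v for v in variances]
    return sum(wi * y for wi, y in zip(w, effects)) / sum(w), sum(w)

def q_between(subgroups):
    """Between-subgroup Q: dispersion of subgroup means around the grand mean.

    `subgroups` is a list of (effects, variances) pairs, one pair per subgroup.
    """
    means_weights = [pooled(e, v) for e, v in subgroups]
    grand = sum(m * w for m, w in means_weights) / sum(w for _, w in means_weights)
    return sum(w * (m - grand) ** 2 for m, w in means_weights)

# Two hypothetical subgroups with identical pooled effects -> Q_between near zero,
# i.e., no evidence that the moderator explains the heterogeneity
qb = q_between([([0.4, 0.4], [0.2, 0.2]), ([0.4], [0.1])])
```

A large Q_between relative to its degrees of freedom (number of subgroups minus one) signals a significant moderator, which is how the usage context, usage guidance, usage mode, and learning mode results below should be read.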
5.4.1 Experiment design: treatment duration, sample size and measurement methods
Table 3 presents the outcomes of the moderator analysis conducted on this category. Treatment duration, sample size, and measurement method were assessed as potential moderators. The impact of sample size was explored by dividing the studies into two subsets: a small-sample subset (studies with 250 participants or fewer) and a large-sample subset (studies with more than 250 participants). The analysis did not reveal a significant moderating effect of sample size on the magnitude of the deep learning effect (I2 = 0%, p > 0.05). Treatment duration was categorized into three subsets: less than four weeks, four to eight weeks, and more than eight weeks. The analysis showed that treatment duration did not exert a significant moderating influence on the deep learning effect (I2 = 0%, p > 0.05). Similarly, measurement method, categorized into standardized and self-developed measurement subsets, did not demonstrate a significant moderating effect (I2 = 0%, p > 0.05).
5.4.2 Functionality
The impact of digital technology functionality was investigated by categorizing studies into four subsets: material and media, tool and software, platform and environment, and integrated design (see Table 4). The subsequent analysis did not indicate a statistically significant moderating effect of functionality on the magnitude of the deep learning effect (I2 = 0%, p > 0.05). Therefore, there was insufficient evidence to suggest that variations in the functionality of the digital technology employed in the learning process resulted in differences in deep learning outcomes.
5.4.3 Educational level
To explore the impact of digital technology on students at various educational levels, the educational level variable was examined as a moderator. It was divided into six subsets: preschool, primary school, secondary school, undergraduate, graduate, and cross-age group. The results of the analysis revealed that educational level (I2 = 7.7%, p > 0.05) did not exhibit a statistically significant moderating effect on the effectiveness of digital technology-enhanced learning.
5.4.4 Subject
The impact of subject matter was examined by categorizing studies into three subsets: humanities and social sciences, natural sciences, and other subjects (including cross-disciplinary subjects). The results indicated that subject matter did not significantly moderate the magnitude of the digital technology effect (I2 = 0%, p > 0.05). Within the humanities and social sciences subset, the effect size was Z = 4.34 (p < 0.0001), indicating a highly significant positive impact of digital technology on deep learning in this domain. In the natural sciences, the effect size was Z = 3.07 (p < 0.05), suggesting a statistically significant positive effect, although to a lesser extent than in the humanities and social sciences. For other subjects, the effect size was Z = 1.60 (p > 0.05), indicating that the difference in deep learning outcomes between the digital technology and non-digital technology conditions was not statistically significant. This suggests that the impact of digital technology on deep learning in other subjects may be limited or inconclusive.
5.4.5 Usage context
The influence of usage context on the effectiveness of digital technology was investigated by categorizing studies into two subsets: online/offline usage and online usage only. The analysis revealed that the usage context significantly moderated the effectiveness of digital technology (I2 = 86.1%, p < 0.01). The findings indicated that the combination of online and offline usage resulted in better outcomes, as reflected by a higher effect size (Z = 7.39, p < 0.00001). This suggests that when digital technology is used in both online and offline settings, it has a substantial positive impact on deep learning. The effect size indicated a strong and statistically significant improvement in deep learning outcomes when digital technology was used in both online and offline contexts. On the other hand, when digital technology was used exclusively in online settings, the effect size was smaller (Z = 1.22, p > 0.05), and the difference in deep learning outcomes compared to non-digital technology conditions was not statistically significant. This implies that using digital technology solely in online environments may have limited effectiveness in enhancing deep learning.
5.4.6 Usage guidance
The influence of usage guidance on deep learning outcomes was investigated by categorizing studies into two subsets based on whether learners receive guidance in using digital technology. The analysis revealed that usage guidance significantly moderated the effectiveness of digital technology (I2 = 93.2%, p < 0.001). The results indicated that the presence of usage guidance had a substantial effect on deep learning outcomes (Z = 7.84, p < 0.00001). This suggests that when learners receive guidance on how to use digital technology effectively, it positively impacts their deep learning experience. The effect size indicated a strong and statistically significant improvement in deep learning outcomes when usage guidance was provided. On the other hand, when usage guidance was absent, the effect size was minimal (Z = 0.12, p > 0.05), and the difference in deep learning outcomes compared to non-guidance conditions was not statistically significant. This implies that without proper guidance, the use of digital technology may have limited effectiveness in enhancing deep learning.
5.4.7 Usage mode
The influence of usage mode on the effectiveness of digital technology in deep learning was investigated by categorizing studies into two subsets: fragmented and systematic, representing different patterns of technology usage by learners. The analysis revealed that usage mode significantly moderated the effectiveness of digital technology (I2 = 79.2%, p < 0.00001). The results indicated that systematic usage had a strong impact on deep learning outcomes (Z = 5.26, p < 0.00001). This suggests that when learners engage in a systematic and structured approach to using digital technology, it positively influences their deep learning experience. The effect size indicated a substantial and statistically significant improvement in deep learning outcomes when systematic usage was employed. On the other hand, fragmented usage demonstrated a non-significant effect on deep learning outcomes (Z = 0.13, p > 0.05). The difference in deep learning outcomes between fragmented usage and non-usage conditions was not statistically significant. This implies that when learners use digital technology in a fragmented and unstructured manner, it will not significantly enhance their deep learning experience.
5.4.8 Learning mode
The impact of the learning mode on the effectiveness of digital technology in deep learning was investigated by categorizing studies into two subsets: independent and collaborative learning. The analysis revealed that the learning mode significantly moderates the effectiveness of digital technology (I2 = 74.4%, p < 0.00001). Collaborative learning exhibited a more pronounced effect on deep learning outcomes (Z = 4.62, p < 0.00001): when learners engaged in collaborative activities facilitated by digital technology, the effect size indicated a substantial, statistically significant improvement in deep learning. Independent learning also demonstrated a significant effect (Z = 8.82, p < 0.001); learners engaging in independent learning activities supported by digital technology likewise showed a statistically significant improvement in deep learning outcomes. These findings indicate that both collaborative and independent learning approaches can effectively leverage digital technology to enhance deep learning outcomes.
6 Discussion
6.1 Research implications
This meta-analysis has addressed three research questions, with the first investigating the extent to which digital technology yielded positive outcomes in facilitating deep learning among learners. The analysis of effect sizes derived from the comprehensive pool of 60 selected studies provides robust evidence supporting the assertion that digital technology, compared to conventional approaches without digital assistance, significantly enhances deep learning outcomes. The statistical effect size we observed signifies a noteworthy transformation within the educational landscape, underscoring the substantial educational value that digital technology confers upon learners. It speaks to the emergence of pedagogical practices marked by depth, engagement, and comprehension that surpass the capacities of traditional instructional approaches.
For the second research question investigating whether substantial variations in effect sizes emerged across the diverse range of studies examined, the analysis results revealed that there was substantial heterogeneity in the effectiveness of digital technology across diverse studies concerning technology-enhanced learning. The significant level of heterogeneity revealed in the analysis carries several profound research implications. The diversity within the selected studies, with variations in sample size, experimental designs, and the types of digital technology employed, underscores the complex nature of the relationship between technology and learning outcomes. First and foremost, the exceptionally high I2 value of 99% suggests that the impact of digital technology on learning outcomes is far from uniform across the educational landscape. This outcome prompts us to delve deeper into the sources of this heterogeneity, inviting critical questions about the conditions and contexts under which digital technology proves most beneficial and the situations in which it may not yield the expected gains. It beckons the need for further research to explore the specific factors driving this diversity. Second, our finding that 47 out of the 60 individual comparisons between learning with digital technology and traditional learning reported significant positive effects indicates the potential benefits of incorporating digital technology into pedagogical practices across a wide array of scenarios. This implies that, when employed effectively, digital technology can be a powerful tool for enhancing learning outcomes. However, the 13 instances of negative effects within the dataset underscore the importance of a nuanced approach. These negative outcomes may arise due to misaligned instructional strategies, inadequate technology integration, or specific student characteristics. 
The onus now lies on educators and researchers to examine these cases in depth and discern the circumstances under which digital technology may have unintended consequences.
In addressing the third research question, “In the event of significant heterogeneity, what are the underlying factors that account for the divergent findings observed among the included studies?”, ten potential moderator variables were subjected to analysis. Notably, the moderator variable analysis revealed the statistical significance (p < 0.05) of four moderators: usage context, usage guidance, usage mode, and learning mode. In general, the moderators encompassed in this analysis can be classified into two distinct categories, each offering valuable insights. The first category pertains to the study design, encompassing factors such as sample size, treatment duration, and measurement method. These variables allow for an assessment of the methodological aspects governing the impact of digital technology on deep learning outcomes. The second category encompasses variables directly associated with the learning process within digital technology-enhanced environments, including functionality, educational level, subject, usage context, usage guidance, usage mode, and learning mode. Analyzing these variables provides a deeper understanding of the specific conditions under which digital technology exerts a more discernible effect on deep learning.
Consistency in the results of the moderator analysis pertaining to digital technology-enhanced learning aligns with findings from comparable studies. Previous research by Tayebinik and Puteh (2013) and Nortvig et al. (2018) supports our finding that the combined utilization of digital technology in blended settings exerts a more pronounced influence on learning outcomes than exclusive online usage. The robustness of these findings can be attributed to the blended design, which integrates face-to-face instruction with digital technologies. This integration facilitates diverse channels for learning assessment, processes, and feedback, effectively catering to differentiated learning needs (Tempelaar, 2020). These outcomes emphasize the significance of considering the usage context when implementing digital technology in educational environments. The promising outcomes observed through the combination of online and offline usage suggest that incorporating digital technology within a blended learning framework, integrating both online and offline activities, leads to substantial improvements in deep learning outcomes. For example, a recent empirical investigation by Broadbent et al. (2021) identifies self-regulated learning (SRL) as a moderator variable of learning outcomes. The study reveals varying effect sizes between online and blended learning contexts due to disparities in learners’ SRL. Online learners who demonstrate confidence, effective time management, and regulated effort experience the greatest benefits. Blended learners, to a lesser extent, also derive advantages from confidence and effort regulation. The differences in SRL levels among online learners potentially explain the disparities observed in deep learning outcomes. Furthermore, Ellis et al. 
(2021) found a positive and logical relationship between deep approaches to inquiry and deep approaches to online learning technologies, while surface approaches to inquiry aligned with surface approaches to online learning technologies. These findings have tangible implications for teaching and design, particularly for educators aiming to assist students in developing effective learning strategies within blended environments. In such environments, students need to integrate their experiences and ideas across face-to-face and online contexts, making the interplay between inquiry approaches and online learning technologies crucial. Taken together, these research findings underscore the importance of designing educational interventions that account for the usage context and leverage the benefits of blended learning approaches, thereby fostering significant advancements in deep learning outcomes.
Moderator variable analysis conducted on usage guidance in digital technology-enhanced learning yields results consistent with comparable studies. Furthermore, our investigation reveals that the effectiveness of digital technology in enhancing deep learning outcomes is more pronounced when accompanied by instructional guidance. Previous research has indicated that clear and well-structured teaching guidance can facilitate deep learning and reflective learning (Wang et al., 2015). A meta-analysis conducted by Lazonder and Harmsen (2016), which synthesized the results of 72 studies, demonstrated significant overall effects of guidance on learning activities. A recent empirical study by Thai et al. (2023), conducted in a flipped classroom setting, revealed significantly higher learning performance among students in the guidance-supported condition, as well as significant changes in their self-efficacy beliefs and appreciation of feedback. One possible reason is the limitations of learning analytics and the consequent need for instructors’ guidance in digital technology-enhanced learning contexts. A study by Topali et al. (2023) highlighted a lack of empirical studies exploring learning analytics for delivering feedback and limited attention to pedagogy in informing feedback practices. The findings underscore the need for systematization and evaluation of feedback, as well as the development of conceptual tools to guide instructors in designing learning analytics-based feedback. These studies provide valuable insights into the importance of guidance and feedback in digital technology-enhanced learning contexts, emphasizing the need for strategic implementation of usage guidance to optimize the benefits of digital technology on deep learning outcomes.
The moderator variable analysis conducted on the usage mode in digital technology-enhanced learning yields results consistent with similar studies. Fragmented learning, while offering advantages such as freedom from time and space constraints, abundant learning resources, rapid content updates, and clear learning topics, also exhibits drawbacks such as weak knowledge connections, a lack of teaching feedback links, and interference with systematic learning (Xie, 2021). These findings underscore the importance of promoting systematic usage of digital technology in educational settings. When learners adopt a systematic approach, adhering to structured guidelines and employing technology in a cohesive manner, they are more likely to reap the benefits of digital technology for deep learning outcomes. Thus, integrating systematic usage strategies and providing learners with guidance on the effective utilization of digital technology can optimize its impact on deep learning. In a study by Yang and Tsai (2010), emphasis on fragmented and cohesive learning, within each level of learning conceptions, tended to be associated with surface and deep learning approaches, respectively. Tsai and Tsai (2013) conducted a study exploring the relationship between conditions, students’ conceptions, and approaches to online argumentation. The results revealed that students with fragmented conceptions tended to adopt surface approaches in both conditions. Notably, students in the experimental condition showed potential for deeper approaches.
None of the three variables in the study design category, namely sample size, treatment duration, and measurement method, are identified as significant moderators of effects in this meta-analysis. This aligns with the observations made by Means et al. (2013), who have reported that sample size did not serve as a statistically significant moderator of online learning effects. Furthermore, the findings of Hew and Lo (2018) also demonstrated that study design did not function as a moderator variable in their meta-analysis. These consistent outcomes indicate that the aforementioned study design variables do not exert a substantial influence on the effects observed in digital technology-enhanced learning. The lack of significance suggests that factors such as sample size, treatment duration, and measurement method do not significantly alter the impact of digital technology on learning outcomes. It is worth noting that these findings echo those of previous studies, lending further support to the notion that these study design variables may have limited influence on the effectiveness of digital technology in enhancing learning outcomes.
Another variable within the learning process category, namely digital technology functionality, is not identified as a significant moderator of effects. The functionality of digital technology did not exert a statistically significant influence on the effect size of digital technology-enhanced learning. These findings suggest that, regardless of the specific functionality of the digital tools employed, as long as an appropriate approach is adopted in the context of digital technology-enhanced learning, it exhibits sufficient efficacy in consistently yielding advantages. The non-significant role of functionality implies that the impact of digital technology on learning outcomes is not solely contingent upon the specific features or capabilities of the tools themselves. Instead, it emphasizes the importance of employing effective instructional strategies and pedagogical approaches in conjunction with digital technology. For instance, in a recent study, Lee et al. (2023) found that employing sequential multi-level prompting strategies through e-books can significantly enhance learners’ problem-solving skills, which is considered an important aspect of deep learning. When educators and learners utilize digital tools in a purposeful and skillful manner, focusing on the integration of technology with sound pedagogy, the benefits of digital technology-enhanced learning can be realized irrespective of variations in functionality.
The analysis conducted in this meta-analysis does not identify educational level as a significant moderator of effects. These findings align with the outcomes of a comparable study conducted by Shi et al. (2021), which also reported that learner type did not exhibit statistically significant moderation effects on the effectiveness of online learning.
No significant differences are found regarding the nature of the subject matter investigated in this study. This finding is in line with a similar study by Shi et al. (2021), which also observed non-significant variations when comparing flipped classroom/active learning studies involving different student subjects. Previous research has indicated that learners in the humanities and social sciences tend to demonstrate deeper learning approaches (Baeten et al., 2010). Notably, the effect sizes are found to be largest among students in these disciplines. However, the question of whether certain subject disciplines are more advantageous for deep learning remains inconclusive. Some researchers argue that humanities and social sciences are more conducive to the development of deep learning (Kember et al., 2008), while others propose contrasting viewpoints (Valk & Marandi, 2005). As a result, a consensus has yet to be reached, underscoring the need for further exploration in this area.
Learning mode is found to be a significant moderator of effects in this meta-analysis. These findings align with a similar study conducted by Lu et al. (2021), which indicated that the combination of digital technology with a collaborative learning mode had a more pronounced effect on deep learning. Collaboration is the only learning factor that exhibited both indirect effects (via the deep approach) and direct effects on higher-order thinking skills. These findings underscore the importance of considering learning modes when integrating digital technology in educational settings. Both collaborative and independent learning modes can effectively harness digital technology to enhance deep learning outcomes. However, it is noteworthy that the effect sizes for collaborative learning are larger than those for independent learning. This suggests that collaborative learning, when facilitated by digital technology, may have a more significant impact on deep learning outcomes. Previous research has also shown supportive results in this regard (Demir & Zengin, 2023). Therefore, educators and instructional designers should incorporate both collaborative and independent learning activities in their strategies, leveraging the advantages offered by digital technology. By providing opportunities for collaborative interactions and facilitating independent exploration, learners can maximize the potential of digital technology for deep learning.
The integration of digital technology in learning has gained widespread acceptance and adoption among learners at various educational stages. In this study, we synthesize findings from 60 high-quality, peer-reviewed empirical research articles to examine the impact of digital technology-enhanced learning on deep learning outcomes, with the aim of consolidating the available knowledge regarding the measurable influence of digital technology on students' deep learning. The collective evidence suggests that digital technology has a statistically significant positive effect on students' deep learning outcomes compared with traditional learning approaches. Moreover, when combined with blended learning strategies, appropriate instructional guidance, and a systematic mode of use, digital technology demonstrates even greater efficacy in promoting deep learning. This highlights the significant potential of integrating digital tools and resources in educational settings to foster deep learning.
6.2 Research limitations
However, like any study, this meta-analysis has limitations. Prior research has identified individual student characteristics such as gender (Arteche et al., 2009), intelligence level (Chamorro-Premuzic & Furnham, 2008), personality (Chamorro-Premuzic & Furnham, 2009), and initial learning styles (Fox et al., 2001), as well as school characteristics such as school type (Richardson et al., 1999) and school level, as important factors influencing students' deep learning. Unfortunately, due to insufficient information reported in the individual studies, these factors could not be incorporated into the analysis. Future empirical research should strive for more detailed reporting to facilitate moderator analysis and expand our understanding of this subject. This study provides an initial multidisciplinary snapshot of the evidence; future research should conduct more evidence-based intervention studies to analyze the mechanisms of interaction between individual student characteristics, school characteristics, the utilization of digital technology tools, and deep learning outcomes under diverse application conditions. This will help validate the value of digital technology in promoting student deep learning. Additionally, emphasis should be placed on the bidirectional transformation and integration of theory and empirical research: building upon empirical findings, stronger theoretical investigations are required to guide educators in the effective implementation of digital technology.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Aderibigbe, S. A. (2021). Can online discussions facilitate deep learning for students in General Education? Heliyon, 7(3), 6. https://doi.org/10.1016/j.heliyon.2021.e06414
Aguiar-Castillo, L., Clavijo-Rodriguez, A., Hernandez-Lopez, L., De Saa-Perez, P., & Perez-Jimenez, R. (2021). Gamification and deep learning approaches in higher education. Journal of Hospitality Leisure Sport & Tourism Education, 29, 14. https://doi.org/10.1016/j.jhlste.2020.100290
Al-Neklawy, A. F. (2017). Online Embryology teaching using learning management systems appears to be a successful additional learning tool among Egyptian medical students. Annals of Anatomy-Anatomischer Anzeiger, 214, 9–14. https://doi.org/10.1016/j.aanat.2017.07.001
Altun, D. (2018). The efficacy of multimedia stories in preschoolers' explicit and implicit story comprehension. Early Childhood Education Journal, 46(6), 629–642. https://doi.org/10.1007/s10643-018-0916-8
Arteche, A., Chamorro-Premuzic, T., Ackerman, P., & Furnham, A. (2009). Typical intellectual engagement as a byproduct of openness, learning approaches, and self-assessed intelligence. Educational Psychology, 29(3), 357–367.
Artino, A. R., & Stephens, J. M. (2009). Academic motivation and self-regulation: A comparative analysis of undergraduate and graduate students learning online. Internet and Higher Education, 12(3-4), 146–151. https://doi.org/10.1016/j.iheduc.2009.02.001
Baeten, M., Kyndt, E., Struyven, K., & Dochy, F. (2010). Using student-centred learning environments to stimulate deep approaches to learning: Factors encouraging or discouraging their effectiveness. Educational Research Review, 5(3), 243–260. https://doi.org/10.1016/j.edurev.2010.06.001
Bakoush, M. (2022). Evaluating the role of simulation-based experiential learning in improving satisfaction of finance students. International Journal of Management Education, 20(3), 18. https://doi.org/10.1016/j.ijme.2022.100690
Barrio, C. M., Munoz-Organero, M., & Soriano, J. S. (2016). Can gamification improve the benefits of student response systems in learning? An experimental study. IEEE Transactions on Emerging Topics in Computing, 4(3), 429–438. https://doi.org/10.1109/tetc.2015.2497459
Barros, R., Monteiro, A., Nejmedinne, F., & Moreira, J. A. (2013). The relationship between students’ approach to learning and lifelong learning. Psychology, 792–797. https://doi.org/10.4236/psych.2013.411113
Biggs, J. (1979). Individual differences in study processes and the quality of learning outcomes. Higher Education, 8(4), 381–394. https://doi.org/10.1007/BF01680526
Biggs, J., Kember, D., & Leung, D. Y. (2001). The revised two-factor study process questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71(1), 133–149. https://doi.org/10.1348/000709901158433
Biggs, J. B. (1978). Individual and group differences in study processes. British Journal of Educational Psychology, 48(3), 266–279. https://doi.org/10.1111/j.2044-8279.1978.tb03013.x
Biggs, J. B. (1987). Student Approaches to Learning and Studying. Research Monograph. ERIC.
Biggs, J. B. (1991). Approaches to learning in secondary and tertiary students in Hong Kong: Some comparative studies. Educational Research Journal, 6(1), 27–39.
Biggs, J. B., & Collis, K. F. (2014). Evaluating the Quality of Learning: The SOLO Taxonomy (Structure of the Observed Learning Outcome). Academic Press.
Bouwmeester, R. A. M., de Kleijn, R. A. M., van den Berg, I. E. T., ten Cate, O. T. J., van Rijen, H. V. M., & Westerveld, H. E. (2019). Flipping the medical classroom: Effect on workload, interactivity, motivation and retention of knowledge. Computers & Education, 139, 118–128. https://doi.org/10.1016/j.compedu.2019.05.002
Broadbent, J., Sharman, S., Panadero, E., & Fuller-Tyszkiewicz, M. (2021). How does self-regulated learning influence formative assessment and summative grade? Comparing online and blended learners. Internet and Higher Education, 50, 8. https://doi.org/10.1016/j.iheduc.2021.100805
Bu, C., Li, S., Yang, H., Wang, L., Zhang, T., & Zhang, S. (2022). Research on the internal mechanism, model and effectiveness of online deep learning. Journal of Distance Education, 40(06), 65–73. https://doi.org/10.15881/j.cnki.cn33-1304/g4.2022.06.004
Cai, H. Y., & Gu, X. Q. (2019). Supporting collaborative learning using a diagram-based visible thinking tool based on cognitive load theory. British Journal of Educational Technology, 50(5), 2329–2345. https://doi.org/10.1111/bjet.12818
Chamorro-Premuzic, T., & Furnham, A. (2008). Personality, intelligence and approaches to learning as predictors of academic performance. Personality and Individual Differences, 44(7), 1596–1603. https://doi.org/10.1016/j.paid.2008.01.003
Chamorro-Premuzic, T., & Furnham, A. (2009). Mainly Openness: The relationship between the Big Five personality traits and learning approaches. Learning and Individual Differences, 19(4), 524–529. https://doi.org/10.1016/j.lindif.2009.06.004
Chao, J., Chiu, J. L., DeJaegher, C. J., & Pan, E. A. (2016). Sensor-augmented virtual labs: using physical interactions with science simulations to promote understanding of gas behavior. Journal of Science Education and Technology, 25(1), 16–33. https://doi.org/10.1007/s10956-015-9574-4
Chen, B., Zhang, Y., Yang, B., Xiong, J., & Lin, L. (2019a). Study on effects of instructional interaction on deep learning in smart classroom. e-Education Research, 40(03), 90–97. https://doi.org/10.13811/j.cnki.eer.2019.03.013
Chen, C. M., Wang, J. Y., & Lin, Y. C. (2019b). A visual interactive reading system based on eye tracking technology to improve digital reading performance. Electronic Library, 37(4), 680–702. https://doi.org/10.1108/el-03-2019-0059
Chen, Y. C. (2017). Empirical study on the effect of digital game-based instruction on students' learning motivation and achievement. Eurasia Journal of Mathematics Science and Technology Education, 13(7), 3177–3187. https://doi.org/10.12973/eurasia.2017.00711a
Cochrane Collaboration. (2020). Review Manager (Version 5.4.1) [Computer software]. https://training.cochrane.org/online-learning/core-software/revman
Cui, J. J., & Yu, S. Q. (2019). Fostering deeper learning in a flipped classroom: Effects of knowledge graphs versus concept maps. British Journal of Educational Technology, 50(5), 2308–2328. https://doi.org/10.1111/bjet.12841
DeLotell, P. J., Millam, L. A., & Reinhardt, M. M. (2010). The use of deep learning strategies in online business courses to impact student retention. American Journal of Business Education, 3(12), 49–56. https://doi.org/10.19030/ajbe.v3i12.964
Demir, M., & Zengin, Y. (2023). The effect of a technology-enhanced collaborative learning environment on secondary school students' mathematical reasoning: A mixed method design. Education and Information Technologies. https://doi.org/10.1007/s10639-023-11587-x
Duff, A. (1997). A note on the reliability and validity of a 30-item version of Entwistle & Tait's Revised Approaches to Studying Inventory. British Journal of Educational Psychology, 67(4), 529–539. https://doi.org/10.1111/j.2044-8279.1997.tb01263.x
Duff, A., & McKinstry, S. (2007). Students' approaches to learning. Issues in Accounting Education, 22(2), 183–214. https://doi.org/10.2308/iace.2007.22.2.183
Elbyaly, M. Y. H., & Elfeky, A. I. M. (2022). The role of metacognition in promoting deep learning in MOOCs during COVID-19 pandemic. Peerj Computer Science, 8, 20. https://doi.org/10.7717/peerj-cs.945
Ellis, R., Bliuc, A. M., & Han, F. F. (2021). Challenges in assessing the nature of effective collaboration in blended university courses. Australasian Journal of Educational Technology, 37(1), 1–14. https://doi.org/10.14742/ajet.5576
Ellis, R. A., Pardo, A., & Han, F. F. (2016). Quality in blended learning environments - Significant differences in how students approach learning collaborations. Computers & Education, 102, 90–102. https://doi.org/10.1016/j.compedu.2016.07.006
Entwistle, N., Hanley, M., & Hounsell, D. (1979). Identifying distinctive approaches to studying. Higher Education, 8, 365–380. https://doi.org/10.1007/BF01680525
Entwistle, N., & McCune, V. (2004). The conceptual bases of study strategy inventories. Educational Psychology Review, 16(4), 325–345. https://doi.org/10.1007/s10648-004-0003-0
Entwistle, N. J., McCune, V., & Tait, H. (1997). The approaches and study skills inventory for students (ASSIST). Edinburgh: Centre for Research on Learning and Instruction, University of Edinburgh, 1–21.
Entwistle, N. J., & Peterson, E. R. (2004). Conceptions of learning and knowledge in higher education: Relationships with study behaviour and influences of learning environments. International Journal of Educational Research, 41(6), 407–428. https://doi.org/10.1016/j.ijer.2005.08.009
Erhel, S., & Jamet, E. (2016). The effects of goal-oriented instructions in digital game-based learning. Interactive Learning Environments, 24(8), 1744–1757. https://doi.org/10.1080/10494820.2015.1041409
European Commission. (2013). Opening up Education: Innovative teaching and learning for all through new Technologies and Open Educational Resources. Brussels, Belgium.
Fenesi, B., Vandermorris, S., Kim, J. A., Shore, D. I., & Heisz, J. J. (2015). One size does not fit all: older adults benefit from redundant text in multimedia instruction. Frontiers in Psychology, 6, 9. https://doi.org/10.3389/fpsyg.2015.01076
Filius, R. M., de Kleijn, R. A. M., Uijl, S. G., Prins, F. J., van Rijen, H. V. M., & Grobbee, D. E. (2019). Audio peer feedback to promote deep learning in online education. Journal of Computer Assisted Learning, 35(5), 607–619. https://doi.org/10.1111/jcal.12363
Fox, R. A., McManus, I., & Winder, B. C. (2001). The shortened Study Process Questionnaire: An investigation of its structure and longitudinal stability using confirmatory factor analysis. British Journal of Educational Psychology, 71(4), 511–530. https://doi.org/10.1348/000709901158659
Fox, W. H., & Docherty, P. D. (2019). Student perspectives of independent and collaborative learning in a flipped foundational engineering course. Australasian Journal of Educational Technology, 35(5), 79–94. https://doi.org/10.14742/AJET.3804
Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. The Internet and Higher Education, 7(2), 95–105. https://doi.org/10.1016/j.iheduc.2004.02.001
Giannini, A. M., Cordellieri, P., & Piccardi, L. (2017). Reading a story: different degrees of learning in different learning environments. Frontiers in Pharmacology, 8, 11. https://doi.org/10.3389/fphar.2017.00701
Hackett, S., Janssen, J., Beach, P., Perreault, M., Beelen, J., & van Tartwijk, J. (2023). The effectiveness of Collaborative Online International Learning (COIL) on intercultural competence development in higher education. International Journal of Educational Technology in Higher Education, 20(1), 21. https://doi.org/10.1186/s41239-022-00373-3
Hall, M., Ramsay, A., & Raven, J. (2004). Changing the learning environment to promote deep learning approaches in first-year accounting students. Accounting Education, 13(4), 489–505. https://doi.org/10.1080/0963928042000306837
Hew, K. F., & Lo, C. K. (2018). Flipped classroom improves student learning in health professions education: a meta-analysis. BMC Medical Education, 18, 1–12. https://doi.org/10.1186/s12909-018-1144-z
Higgins, J. P., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. BMJ, 327(7414), 557–560. https://doi.org/10.1136/bmj.327.7414.557
Hoeve, M., Stams, G. J. J., Van der Put, C. E., Dubas, J. S., Van der Laan, P. H., & Gerris, J. R. (2012). A meta-analysis of attachment to parents and delinquency. Journal of Abnormal Child Psychology, 40, 771–785. https://doi.org/10.1007/s10802-011-9608-1
Hu, Y., Nie, J., Zhang, T., & Wu, B. (2021). Research on the effect of VR technology enabling experimental teaching from the perspective of embodied cognition. Modern Distance Education Research, 33(05), 94–102. https://kns.cnki.net/kcms/detail/51.1580.G4.20210922.1959.004.html
Huang, Z., Zhou, R., Zhao, C., & Wan, L. (2019). Empirical research on blended learning mode design for deep learning. China Educational Technology, (11), 120–128. https://kns.cnki.net/kcms/detail/11.3792.G4.20191101.1204.032.html
Jeong, J. S., Gonzalez-Gomez, D., Conde-Nunez, M. C., & Gallego-Pico, A. (2019). Examination of students' engagement with R-SPQ-2F of learning approach in flipped sustainable science course. Journal of Baltic Science Education, 18(6), 880–891. https://doi.org/10.33225/jbse/19.18.880
Jiang, R. H. (2022). Understanding, investigating, and promoting deep learning in language education: A survey on chinese college students' deep learning in the online EFL teaching context. Frontiers in Psychology, 13, 18. https://doi.org/10.3389/fpsyg.2022.955565
Kazanidis, I., Pellas, N., Fotaris, P., & Tsinakos, A. (2019). Can the flipped classroom model improve students' academic performance and training satisfaction in Higher Education instructional media design courses? British Journal of Educational Technology, 50(4), 2014–2027. https://doi.org/10.1111/bjet.12694
Kember, D., Biggs, J., & Leung, D. Y. (2004). Examining the multidimensionality of approaches to learning through the development of a revised version of the Learning Process Questionnaire. British Journal of Educational Psychology, 74(2), 261–279. https://doi.org/10.1348/000709904773839879
Kember, D., Leung, D. Y., & McNaught, C. (2008). A workshop activity to demonstrate that approaches to learning are influenced by the teaching and learning environment. Active Learning in Higher Education, 9(1), 43–56. https://doi.org/10.1177/1469787407086745
Klois, S. S., Segers, E., & Verhoeven, L. (2013). How hypertext fosters children's knowledge acquisition: The roles of text structure and graphical overview. Computers in Human Behavior, 29(5), 2047–2057. https://doi.org/10.1016/j.chb.2013.03.013
Koszalka, T. A., Pavlov, Y., & Wu, Y. Y. (2021). The informed use of pre-work activities in collaborative asynchronous online discussions: The exploration of idea exchange, content focus, and deep learning. Computers & Education, 161, 14. https://doi.org/10.1016/j.compedu.2020.104067
Lazonder, A. W., & Harmsen, R. (2016). Meta-analysis of inquiry-based learning: effects of guidance. Review of Educational Research, 86(3), 681–718. https://doi.org/10.3102/0034654315627366
Lee, H. Y., & List, A. (2019). Processing of texts and videos: A strategy-focused analysis. Journal of Computer Assisted Learning, 35(2), 268–282. https://doi.org/10.1111/jcal.12328
Lee, J., & Choi, H. (2017). What affects learner's higher-order thinking in technology-enhanced learning environments? The effects of learner factors. Computers & Education, 115, 143–152. https://doi.org/10.1016/j.compedu.2017.06.015
Lee, J., Park, J. C., Jung, D., Suh, C. W., & Henning, M. A. (2021). Video learning strategies affecting achievement, learning approach, and lifelong learning in a flipped periodontology course. Journal of Dental Education, 85(7), 1245–1250. https://doi.org/10.1002/jdd.12572
Lee, Y. F., Chen, P. Y., & Cheng, S. C. (2023). Improve learning retention, self-efficacy, learning attitude and problem-solving skills through e-books based on sequential multi-level prompting strategies. Education and Information Technologies. https://doi.org/10.1007/s10639-023-11994-0
Li, H., Wu, D., Zhu, S., Guo, Q., & Luo, Z.-Q. (2023). Research on the construction and application of smart classroom teaching model under the perspective of deep learning. Modern Educational Technology, 33(02), 61–70.
Lin, H. C., Hwang, G. J., & Hsu, Y. D. (2019a). Effects of ASQ-based flipped learning on nurse practitioner learners' nursing skills, learning achievement and learning perceptions. Computers & Education, 139, 207–221. https://doi.org/10.1016/j.compedu.2019.05.014
Lin, P. H., & Chen, S. Y. (2020). Design and evaluation of a deep learning recommendation based augmented reality system for teaching programming and computational thinking. IEEE Access, 8, 45689–45699. https://doi.org/10.1109/access.2020.2977679
Lin, X. F., Deng, C. L., Hu, Q. T., & Tsai, C. C. (2019b). Chinese undergraduate students' perceptions of mobile learning: Conceptions, learning profiles, and approaches. Journal of Computer Assisted Learning, 35(3), 317–333. https://doi.org/10.1111/jcal.12333
List, A., & Ballenger, E. E. (2019). Comprehension across mediums: the case of text and video. Journal of Computing in Higher Education, 31(3), 514–535. https://doi.org/10.1007/s12528-018-09204-9
Liu, L., & Hmelo-Silver, C. E. (2009). Promoting complex systems learning through the use of conceptual representations in hypermedia. Journal of Research in Science Teaching, 46(9), 1023–1040. https://doi.org/10.1002/tea.20297
Lu, K. L., Pang, F., & Shadiev, R. (2021). Understanding the mediating effect of learning approach between learning factors and higher order thinking skills in collaborative inquiry-based learning. Educational Technology Research and Development, 69(5), 2475–2492. https://doi.org/10.1007/s11423-021-10025-4
Manzanares, M. C. S., Garcia-Osorio, C. I., & Diez-Pastor, J. F. (2019). Differential efficacy of the resources used in B-learning environments. Psicothema, 31(2), 170–178. https://doi.org/10.7334/psicothema2018.330
Marton, F., & Säljö, R. (1976). On qualitative differences in learning: I—Outcome and process. British Journal of Educational Psychology, 46(1), 4–11. https://doi.org/10.1111/j.2044-8279.1976.tb02980.x
Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1–47. https://doi.org/10.1177/016146811311500307
Naaz, F., Chariker, J. H., & Pani, J. R. (2014). Computer-based learning: graphical integration of whole and sectional neuroanatomy improves long-term retention. Cognition and Instruction, 32(1), 44–64. https://doi.org/10.1080/07370008.2013.857672
National Research Council. (2012). Education for Life and Work: Developing Transferable Knowledge and Skills in the 21st Century. The National Academies Press. https://doi.org/10.17226/13398
Ng, W. (2015). New digital technology in education. Springer.
Nortvig, A.-M., Petersen, A. K., & Balle, S. H. (2018). A literature review of the factors influencing e-learning and blended learning in relation to learning outcome, student satisfaction and engagement. Electronic Journal of E-learning, 16(1), 46–55.
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., & Brennan, S. E. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. International Journal of Surgery, 88, 105906.
Park, S., & Kim, C. (2014). Virtual Tutee System: a potential tool for enhancing academic reading engagement. Educational Technology Research and Development, 62(1), 71–97. https://doi.org/10.1007/s11423-013-9326-1
Park, S. W., & Kim, C. (2016). The effects of a virtual tutee system on academic reading engagement in a college classroom. Educational Technology Research and Development, 64(2), 195–218. https://doi.org/10.1007/s11423-015-9416-3
Pei, X. N., Jin, Y. L., Zheng, T. N., & Zhao, J. (2020). Longitudinal effect of a technology-enhanced learning environment on sixth-grade students' science learning: the role of reflection. International Journal of Science Education, 42(2), 271–289. https://doi.org/10.1080/09500693.2019.1710000
Pellegrino, J. W. (2017). Teaching, Learning and Assessing 21st Century Skills. https://doi.org/10.1787/9789264270695-12-en
Peplow, M. (2014). Social sciences suffer from severe publication bias. Nature, 10. https://doi.org/10.1038/nature.2014.15787
Qin, Y. Q., Yan, R. Y., & Sun, Y. X. (2020). The application of flipped classroom combined with locus of control analysis in lean entrepreneurship education for college students. Frontiers in Psychology, 11, 11. https://doi.org/10.3389/fpsyg.2020.01587
Rassaei, E. (2021). Implementing mobile-mediated dynamic assessment for teaching request forms to EFL learners. Computer Assisted Language Learning, 31. https://doi.org/10.1080/09588221.2021.1912105
Richardson, J. T., Morgan, A., & Woodley, A. (1999). Approaches to studying in distance education. Higher Education, 23–55. https://doi.org/10.1023/A:1003445000716
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638. https://doi.org/10.1037/0033-2909.86.3.638
Rosenthal, R. (1991). Meta-analysis: a review. Psychosomatic Medicine, 53(3), 247–271. https://doi.org/10.1097/00006842-199105000-00001
Salmeron, L., Naumann, J., Garcia, V., & Fajardo, I. (2017). Scanning and deep processing of information in hypertext: an eye tracking and cued retrospective think-aloud study. Journal of Computer Assisted Learning, 33(3), 222–233. https://doi.org/10.1111/jcal.12152
Sawras, M., Khosa, D., Lissemore, K., Duffield, T., & Defarges, A. (2020). Case-based e-learning experiences of second-year veterinary students in a clinical medicine course at the Ontario Veterinary College. Journal of Veterinary Medical Education, 47(6), 678–694. https://doi.org/10.3138/jvme.2018-0005
Schmeck, R. R., Ribich, F., & Ramanaiah, N. (1977). Development of a self-report inventory for assessing individual differences in learning processes. Applied Psychological Measurement, 1(3), 413–431. https://doi.org/10.1177/014662167700100310
Shen, J., Qi, H. Y., Chen, Y. Y., Mei, R. H., Sun, C. C., & Wang, Z. Y. (2022a). Incorporating modified team-based learning into a flipped basic medical laboratory course: impact on student performance and perceptions. BMC Medical Education, 22(1), 9. https://doi.org/10.1186/s12909-022-03676-1
Shen, X., Zhang, B., & Feng, R. (2022b). A study of deep learning activities in blended learning environments: design, implementation and evaluation. e-Education Research, 43(01), 106–112+121. https://doi.org/10.13811/j.cnki.eer.2022.01.014
Shi, Y. H., Zhang, J. M., Yang, H. Y., & Yang, H. H. (2021). Effects of interactive whiteboard-based instruction on students' cognitive learning outcomes: a meta-analysis. Interactive Learning Environments, 29(2), 283–300. https://doi.org/10.1080/10494820.2020.1769683
Siemens, G., & Tittenberger, P. (2009). Handbook of Emerging Technologies for Learning. University of Manitoba Canada.
Soomro, S. A., Casakin, H., Nanjappan, V., & Georgiev, G. V. (2023). Makerspaces fostering creativity: a systematic literature review. Journal of Science Education and Technology, 19. https://doi.org/10.1007/s10956-023-10041-4
Sterling, S. (2004). Higher education, sustainability, and the role of systemic learning. In Higher education and the challenge of sustainability: Problematics, promise, and practice (pp. 49–70). https://doi.org/10.1007/0-306-48515-X_5
Stone, C. (2022). From the margins to the mainstream: The online learning rethink and its implications for enhancing student equity. Australasian Journal of Educational Technology, 38(6), 139–149. https://doi.org/10.14742/ajet.8136
Strømme, T. A., & Mork, S. M. (2021). Students' conceptual sense-making of animations and static visualizations of protein synthesis: a sociocultural hypothesis explaining why animations may be beneficial for student learning. Research in Science Education, 51(4), 1013–1038. https://doi.org/10.1007/s11165-020-09920-2
Sugden, N., Brunton, R., MacDonald, J. B., Yeo, M., & Hicks, B. (2021). Evaluating student engagement and deep learning in interactive online psychology learning activities. Australasian Journal of Educational Technology, 37(2), 45–65. https://doi.org/10.14742/ajet.6632
Sung, H. Y., Hwang, G. J., Chen, C. Y., & Liu, W. X. (2022). A contextual learning model for developing interactive e-books to improve students' performances of learning the Analects of Confucius. Interactive Learning Environments, 30(3), 470–483. https://doi.org/10.1080/10494820.2019.1664595
Sung, H. Y., Hwang, G. J., Wu, P. H., & Lin, D. Q. (2018). Facilitating deep-strategy behaviors and positive learning performances in science inquiry activities with a 3D experiential gaming approach. Interactive Learning Environments, 26(8), 1053–1073. https://doi.org/10.1080/10494820.2018.1437049
Tarchi, C., Zaccoletti, S., & Mason, L. (2021). Learning from text, video, or subtitles: A comparative analysis. Computers & Education, 160, 12. https://doi.org/10.1016/j.compedu.2020.104034
Tayebinik, M., & Puteh, M. (2013). Blended learning or e-learning? International Magazine on Advances in Computer Science and Telecommunications, 3(1), 103–110.
Tempelaar, D. (2020). Supporting the less-adaptive student: the role of learning analytics, formative assessment and blended learning. Assessment & Evaluation in Higher Education, 45(4), 579–593. https://doi.org/10.1080/02602938.2019.1677855
Thai, N. T. T., De Wever, B., & Valcke, M. (2023). Feedback: an important key in the online environment of a flipped classroom setting. Interactive Learning Environments, 31(2), 924–937. https://doi.org/10.1080/10494820.2020.1815218
Tiedt, J. A., Owens, J. M., & Boysen, S. (2021). The effects of online course duration on graduate nurse educator student engagement in the community of inquiry. Nurse Education in Practice, 55, 8. https://doi.org/10.1016/j.nepr.2021.103164
Topali, P., Chounta, I. A., Martinez-Mones, A., & Dimitriadis, Y. (2023). Delving into instructor-led feedback interventions informed by learning analytics in massive open online courses. Journal of Computer Assisted Learning, 22. https://doi.org/10.1111/jcal.12799
Trakhman, L. M. S., Alexander, P. A., & Silverman, A. B. (2018). Profiling reading in print and digital mediums. Learning and Instruction, 57, 5–17. https://doi.org/10.1016/j.learninstruc.2018.04.001
Tsai, P. S., & Tsai, C. C. (2013). College students' experience of online argumentation: Conceptions, approaches and the conditions of using question prompts. Internet and Higher Education, 17, 38–47. https://doi.org/10.1016/j.iheduc.2012.10.001
United Nations. (2023). Report on the Transforming Education Summit. https://www.un.org/sites/un2.un.org/files/report_on_the_2022_transforming_education_summit.pdf
Valk, A., & Marandi, T. (2005). How to support deep learning at a university. In Proceedings of the International Conference on Education.
Villalonga-Gomez, C., & Mora-Cantallops, M. (2022). Profiling distance learners in TEL environments: a hierarchical cluster analysis. Behaviour & Information Technology, 41(7), 1439–1452. https://doi.org/10.1080/0144929x.2021.1876766
Vogt, L., Schauwinhold, M., Rossaint, R., Schenkat, H., Klasen, M., & Sopka, S. (2022). At the limits of digital education. The importance of practical education for clinical competencies learning in the field of emergency medicine: A controlled non-randomized interventional study. Frontiers in Medicine, 9, 9. https://doi.org/10.3389/fmed.2022.993337
Wang, J. S., Pascarella, E. T., Laird, T. F. N., & Ribera, A. K. (2015). How clear and organized classroom instruction and deep approaches to learning affect growth in critical thinking and need for cognition. Studies in Higher Education, 40(10), 1786–1807. https://doi.org/10.1080/03075079.2014.914911
Wang, M. H., Wu, B., Kirschner, P. A., & Spector, J. M. (2018a). Using cognitive mapping to foster deeper learning with complex problems in a computer-based environment. Computers in Human Behavior, 87, 450–458. https://doi.org/10.1016/j.chb.2018.01.024
Wang, M. H., Yuan, B., Kirschner, P. A., Kushniruk, A., & Peng, J. (2018b). Reflective learning with complex problems in a visualization-based learning environment with expert support. Computers in Human Behavior, 87, 406–415. https://doi.org/10.1016/j.chb.2018.01.025
Wang, Q., & Mousavi, A. (2023). Which log variables significantly predict academic achievement? A systematic review and meta-analysis. British Journal of Educational Technology, 54(1), 142–191. https://doi.org/10.1111/bjet.13282
Wang, W., Zhao, Y. Y., Wu, Y. J., & Goh, M. (2023). Factors of dropout from MOOCs: a bibliometric review. Library Hi Tech, 41(2), 432–453. https://doi.org/10.1108/lht-06-2022-0306
Wang, Y., Li, Z.-X., Bai, Q.-Y., Yao, H.-Y., & Wang, C.-Y. (2021). Research on the peer feedback to promote deep learning in blended teaching. Modern Educational Technology, 31(05), 67–74.
Watkins, D. (2014). Correlates of approaches to learning: A cross-cultural meta-analysis. In Perspectives on Thinking, Learning, and Cognitive Styles (pp. 165–196). Routledge.
Wheeler, S. (2012). E-learning and digital learning. In N. M. Seel (Ed.), Encyclopedia of the Sciences of Learning (pp. 1109–1111). Springer US. https://doi.org/10.1007/978-1-4419-1428-6_431
William and Flora Hewlett Foundation. (2013). Deeper Learning Competencies. http://www.hewlett.org/uploads/documents/Deeper_Learning_Defined_April_2017.pdf
Xie, Y. (2021). Analysis of advantages and disadvantages of fragmented learning in the era of internet big data. 2021 2nd International Conference on Information Science and Education (ICISE-IE).
Xu, E. W., Wang, W., & Wang, Q. X. (2023). The effectiveness of collaborative problem solving in promoting students' critical thinking: A meta-analysis based on empirical literature. Humanities & Social Sciences Communications, 10(1), 11. https://doi.org/10.1057/s41599-023-01508-1
Yan, M., Chen, L., & Guo, J. (2022). How the key features of information technology influence multi-level learning outcomes. Jiangsu Higher Education(06), 102-109. 10.13236/j.cnki.jshe.2022.06.014
Yang, X. Z., Lin, L., Cheng, P. Y., Yang, X., Ren, Y. Q., & Huang, Y. M. (2018). Examining creativity through a virtual reality support system. Etr&D-Educational Technology Research and Development, 66(5), 1231–1254. https://doi.org/10.1007/s11423-018-9604-z
Yang, Y. F., & Tsai, C. C. (2010). Conceptions of and approaches to learning through online peer assessment. Learning and Instruction, 20(1), 72–83. https://doi.org/10.1016/j.learninstruc.2009.01.003
Yao, J., Li, Y., Pan, J., & Cheng, M. (2022). Research on the influence of dialogic peer feedback on college students’ online deep learning. Journal of East China Normal University (Educational Sciences), 40(03), 112–126. https://doi.org/10.16382/j.cnki.1000-5560.2022.03.010
Ye, X. D., Chang, Y. H., & Lai, C. L. (2019). An interactive problem-posing guiding approach to bridging and facilitating pre- and in-class learning for flipped classrooms. Interactive Learning Environments, 27(8), 1075–1092. https://doi.org/10.1080/10494820.2018.1495651
Yeh, Y. C. (2012). A co-creation blended KM model for cultivating critical-thinking skills. Computers & Education, 59(4), 1317–1327. https://doi.org/10.1016/j.compedu.2012.05.017
Yuen, S. T. S., & Naidu, S. (2007). Using multimedia to close the gap between theory and practice in engineering education. International Journal of Engineering Education, 23(3), 536–544.
Zhang, R. F., Zou, D., & Cheng, G. R. Y. (2023). Technology-enhanced language learning with null and negative results since 2000: A systematic review based on the activity theory. Education and Information Technologies. https://doi.org/10.1007/s10639-023-11993-1
Zhao, S. R., & Li, H. (2021). Unpacking peer collaborative experiences in pre-class learning of flipped classroom with a production-oriented approach. Sage Open, 11(4), 13. https://doi.org/10.1177/21582440211058203
Zhou, X.-F., Zhu, N.-N., Xue, F., & Sun, T. (2021). Design and implementation of online teaching of Surgical Nursing based on deep learning theory. Chinese Journal of Nursing Education, 18(04), 329–334.
Funding
The research leading to these results received funding from Shenzhen Institute of Information Technology under Grant Agreement No. SZIIT2022SK050.
Ethics declarations
Conflicts of interest
The author has no relevant financial or non-financial interests to disclose.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wu, X. Y. Exploring the effects of digital technology on deep learning: a meta-analysis. Educ Inf Technol 29, 425–458 (2024). https://doi.org/10.1007/s10639-023-12307-1