Introduction

During the last few years, there has been increasing interest in building bridges between two of the most influential research fields in educational psychology: self-regulated learning (SRL) and cognitive load theory (CLT). While for a long time the two fields were studied separately, there is now an increasing number of studies that refer to both concepts. One such collection of studies has been published in a special issue edited by de Bruin and van Merriënboer (2017). However, Boekaerts (2017), who discussed the papers in this special issue, pointed out that there is still a long way to go to truly intertwine both fields of research and for them to learn from and with each other (see also Seufert 2018).

Particularly in the current situation of the corona crisis, where schools and universities have closed their doors and learners have to learn on their own, the relevance of both concepts and their interplay is striking. Learners have to organize their learning setting and process on their own, very often lacking the strategies and skills to do so effectively. Thus, they might easily be overloaded. While dealing with the task at hand, they now also have to handle decisions that teachers are usually responsible for, like setting goals, planning how to proceed, reflecting on what they have already learned and what still needs to be done, finding help, and so forth. Thus, they need to learn to self-regulate. Overall, besides dealing with the task on the so-called object level, learners also have to deal with these self-monitoring and regulatory processes on a meta-level. This is where the editors of this special issue and the group of researchers of the Emerging Field Group (funded by EARLI) start with their Effort Monitoring and Regulation framework (EMR-framework; de Bruin, Roelle, Baars, & EFG-MRE, this issue). To build a bridge from self-regulation to cognitive load, they link these concepts of self-regulated learning with the concept of mental effort and thus with CLT. Mental effort is crucial for dealing with the task as well as for self-regulation. And when learners are aware of their invested resources, this awareness can serve as a cue to inform processes both on the object-level and on the meta-level.

Three Approaches to Study the Interplay Between Self-Regulation and Cognitive Load

This special issue integrates nine review papers, all of the highest quality and depth. They all contribute to the theoretical debate about synergies between SRL and CLT, either from a theoretical point of view or empirically by presenting a meta-analysis. Within the EMR-framework, they nevertheless focus on different aspects. Their overall goal is to foster learning, but they use different approaches to do so: they stimulate processes (1) on the object-level, (2) for the utilization of effort, or (3) on the meta-level (it must be pointed out that these three perspectives do not fully match the three research questions introduced in the editorial of de Bruin et al. (this issue)).

Stimulating Learning on the Object Level

Most of the papers address the question of how learners can be activated while dealing with the learning task itself, i.e., with the object level. These generative activities are supposed to intensify the learning process and thus provide cues for learners’ metacognitive processes of monitoring and regulation. While, for example, drawing a map, writing a summary, or trying to retrieve knowledge from memory, learners can detect how well they have understood the learning content. Consequently, the accuracy of their monitoring increases, i.e., their judgment of what they believe to have learned is more closely connected to their actual learning outcomes. Prinz, Golke, and Wittwer (this issue) present a meta-analysis on the effects of such generative tasks, which revealed a medium positive effect on metacomprehension accuracy. The effect on learning itself was also positive but rather small. Van de Pol, van Loon, van Gog, Baumann, and de Bruin (this issue) found comparable results for visualizing activities, like mapping or drawing. These generative activities also turned out to improve monitoring accuracy, if learners were directly instructed to focus on relational and structural information in the text to be visualized. However, they did not improve learning outcomes, and the cues could also not always be used to make the right decisions about which text to restudy. Such a misalignment was also reported by Carpenter, Endres, and Hui (this issue) in their review on the effects of retrieval practice. Learners used retrieval for checking what they already know, i.e., as a monitoring strategy, but not as a learning strategy (to directly enhance knowledge) or as a regulation strategy that uses the cues from retrieving for an informed choice of what to study next.

Thus, what is the overall goal of these approaches of stimulating generative activities on the object-level? Across most of the reported studies, a strong focus is on improving monitoring accuracy. It can neither be assured that learners use this improved monitoring for improved regulation, nor does it always pay off in learning outcomes. There could be different reasons why learners fail to make more out of these impulses. The first is that, in contrast to teachers, who seem to be able to use the generative activities as a diagnostic cue for appropriate study decisions (van de Pol et al. this issue), the learners themselves might lack the ability to transform the diagnosis into a suitable treatment. The second reason might be that they lack the willingness to apply the treatment. And the third reason might simply be that learners are overloaded by the additional generative task. This overload could be reduced when the generative tasks are assisted with explicit strategy hints or with support systems. Nevertheless, learners’ effort plays a crucial role in cue utilization.

Most of the above-mentioned papers actually discuss the question of how effort can be used as an additional diagnostic cue. This interplay between effort appraisals and metacognitive accuracy is summarized in the meta-analysis of Baars, Wijnia, de Bruin, and Paas (this issue). Again, the focus of these studies is on monitoring accuracy. In their meta-analysis, Baars et al. found an overall negative correlation between learners’ effort appraisals and their monitoring judgments, i.e., if a task required high mental effort, it was more likely associated with poorer judgments of one’s performance. Thus, one can assume that effort is seen as an indicator of difficulties. De Bruin et al. (this issue) discuss this as an appraisal of “effort as impossible.” In contrast, learners could also see effort as a deliberate attempt to understand the issue and thus as “effort as important.” In this case, the negative correlation with performance appraisals can be mitigated or even reversed. What learners actually “see” when rating their effort is a question of how to conceptualize and measure effort, which will be discussed below.

Stimulating Effort Utilization

The second approach is to directly address and activate learners’ use of effort as a cue for regulation. The review of van Gog, Hoogerheide, and van Harsel (this issue) reports studies that analyze students’ use of effort as a cue to decide whether to restudy a current task and which task to choose next. Thus, the focus is on analyzing and improving monitoring and regulation and, in terms of Zimmerman’s cycle of self-regulation (Zimmerman 2005), even planning what to do next. This regulatory process is especially demanding, and students often fail, particularly when they are too young and might lack the necessary metacognitive knowledge or when the problems they have to deal with are themselves too demanding. Thus, in their paper, van Gog et al. (this issue) present a series of studies where learners are directly trained to make appropriate decisions on how to proceed based on a simple algorithm. Learners are asked to take into account their actual performance and their mental effort for the current task. By combining both appraisals, they decide to choose either an easier, a more difficult, or a comparably difficult task: if task performance was high with low effort, they should choose a more difficult task; if performance was low with high effort, they should choose a less difficult task; and so forth. This algorithm was conveyed by a video modeling example, and it actually improved learners’ monitoring accuracy and task selection accuracy and, above all, also improved learning outcomes. Thus, with this intervention, all the steps of Zimmerman’s cycle of self-regulation (2005) were successfully addressed by using effort appraisals as a cue, as learners monitored and regulated by taking their appraisals into account for planning the following steps.
Pursuing the whole cycle of self-regulation, i.e., not only “calculating” what should be done next but also actually following the plans, surely depends on the willingness to do so and on other motivational factors like self-efficacy (van Gog et al. this issue).
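The decision rule that learners are trained to apply can be sketched as a small function. Note that the function name, the 1–9 rating scale, and the midpoint threshold below are illustrative assumptions for this sketch, not the exact operationalization used in the studies of van Gog et al.:

```python
def select_next_task(performance, effort, current_level, midpoint=5):
    """Toy sketch of a performance-plus-effort task selection rule.

    performance, effort: self-ratings, e.g., on an assumed 1-9 scale
    current_level: difficulty level of the task just completed
    midpoint: illustrative threshold separating "high" from "low" ratings
    """
    high_performance = performance >= midpoint
    high_effort = effort >= midpoint
    if high_performance and not high_effort:
        return current_level + 1  # mastered with ease: choose a harder task
    if not high_performance and high_effort:
        return current_level - 1  # struggled without success: choose an easier task
    return current_level          # otherwise: stay at comparable difficulty
```

For example, under these assumptions a learner reporting high performance (8) with low effort (2) on a level-3 task would move to level 4, whereas low performance (2) with high effort (8) would lead back to level 2.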

Another approach, which aims less directly at training the use of effort appraisals but could nevertheless be used as an “intervention,” is reported in several of the studies. Whether the effort rating item asks for the effort that was required or for the effort the learners invested drives learners’ attention to different aspects of load. While the requirement-wording results in a negative correlation with judgments of learning, and thus reflects an effort-as-difficulty view, the engagement-wording results in positive correlations, reflecting an effort-as-important view. Thus, with a simple change of item wording—or by even using both items—one could prompt learners to introspect on different aspects of their applied resources. Especially when contrasting both items, learners might start to reflect not only on the “how much” but also on the “why” of their applied resources. That such a differentiated view on effort ratings could be fruitful will be discussed below in more detail.

Stimulating Self-Regulation on the Meta-Level

The third approach to foster learning is to stimulate learners’ activities on the meta-level. The most classic examples are provided by Nückles, Roelle, Glogger-Frey, Waldeyer, and Renkl (this issue). They use journal writing as a tool to stimulate reflection on the learning process. Learners are asked not only to apply cognitive strategies like organizing or elaborating the content they have learned but also to apply metacognitive strategies to reflect on their learning process. Writing learning journals has been shown to be an effective means to improve learning outcomes. The authors discuss these effects in light of two potential functions of journal writing, which are linked to cognitive load: on the one hand, the externalized writings can relieve learners as they provide an external memory source; on the other hand, learners are asked to invest more cognitive resources in terms of germane cognitive load. Despite these potential positive effects, learners often do not invest these additional germane resources and thus do not apply appropriate or sufficient learning strategies in journal writing (Nückles et al. 2004). Thus, in a comprehensive study program, the authors analyzed how students can best be supported in writing learning journals. They analyzed the effects of different types of prompts, in different orders or durations of use, the effects of worked examples for self-explanations, and the effects of adapting or fading the support.

Again, a less straightforward but nevertheless effective means to enhance learners’ investment of germane resources was proposed by Eitel, Endres, and Renkl (this issue). The basic assumption is that cognitive load does not only depend on the instruction but also on how learners deal with this instruction. They showed that by simply providing hints that the design of the instruction is suboptimal, e.g., that it contains irrelevant information, learners were able to compensate by engaging in intensified self-regulation, in this case by using a selection strategy. This process of self-management shows that the agency for reducing or maybe even imposing load lies not only with instructional designers or teachers but also with the learners themselves (see also Bannert 2002). However, Eitel et al. also point out that this self-management process does not come without cognitive as well as motivational costs and that the motivation to self-control depletes over time. Their studies impressively demonstrate that learners are sensitive to requirements that derive from the task itself or from their own investment. While they first react to the external strain by investing more germane resources, they then stop self-managing when their resources are depleted. Thus, their focus of effort monitoring changes from an external strain-view to an internal engagement-view.

Overall, independent of whether learners are activated on the object-level, in effort utilization, or on the meta-level, they are able to generate, use, or manage cues for optimizing self-regulation and cognitive load. In the following sections, I will point out what we can learn from the presented papers and which questions still remain open.

The Mutual Interplay Between Self-Regulation and Cognitive Load

The series of papers presented in this special issue all make clear that the two concepts of self-regulation and cognitive load are highly related. In the following, I will point out some arguments for the assumption that the relationship is also mutual: cognitive load can cause self-regulation and self-regulation can cause cognitive load.

Particularly the papers concerning the first and second research questions of this special issue, namely, how students monitor and regulate effort, provide evidence for the first direction of this relationship, i.e., that cognitive load causes self-regulation. Learners monitor their effort and, as a consequence, adapt their metacognitive judgments and thus the process of monitoring. If the effort was high, they more likely perceive their performance to be lower—given that they are data-driven in their appraisals (see Baars et al. this issue). If they deliberately invested effort to reach a goal—which would be a goal-driven appraisal—they would assume their performance to be relatively higher. Learners can also take one step further in the self-regulation cycle and use their reflection on effort for regulating what to do next. The algorithm used in the study of van Gog et al. (this issue) even provides a direct recommendation of how to use one’s perceived cognitive load for regulatory decisions. However, learners could gain even more insight for regulation from their effort monitoring if it were more specific. The data- versus goal-driven perspectives illustrate that it would surely make a difference for regulation which aspect of effort one reflects on, either those aspects that hindered or those that fostered the learning process. In the case of perceiving ineffective effort, learners should or could change their strategy, whereas they could maintain their strategies in the case of perceiving effective effort.

The papers concerning the third research question of the special issue, how learners optimize load while self-regulating, provide evidence for the second direction of the mutual relationship, i.e., that self-regulation causes cognitive load. The study program on journal writing widely discusses the cognitive and even the motivational costs of writing journals as a means for learning. That learners only invest this effort when they are prompted or assisted to do so shows that learners need sufficient resources, which could be their cognitive capacity, strategy knowledge, willingness, etc. or external resources like scaffolds. Besides these costs of self-regulation, Nückles et al. (this issue) highlight that the use of self-regulatory strategies can also reduce cognitive load. The externalized thoughts and comments in a learning journal can serve as an external memory storage. But again, to specify which aspect of self-regulation causes either extraneous or germane cognitive load, and to infer the appropriate instructional conclusions, it would be necessary to measure these aspects specifically.

Regarding the cognitive costs that self-regulation can cause, the paper of Wirth, Stebner, Trypke, Schuster, and Leutner (this issue) makes one important point. The costs can be eliminated when learners execute self-regulation unconsciously. When learning processes, and I want to highlight specifically learning strategies, are highly automatized, they do not need working memory resources. This new perspective on self-regulation will be discussed below in more detail.

Overall, the cyclic and dynamic nature of the mutual relationship between self-regulation and cognitive load can best be demonstrated with the self-management study of Eitel et al. (this issue). The learners are informed that the learning material needs increased attention, i.e., they perceive an increase in extraneous load. They then regulate by investing more germane resources. And as this additional effort adds to the overall demands, the available capacity might not suffice over a longer period of time. Learners will again experience an overload and react by regulating, in this case by dropping their compensatory strategy. This dynamic view of learning and the differentiated view on cognitive load are a promising direction for future studies, as was already recommended by Kalyuga and Singh (2016).

How Self-Regulation Is Conceptualized

The scope of the presented research in this issue highly depends on how the concepts of self-regulation and cognitive load are conceptualized.

With regard to the broadly acknowledged view of self-regulation as a cyclic process of planning, monitoring, and regulation, many of the contributions have a strong or even exclusive focus on monitoring and especially on monitoring accuracy, specifically performance monitoring accuracy (Baars et al., Carpenter et al., Prinz et al., van de Pol et al.; all in this issue). Only some studies also discuss the use of effort cues for regulation and thus, in a way, also for planning, when regulation concerns the planning of subsequent tasks (van de Pol et al., van Gog et al., both this issue). However, this rather narrow focus could be broadened. Particularly the use of cognitive learning strategies should be taken into account, as they can directly influence learning outcomes. Generative tasks or effort monitoring ratings could help learners reflect on the ease or effectiveness of strategy use and use these appraisals to select more appropriate strategies. And even with the narrow focus on monitoring accuracy, the view could be broadened by analyzing the moderating role of motivational factors. As most self-regulation models point out, learners’ self-efficacy could influence all phases of self-regulation (for an overview, see Panadero 2017), and thus, it will also affect metacomprehension accuracy (Mengelkamp and Bannert 2010). Learners’ self-concept could be an even stronger predictor of effective self-regulation (Händel, de Bruin, and Dresel 2020).

While all these aspects of metacognition refer to the skills of metacognitive regulation, there is one additional factor that Wirth et al. (this issue) raise in reference to Flavell (1979), namely, learners’ metacognitive knowledge. In order to make appropriate decisions on how to plan, monitor, and regulate, learners need knowledge about the task, the strategies, and themselves. The review of Carpenter et al. (this issue), for example, underlines the crucial role of tasks: while retrieval is a good strategy for memory tasks, it is not for problem-solving tasks. The review of van de Pol et al. and the meta-analysis of Prinz et al. (both in this issue) underline the differing efficiency of generative activities and thus of different learning strategies. And many of the presented studies discuss learner characteristics that moderate self-regulation (e.g., learners’ expertise in Nückles et al. this issue). The question now arises of how one could foster learners’ metacognitive knowledge and thereby also the accuracy of monitoring or regulation. One possible approach would be feedback, as suggested by Carpenter et al. (this issue). From feedback that refers not only to the task itself but also to the strategies, learners could infer information about the characteristics of the task and their strategies, and they could learn about themselves and their preferences and abilities. Such an inference might need an additional trigger, i.e., a metacognitive prompt to use the feedback for an intensive reflection process.

A completely new perspective on self-regulation is provided in the paper of Wirth et al. (this issue). They argue that self-regulation can be conscious, which is the way it is usually conceptualized, but also unconscious. They introduce so-called resonant states in sensory memory, which trigger a bottom-up and top-down interaction in which external stimuli are aligned with expectations. For strategy use, this could mean that a feature of the learning material, e.g., a text presented without subsections or paragraphs, could match the expectation of a difficult text, which might be linked with the strategy of underlining and note taking. Learners would automatically grasp a pencil. This text reading strategy would be started automatically and passively, without reaching consciousness and thus without requiring working memory resources. This example reveals that for such an automated process, learners need substantial self-regulation knowledge and expertise. The trigger needs to find fruitful ground for resonance, i.e., learners have to know which features of the text indicate complexity and which strategy would be functional, in addition to being able to apply the strategy without attentional control from working memory. The concept of unconscious regulation will surely be an interesting starting point for deeper cognitive analyses of self-regulation with reference to specific memory functions, as well as for new approaches to foster highly automatized self-regulation.

How Cognitive Load Is Conceptualized

For a research program on the interplay between self-regulation and cognitive load, it is also crucial to understand the underlying concept of cognitive load. This is best described in the paper of Scheiter, Ackerman, and Hoogerheide (this issue). They differentiate between mental load, which refers to the resources required for task completion, and mental effort, which refers to the resources invested. In the Effort Monitoring and Regulation framework, which frames all the papers in this issue, the focus is clearly set on mental effort. Nevertheless, at the center of the EMR-model there is also the differentiation of types of load as conceptualized by CLT, namely, intrinsic, extraneous, and germane aspects of cognitive load (Sweller et al. 1998). However, in all the papers that address effort appraisals as a cue for monitoring and regulation, effort is not differentiated into these categories and is mostly measured with a one-item rating based on Paas (1992). In the papers concerning optimizing load in self-regulation tasks, the tripartite concept of load is at least mentioned, but still not analyzed via distinct outcome measures in the respective studies.

However, very often the authors refer to another differentiation which could be partly aligned with the classical categories, namely, the appraisal of effort in a data-driven or goal-driven way (Koriat 2018). This also matches the above-mentioned requirement versus engagement perspective. While a data-driven appraisal would refer to the required effort (and hence to mental load, as defined in accordance with Scheiter et al. this issue), the goal-driven appraisal would refer to the invested effort (and hence to mental effort). A simple change in the wording of an effort rating item from “how much effort was required” to “how much effort learners chose to invest” would make a great difference in what learners actually reflect on and what they use as a cue for monitoring and regulation.

How would these two categories match the classical view of CLT? The goal-driven appraisal could be ascribed to germane processes, as it reflects the engagement of a learner in schema construction. The data-driven appraisal, however, might arise from intrinsic and/or extraneous aspects of the task. As the agency behind data- versus goal-driven appraisals is either external or internal to the learner, I would like to label them as passively experienced versus actively invested (see also Seufert 2018).

That such a differentiation of effort appraisals is actually meaningful for the interplay with self-regulation is demonstrated in many of the papers. That means that the origin of working memory load matters and that learners react in different ways. This can be seen in the changing relations between data- versus goal-driven effort appraisals and monitoring accuracy (Baars et al. this issue; Oyserman et al. 2018). Thus, for future research, it would be fruitful to differentiate effort ratings in order to capture the potential aspects of origin and agency or learners’ beliefs about effort that affect effort ratings. Whether one would use the available instruments to measure cognitive load differentially (Klepsch et al. 2017; Leppink et al. 2013) or whether one would tailor the measure to those aspects that are relevant for the current learning task and its self-regulation affordances should in any case be decided in light of the discussion of Scheiter et al. (this issue). In their review, they invite researchers on cognitive load to learn from research on SRL and its multimethodological measures of self-regulation (Panadero et al. 2016). By combining objective with subjective measures, one could investigate potential biases in effort appraisals (see also Moreno 2010). However, finding an objective measure as a counterpart for validating subjective effort appraisals is a lot more challenging than it is for performance appraisals. Scheiter et al. (this issue) discuss that while for monitoring accuracy learners’ performance is an easily assessable and understandable objective measure for validation, there is no such clear relation for effort appraisals. Objective effort measures like time-on-task or physiological measures might be sensitive to changes, but as overall measures, they still do not provide insight into the origin of cognitive load and are often influenced by many other processes accompanying learning. Thus, evaluating their own validity would be difficult, and consequently also their potential to externally validate subjective measures. Overall, measuring cognitive load reliably, validly, and preferably in a differentiated way is still a challenging issue.

Not only the measurement of cognitive load could profit from a glance toward SRL but also the conceptualization of cognitive load itself. As mirrored in the more human-centered dimension of goal-driven effort appraisals (Scheiter et al. this issue), learners’ motivation plays a crucial but theoretically still neglected role in the investment of effort. If learners are not willing to invest additional effort, they will fail to do so, even if the objective task affordances or their individual regulation skills would allow them to. This link to learners’ motivation is made in several papers (Carpenter et al. this issue; Nückles et al. this issue; van Gog et al. this issue). It could be a beneficial approach to directly ask learners not only to monitor their effort, differentiated into active and passive aspects, but also to rate their current motivational state. This would not only help researchers better understand the complex interplay of these factors but also help learners gain deeper metacognitive or, more precisely, meta-motivational knowledge.

The Linking Factor Between Self-Regulation and Cognitive Load—Learner’s Resources

All the papers contribute to learning more about the mutual relationship between self-regulation and cognitive load. However, there is one aspect which appears in many of the papers but which is not addressed elaborately: the relation between self-regulation and cognitive load highly depends on learners’ resources. For identifying these resources, a broader scope of relevant variables, as is inherent in SRL rather than in CLT, is needed. Not only do cognitive resources influence how learners experience load or apply self-regulation, but so do motivational and affective variables. The INVO-model of individual prerequisites for successful learning (Individuelle Voraussetzungen erfolgreichen Lernens; Hasselhorn and Gold 2006), which builds on the model of Good Information Processing of Pressley, Borkowski, and Schneider (1989), provides an overview of variables relevant for learning. Around the central wheel of effective learning, the model distinguishes three cognitive rack-wheels that gear into each other on the left side and two motivational and affective rack-wheels on the right side. Selective attention and working memory, learning strategies and metacognitive regulation, and domain-specific prior knowledge are the cognitive factors. Motivation and self-concept as well as volition and emotions accompanying learning are the relevant motivational and affective factors. The rack-wheel metaphor illustrates that all these factors are highly interlinked. While the INVO-model states these factors to be crucial for effective learning in general, I want to highlight their relevance as resources for self-regulated learning.

The role of these resources is described in the model of self-regulation as a function of resources and imposed cognitive load (Seufert 2018). The actual amount of learners’ self-regulation thereby depends primarily on the task difficulty, which comprises the affordances of accomplishing the task. In this case, the task comprises both dealing with the problem at hand on the object level and regulating the learning process on the meta-level. Moreover, the task difficulty might not only arise from objective affordances but also from the learners’ decision to engage more or less in the task than needed, because they, for example, enjoy or dislike the task or are particularly interested in it or not. Along with task difficulty, two relevant forces moderate the inverted U-shaped relationship between task difficulty and the required amount of self-regulation (see also Fig. 1). The first is that with increasing difficulty of a task, the available resources decrease. A task is more difficult when learners’ prior knowledge, their strategy skills, their interest, willingness, etc. are low. The second force is the imposed amount of load, which increases with increasing task difficulty.

Fig. 1
figure 1

Self-regulation as a function of resources and imposed cognitive load

Based on the interplay of these two forces, at one end learners might have no need to regulate because the task is very easy: the resources to deal with it are sufficient, and the imposed load is low. At the other end, they might not be able to regulate when tasks are too difficult, i.e., resources are no longer sufficient and the imposed load is very high. Only at medium task difficulty are resources and load balanced so as to allow for effective self-regulation: learners have sufficient resources to deal with the task, and regulation can help to use these resources more effectively. The imposed load is high enough to act as a trigger initiating self-regulation but not yet so high as to hinder it.
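The interplay of these two forces can be made concrete with a toy formalization. Assuming, purely for illustration, that available resources decline linearly and imposed load grows linearly with task difficulty (all quantities on a 0–1 scale), the product of the two forces reproduces the inverted U-shape of Fig. 1; the linear forms and the multiplicative combination are my assumptions, not part of the original model:

```python
def available_resources(difficulty):
    """Resources left for regulation; assumed to decline linearly (0-1 scale)."""
    return max(0.0, 1.0 - difficulty)

def imposed_load(difficulty):
    """Load imposed by the task; assumed to grow linearly (0-1 scale)."""
    return min(1.0, difficulty)

def self_regulation(difficulty):
    """Effective self-regulation requires both a load trigger and spare
    resources; multiplying the two forces yields an inverted U-shape
    over task difficulty."""
    return available_resources(difficulty) * imposed_load(difficulty)
```

Under these assumptions, regulation peaks at medium difficulty (self_regulation(0.5) = 0.25) and vanishes at both extremes, where either the trigger (load) or the capacity (resources) is missing.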

For a research program on self-regulation and cognitive load as it is presented here, it would be illuminative to identify crucial resources and to systematically analyze their moderating role. With such broader perspectives, we might come to a better understanding of why self-regulation does sometimes not succeed and how learners could thus be supported.

Overall, for a deeper understanding of the interplay between self-regulation and cognitive load, the papers presented in this special issue provide an excellent starting point, and the progress made since the first bridging attempts started in 2017 has been remarkable. The papers referring to research questions one and two show that effort can be used as a cue for improving monitoring accuracy and regulation. This view could be broadened toward studies on how to use effort cues for improved strategy use. The papers on research question three provide successful instructional means to improve either self-regulation or load. All the generative tasks, scaffolds, or feedback provide learners with additional information about the difficulty of the task, and the integrated effort ratings direct learners’ attention to monitoring their invested effort. Thus, they can gain more metacognitive knowledge and refine their self-regulation skills. But studies on the effects of interventions on self-regulation should also take into account more elaborated load measures (Bannert 2002).

To complete the picture, learners could profit even more if their effort appraisals were more elaborated and asked more explicitly about the different origins of load. Even if this would make the world more complicated, as van de Pol et al. (this issue) state, it would be worth the effort. Besides asking for effort appraisals, the reflection could be enriched even further by asking for appraisals of learners’ resources, particularly their motivational state. If we could use multiple appraisal measures, subjective and objective ones, cognitive as well as motivational and affective ones, we could learn more about the biases of different measures, but more importantly, we could complete our understanding of the complex interplay between self-regulation, cognitive load, and learners’ resources.