Observing a model that performs the desired actions and behavior has been a successful and well-researched instructional technique for the last 30 years in the field of motor learning (McCullagh et al. 1989; Wetzel et al. 1994; Wulf and Shea 2002). The application of cognitive modeling in learning environments that focus on problem solving and reasoning in a variety of domains is increasingly advocated by modern educational theories (Collins et al. 1989; Jonassen 1999; van Merriënboer 1997). Cognitive modeling concerns covert cognitive processes that have to be explicated in order to become observable to a learner. At the same time, rapid developments in computer and software technology in recent decades have enabled the use of dynamic visualizations, such as animations and video, to illustrate abstract cognitive processes or concepts (Casey 1996; Chee 1995). In addition, developments in computer technology have facilitated the authoring and application of pedagogical agents, that is, computer-based characters that support learners with verbal feedback and guidance in order to engage them in more active learning (Clarebout et al. 2002). We refer to the combined use of animations, textual explanations, and pedagogical agents in cognitive modeling as animated models. These animated models illustrate the solving of, for example, scientific problems (e.g., a problem about gravity), mathematical problems (e.g., probability calculation problems), or search problems (e.g., finding information on the Internet).

The pedagogical agent functions as a social model and guides the learner through the animation. This guidance realizes some of the beneficial effects of cognitive modeling. First, the agent may clarify not only how a problem is solved, but also why a specific method has been chosen. Second, the agent may help learners avoid typical errors by guiding their attention to specific parts of the animation and by providing explanatory text.

For example, in solving a problem in the domain of probability calculation, it is important to know whether it concerns a ‘drawing with or without replacement’. For novices this concept may be rather abstract and difficult to understand. An animation can visualize the concept by showing what is happening, for instance in a situation with mobile phones. Imagine a mobile factory where, on an assembly line, six mobiles, each with a distinct color, are packed in a box. A controller blindly selects two mobiles to check them for deficiencies. The learner has to calculate the probability that the controller draws a yellow and a blue mobile from the box. The animated model may show a box with six mobiles. The first mobile that is drawn can be visibly set aside from the box. As is shown in Fig. 1, the pedagogical agent may move to the drawn mobile and explain that a drawn mobile should not be put back, because you do not want to draw an already checked mobile again. Then the group of remaining mobiles in the box is encircled. The pedagogical agent moves to the box with mobiles and explains that the second mobile will be selected from the remaining mobiles.

Fig. 1 Screen shot of the ‘Checking mobiles’ animated model, which displays and explains why this is a ‘drawing without replacement’
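Worked out, the calculation that the animated model demonstrates (the method of individual events, described under Method) is:

\[
P(\text{yellow and blue}) = \frac{2}{6} \times \frac{1}{5} = \frac{1}{15}
\]

The first factor is 2/6 because either of the two target colors may be drawn first; the second is 1/5 because the one remaining target color must then be drawn from the five mobiles left in the box.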

A potential danger of demonstrating a complex task with animations and textual explanations is that the limited cognitive capacity of learners might become overloaded. A theory that tries to align the structure of information and the way it is presented with human cognitive architecture is cognitive load theory (CLT: Paas et al. 2003, 2004; Sweller 1988, 1999, 2004; Sweller et al. 1998; van Merriënboer and Sweller 2005). Two structures in human cognitive architecture are crucial for the processing of information. Working memory, where all conscious processing of information takes place, has only a limited processing capacity that is by far inadequate for the complexity of information that learners face in modern learning environments. The second structure, long-term memory, is a knowledge base with a virtually unlimited capacity that can serve as added processing capacity by means of schemas. Schemas are cognitive structures in which separate information elements are aggregated into one specialized element that working memory can process as a single element (Paas et al. 2003).

CLT identifies three types of cognitive load. The first type, intrinsic cognitive load, is caused by the complexity of the subject matter and cannot be altered without compromising sophisticated understanding (Paas et al. 2004). The other two types are caused by the way information is presented (Paas et al. 2004). The second type, extraneous or ineffective cognitive load, is imposed on working memory by poorly designed instructional material. The third type, germane or effective cognitive load, is imposed when information is presented in such a way that learning is enhanced, that is, when it facilitates the construction and/or automation of cognitive schemas; examples of such activities are elaborating, abstracting, and inferring. The three types of cognitive load are not isolated but act as additive components: their combined load cannot exceed the available cognitive capacity and, consequently, a high load on one component comes at the cost of the others. From an instructional design point of view, extraneous and germane cognitive load in particular should be considered communicating vessels, as the reduction of extraneous cognitive load can free cognitive resources for an increase in germane cognitive load (Paas et al. 2003). An important objective of CLT is therefore to prevent learners from spending mental effort on activities that do not contribute to learning (i.e., to decrease ineffective cognitive load) and to promote mental effort on activities that do contribute to learning (i.e., to increase effective cognitive load).
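Schematically, this additivity assumption can be rendered as follows (our illustrative notation; the CLT literature does not prescribe a formal expression):

\[
L_{\text{intrinsic}} + L_{\text{extraneous}} + L_{\text{germane}} \le C_{\text{WM}}
\]

where \(C_{\text{WM}}\) denotes the limited processing capacity of working memory. Because the sum is bounded, any reduction of \(L_{\text{extraneous}}\) frees capacity that can be reallocated to \(L_{\text{germane}}\).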
The central question in this study is how learners can be stimulated to invest effort in their learning, that is, to increase germane cognitive load. We investigate this question from two perspectives. The first departs from the observation that cognitive load theorists increasingly emphasize the motivational aspects of learning (Gerjets and Scheiter 2003; Paas et al. 2003; Paas et al. 2005; van Merriënboer and Sweller 2005) and that motivation is assumed to be a major contributor to the willingness of learners to engage in genuine learning activities (van Merriënboer and Ayres 2005). However, for learning to commence, instructional strategies have to be used that effectively guide the learner’s investment of mental effort and take account of the learner’s limited cognitive capacity. This is the focus of the second perspective, which builds on the assumption that learners have to be stimulated to engage in some kind of active processing of the learning material in order to understand it (Chi et al. 1989; Mayer 2001; Wittrock 1974).

A theoretically promising instructional technique to increase the motivation of learners is to allow them to control the learning process (Kinzie 1990). However, reviews focusing on several dimensions of learner control are not conclusive with respect to its benefits (Kay 2001; Lin and Hsieh 2001; Niemiec et al. 1996; Williams 1996). In a review, Skinner (1996) classified the multiplicity of control constructs. One of the most fundamental distinctions is that between actual control (i.e., the objective control conditions in the context) and perceived control (i.e., beliefs about the amount of control that is available). The term illusion of control is sometimes used when people have high perceived control in situations that are objectively uncontrollable (Skinner 1996; Langer 1975). This raises the question of what occurs when learners believe that they have more control over the learning environment than they can actually exert.

Cognitive Dissonance Theory (Festinger 1957) argues that individuals seek consistency among their cognitions (i.e., beliefs, opinions, and observations) and that dissonance occurs in the case of an inconsistency between these cognitions. For example, participants in a study on the effects of control and effort on the cardiovascular and endocrine systems were notified beforehand that they could exert control over the intensity of noise during task performance. In reality, however, only the participants in the control conditions were allowed to actually exert this control over the noise. Participants who could not control the intensity of the noise, although they were told they could, experienced higher levels of stress, indicated by higher activation of the sympathetic nervous system (e.g., higher blood pressure), which is associated with stressors (Peters et al. 1998). In addition, some evidence exists that cognitive dissonance may undermine task performance (Elliot and Devine 1994; Pallak and Pittman 1972). Perceived control can be regarded as a cognition in the sense that it is a belief of the learner, and the actual or objective level of control can likewise be considered a cognition, since it is an observation made by the learner. Under this assumption, an inconsistency, and thus a negative effect on learning, may occur when learners expect more control than they can actually employ. Additional support comes from the locus of control literature, which is concerned with the question whether success or failure of activities can be attributed to oneself (internal locus of control) or to other factors (external locus of control) (Kinzie 1990). In general, an internal locus of control is believed to increase intrinsic motivation, which is associated with better performance (Fazey and Fazey 2001). In order to develop a sense of internal locus of control, it is argued that the learning environment must be obviously responsive to learner choices: individuals must perceive that relevant changes in the instruction result from their decisions (Lepper 1985). With an illusion of control the learning environment is by definition not responsive to learner choices, and no sense of internal locus of control may develop. Consequently, it can be argued that an illusion of control may hamper learning.

Research has shown that novices benefit from instructional methods that have learners study worked-out solutions of problems (see van Merriënboer and Sweller 2005 for a review). Despite these benefits there are also some disadvantages. First, the passivity inherent in merely studying worked-out solutions may undermine the motivation of learners. Second, the method may result in learning only stereotypical solutions that are not applicable to problems that differ from the ones learned during training (Sweller et al. 1998). Finally, once a learner understands the rationale behind the worked-out solutions, the presentation of this information in yet more worked-out solutions becomes redundant and the cognitive load turns from germane into extraneous (Renkl and Atkinson 2003). At the same time, evidence is accumulating that active processing of learning material facilitates learning. In this respect the generation of self-explanations, in which learners try to explain the rationale of a problem solution to themselves, has proven to be an effective instructional method (Chi et al. 1989; Renkl 1997; Renkl and Atkinson 2002; Roy and Chi 2005). Also the provision of example-practice pairs, in which learners first study a worked-out solution and subsequently try to solve a similar problem themselves, has proved to be an effective way to introduce problem-solving elements (Reisslein et al. 2006; Sweller and Cooper 1985; Trafton and Reiser 1993). For novices, however, engaging in new problems after studying an example may impose such a high cognitive load that negative learning effects occur. Therefore, the completion strategy has been proposed, in which a problem is only partly solved and the learner has to complete the solution. An alternative to the completion strategy, one in which self-explanations and worked-out solutions are likewise combined, is the conjunction of first studying a worked-out solution and subsequently solving the same problem. During the study stage of this study–practice sequence, learners can process information that would be difficult to attend to if they started by practicing the problem-solving skill, because practice itself places a high demand on cognitive resources.

During the study stage the learners construct a preliminary schema. Subsequently, this schema can be further refined with the information obtained during the practice stage, when the learners perform the skill themselves (Shea et al. 2000; Weeks and Anderson 2000; Wulf and Shea 2002). The alternation between first studying and then problem solving enriches the schema and also helps learners to integrate newly learned information with prior knowledge, which yields a more integrated knowledge base with increased accessibility, better recall, and higher transfer of learning.

In learning from animated models in the domain of probability calculation, we hypothesize that task performance will be enhanced for learners whose expectations regarding control match the control that they can actually employ, compared to learners whose expectations do not match the actual control. Moreover, we hypothesize that the alternation of studying an animated model and then solving the same problem will result in more elaborated schemas than arrangements in which learners only study animated models or first solve the problem and then study the problem solution in the animated model. In a factorial design with the factors Illusion of control (No vs. Yes) and Instructional method (Studying–Practicing, Practicing–Studying, Studying–Studying), we predict that learners in the conditions with no illusion of control will attain higher transfer performance than learners in the illusion of control conditions. Furthermore, we predict that learners in the Studying–Practicing condition will outperform learners in the Practicing–Studying and Studying–Studying conditions on transfer performance.

Method

Participants

Participants were 90 pupils in pre-university education in the Netherlands (51 females and 39 males). Their mean age was 15.7 years (SD = 0.72). The participants were paid 10 euros for their participation and were randomly assigned to one of the six conditions, resulting in 15 participants per condition.

Computer-based learning environment

The computer-based learning environment was developed with Flash MX and consisted of three parts: a prior knowledge test, an instructional component, and an assessment component. All parts were self-paced, that is, the participants could decide how much time they spent on each part. The experiment started with a prior knowledge test of 8 open questions and 4 multiple-choice questions of varying difficulty. An example of such an open question is:

‘You are playing a game with some friends and it is your turn to throw a die. If you throw a six you win. What is the probability that you throw a six?’

An example of a multiple-choice question is:

‘You have a deck of cards from which you select 4 cards. You want to get an ace, king, queen and jack in this specific order. Does it matter whether you put back the selected cards before each new selection or not?

a. Yes, your chances increase when you put back the selected cards
b. Yes, your chances decrease when you put back the selected cards
c. No, your chances remain the same whether you put back the selected cards or not
d. This depends on the number of jokers in the deck of cards’

The level of prior knowledge was measured to serve as a covariate in case differences in prior knowledge occurred (although participants were randomly assigned to one of the six conditions).

The instructional component consisted of an introduction to probability calculation and the experimental treatment. The introduction comprised a brief explanation of concepts in probability calculation, such as randomization, individual events, complex events, and how counting can be used in calculating probabilities. After this introduction, which was identical for all six groups, participants received condition-specific information about the learning environment. With a continue button the participant could start the experimental treatment, in which eight problems in probability calculation had to be solved. The problems were grouped in four categories resulting from two important characteristics in probability calculation: the order of drawing (relevant vs. irrelevant) and replacement (drawing with vs. without replacement). For each category two problems were presented, to enable learners to recognize structural similarities and dissimilarities between problems and thus learn not only how to solve problems but also when to apply which procedure. An example of such a problem is:

‘In a factory mobile phones are produced. On a production line the mobiles receive a cover in one of six colors before they are packed in a box. Each box contains six mobiles in the colors red, black, blue, yellow, green, and pink. Before a box leaves the factory, two mobiles are selected randomly and checked for deficiencies. What is the probability that you select the yellow and the blue mobile from one box?’

The factor Illusion of control was operationalized in the following way. Besides some condition-specific information, all participants received the following instruction:

‘You will see a screen with 8 buttons. Each button refers to a problem in probability calculation. TAKE CARE! Although some problems look similar, they are really different. You have to select each button (and thus each problem), but you are free to select the order. Buttons that you have selected will be disabled. In the upper right corner of the screen with the buttons is a list in which the problems that you have selected are colored in red.’

The participants were thus notified that all the buttons had to be selected, but that they were free to choose the order. This information was given to generate expectations about the control they could exert over the order in which they selected problems during the instruction. In the illusion of control conditions, a mismatch between expected and actual control was realized with two manipulations. First, as shown in Fig. 2, the buttons had meaningless names, so that participants did not in fact know which problem they selected. Second, the learning environment was adjusted in such a way that the problems were always presented in the same fixed order as listed in the upper right corner.

Fig. 2 The screen with buttons from which learners could select problems in the illusion of control conditions

So, whether a learner first pressed the button with the caption ‘Problem 7’ or the button with the caption ‘Problem 4’, the learning environment would start with the ‘Mountain bike ride 1’ problem. Regardless of which button the learner pressed next, the ‘Footrace’ problem would be presented second. Although learners in these conditions expected control over the problem selection, they gradually became aware that there was none.

In the conditions with no illusion of control these manipulations were not implemented. As shown in Fig. 3, the buttons had meaningful names, and participants had the control that they expected. First, they had information about the problems they could select. Second, the learning environment responded to the selection of the learner: when a learner selected the ‘Pin code’ problem, that problem was presented. Learners in these conditions expected control over the problem selection and could employ this control during the experiment.

Fig. 3 The screen with buttons from which learners could select problems in the no illusion of control conditions

The animated models in all conditions were continuous and learner-paced, that is, learners could use a pause and a play button. Each animated model lasted 120 s. The problem-solving process in each animated model was complemented with supportive written explanations by a pedagogical agent, which was implemented as a dolphin. The animated pedagogical agent moved across the screen to focus the learners’ attention while explaining and demonstrating one of two possible problem-solving methods. The method of individual events was applied in four animated models: first, the probability of each individual event is calculated separately and, subsequently, the probability of the complex event is calculated by multiplying the individual probabilities. For example, in the ‘Checking mobiles’ problem first the probabilities of selecting the yellow and the blue mobile were calculated (2/6 and 1/5, respectively) and these two probabilities were then multiplied to obtain the probability of the complex event. The method of counting was applied in the other four animated models: the number of correct combinations is set against the number of all possible combinations. For example, suppose someone calculates the probability of guessing a PIN code consisting of 4 digits. For each digit 10 different values (0 up to and including 9) can be chosen, so for 4 digits there are 10 × 10 × 10 × 10 = 10,000 possible combinations, of which only one is correct. In the animated models the pedagogical agent explicated the considerations that underlie the choice of one of the two methods.
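In formula form, the counting example amounts to:

\[
P(\text{correct PIN}) = \frac{1}{10^{4}} = \frac{1}{10{,}000}
\]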

The learning environment was configured in such a way that it could run in six modes, reflecting the six conditions. In the two study–study conditions the participants observed an animated model in which a problem was solved twice in succession. In the two study–practice conditions learners observed an animated model in which a problem was solved once. Subsequently, the description of the same problem appeared on the screen with a text box below it in which they could solve the problem. Learners were required to spend a minimum of 120 s solving the problem; when they tried to continue before the 120 s had passed, a message appeared suggesting that they look again at the solution they had given. In the two practice–study conditions learners first received a description of the problem on the screen with a text box in which they could solve the problem. The same time constraint and message applied as in the study–practice conditions. After pressing a continue button an animated model was started, showing the solution of the problem they had just attempted.

The three conditions with no illusion of control (no illusion of control/study–study, no illusion of control/study–practice, no illusion of control/practice–study) were identical to the three conditions with illusion of control (respectively, illusion of control/study–study, illusion of control/study–practice, illusion of control/practice–study), with the exception that learners could determine the order of the problems they engaged in. In all conditions, after each problem, the participants were asked to rate the mental effort they had invested in the instructional activity on a one-item 9-point rating scale based on Paas (1992; see also Paas et al. 2003), ranging from ‘very, very little effort’ to ‘very, very much effort’.

After the instructional component with the eight problems, an assessment component followed, consisting of twelve transfer tasks. Eight of these were near transfer tasks, analogous to the problems that were solved in the animated models. The following is an example of a near transfer task:

‘In a pop music magazine you see an ad in the FOR SALE section offering a ticket for a spectacular concert of your favorite pop group. Unfortunately, the last 2 digits of the telephone number where you can obtain information about the ticket are no longer readable. You really want the ticket and decide to choose the 2 digits randomly. What is the probability that you dial the correct digits on your first trial?’
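The intended solution presumably applies the counting method described above: with 10 possible values for each of the 2 digits,

\[
P = \frac{1}{10 \times 10} = \frac{1}{100}
\]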

The remaining four items were far transfer items, which differed from the problems solved in the animated models. Consider, for instance, the following far transfer task:

‘In order to determine the final mark for a subject your teacher uses two complementary methods. First, you have to perform a practice task, followed by a test consisting of 8 multiple-choice questions. One out of five possible practice tasks (named A, B, C, D, and E) is randomly assigned to you. You practiced task E a month ago. For the multiple-choice questions your teacher uses a large pool of 100 different questions, from which he randomly selects 8 questions for you. Here, too, you have previously taken a test with 8 questions from this pool. What is the probability that you are assigned practice task E as well as the 8 questions you have had before?’

This far transfer task comprises a problem from a specific problem category, namely order irrelevant and drawing without replacement (the selection of the multiple-choice questions), which has to be combined with one individual event (the assignment of the practice task).
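A possible worked solution multiplies the probabilities of the two individual events, treating the 8 questions as a draw without replacement in which order is irrelevant:

\[
P = \frac{1}{5} \times \frac{1}{\binom{100}{8}} = \frac{1}{5 \times 186{,}087{,}894{,}300} \approx 1.1 \times 10^{-12}
\]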

After each transfer task the participants were asked to rate the mental effort they had invested in solving the task on the same one-item 9-point rating scale based on Paas (1992; see also Paas et al. 2003), ranging from ‘very, very little effort’ to ‘very, very much effort’.

Procedure

The experiment was conducted in one session in the computer rooms of the participating schools. After welcoming the participants, the experimenter gave them a code to log in to the experimental environment. When the participants entered the environment, the purpose of the experiment was explained on the computer screen and an outline of the different parts of the experiment was given. First, the prior knowledge test was administered. The instruction phase started after the prior knowledge test with the brief introduction to probability calculation. After reading the introduction, participants could press a continue button to engage in the experimental treatment. After each problem, participants were asked to rate their invested mental effort. By pressing a button they could then proceed to the selection screen, where they could select the next problem. Following the instruction phase, the transfer test was administered. Participants could use a calculator as well as scrap paper during the transfer test. All input to the calculator was logged and the scrap paper was collected after the experiment. After each transfer item participants were asked to rate their invested mental effort. Finally, the participants were debriefed and thanked for their participation.

Scoring

For each open question of the prior knowledge test a list of correct answers was formulated. Each correct answer was assigned 1 point, otherwise 0 points; computational errors were ignored and no partial credit was awarded. Each correct multiple-choice question also received 1 point, otherwise 0 points, so the maximum score on the prior knowledge test was 12 points. The mental effort scores collected after each problem were summed across all eight problems and divided by 8, resulting in an average mental effort score during instruction ranging from 1 to 9. For each near and far transfer task a list of correct answers was formulated; computational errors were again ignored and no partial credit was awarded. Each near and far transfer item was assigned 1 point when correct and 0 points when incorrect, so the maximum score on the transfer tasks was 12 points. The mental effort scores after solving the transfer tasks were summed and divided by 12, resulting in an average mental effort score on transfer ranging from 1 to 9. Instruction time (in s) was defined as the time that the participants needed for the introduction (the basic theory of probability calculation) and the instruction component (the time spent on the eight problems); the computer logged both the start and end time of the instruction. The time (in s) needed to accomplish the transfer tasks was also logged by the computer.

Results

The dependent variables under investigation were instruction time (s), mental effort during instruction (score 1–9), performance on transfer (score 0–12), mental effort on transfer (score 1–9), and time on transfer tasks (s). For all statistical tests a significance level of 0.05 was applied. Due to a technical failure, the data on mental effort during instruction were only logged for 6 participants in each condition. Effect sizes are expressed in terms of omega squared (ω²). Table 1 shows the mean scores and standard deviations of the dependent variables for all conditions.

Table 1 Mean scores and standard deviations (between brackets) on prior knowledge and the dependent variables for all conditions

We began our analyses by testing the measures that could be used as covariates in further analyses. The mean score on the prior knowledge test was 7.60 (SD = 2.62), indicating that the participants were not novices in the domain (the maximum score was 12). An ANOVA with the factors Illusion of control and Instructional method revealed neither main effects (Illusion of control: F(1, 84) < 1, ns; Instructional method: F(2, 84) < 1, ns) nor an interaction effect (F(2, 84) < 1, ns). For instruction time, the ANOVA also revealed neither main effects (Illusion of control: F(1, 84) < 1, ns; Instructional method: F(2, 84) < 1, ns) nor an interaction effect (F(2, 84) = 1.88, MSE = 84,695.79, ns). Next, we tested time on transfer tasks to determine whether it should be used as a covariate in further analyses. No differences were found for Illusion of control (F(1, 84) < 1, ns) or Instructional method (F(2, 84) = 1.47, MSE = 149,871.54, ns), and no interaction between Illusion of control and Instructional method was found (F(2, 84) < 1, ns). The correlations between the dependent variables justified the use of separate ANOVAs. Scores were therefore analyzed with 2 × 3 ANOVAs with the factors Illusion of control (No vs. Yes) and Instructional method (Studying–Practicing, Practicing–Studying, Studying–Studying).

With regard to performance on transfer, a main effect of Illusion of control was observed (F(1, 84) = 4.29, MSE = 6.73, p = 0.041, ω² = 0.04). Learners in the conditions without illusion of control performed better on transfer than learners in the conditions with illusion of control (M = 5.46, SD = 2.74 vs. M = 4.32, SD = 2.42). Neither a main effect of Instructional method (F(2, 84) < 1, ns) nor an interaction between Illusion of control and Instructional method (F(2, 84) = 1.03, MSE = 6.73, ns) was found.

Subsequently, mental effort during instruction was tested; no difference was found for either Illusion of control (F(1, 84) < 1, ns) or Instructional method (F(2, 84) < 1, ns), nor was an interaction between Illusion of control and Instructional method observed (F(2, 30) < 1, ns). Finally, mental effort on transfer was tested; no difference was found for either Illusion of control (F < 1, ns) or Instructional method (F(2, 84) = 2.07, MSE = 3.00, ns), nor was an interaction between Illusion of control and Instructional method observed (F(2, 84) = 1.21, MSE = 3.00, ns).

Discussion

These results confirm our first hypothesis, which predicted that learners whose expectations regarding control matched the actual control that they could exert would perform better on transfer tasks. In this study all learners were told that they could select the order of the problems. Learners who could indeed select the order (the no illusion of control conditions) showed higher transfer performance than learners who thought that they could select the order but actually received the problems in a fixed order (the illusion of control conditions). From a cognitive load theory point of view it can be argued that the (cognitive) dissonance between the expected and the actual control, combined with the lack of any possibility to resolve this dissonance, may have engaged learners in cognitive activities that did not contribute to learning. For example, learners may have tried to figure out why the control was different from what they had expected. These activities may have imposed extraneous cognitive load on the cognitive system and thus wasted cognitive capacity that could otherwise have been devoted to activities that do contribute to learning. In the conditions without illusion of control this cognitive capacity could be devoted to genuine learning activities. This pattern, however, is not reflected in differences in perceived mental effort between the conditions with and without illusion of control. The mental effort measure that was used did not differentiate between mental effort due to the perceived difficulty of the subject matter, the presentation of the instructional material, or engagement in relevant learning activities. It is therefore possible that the effects of the conditions with and without illusion of control on mental effort neutralized each other: the illusion of control conditions may have imposed rather high extraneous and low germane cognitive load, whereas the no illusion of control conditions may have imposed rather low extraneous and high germane cognitive load.

A possible objection to providing learner control over the problem order is that learners may show different patterns of selecting problems, so that the higher performance may be attributable to these different patterns rather than to a match between expected and actually exerted control. For this reason we further analyzed the patterns of problem selection in the conditions with no illusion of control. Of the 45 participants in these conditions, 32 selected the problems in the order in which they were listed on the selection screen (see Fig. 3), which was the same as the fixed order of the problems in the illusion of control conditions. These learners started with the button at the top, then the button below it, and so on until the button at the bottom of the list. When only the learners who selected the problems in the same order as in the illusion of control conditions were taken into account, the no illusion of control conditions still yielded higher performance on transfer (M = 5.62, SD = 2.67 vs. M = 4.32, SD = 2.42 in the illusion of control conditions).

The results failed to confirm the second hypothesis, which predicted that learners in the Studying–Practicing condition would perform better on transfer than learners in the Studying–Studying and Practicing–Studying conditions. This hypothesis was based on the assumption that the learners were novices. However, as the prior knowledge test indicated, the learners in this experiment already possessed some knowledge in the domain. There is accumulating evidence that the effectiveness of instructional guidelines depends on the level of domain knowledge of learners (Kalyuga 2005; Kalyuga et al. 2003; Reisslein et al. 2006): guidelines that are effective for novices in a domain may prove ineffective or even detrimental when applied to more proficient learners. In this respect the learners in this study may have had sufficient prior knowledge to manage the cognitive load imposed when they first had to practice and then study an animated model.

From a theoretical point of view these results contribute to cognitive load theory. Traditionally, cognitive load theory has focused on instructors rather than learners making instructional decisions. Nevertheless, there are situations, for example when the expertise of learners increases, in which a more prominent role for the learner seems appropriate (Paas et al. 2003). In addition, one of the premises of cognitive load theory is that specific instructional design guidelines target specific types of learning activities; that is, little variety in learning activities is expected, and as a consequence the pattern of extraneous and germane cognitive load is largely determined by the design. However, as Fisher and Ford (1998) have argued, the allocation of effort toward learning activities is also driven by individual motivational processes, such as personal goals and interests. For this reason, Gerjets and Scheiter (2003) have proposed an augmented model of cognitive load theory in which learner goals and processing strategies moderate between the instructional design and the pattern of cognitive load. If learner control is included in this augmented model, the results of this study suggest that learners’ expectations regarding this control should be incorporated as well.

From a practical point of view these results have implications as well. The development of learning environments that respond to the actions and choices of learners can be quite laborious and therefore relatively expensive. Designers of such environments have to consider how they deal with the expectations of the learners who are going to use them.

The findings and conclusions provide directions for future research. In this study the illusion of control was induced by telling learners beforehand that they could control the selection of tasks. Learners in the conditions with illusion of control discovered that they could hardly control the selection of tasks, whereas learners in the conditions without illusion of control could select the problems as they had been told beforehand. However, it is not clear to what extent these learners indeed experienced a mismatch between expected and actually exerted control. For this purpose a valid and reliable instrument that measures the illusion of control needs to be developed. Moreover, an important avenue for future research is to unravel the effect of illusion of control on motivation by using questionnaires for intrinsic and extrinsic motivation and locus of control.

Finally, the results of this study are constrained by the limited scope of the instructional material (i.e., probability calculation, with a focus on procedural knowledge rather than cause-and-effect explanations). Also, the number of participants in each condition was rather low, and replication of the results with larger samples is required. In addition, a specific type of learner (i.e., pupils in pre-university education) was studied, who already had some prior knowledge in the domain of probability calculation. The results are further limited because the assessment took place immediately after the instruction; consequently, nothing can be concluded about long-term effects. Future research is needed to determine whether the results apply to other domains and other types of learners, and whether the effects persist on delayed testing.

To conclude, the results of this study suggest that merely providing learner control in a learning context without considering the expectations of learners regarding the control can have a negative effect on learning.