How to Foster Functional Thinking in Learning Environments Using Computer-Based Simulations or Real Materials

  • Michaela Lichti
  • Jürgen Roth


As students encounter functional relationships in almost every grade, functional thinking is fundamental for students to participate successfully in mathematics education and the sciences. Nevertheless, many students develop misconceptions and face problems when working on functional relationships. Thus, encouraging students’ functional thinking seems to be crucial. This study investigates whether the functional thinking of sixth graders should be fostered in a learning environment using real materials or computer-based simulations (GeoGebra), and whether the two media lead to different effects. A pre-post-test intervention study (N = 282; two experimental groups: materials vs. simulations; control group) was conducted. This article focuses on the two experimental groups. The collected data were analyzed with Item Response Theory: a two-dimensional Rasch model was estimated to determine person ability with respect to functional thinking, and a mixed ANOVA was conducted using plausible values to compare the experimental groups’ gains in functional thinking. Even though both media led to a significant increase in functional thinking, the increase of the simulation group was significantly higher. Thus, the results indicate that fostering functional thinking with simulations is superior to the use of real materials.


Keywords: Functional relationships · Functional thinking · Real materials · Simulations · GeoGebra · Task design

Functional relationships are fundamental for mathematics education. Students are confronted with this topic repeatedly during their schooldays: Preschool students have to learn to recognize number patterns (Blanton and Kaput 2005; NCTM 2000); lower-secondary level students have to be able to make generalizations about geometric patterns (NCTM 2000); students need to discover the differences between linear and nonlinear functions at upper-secondary level and do calculus in detail at high school (NCTM 2000). In order to address all these topics appropriately, students must develop an understanding of functional relationships and therefore be competent in functional thinking (FT, Vollrath 1989). Furthermore, functional relationships are ubiquitous in our everyday life. Consider the relationship between the velocity at which a car is driven and the distance traveled. Another example is the price you have to pay when refueling, which depends on the volume and the cost per gallon. Unfortunately, studies repeatedly reveal that students have problems understanding functional relationships and that many misconceptions exist (Leinhardt et al. 1990; Thompson 1994). Well-known and well-documented errors are the graph-as-picture error (first described in Janvier 1978) and the slope-height confusion (Leinhardt et al. 1990). Given the relevance of FT and these well-documented misconceptions, a basic knowledge and qualitative understanding of FT seem to be equally beneficial for students and teachers. Achieving this knowledge, even before teachers start to cover functional relationships in class, might help to counteract the development of misconceptions. Thus, fostering FT should ideally take place before students are confronted with functional relationships explicitly in class (before grade 7 in Germany). There is much evidence that this can be achieved by creating learning environments based on experiments with real materials or computer-based simulations.
This study therefore investigates (1) whether such environments lead to significant effects on young students’ FT and (2) whether these effects differ regarding the media.

Functional Thinking

FT is understood as “thinking that is typical for the use of functions” (translated from Vollrath 1989, p. 6) and can be described by three main aspects: mapping, covariation, and function as object (Vollrath 1989, pp. 8–15; vom Hofe and Blum 2016, p. 248).

The aspect mapping entails understanding the uniqueness of mapping: Every element x of the domain is mapped to exactly one element y of the range, and the function is treated locally. Similarly, a function is described as an “input-output assignment” (Doorman et al. 2012, p. 1246): One variable is entered, whereas the mapped variable is the outcome. This idea is also presented as action conception (see, e. g., Breidenbach et al. 1992; Dubinsky and Harel 1992). Students at this level of FT see functions as a request to calculate. Comparably, the process of using a formula to generate a dependent variable based on an independent one is termed interiorization (Sfard 1991). The aspect mapping, furthermore, is the substantial component of the Dirichlet-Bourbaki definition of functions (Vinner and Dreyfus 1989), which is mostly used in school. It draws attention to a pointwise view of functions (Monk 1992).

In contrast, the aspect covariation focuses on variation: How does the dependent variable y alter (co-variation) when the independent variable x is varied? In accordance with this, one may refer to the “dynamic process of co-variation” (Doorman et al. 2012, p. 1246). Similarly, the process conception, described as the second stage in the development of FT, refers to the dependent variation of two corresponding sets (Thompson 1994). Looking at functions as processes, this level of development is also called operational conception (Sfard 1991). Furthermore, functions can be defined based on covariation: “a function, covariationally, is a conception of two quantities varying simultaneously […]” (Thompson and Carlson 2017, p. 44).
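The contrast between the mapping and the covariation view can be made concrete with a short computational sketch (a purely illustrative example, not part of the study; the linear function f(x) = 2x + 1 is chosen arbitrarily):

```python
# Illustrative sketch: mapping vs. covariation for the arbitrary function f(x) = 2x + 1.

def f(x):
    """Mapping aspect: each input x is assigned exactly one output y."""
    return 2 * x + 1

# Pointwise (mapping) view: evaluate the function at single points.
points = {x: f(x) for x in range(5)}  # each x is mapped to exactly one y

# Covariation view: how does y change while x changes step by step?
xs = list(range(5))
deltas = [f(b) - f(a) for a, b in zip(xs, xs[1:])]  # constant change for a linear function
```

For a linear function the step-by-step changes are constant; for the nonlinear relationships used later in the study they are not, which is exactly what the covariation view brings out.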

The third aspect function as object requires the students to consider a function as a whole. Therefore, a function needs to be seen as an independent object that can be manipulated (DeMarois and Tall 1999; Dubinsky and Harel 1992). For example, when students recognize that a function can be a process (covariation) and apply an action to this process (Dubinsky and Harel 1992), the function is “encapsulated to become an object” (Breidenbach et al. 1992, p. 250). As an example, functions can be added or subtracted in their symbolic form. This way of dealing with functions as objects is described as manipulating use (vom Hofe 2004, p. 53).

This rather normative description of FT can be underpinned with a much more practical approach. Students’ ability to deal appropriately with functions’ different forms of representation (formula, verbal description, graph, and table, see, e. g., Doorman et al. 2012) is fundamental to the understanding of functional relationships. Students should be able to change between these different forms and convert them into each other (Janvier 1978). When they use different forms of representation, students are almost forced to look at functions from different perspectives. This leads to a deeper understanding of functional relationships. In addition, it is possible to observe whether students are able to change between forms of representation, which offers the possibility of estimating the extent to which students’ FT is developed. The ability to change between forms of representation substantially depends on the three aspects according to Vollrath. For example, a connection between a graph and a table can be established using the aspect mapping. Converting a verbal description into a graphical representation often requires an understanding of covariation: Covariation is described in a text and needs to be converted into the slope of a graph. To convert a formula into a graph, it is necessary to recognize the type of function; the function has to be recognized as a whole.

Regarding young children, many results indicate that they are already able to deal with functional relationships appropriately. They seem to be able to identify, explore, and symbolize functional relationships (Blanton et al. 2007). Furthermore, there are indications that young children compare abstract quantities and thereby develop general relationships (ibid.). Focusing on the different aspects of FT, studies reveal that young children are able to perceive them appropriately (Blanton et al. 2007; Blanton and Kaput 2011; Mason 2008; Warren and Cooper 2005; Warren and Cooper 2006). Accordingly, the NCTM principles and standards (2000, p. 37) state that children in elementary grades should be able to “understand patterns, relations and functions” and “analyze change in various contexts”. Regarding the aspects mapping and covariation, children seem to be able to describe the correspondence and covariance of varying quantities and thus recognize the underlying relationship (Blanton and Kaput 2005). Students’ ability to think about mapping and covariation becomes visible when they work on patterns and generalizations (Warren and Cooper 2005, 2006; Mason 2008). Students can analyze a repeating pattern step by step, using a table to systematically organize the different parts of the pattern. Consequently, they can think about mapping when looking at the table horizontally. Looking at the table vertically, they automatically think about change and thereby about the covariation of the different quantities that are part of the pattern. Accordingly, the table’s structure is identified as very important for making covariance and correspondence visible (Blanton and Kaput 2005). When completing a pattern (Warren and Cooper 2006), students also need to deal with change and covariation. Furthermore, children seem, at least in part, to be able to use symbolic language to model some situations (Blanton and Kaput 2005).
This ability can be seen as an indication of students’ initial ability to reason about functions as objects. As described above, the symbolic form of functional relationships is predestined for thinking about functions as a whole. Students using symbolic language at this early age start to generalize and are able to switch flexibly between different expressions for the same thing (Mason 2008). This naturally gives rise to the need to manipulate these expressions (ibid.), which can be seen as an early manipulating use of functions and is part of the function as object aspect.

To conclude, children of early grades are able to think functionally in a mathematical way. Furthermore, it is possible to identify the three aspects according to Vollrath (1989) in their thinking at different levels. Therefore, mapping, covariation, and function as object can initially be identified and probably be fostered fruitfully. As a result, it is suggested that early mathematics education should be extended to include functional thinking explicitly in class (Warren and Cooper 2006). One way may be to make students think about how two or more quantities vary in relation to each other (Blanton and Kaput 2011).

One possibility to make students think about how quantities vary in relation to each other is the activity of performing experiments (Vollrath 1986). The process of performing experiments is very similar to the process covered by a functional relationship. According to the “input-output assignment” (Doorman et al. 2012, p. 1246), a functional relationship can be described as an experiment which starts with a given variable, e. g., a volume of water. Next, the experiment is performed and a new variable, for example the fill height of a glass, is generated. The aspect mapping seems to be crucial here, as the “input-output assignment” becomes visible. After the generation of some new values of the dependent variable, covariation becomes apparent when students realize that the independent as well as the dependent variable changes. While performing the experiment hands-on, the covariation also becomes tangible. Therefore, experiments offer the possibility to facilitate students’ understanding of functional relationships and to foster their FT through a scientific discovery process. This process includes generating hypotheses, testing hypotheses by performing experiments, and reflecting on the results (De Jong 2005; Reid et al. 2003). Besides the use of real materials, studies also investigate and recommend the use of computer-based simulations (Goldstone and Son 2005; Jaakkola et al. 2011).
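The experiment-as-function idea can be sketched computationally. The following is an illustrative sketch only, not part of the study’s materials: it models the filling of a hypothetical cylindrical glass, where the radius is an assumed value.

```python
import math

# Hypothetical cylindrical glass; the radius is an assumed value, not from the study.
RADIUS_CM = 3.0

def fill_height(volume_ml):
    """Input-output assignment: a fill volume (1 ml = 1 cm^3) is mapped
    to exactly one fill height (in cm) -- the mapping aspect."""
    return volume_ml / (math.pi * RADIUS_CM ** 2)

# Repeating the 'experiment' for several input values makes covariation
# apparent: as the volume grows, the fill height grows along with it.
volumes = [50, 100, 150, 200]
heights = [fill_height(v) for v in volumes]
```

Tabulating `volumes` against `heights` corresponds to the table the students produce by measuring, which in turn is the bridge to the graphical representation.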

Real Materials and Computer-Based Simulations

Real materials, in general, offer the possibility of using common everyday objects in the context of mathematical tasks. By generating the values of a new dependent variable hands-on, a connection between the ‘real world’ and the mathematical content may develop and lead to a deeper and more lasting learning effect. An explanation for this effect could be that concrete information can be remembered more easily than abstract information (Goldstone and Son 2005). Real materials, used in an adequate way, make functional relationships tangible (Barzel 2000), and they offer authentic experiences (NSTA 2007). While working independently with these materials, students can experience that many different relationships exist and that diverse, everyday variables are part of these relationships. Furthermore, students can realize that they are able to influence the relationships by manipulating the variables. The consciously generated relation to students’ everyday life is a positive aspect of the use of real materials (Kennedy et al. 1994), as it emphasizes the relevance of mathematics. Since students often do not see the relevance of mathematics and its connection to their everyday life (Mitchell 1993), this is a crucial aspect. In addition, using concrete materials, such as geoboards that can be manipulated hands-on, seems to be appropriate for teaching abstract concepts (Moch 2001). Further advantages of real materials concern motivation (Goldstone and Son 2005), which is triggered by this hands-on approach to functional relationships. As a consequence, students can recall mathematical procedures used in such settings very well (Vollrath 1978). In addition, they often refer to contents that they covered during their work with real materials on a later occasion (Ganter 2013). In line with these benefits, the study of Ganter (2013) reveals that real materials have a greater impact (η² = .235) on FT than videos.

Computer-based simulations also have inherent advantages. In the context of this study, they need to be understood as animated situations that can be manipulated dynamically and that are part of a Dynamic Mathematics System (DMS, see Fig. 1). Such simulations can be seen as an alternative to real materials by becoming virtual manipulatives (Moyer et al. 2002), and they bring their own advantages for the purpose of performing experiments.
Fig. 1

Animated situation of filling a vessel connected with its graphical form of representation. Graph as well as situation can be manipulated using the different buttons

To begin with, the variety offered by a DMS such as GeoGebra invites students to explore functions dynamically (Elschenbroich 2011). Students can use the simulation to become familiar with the graphical representation of a function, focus on single points, or directly start to manipulate variables and observe the consequences. Different forms of representation are combined at the interface of the DMS, and students can consider the graph, table, and formula of a function at the same time and therefore connect them. Such a system is called a multi-representation system (Balacheff and Kaput 1997, p. 471). It is possible to integrate a real situation as a simulated situation into this system (see Fig. 1). Thus, the connection between the situation and the other forms of representation also becomes visible. Changes that depend on manipulation and systematic variation (Roth 2008) can be observed in all representation forms simultaneously. In this way, covariation becomes perceptible. These opportunities offered by simulations integrated into a DMS turn simulations into a mediator between students and mathematical phenomena (Hoyles and Noss 2003). Therefore, computer simulations as a tool (Doorman et al. 2012) for imparting mathematical knowledge are of significant importance, because “tools matter: they stand between the user and the phenomenon to be modeled, and shape activity structures” (Hoyles and Noss 2003, p. 341). Accordingly, the study of Jaakkola et al. (2011) indicates that the use of simulations in the context of electric circuits has a significant impact on students’ knowledge (d = .78). Furthermore, they found that the combination of simulations and real materials also leads to such an effect (d = 1.14).

Regarding the benefits of both media and the results of the reported studies, it seems to be necessary to test both media against each other in the context of experiments to foster FT. An effect of real materials as well as simulations may be expected. The combination of both media would be a next step, which should be investigated as soon as we know how they individually influence FT.

To link the mathematical concept of FT, the setting of performing experiments, and the media real materials and computer-based simulations, the instrumental approach (Rabardel 2002) is used.

The Instrumental Approach

The instrumental approach is based on the ideas of Vygotsky (1930/1985). He describes problem-solving and the mental processes involved as an instrumental act (ibid., p. 313). This instrumental act depends on both material and cognitive instruments, which have a meaningful influence on the mental processes of problem-solving. Verillon and Rabardel (1995) and Rabardel (2002) distinguish between artifacts and instruments (Verillon and Rabardel 1995, p. 84 et seq.). An artifact is an arbitrary object until students know which tasks they have to solve by using it and how they can use it in the context of the task. When the artifact thus enters into an important relationship with the student working on a task while applying appropriate mental schemes, the artifact becomes an instrument (Drijvers and Gravemeijer 2005). Therefore, this transformation depends on three aspects: the artifact, the task, and the use of existing schemes that refer to the usage of the artifact and to the mathematical concepts that should be applied. The mental schemes also develop further while they are applied in the process of transforming an artifact into an instrument. Drijvers and Gravemeijer (2005, p. 166) therefore conclude that “the instrument involves both the artifact and the mental schemes developed for a given class of tasks”. Furthermore, the artifact influences the mental schemes that are applied to use it and solve a task; this is called the instrumentation process (Rabardel 2002, p. 103). Conversely, the applied mental schemes influence how the artifact is used; this is called the instrumentalization process (ibid.). The instrumentation process and the instrumentalization process are aggregated into the instrumental genesis (Rabardel 2002, p. 101), which covers the process of an artifact becoming an instrument.

Artifacts do not have to be real materials. A DMS such as GeoGebra and every single window or part of the used representation can also be seen as an artifact. Therefore, this medium can become an instrument, too (e. g., Artigue 1997; Lagrange 2000).

Using this theoretical framework, we assume that real materials and computer-based simulations can be used to foster the FT of young children regarding the three aspects according to Vollrath (see Fig. 2). The content and design of the media and the tasks were influenced by performing experiments.
Fig. 2

Model on the fostering of FT based on experiments with computer-based simulations and real materials according to the instrumental genesis (compare Lichti 2019, p. 46)

Research Questions

Based on the relevance of FT and according to the presented theory, this study seeks to address the following questions with a quantitative empirical approach: (1) Do learning environments based on experiments with real materials or computer-based simulations (GeoGebra) used to foster the FT of sixth graders lead to significant effects on their FT? (2) Do learning environments based on experiments with real materials lead to a significantly different effect on FT than experiments with simulations?


Design, Sample, and Implementation

We performed an intervention study with pre-post-test-control-group design with two experimental groups (EG1: real materials and EG2: simulations) and one control group (CG). The sample comprised N = 282 students of grade 6 (MAge = 11.80, SD = .57, 115 girls, 167 boys) of 13 classes from 5 schools. Schools participated voluntarily. The study was implemented in July 2016, three weeks before the school year ended and before functional relationships were covered in mathematics education explicitly. We assumed that misconceptions concerning FT were not consolidated at this point of time.

The EGs consisted of the students of 11 classes from 4 public schools (N = 234). The students of each class were assigned to EG1 (N = 111) and EG2 (N = 123) in a random manner to avoid effects of class or teacher. The sample for the CG consisted of 2 classes from one private school with a classical humanistic focus (N = 48). It was recruited separately from the EGs for organizational reasons. Selecting the EGs and the CG separately influenced data analysis (see Methods section) and caused limitations regarding the interpretation of the results of our study. The limitations are discussed at the end of the article.

Sequence of Intervention

First, all students completed a test to measure their FT (40 min). This test was designed especially for this purpose (see Instruments Section). After one week, the students of the EGs took part in an intervention (180 min, 4 sequential lessons) to foster their FT. Students in both EGs worked individually on equivalent tasks. Therefore, every student received their own laptop or a box with real materials. No pair or group work was intended; furthermore, there was no support provided by a teacher. Instead, the students could use aid cards. The students were led through the different experiments only by reading and solving the tasks (see Tasks Section). The order in which to solve the tasks was fixed; nevertheless, students could work at their own pace. Based on a pilot study, we assumed that most of the students could finish all the tasks in 180 min. Nevertheless, the time students worked on the tasks was measured. Directly after finishing the experiments, the students took part in the post-test (fifth lesson, 40 min).

The CG had no intervention between the pre- and post-test, but regular mathematics lessons. Functional relationships were not part of its lessons. They also did the post-test one week after the pre-test.

Intervention Design

When designing the intervention, we had to consider the types of media, the tasks leading through the experiments, the theoretical background in terms of FT, and the instrumental approach.

At first, we chose four contexts that involved functional relationships and could be used to perform experiments: (1) rolling circles, the relationship between the diameter and circumference of a circle; (2) building cubes, the relationship between the edge length of a big cube, given by the number of little cubes it consists of, and the number of little cubes the big cube is built of; (3) filling vessels, the relationship between the fill volume and the fill height of a vessel; (4) sharpening pencils, the relationship between the number of rotations while sharpening a pencil and its remaining length (for details see Contexts Section and online Resource 1, ESM 1). These contexts could be presented both with real materials and with computer-based simulations. They covered different functional relationships and did not focus on linear functions only (de Beer et al. 2015). In addition, these contexts offered the opportunity to require the same actions and procedures from the students using both media, which made the EGs comparable with regard to this condition. Furthermore, they led to experiments that were practicable for students of grade 6. Especially the choice of real materials needed to be considered carefully. For example, it would not have been feasible for 30 students aged 11–12 years to experiment with burning candles at the same time. The complexity of the simulations also had to be appropriate for the students’ age and experience, as students should be able to use them intuitively. After choosing the contexts, we designed identical or at least equivalent tasks that should guide students through the experiments. The tasks had to focus on the three aspects of FT according to Vollrath (1989) by using all forms of representation except the syntactical form of representation, as students of grade 6 are not familiar with it. To test the experiments based on the chosen contexts and the tasks, we conducted a pilot study.

The requirement of creating equivalent or even identical learning environments could not be fulfilled in every case, due to the obvious differences between the media and their inherent advantages, which we used and kept intentionally. Only if the inherent advantages of the media, and consequently these differences, were preserved could we draw conclusions about the actual use of these media in school, since the media are used in school precisely because of their different advantages. The ecological validity seemed to be more important than the methodologically preferable leveling of the settings.

The main differences occurred in the use of the media and in the work with graphs. This led to differences regarding the expected instrumental genesis. Depending on the medium, students performed the experiments differently: The material group performed all experiments hands-on, while the simulation group performed them virtually. To make the experiments comparable, students had to carry out the same actions, whether real or virtual. For example, the material group measured a fill height by putting a ruler into a glass full of water; the simulation group likewise had to put a virtual ruler into a virtual glass of water to read off the fill height. Although the tasks required the students to carry out the same actions with the different media, the processes and schemes invoked by the students during the instrumental genesis differed. Consider, for example, the task in which it is necessary to figure out how many little cubes are needed to build a big cube with an edge length of three little cubes (context: building cubes, see Contexts Section). The students of the material group used little wooden cubes to build one big cube physically. They had to refer to schemes like: What are the characteristics of a cube? Do I have to count the cubes, or can I do a calculation? Next, they had to concentrate on performing the experiment. Writing down their result in a table, they could realize the mapping of 3 to 27 cubes. Doing this experiment virtually, the simulation group first had to start the simulation by clicking a start button. They then had to observe how the big cube was assembled step by step. To solve the task, they had to invoke schemes like: How can I use the buttons of the simulation to start and stop? Do I have to count the little cubes? Can I do a calculation? Furthermore, these students had to understand what was presented in the simulation. They needed to mentally capture the 3D presentation of little cubes building a big one. Having figured out the result, they next had to write it down in a table to realize that 3 is mapped to 27. As in the material group, the understanding of mapping was fostered. So, we assumed that both groups could develop an understanding of mapping, albeit via differing processes and schemes.

As a consequence, presuming that students would foster their FT in both the real and the virtual setting, we could compare their increase in FT after the intervention. The theoretically expected different processes taking place during the fostering offered the possibility to explain potential differences in the increase in FT between the EGs.

Looking at these different processes in detail, there were indications of assets and drawbacks of both media. The material group could touch the cubes; these students did not have to capture a 3D simulation. In contrast, they were occupied with assembling the little cubes, whereas the simulation group could observe this process and concentrate directly on the number of little cubes. However, the simulation group first had to understand how to use the simulation and how to turn the cube around. Regarding further contexts, the simulation group also worked on abstract simulated situations. On the one hand, this could be beneficial, because the simulations were reduced to the most important facts. On the other hand, it could be adverse, because the connection with the real situation was perhaps not as obvious.

Furthermore, we assumed differences between the EGs regarding the use of the graphical representation. These differences were also caused by the way the media were used. The material group was instructed to sketch graphs based on measured data on their own. They could use aid cards if they were unsure of how to do this. Furthermore, by integrating solutions into the additional tasks or the aid cards, we ensured that students worked with correct graphs even if their measured values were inaccurate. We assumed that students who measured the values assigned to each other and drew these values as single points into a coordinate system would connect the underlying situation and its graphical representation more lastingly and concretely. In contrast, after generating single values by measuring in the simulation, the simulation group used the multi-representation system of GeoGebra to observe how single points and, in turn, the graph emerged in the coordinate system. The connection between the simulated situation and its graph was illustrated with connecting lines and color. As a result, students should recognize the connection of the situation to the graph and generate an understanding of the presented relation.

So, again we assumed that both groups could capture the relation of measured data and points in a coordinate system, but again via different processes. This different use of graphical representations could in fact lead to a different impact on students’ ability to deal with graphs: The material group would perhaps learn more about how to create a graph, the simulation group possibly more about using a given graph. To mitigate this effect, the designed tasks included further interpretation of graphs and sketching of graphs in both groups. This means that both groups had to solve tasks, without using the media, that on the one hand requested them to interpret a given graph and on the other hand required them to draw graphs based on a given situation.

The Contexts

Based on the requirements set out above, we will now present the context building cubes in detail regarding the real materials and the computer-based simulations that were used. The other three contexts are presented as part of the online supplementary material (online Resource 1, ESM 1). The simulations are available online in English and in German.

Context No. 2: Building Cubes

The context no. 2 building cubes focused on a big cube assembled from little cubes. It involved exploring the relationship between the edge length of the big cube, determined by the number of little cubes it consists of, and the number of little cubes the big cube is built of. The students were thus exposed to a cubic and discrete relationship. Comparing this relationship to the context rolling circles (a linear and continuous relationship), which the students had worked on before, allowed them to comprehend that functional relationships do not have to be linear and may differ regarding their domain.
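The underlying relationship can be sketched in a few lines of code (an illustrative sketch, not part of the study’s materials; the function name is ours):

```python
def little_cubes_needed(edge_length):
    """Maps the edge length (in unity cubes) to the number of little cubes in the big cube."""
    if edge_length != int(edge_length) or edge_length < 1:
        # Discrete domain: a big cube can only have a whole number of unity cubes per edge.
        raise ValueError("only whole numbers of unity cubes are possible")
    return int(edge_length) ** 3

# Mapping view: an edge of 3 unity cubes requires 27 little cubes.
assert little_cubes_needed(3) == 27

# Covariation view: the increase differs with every step (the relationship is
# cubic), unlike the rolling-circles context, where the increase is constant.
increases = [little_cubes_needed(n + 1) - little_cubes_needed(n) for n in range(1, 5)]
```

The `ValueError` branch mirrors the observation the tasks aimed at: a circle with a diameter of 2.5 cm exists, but a big cube with an edge length of 2.5 unity cubes cannot be built.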

Real Materials

The students received 125 little wooden cubes with an edge length of two centimeters (see Fig. 3a). The edge length of one little cube was called the length unit; the little cubes were called unity cubes. Students had to figure out how many unity cubes are necessary to assemble a bigger cube with a given edge length in unity cubes. While building bigger cubes out of the unity cubes, students could experience that every bigger cube requires more little cubes. Furthermore, they could find out that the number of additional little cubes needed differs with every step. So, students could learn to differentiate this functional relationship from a linear one (rolling circles, worked on before). By comparing these relationships with each other, the students could find the following difference: it is easy to create a circle with a diameter of 2.5 cm, but building a big cube with an edge length of 2.5 unity cubes is impossible. Based on this finding, students were asked whether connecting the points in the coordinate system would make sense. To support students in focusing on the relevant functional relationship, the tasks gave a very detailed description of the materials and the experiment to be performed. The materials were chosen accordingly: they were considered simple and common, so that they were not distracting.
Fig. 3

Real materials for building cubes: wooden cubes (a); interface of the simulation for building cubes (b) (compare Lichti 2019, p. 153)

Computer-Based Simulation

Using the building cubes simulation (see Fig. 3b), students could figure out the number of little cubes along one edge and the related number of little cubes needed to build the big cube. After starting the simulation (start button), the students observed how little cubes were assembled to form a bigger cube. By dragging the slider, they were able to "build" the cube on their own. Furthermore, they could rotate the resulting cube (turn button) and stop the building process and the turning (stop buttons). It was possible to count the number of little cubes needed at every state. Additionally, students could use the choices cube 1–5; when they selected one of these, the cube they wanted to observe (e.g., edge length = 3) was displayed. The little cubes could not be divided; therefore, a cube with an edge length of 2.5 unity cubes, for example, could not be built. To make students concentrate on the relevant aspects of the simulation, focusing aids (Roth 2008, p. 30) were used: to highlight the edge of the big cube, the unity cubes forming the front left edge were displayed in red. By opening the second graphic window (graphic) and starting the simulation again, students could watch the appropriate points being displayed in the coordinate system. By applying consistent colors and displaying the coordinates, the situation and the graphical representation were connected.

Progression of Tasks

To foster students' FT, the main goal of the tasks leading students through the different experiments was to make them discover that different quantities can depend on each other and that the relationships covered also differ from each other (e.g., linear vs. cubic). Therefore, students should encounter graphs, tables, and verbal descriptions of functional relationships, work with these different forms of representation, interpret them, and change between them. Thereby, they should learn about mapping (rows of a table, points), covariation (columns of a table, slope of a graph), and function as object (changing between different forms of representation; table as well as graph can display a situation as a whole).

The tasks for every context were introduced with a brief description of the context and a summary of the required activities. Where appropriate, students were asked to make an estimate regarding the experiment beforehand. This replaced generating hypotheses, as students could not be asked to formulate expedient hypotheses on their own due to their age, the fact that they had to work on their own, and the limited time. This was followed by the typical steps of an experiment (see, e.g., De Jong 2005), through which students were guided by the tasks they worked on. Looking at the context building cubes, for example, students were first instructed to estimate the number of little cubes needed to build a big one depending on a given edge length (tasks for Estimating). Next, they had to figure out the related values by performing experiments with real materials or simulations (tasks for Performing experiments). Both groups collected the identified pairs of values in a table. Subsequently, students drew the graph (material group) or observed the graph emerging (simulation group). They always had to think about the mapped quantities, the meaning of the points, and the correctness of connecting single points with each other (tasks for Understanding and connecting points). The next step was to work on tasks that required them to use their newly acquired knowledge (tasks for Application); for example, they had to determine depending values that were not part of the table or the graph. Finally, tasks followed that required them to perform a transfer (tasks for Transfer). All contexts followed this progression. As students working on the first context knew very little about functional relationships and became familiar with them by working on different contexts, contexts 1 and 2 focused on understanding the functional relationship, the meaning of the points, and the graphs.
Contexts 3 and 4 required students to concentrate on applying the new knowledge, performing transfer tasks, and interpreting results.

The order of the contexts was intentional: linear relationship, cubic relationship with discrete domain, undefined relationship considering speed (slope), and linear relationship for repetition. Therefore, students had to work on the tasks in a prescribed sequence.

To give an insight into the various types of tasks, we will now present two tasks from the already known context building cubes. Based on these tasks we will explain how the tasks of the material group differed from the tasks of the simulation group and in which cases they were identical. Examples of further tasks are part of online Resource 2 (ESM 2).

Task Examples

Example Building Cubes

Performing Experiments

The tasks 3.2 and 3.3 displayed in Table 1 differ, first of all, in length and complexity, as the simulation group first needed an explanation of how to use the simulation. While students of the simulation group used their time to read and understand the explanation and become familiar with the simulation, the material group had to prepare the cubes and assemble them. In the end, both groups were supposed to write down the values generated during the experimental phase.
Table 1

Example of tasks for performing experiments based on the context building cubes (compare Lichti 2019, p. 168)

Material Group

Material: many little cubes

3.2 Use the little cubes to build a big cube with an edge length of 1, 2, 3, 4, and 5 little cubes. Note in your table how many little cubes are needed for each cube, depending on its edge length.

Verify your estimation of task 3.1!

3.3 Enter your pairs of values as points into the coordinate system!

Simulation Group

Open simulation 2.

First choose which cube you want to build in the left window. Cube 3 means you want to build a cube with an edge length of 3 little cubes. When you press start, the big cube fills with little cubes.

The new-button resets the simulation. By dragging the slider, you can build the cube on your own. If you press turn the cube turns around.

Just have a try!

3.2 "Build" all big cubes that are possible using the little cubes. Note in the table how many little cubes are needed to build a big cube depending on its edge length in little cubes. Start with one little cube.

Verify your estimation of 3.1!

3.3 Press new. Remove the checkmark from graphic and choose points (right window) instead. Press start again.

Task 3.3 includes an important difference between the groups. On the one hand, the task of the simulation group was formulated in a more complex manner because more aspects had to be considered. On the other hand, the content of the tasks differed: the material group had to draw the graph, whereas the simulation group had to watch the simulation and observe how the graph emerged. Both groups had to work on the results focusing on the aspects mapping and object. After determining the related values, these values had to be assembled into a whole.

Due to the nature of the media, the processes activated by these tasks were different, as explained in the Intervention Section. Nevertheless, the aim to develop an understanding of mapping and function as object could be reached using both. The differences regarding the design and formulation of this task were also caused by the nature of the media and the requirement that students be able to use them on their own, without support by a teacher.

Understanding and Connecting Points

The tasks 3.4–3.7 shown in Fig. 4 were identical in both settings. Students had to think deeply about the meaning of points and the quantities that were mapped.
Fig. 4

Example of tasks for understanding points and connecting them based on the building cubes context (compare Lichti 2019, p. 172)

By identifying the point which they had not drawn or observed in their coordinate system and reflecting on the following question about its meaning, the students were encouraged to think about the numbers of 2.5 and 15.63 little cubes. They were meant to realize that such numbers of cubes were not possible, neither using simulations nor materials. Based on this recognition, they should be able to understand that connecting the points with each other would in this case have involved splitting cubes and reassembling them. In this task the focus was on the aspect mapping. Furthermore, students acquired a first idea of discrete relationships.

Both groups had to solve this task based on a graph that was part of the task; there was no need to use the media. Therefore, at first sight, the processes that took place were the same: students in both groups had to think about the meaning of points and connect them to the situation. To do so, they had to refer to schemes they had established before, when performing the corresponding experiment. The material group would mentally refer to the real cubes, the simulation group to the virtual ones. So again, we found a difference in the processes. Nevertheless, the result and the potentially reached learning goal were the same: connecting the points would be wrong given the nature of the cubes, real as well as simulated. Thus, students' increase in FT could be analyzed regardless of the different ways in which it developed.

To illustrate in more detail what happened during the intervention, Table 2 gives an example of students' written responses generated during the intervention. As one can see, students of both EGs solved task 3.6 "Why does this point (P(2.5|15.63)) not make sense?" (presented in Fig. 4) comparably.
Table 2

Students’ written responses to the task 3.6 understanding and connecting points of the context building cubes (see Fig. 4)

Why does this point (P (2.5|15.63)) not make sense?

Students using real materials

Students using simulations

S1: Because it is a decimal.

S2: Because half cubes do not exist.

S3: Because you cannot intersect cubes, therefore not reasonable.

S4: Because this is also a decimal.

S5: This does not make sense, because 15.63 cubes cannot exist.

S6: Because there are no half cubes.

Annotation: S = Student


The test on FT was developed and validated during a pre-study (Lichti 2019). It was designed to measure the increase of FT in grade 6/7 and was constructed based on items from PISA, TIMSS, and VERA8 that focused on functional relationships. Furthermore, the three aspects of FT were taken into account using an operationalization referring to the aspects, the different forms of representation, and the students' abilities needed to solve appropriate tasks. The test consisted of 43 items, including single- and multiple-choice items (14) as well as open answer formats (29). 10 items were assigned to mapping, 18 items to covariation, and 15 items to the object aspect. There were more items for the aspect covariation because it occurs in many different ways (slope, rate of change, absolute change). Furthermore, some of the items (10) included the contexts known from the intervention. The large number of items resulted from the aim of representing the different aspects of FT in all their forms of appearance. Figures 5 and 6 show two examples of the items used.
Fig. 5

The item hedgehog a) (covariation aspect) being part of the test on FT (see Lichti 2019, p. 91)

Fig. 6

The item way home (object aspect) being part of the test on FT (see Lichti 2019, p. 92)

In the pre-study, the test showed a good EAP-reliability of .77 and appeared to be valid. During the validation it was analyzed whether FT should be treated as a one- or three-dimensional construct regarding the aspects mapping, covariation, and function as object; the results indicate that a one-dimensional structure fits the data best (Lichti 2019). The test in German and English can be requested from the authors. The test was implemented among the students of the intervention study in a multi-matrix design (OECD 2014). In total, four booklets were used (A, A′, B, B′); A and A′ as well as B and B′ consisted of the same items in changed order. Every item was processed by half of the students (N = 141) in the pre- as well as in the post-test. The advantage of this was that not every student had to solve every item, so the test could be processed in 40 min. Pre- and post-test were linked using anchor items (10), which were processed by all students (N = 282). The other 16 (booklet A) and 17 (booklet B) items differed in pre- and post-test.

Evaluation Method

Data analysis was conducted according to Item Response Theory (IRT, see Yen and Fitzpatrick 2006; DeMars 2010). The dichotomous one-dimensional Rasch model was used. In contrast to classical test theory, IRT considers the relationship between the latent trait measured and the items. Accordingly, IRT applies a probability function that depends on item- and person-related parameters (DeMars 2010). The function p(xvi = 1) = f(θv, σi) describes the probability that an item i is answered correctly (xvi = 1) by a person v depending on the person ability θv and the item difficulty σi.
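As a minimal sketch of this response function (illustrative only; the study itself estimated the model with the R package TAM), the dichotomous Rasch probability can be written as:

```python
import math

def rasch_p(theta: float, sigma: float) -> float:
    """Probability that a person with ability theta answers an item of
    difficulty sigma correctly, under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - sigma)))

# If ability equals difficulty, the solution probability is exactly 0.5;
# a person one logit above the item difficulty solves it with p ~ 0.73.
print(rasch_p(0.0, 0.0))            # 0.5
print(round(rasch_p(1.0, 0.0), 2))  # 0.73
```

The logistic form ensures that the probability rises smoothly from 0 to 1 as the gap between ability and difficulty grows.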

The test on FT (43 items) was coded based on the mathematically correct results of the items; an item could be answered correctly (1) or wrongly (0). In the end, we had a set of dichotomous data. For the analysis we used the statistics software R (Comprehensive R Archive Network, CRAN) and the package TAM (Kiefer et al. 2016). We first applied a one-dimensional Rasch model (\( p\left({x}_{vi}=1\right)=\frac{e^{\theta_v-{\sigma}_i}}{1+{e}^{\theta_v-{\sigma}_i}} \)). It assumes that the probability that person v solves item i correctly (xvi = 1) depends on the person ability θv and the item difficulty σi. We used this model to estimate an item difficulty for every item. Next, we applied the virtual persons approach (Fischer and Seliger 1997): we considered the results from the pre- and post-test of one student to be the results of two different persons. This approach is based on the assumption that the difficulty of an item does not change, but the person ability does. Therefore, with a sample size of N = 564 (282 virtual persons) we estimated the difficulties of the 43 items. Before we could use these item difficulties to estimate the person abilities, we had to check whether the data were Rasch scalable. We first verified the fit statistics (Bond and Fox 2015) to ascertain whether the empirical data fitted the model appropriately. Then we examined the selectivity of our items, which had to be at least .25 (Adams and Wu 2002), and controlled for differential item functioning (DIF) regarding the factor gender (see, e.g., Zumbo 1999) to determine whether the items worked differently when processed by boys or girls. Furthermore, we checked for local stochastic independence (Little 2014) between the items. As a result of these analysis steps, two items had to be excluded because of bad fit values (see online Resource 3, ESM 3); their fit values were not within the interval [0.8, 1.2].
After excluding them, we performed a further estimation using the virtual persons to determine the final item difficulties. We fixed these difficulties when applying a two-dimensional Rasch model to estimate the students' person ability in the pre- and post-test. This two-dimensional model separated the data of the pre- and post-test and resulted in comparable person abilities (FT) in the pre- and post-test for every student. We also applied a latent regression model (Robitzsch et al. 2018; called "background model", see, e.g., Hartig et al. 2007) that consisted of the collected covariates. Based on the distributions of the person abilities estimated this way, we drew ten sets of plausible values (Mislevy et al. 1992), because plausible values reflect the uncertainty of the person ability estimated by the Rasch measurement and not only its mean (DeMars 2010).
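The idea behind plausible values can be sketched as follows: instead of a single point estimate per student, several values are drawn from each student's estimated ability distribution. This toy version assumes a simple per-person normal posterior (TAM draws from the full latent-regression posterior, so this is only an illustration of the principle):

```python
import random

def draw_plausible_values(eaps, sds, n_sets=10, seed=42):
    """Draw n_sets of plausible values, one value per person per set,
    from assumed normal posteriors N(eap, sd^2) of the latent ability."""
    rng = random.Random(seed)
    return [[rng.gauss(e, s) for e, s in zip(eaps, sds)]
            for _ in range(n_sets)]

# Two hypothetical students with posterior means -0.34 and 0.10 logits:
pv_sets = draw_plausible_values([-0.34, 0.10], [0.3, 0.3])
print(len(pv_sets), len(pv_sets[0]))  # 10 2
```

Each downstream analysis is then run once per set, so the uncertainty of the ability estimates propagates into the final results.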

Next, after checking the data for normal distribution and homogeneity of variance, we applied a mixed ANOVA (Field et al. 2013). A mixed ANOVA tests for differences between groups (between factor) across repeated measurements (within factor), using the variance between and within the groups under the different conditions. The mixed ANOVA (between factor: EG; within factor: time) was applied to all sets of plausible values; an effect in all sets would indicate the reliability of the results. The ten generated results were pooled based on the F-values given by the mixed ANOVA, using the packages mice (van Buuren and Groothuis-Oudshoorn 2011) and miceadds (Robitzsch et al. 2017). Next, we conducted a pairwise t-test with Bonferroni correction to examine each EG's progress separately.
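Analyses run separately on each plausible-value set must then be combined. The D2 pooling of F-values mentioned in the footnote is involved; as a simpler illustration of the general principle, the following sketch pools a single point estimate across sets with Rubin's rules (hypothetical numbers, not the study's):

```python
import statistics

def pool_rubin(estimates, variances):
    """Pool one parameter across m plausible-value sets (Rubin's rules):
    pooled estimate = mean of the per-set estimates; total variance =
    average within-set variance plus inflated between-set variance."""
    m = len(estimates)
    qbar = statistics.mean(estimates)
    ubar = statistics.mean(variances)    # average within-set variance
    b = statistics.variance(estimates)   # between-set variance
    return qbar, ubar + (1 + 1 / m) * b

est, var = pool_rubin([0.24, 0.26, 0.25], [0.010, 0.011, 0.009])
print(round(est, 2))  # 0.25
```

The between-set component is what distinguishes plausible-value analysis from simply averaging the ten result sets.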

The learning progress of the CG could not be included in the comparison by the mixed ANOVA, because the CG was not part of the randomization and consisted of a much smaller sample. Furthermore, the students of the CG differed strikingly from the students of the EGs: they already reached a much higher FT ability in the pre-test. Because of this, we decided that the EGs and the CG were not comparable in a mixed design, although statistically this would have been possible. Therefore, the CG was analyzed using the plausible values with a Wilcoxon signed-rank test; as the data were not normally distributed, a paired t-test could not be conducted.

In addition, to control for time on task, the time students worked on the tasks of the intervention was compared between the EGs (t-test).


The analysis of time on task showed no significant difference between the EGs. The material group worked on the tasks for M = 160.41 min (SD = 18.47) on average, the simulation group for M = 161.00 min (SD = 19.66). The t-test resulted in t(231.61) = .235, p = .815.
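The fractional degrees of freedom (231.61) indicate a Welch test, which does not assume equal variances in the two groups. A sketch of how such a statistic is computed, with small hypothetical samples rather than the study's measurements:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite df."""
    na, nb = len(a), len(b)
    sa = statistics.variance(a) / na   # squared standard error, group a
    sb = statistics.variance(b) / nb   # squared standard error, group b
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sa + sb)
    df = (sa + sb) ** 2 / (sa ** 2 / (na - 1) + sb ** 2 / (nb - 1))
    return t, df

# Hypothetical time-on-task values (minutes) for two small groups:
t, df = welch_t([158.0, 162.0, 160.0, 161.0], [159.0, 163.0, 164.0, 162.0])
print(round(t, 2), round(df, 2))
```

With unequal group variances the df falls between min(na, nb) − 1 and na + nb − 2, which is why non-integer values such as 231.61 appear in the report.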

After excluding two items that did not fulfill the conditions of Rasch measurement, the test on FT was Rasch scalable with appropriate fit values, selectivity, DIF, and local stochastic independence (see online Resource 3, ESM 3). The estimation of the two-dimensional model, which was used to determine the person abilities, showed good reliabilities in the pre- and post-test: EAP reliability = .869 (pre-test) and .871 (post-test), as well as WLE reliability = .77 (pre-test) and .75 (post-test). All means and standard deviations are presented in Table 3.
Table 3

Means and standard deviations of students’ pre- and post-test ability of FT and time on task


                      CG (N = 48)      EG 1, real materials (N = 111)   EG 2, simulations (N = 123)
                      M (SD)           M (SD)                           M (SD)
Pre-test (logits)     .05 (.06)        −.34 (.03)                       −.34 (.04)
Post-test (logits)    .17 (.07)        .09 (.06)                        .41 (.06)
Time on task (min)    No intervention  160.41 (18.47)                   161.00 (19.66)

Annotation. CG: control group; EG: experimental group; M: mean; SD: standard deviation

Control Group

The CG (N = 48) was analyzed with a paired Wilcoxon signed-rank test. The analysis showed that taking the test without intervention did not have a significant effect on students' FT (V = 423, p = .091, d = .26).

Experimental Groups

The mixed ANOVA resulted in two significant effects. First, there was a main effect of time, F(1, 11.79) = 36.90, p < .001, ηp2 = .554: the FT of the EGs considered as one group increased significantly, with a large effect, from M = −.34 logits (SD = .035) to M = .25 logits (SD = .06). The pairwise t-test with Bonferroni correction showed that the EGs did not differ before the intervention, but that the FT of both groups increased significantly from pre- to post-test (material group: t(110) = −9.42, p < .001, d = .85; simulation group: t(122) = −16.46, p < .001, d = 1.41; see also Table 3). Second, the mixed ANOVA revealed an interaction effect between time and EG (F(1, 25.820) = 8.856, p = .006, ηp2 = .090), so the increase from pre- to post-test differed significantly between the groups (see Fig. 7). As a result, the FT of the simulation group improved significantly more than the FT of the material group, with a medium effect (ηp2 = .09).
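To make the reported effect sizes concrete: a pre-post Cohen's d of this kind can be computed as the mean gain divided by the standard deviation of the gains (one common convention; the study's values come from the plausible-value analysis, so this is only an illustration). With hypothetical logit scores:

```python
import statistics

def cohens_d_paired(pre, post):
    """Cohen's d for a pre-post design: mean of the individual gains
    divided by the standard deviation of the gains."""
    gains = [b - a for a, b in zip(pre, post)]
    return statistics.mean(gains) / statistics.stdev(gains)

# Hypothetical pre- and post-test abilities (logits) of five students:
pre = [-0.6, -0.3, 0.1, -0.2, -0.5]
post = [0.1, 0.0, 0.2, 0.5, 0.4]
print(round(cohens_d_paired(pre, post), 2))  # 1.64
```

By conventional benchmarks, d values around .8 or above count as large, which is why both d = .85 and d = 1.41 are reported as large effects.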
Fig. 7

Increase in FT: comparison of EGs in pre- and post-test (compare Lichti 2019, p. 198)


The results show an increase of FT in the EGs as well as in the CG. The increase of the CG's FT is not significant, though close to significance; with regard to the small effect size of d = .26, it is possible that processing the test items alone leads to a slight increase of FT. Perhaps this is caused by the high level of FT that the CG had already reached in the pre-test: the CG performed much better in the pre-test than the EGs (see Table 3), which may be a reason for their FT increasing simply by solving the test items (Matthew effect, see, e.g., Merton 1968). The effect sizes of the EGs are both large (d = .85 and d = 1.41), so both interventions seem to influence FT. Nevertheless, as the CG was excluded from the mixed ANOVA, we cannot conclude with certainty that the use of both media in such a setting fosters FT; the results may be seen as strong indications for this conclusion, which needs to be verified by a further study. We also found a medium interaction effect of time and EG (F(1, 25.820) = 8.856, p = .006, ηp2 = .09): the FT of the simulation group increased significantly more. Therefore, we can conclude that a learning environment as designed in this study using simulations seems to be more suitable to foster FT than one using real materials.

Considering our first research question, our results therefore indicate that both the use of real materials and the use of simulations lead to a significant increase of students' FT. With regard to the marginally significant increase of the CG's FT with a small effect, this result needs to be viewed critically; we can only assume that there was no relevant test effect on FT. Under this assumption, both media empirically seem to be beneficial for fostering the FT of sixth graders. Regarding our second research question, we found that in our concrete setting learning environments using simulations or real materials have significantly different effects on FT. Computer-based simulations have a large effect on the development of FT (Cohen's d = 1.41) and, therefore, generate more progress than real materials, which also lead to a large, but smaller, effect (Cohen's d = .85). Students' time on task, which was nearly the same in both groups, does not seem to be relevant for this result.

These results have to be considered critically and with limitations. Although our sample size of N = 282 is sufficient for our analysis, it is small, and we should regard our results as indications. In addition, students in our setting worked individually, so we cannot draw any conclusions about team or group work, which is frequently applied in school, and how it would influence the results; this should be investigated in a further study. Furthermore, we excluded any teacher role, which is also contrary to regular classroom practice. If we want to apply our results in school, this factor needs to be considered, too. Moreover, we do not know whether the intervention has lasting effects, as we were not able to conduct a follow-up test. Such a test would have had to take place after the summer holidays at the beginning of grade 7, when students start to cover functional relationships in mathematics education; therefore, it would not have been possible to attribute long-term effects to the intervention alone. Furthermore, although our statistical results indicate that the use of simulations may be more effective in fostering FT, it needs to be emphasized that real materials, as described in the theoretical background, offer a lot of advantages for mathematics education in general and also lead to a large effect on FT. Think, for example, of the benefits of real materials for communication during group work, of their relation to the 'real world', and of the ability to perform experiments in an exact, focused, and transparent manner.

Overall, our results are not generalizable. They depend on a concrete setting and learning environment that is far away from everyday classroom practice. In addition, the exclusion of the CG prevents reliable statements about the general influence of real materials and simulations on the FT of sixth graders.

Nevertheless, referring to our theoretical background, possible explanations for the results can be found. According to the instrumental approach, different processes take place while students use materials or simulations to foster their FT (see Section Intervention Design). An important part of the instrumental genesis based on real materials is the hands-on work. The hands-on actions influence which schemes are used to perform the experiments, and students' FT should develop based on these schemes. Possibly, the evoked schemes were not appropriate: perhaps they engaged students in the real experiments in a way that caused the shift from measured data to a functional relationship to be missed. This leads to the assumption that abstract and focused simulations, once they have become instruments, could develop FT better. The schemes evoked when students used the simulations to solve the tasks perhaps tied them less strongly to the concrete situation, so students were able to think about the functional relationships covered. Similarly, cognitive load theory (see, e.g., Sweller et al. 2011) may offer an explanation. The cognitive resources of the material group were possibly occupied with the necessity of using the real objects correctly and appropriately in the context of the tasks; as a consequence, the students may have been hindered from developing their understanding of functional relationships and their FT. In contrast, it was maybe easier to process a focused and abstract simulation than to perform a detailed real experiment. Nevertheless, the opposite assumption, that it is much more difficult to capture the content of an abstract simulation, would also have been plausible. These considerations should be verified by a qualitative analysis based on the data from the intervention in order to obtain information about the processes that took place.
Based on such an analysis it would be possible to identify differences in how the media foster FT. For instance, a qualitative content analysis of the written responses from the intervention could reveal differences between the groups.

In addition, the large effect in both EGs may be partly explained by the 10 test items that refer to contexts being part of the intervention. Without these items, the effects might be smaller.


Referring to the results, the possible explanations, and the limitations, further research is needed for a better understanding of how the media influence the FT of sixth graders. Furthermore, designing a setting in which real materials and simulations are combined and perhaps complement each other should be considered (compare the results of Jaakkola et al. 2011). At this point, we also have to take the results of Ganter (2013) into account, who found that the use of real materials is more beneficial for fostering FT than the use of videos. Relating our findings to hers could also be very fruitful: videos may offer the possibility of fostering other aspects related to FT, and it seems necessary to investigate the main differences between video and simulation in order to explain our findings. Furthermore, our findings align with, e.g., Warren and Cooper (2006) and Blanton et al. (2007), who state that the FT of young children is already developed at an early age and can be influenced positively. Therefore, covering functional relationships consciously during class seems to be important and expedient. With regard to class, we may also state that the use of both real materials and simulations seems promising for fostering FT. Nevertheless, the role of the teacher and the impact of group work need to be investigated. In addition, the detected increase of FT certainly depends on our concrete setting and the carefully designed experiments and tasks; the conclusion that the mere use of the media leads to an increase of FT would be far from realistic. Accordingly, supporting teachers with regard to the construction of simulations, the deliberate choice of real materials, and the appropriate design of tasks should be considered. In addition, we found some very early indications for the validity of our model (see Fig. 2), which was applied to describe the dependencies of FT, experiments, and different media by means of the instrumental approach.
Both media showed an effect on FT; they also seemed to influence FT with regard to the aspects according to Vollrath (1989) and to be shaped by the actions and processes caused by the experimental setting.


To sum up, we can state that, despite all limitations that have to be considered carefully, a learning environment based on experiments with real materials or simulations offers a promising first approach for fostering FT in mathematics education. The setting and design of the intervention, incorporating all three aspects of FT, different forms of representation, and experiments, are highly relevant for this result and therefore restrict the possibility to generalize it. Nevertheless, based on our results, the use of simulations seems to be more beneficial. So, until qualitative analyses give more insight into the processes that take place while working with real materials and simulations in a learning environment designed to foster FT, our results indicate that computer-based simulations should be considered as the method of choice.


  1. VERA8: German comprehensive studies in grade 8 in Mathematics and English

  2. The miceadds package refers to the D2 statistic. It applies the mean of the F-values and the estimation of the average increase in variance. For details see Enders 2010.



The authors would like to thank the reviewers for their valuable comments and suggestions. Thanks to their support, the article improved significantly.

This article is based on Lichti, M. (2019). Funktionales Denken Fördern. Experimentieren mit gegenständlichen Materialien oder Computer-Simulationen. Wiesbaden: Springer Spektrum. It is adapted and translated by permission from Springer Nature: Funktionales Denken Fördern. Experimentieren mit gegenständlichen Materialien oder Computer-Simulationen, Michaela Lichti, Copyright 2019.


This research was funded by Deutsche Forschungsgemeinschaft (DFG, Graduiertenkolleg 1561).

Compliance with Ethical Standards

Conflict of Interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Supplementary material

ESM 1: 41979_2018_7_MOESM1_ESM.pdf (PDF 949 kb)
ESM 2: 41979_2018_7_MOESM2_ESM.pdf (PDF 403 kb)
ESM 3: 41979_2018_7_MOESM3_ESM.pdf (PDF 107 kb)


  1. Adams, R. J., & Wu, M. (Eds.). (2002). PISA 2000: Technical report. Paris: OECD.
  2. Artigue, M. (1997). Le logiciel DERIVE comme révélateur de phénomènes didactiques liés à l’utilisation d’environnements informatiques pour l’apprentissage [The software DERIVE as a revealer of didactical phenomena related to the use of computer environments for learning]. Educational Studies in Mathematics, 33(2), 133–169.
  3. Balacheff, N., & Kaput, J. J. (1997). Computer-based learning environments in mathematics. In A. J. Bishop, M. A. Clements, C. Keitel, J. Kilpatrick, & C. Laborde (Eds.), International handbook of mathematics education (Vol. 1, pp. 469–501). Dordrecht: Springer.
  4. Barzel, B. (2000). Ich bin eine Funktion [I am a function]. mathematik lehren, 98, 39–40.
  5. Blanton, M. L., & Kaput, J. J. (2005). Characterizing a classroom practice that promotes algebraic reasoning. Journal for Research in Mathematics Education, 36(5), 412–446.
  6. Blanton, M. L., & Kaput, J. J. (2011). Functional thinking as a route into algebra in the elementary grades. In J. Cai & E. Knuth (Eds.), Early algebraization. Advances in mathematics education (pp. 5–23). Heidelberg: Springer.
  7. Blanton, M., Schifter, D., Inge, V., Lofgren, P., Willis, C., Davis, F., & Confrey, J. (2007). Early algebra. In V. J. Katz (Ed.), Algebra: Gateway to a technological future (pp. 7–14). Washington, DC: Mathematical Association of America.
  8. Bond, T. G., & Fox, C. M. (2015). Applying the Rasch model: Fundamental measurement in the human sciences (3rd ed.). New York: Routledge, Taylor and Francis Group.
  9. Breidenbach, D., Dubinsky, E., Hawks, J., & Nichols, D. (1992). Development of the process conception of function. Educational Studies in Mathematics, 23(3), 247–285.
  10. de Beer, H., Gravemeijer, K. P. E., & van Eijk, M. W. (2015). Discrete and continuous reasoning about change in primary school classrooms. ZDM, 47(6), 981–996.
  11. de Jong, T. (2005). The guided discovery principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 215–229). Cambridge: Cambridge University Press.
  12. DeMarois, P., & Tall, D. (1999). Function: Organizing principle or cognitive root? In O. Zaslavsky (Ed.), Proceedings of the 23rd Conference of the International Group for the Psychology of Mathematics Education, PME XXIII (pp. 257–264). Haifa, Israel.
  13. DeMars, C. (2010). Item response theory. Series in understanding statistics. Oxford: Oxford University Press.
  14. Doorman, M., Drijvers, P., Gravemeijer, K., Boon, P., & Reed, H. (2012). Tool use and the development of the function concept: From repeated calculations to functional thinking. International Journal of Science and Mathematics Education, 10(6), 1243–1267.
  15. Drijvers, P., & Gravemeijer, K. (2005). Computer algebra as an instrument: Examples of algebraic schemes. In D. Guin, K. Ruthven, & L. Trouche (Eds.), The didactical challenge of symbolic calculators. Mathematics education library (Vol. 36). Boston: Springer.
  16. Dubinsky, E., & Harel, G. (1992). The nature of the process conception of function. In G. Harel & E. Dubinsky (Eds.), The concept of function: Aspects of epistemology and pedagogy (pp. 85–106). Washington, DC: Mathematical Association of America.
  17. Elschenbroich, H.-J. (2011). Geometrie, Funktionen und dynamische Visualisierung [Geometry, functions and dynamic visualization]. In T. Krohn (Ed.), Mathematik für alle. Wege zum Öffnen von Mathematik – mathematikdidaktische Ansätze. Festschrift für Wilfried Herget (pp. 69–84). Hildesheim: Franzbecker.
  18. Enders, C. K. (2010). Applied missing data analysis. New York: Guilford Press.
  19. Field, A., Miles, J., & Field, Z. (2013). Discovering statistics using R (reprint). Los Angeles: Sage.
  20. Fischer, G. H., & Seliger, E. (1997). Multidimensional linear logistic models for change. In W. J. van der Linden & R. K. Hambleton (Eds.), Handbook of modern item response theory (pp. 323–346). New York: The Guilford Press.
  21. Ganter, S. (2013). Experimentieren – ein Weg zum funktionalen Denken: Empirische Untersuchung zur Wirkung von Schülerexperimenten [Doing experiments – a way to functional thinking: An empirical study about the effect of students’ experiments]. Didaktik in Forschung und Praxis: Vol. 70. Hamburg: Kovač.
  22. Goldstone, R. L., & Son, J. Y. (2005). The transfer of scientific principles using concrete and idealized simulations. The Journal of the Learning Sciences, 14(1), 69–110.
  23. Hartig, J., Jude, N., & Wagner, W. (2007). Methodische Grundlagen der Messung und Erklärung sprachlicher Kompetenzen [Methodological foundations of measuring and interpreting linguistic competencies]. In B. Beck & E. Klieme (Eds.), Beltz Pädagogik. Sprachliche Kompetenzen. Konzepte und Messung; DESI-Studie (Deutsch-Englisch-Schülerleistungen-International) (pp. 34–54). Weinheim: Beltz.
  24. Hoyles, C., & Noss, R. (2003). What can digital technologies take from and bring to research in mathematics education? In A. J. Bishop, M. A. Clements, C. Keitel, J. Kilpatrick, & F. K. S. Leung (Eds.), Second international handbook of research in mathematics education (pp. 323–349). Dordrecht: Kluwer.
  25. Jaakkola, T., Nurmi, S., & Veermans, K. (2011). Comparison of students’ conceptual understanding of electric circuits in simulation only and simulation-laboratory contexts. Journal of Research in Science Teaching, 48(1), 71–93.
  26. Janvier, C. (1978). The interpretation of complex cartesian graphs representing situations – Studies and teaching experiments. Nottingham: University of Nottingham.
  27. Kennedy, L. M., Tipps, S., & Johnson, A. (1994). Guiding children’s learning of mathematics (7th ed.). Belmont: Wadsworth.
  28. Kiefer, T., Robitzsch, A., & Wu, M. (2016). TAM: Test analysis modules. Retrieved from
  29. Lagrange, J. B. (2000). L’intégration des instruments informatiques dans l’enseignement: Une approche par les techniques [The integration of computer instruments into teaching: An approach by techniques]. Educational Studies in Mathematics, 43(1), 1–30.
  30. Leinhardt, G., Zaslavsky, O., & Stein, M. K. (1990). Functions, graphs, and graphing: Tasks, learning, and teaching. Review of Educational Research, 60(1), 1–64.
  31. Little, T. D. (2014). Foundations (paperback). Oxford library of psychology: Vol. 1. Oxford: Oxford University Press.
  32. Lichti, M. (2019). Funktionales Denken fördern. Experimentieren mit gegenständlichen Materialien oder Computer-Simulationen [Fostering functional thinking. Doing experiments with real materials or computer-based simulations]. Landauer Beiträge zur mathematikdidaktischen Forschung. Wiesbaden: Springer Spektrum.
  33. Mason, J. (2008). Making use of children’s powers to produce algebraic thinking. In J. J. Kaput, D. W. Carraher, & M. L. Blanton (Eds.), Algebra in the early grades (pp. 57–94). New York: Taylor and Francis Group.
  34. Merton, R. K. (1968). The Matthew effect in science. Science, 159(3810), 56–63.
  35. Mislevy, R. J., Beaton, A. E., Kaplan, B., & Sheehan, K. M. (1992). Estimating population characteristics from sparse matrix samples of item responses. Journal of Educational Measurement, 29(2), 133–161.
  36. Mitchell, M. (1993). Situational interest: Its multifaceted structure in the secondary school mathematics classroom. Journal of Educational Psychology, 85(3), 424–436.
  37. Moch, P. L. (2001). Manipulatives work. The Educational Forum, 66, 81–87.
  38. Monk, G. (1992). Students’ understanding of a function given by a physical model. In G. Harel & E. Dubinsky (Eds.), The concept of function: Aspects of epistemology and pedagogy (pp. 175–194). Washington, DC: Mathematical Association of America.
  39. Moyer, P. S., Bolyard, J. J., & Spikell, M. A. (2002). What are virtual manipulatives? Teaching Children Mathematics, 8, 372–377.
  40. National Council of Teachers of Mathematics (NCTM) (2000). Principles and standards for school mathematics. Reston, VA: NCTM.
  41. National Science Teachers Association (NSTA) (2007). NSTA position statement: The integral role of laboratory investigations in science instruction. National Science Teachers Association.
  42. OECD (Organisation for Economic Co-operation and Development). (2014). PISA 2012 technical report.
  43. Rabardel, P. (2002). People and technology: A cognitive approach to contemporary instruments. University of Paris 8, <hal-01020705>. Retrieved from: (October 14, 2018).
  44. Reid, D. J., Zhang, J., & Chen, Q. (2003). Supporting scientific discovery learning in a simulation environment. Journal of Computer Assisted Learning, 19(1), 9–20.
  45. Robitzsch, A., Kiefer, T., & Wu, M. (2018). TAM: Test analysis modules. R package version 2.12-18. Retrieved from: (October 14, 2018).
  46. Robitzsch, A., Grund, S., & Henke, T. (2017). miceadds: Some additional multiple imputation functions, especially for mice. R package version 2.5-9. Retrieved from: (October 14, 2018).
  47. Roth, J. (2008). Systematische Variation: Eine Lernumgebung vernetzt Geometrie und Algebra [Systematic variation: A learning environment connects geometry and algebra]. mathematik lehren, 146, 17–21.
  48. Sfard, A. (1991). On the dual nature of mathematical conceptions: Reflections on processes and objects as different sides of the same coin. Educational Studies in Mathematics, 22, 1–36.
  49. Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. Explorations in the learning sciences, instructional systems and performance technologies: Vol. 1. New York: Springer.
  50. Tall, D., & Vinner, S. (1981). Concept image and concept definition in mathematics with particular reference to limits and continuity. Educational Studies in Mathematics, 12(2), 151–169.
  51. Thompson, P. W. (1994). Students, functions, and the undergraduate curriculum. In E. Dubinsky (Ed.), Issues in mathematics education: Vol. 4. Research in collegiate mathematics education (pp. 21–44). Providence, RI: American Mathematical Society.
  52. Thompson, P. W., & Carlson, M. (2017). Variation, covariation, and functions: Foundational ways of mathematical thinking. In J. Cai (Ed.), Third handbook of research in mathematics education (pp. 421–456). Reston: National Council of Teachers of Mathematics.
  53. van Buuren, S., & Groothuis-Oudshoorn, K. (2011). mice: Multivariate imputation by chained equations in R. Journal of Statistical Software, 45(3), 1–67. Retrieved from: (October 14, 2018).
  54. Verillon, P., & Rabardel, P. (1995). Cognition and artifacts: A contribution to the study of thought in relation to instrumented activity. European Journal of Psychology of Education, 10(1), 77–101.
  55. Vinner, S., & Dreyfus, T. (1989). Images and definitions for the concept of function. Journal for Research in Mathematics Education, 20(4), 356–366.
  56. Vollrath, H.-J. (1978). Schülerversuche zum Funktionsbegriff [Student experiments on the notion of function]. Der Mathematikunterricht, 24(4), 90–101.
  57. Vollrath, H.-J. (1986). Search strategies as indicators of functional thinking. Educational Studies in Mathematics, 17, 387–400.
  58. Vollrath, H.-J. (1989). Funktionales Denken [Functional thinking]. Journal für Mathematik-Didaktik, 10, 3–37.
  59. vom Hofe, R. (2004). „Jetzt müssen wir das Ding noch stauchen!“ Über den manipulierenden und reflektierenden Umgang mit Funktionen [“Now we have to compress the thing!” About the manipulating and reflecting use of functions]. Der Mathematikunterricht, 50(6), 46–56.
  60. vom Hofe, R., & Blum, W. (2016). “Grundvorstellungen” as a category of subject-matter didactics. Journal für Mathematik-Didaktik, 37(S1), 225–254.
  61. Vygotsky, L. S. (1930/1985). Die instrumentelle Methode in der Psychologie [The instrumental approach in psychology]. In Ausgewählte Schriften (Vol. 1, pp. 309–317). Berlin: Volk und Wissen.
  62. Warren, E., & Cooper, T. (2005). Introducing functional thinking in year 2: A case study of early algebra teaching. Contemporary Issues in Early Childhood, 6(2), 150–162.
  63. Warren, E., & Cooper, T. (2006). Using repeating patterns to explore functional thinking. APMC, 11(1), 9–14.
  64. Yen, W. M., & Fitzpatrick, A. R. (2006). Item response theory. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 111–153). Westport: Praeger.
  65. Zumbo, B. D. (1999). A handbook on the theory and methods of differential item functioning (DIF): Logistic regression modelling as a unitary framework for binary and Likert-type (ordinal) item scores. Ottawa: Directorate of Human Resources Research and Evaluation, Department of National Defense.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Mathematics Education, Institute for Mathematics, University of Koblenz-Landau, Landau, Germany
