## Abstract

Mathematical modelling tasks increasingly feature digital tools and media. In this chapter, we discuss the wide variety of these tasks. Existing classifications of modelling tasks do not consider the use of tools and media; therefore, we developed a new classification for ICT-based modelling tasks. One class relates to mathematics; the others differentiate across (1) modelling aspects unrelated to tools and media, (2) the task context, (3) the digital tools and media (CAS, Wikipedia, type of feedback, etc.) and (4) students’ anticipated activities guided by task regulations, such as group work or time restrictions. The classification was validated with three example tasks. A visual presentation based on the classification system enables the evaluation of the qualities of a given ICT-based modelling task and can give insight into potential adaptations.

## 1 Introduction

All over the world, mathematical modelling is entering mainstream mathematics education, not just in classroom activities, but also in curricula and assessments (Frejd 2011; Vos 2013). Simultaneously, digital tools and media are being embraced by education, and the combination of the two has led to a wide variety of mathematical modelling tasks (Drijvers et al. 2016). On the one hand, there are open-ended modelling research projects within technology-rich environments; on the other, there are tasks that can only questionably be labelled ‘modelling tasks’, yet which allow students to use digital tools (e.g. CAS, DGS). In this chapter, we first explore the wide variety between these extremes. Then, we review existing classifications of modelling tasks and classify aspects specific to ICT-based modelling tasks. The purpose of a classification is to obtain a plausible evaluation of the quality of ICT-based modelling tasks. We validate the new classification by applying it to three example tasks. A visualisation based on this classification allows us to describe the strengths and weaknesses of a given ICT-based mathematical modelling task.

## 2 Examples of ICT-Based Modelling Tasks

To illustrate the variety, we present three examples of ICT-based modelling tasks. In the *Maypole Task* (see Fig. 41.1), the task designers offer a video to illustrate a traditional group dance in Bad Dinkelsdorf. Such media replace verbal descriptions and assist students in understanding the task context. Students are asked to treat this situation with the help of mathematics: they are asked to write a number into an answer field, making the task potentially suitable for a digital test with automatic grading. Students can use a *clickable* calculator, which enables teachers/researchers to log students’ calculation activities. The task is not based on a problem posed by the dancers or the choreographer, and the focus is on a numerical answer. This makes the *Maypole Task* effectively a word problem on the application of Pythagoras’ theorem. In real life, the radius of the dancers’ circle is adapted to the available space, whereby the dancers shorten the ribbons by winding them around their hands. One may also note that the town of Bad Dinkelsdorf does not really exist. So, the town and the dancers’ problem are inauthentic, but the video shows an authentic dance.

The *Glider Task* (see Fig. 41.2) includes a movie showing the take-off of a glider plane, together with the displays of the altimeter, indicating the altitude, and the variometer, indicating the rate of climb or descent. Just like the *Maypole Task*, this movie shows an authentic situation, as demonstrated by details like the dirt on the windscreen and the tiny features on the horizon. The students are asked to compare the two displays. Students can move the video playback back and forth to explore the situation. The openness regarding both the approach to the task and the final answer invites discussion among students, making the task potentially suitable for group work. It is anticipated that students will recognise a connection that will lead them to develop an intuitive and informal, yet meaningful, understanding of the derivative function.

The *Algal Bloom Task* from Geiger and Redmond (2013) is a project-based task in a technology-rich environment. It starts from a large, authentic data set on the CO₂ concentration in the Darling River, together with explanations about algae blooming, sunlight deprivation and the potential death of all life in the river. The question asks whether the present data are a cause for concern. This open-ended task allows for various approaches, does not target a single correct answer and is covered in two lessons, in which students work in pairs.

The three example tasks above show the wide variety in how digital tools and media can interact with mathematical modelling tasks. There is no linear scale from less digital to more digital when comparing traditional paper-and-pencil modelling tasks that ask students to use digital tools like CAS or DGS with tasks in which students explore situations through digital media, as in the *Glider Task*. Nor is there a linear scale from less modelling to more modelling when comparing tasks with embedded digital media illustrating the task context with tasks containing *clickable* links to the world beyond the classroom. The variety in ICT-based modelling tasks is multi-dimensional, and our study aims at finding ways to describe, compare and evaluate these tasks.

In this chapter, we will not discuss how digital tools and media shape and change communication, organisation, cognitive levels and other aspects of modelling activities. For this, we refer to recent research (e.g. Molina-Toro et al. 2019; Monaghan 2016; Williams and Goos 2012). Nor will we study whether or not the use of digital tools and media within tasks leads to new modelling activities, more realistic contexts, more intense group work and so forth. Rather, we will focus exclusively on mathematical modelling tasks in which digital tools and media are integrated. Since we want to describe, compare and evaluate these, we need criteria for important aspects across the tasks. Our study was guided by the following questions: (1) *Which criteria are suitable to describe and compare ICT-based modelling tasks?* (2) *How can we classify and evaluate qualities of different ICT-based modelling tasks?* To answer these, we studied the literature. After several rounds of adapting and improving, we formulated a classification system. We used the three tasks described above to validate the classification and to evaluate their qualities.

## 3 Classifying Tasks

There is a variety of ways to classify mathematical modelling tasks. The assessment framework of the OECD (2013) distinguishes the following aspects: mathematical topics (e.g. geometry), mathematical concepts (e.g. angle, perimeter), mathematical competencies (e.g. reasoning, representing), modelling activities (mathematise; work mathematically; interpret/evaluate), task difficulty, task format (e.g. multiple choice), students’ digital tools (calculator, spreadsheet) and whether a task is presented on paper or on screen. This framework gives us a first basis. However, it was developed specifically for large-scale testing; it does not include, among others, criteria on openness or group work. The framework includes ‘task difficulty’, which is important in testing and can be determined for large student groups. This aspect is framed by the testing regime: if students were given more time or free access to the Internet, the ‘difficulty’ could be different. Therefore, we will not include task difficulty in our classification, but rather the regulations that frame a task, such as allowing students ample time, peer collaboration or access to resources.

A comprehensive classification tailored to modelling tasks was created by Maaß (2010). This classification caters for a wide variety of modelling tasks and gives us a base to build on (see below). However, it does not include the use of tools and media. For this, we draw on a description of digital aspects within modelling tasks by Geiger and Redmond (2013), although these authors only considered open project-based tasks in rich digital environments and not less open tasks. So, we started from the classification system by Maaß (2010), restructured it and obtained five main classes. The first class pertains to the mathematics needed to solve the task, such as the topic (e.g. geometry) and the concepts (e.g. angles, perimeter). This class is fundamental to a modelling task. The other four classes are explained below. Since we aimed for a classification that would enable a comparison across tasks, and an evaluation of qualities, we focused on developing classes that could be rated for higher or lower quality. Only the first class, regarding mathematical topic and concepts, cannot be rated. A summary of the classes, subclasses and ratings is presented at the end of this chapter.

### 3.1 Modelling Tasks Without Considering Digital Tools and Media

Starting from Maaß (2010) and OECD (2013), we found classes for mathematical tasks describing the competencies required to solve the task (e.g. reasoning, representing). In some classifications of competencies, mathematical modelling is a subclass among other mathematical activities (e.g. Blomhøj and Jensen 2007). However, many mathematical activities can alternatively be perceived as sub-activities within mathematical modelling. In this chicken-and-egg dilemma, we chose the latter perspective, namely to view any given *mathematical activity* as potentially being a subclass of mathematical modelling, in particular as part of ‘working mathematically’. In our classification, we included this class, with a rating of 0–4 for the number of competencies needed to solve the task. Another class from Maaß (2010) distinguishes between holistic modelling (students undertake the whole process) and atomistic modelling (students undertake a partial process, like only setting up the real model). We adapted this class by rating the number of *modelling activities* in which the students are asked to engage. Also, we included a class from Maaß (2010) regarding the *information given* in a task: superfluous (making for an overdetermined task), missing (underdetermined task), inconsistent (both over- and underdetermined) and matching.

In her classification, Maaß (2010) had three further classes, but these needed reconsideration through the lens of a classification of ICT-based modelling tasks. One class was the ‘nature of the relationship to reality’, which needed adaptation to accommodate virtual worlds, which have their own digital reality. The second class needing reconsideration was the ‘type of representation’, which described texts and pictures, but not animations, videos or other interactive representations. The third class pertained to openness (in solution methods), which we extend to openness of tool use. We return to these below.

### 3.2 Task Context in ICT-Based Modelling Tasks

In this class, we assert that a modelling task always contains a context with some problem that needs to be tackled mathematically. A first subclass here is the *reality reference* of the task context, which is the way the context is presented compared to the actual real world. For example, a task context can be designed as intentionally *artificial* (e.g. to simplify it for students). An artificial context can be perceived as a *digital reality*, as in games. If the task context is closer to the real world of humans, it can be *realistic* when it is experientially real and imaginable for students, even if it does not convincingly originate from real life. Where the presentation of the context contains evidence of its genuine existence, for example through a video, the task context, or parts of it, can be *authentic* (Vos 2018).

Modelling tasks always contain both a task context and a question or request. So, we classify the relation between the context and the problem posed, which is the *question reference* of the task context. A presented context can ‘beg’ for a question; one could imagine a task involving a video of citizens who present their context and ask for help in finding a solution that has use value to them. When there is convincing evidence that people in the task context genuinely require an answer to their question, it is an *authentic* question. Often, however, the question is not presented with such urgency and authenticity; nevertheless, it can still be a *realistic* question. We assert that a task cannot have an artificial task context and an authentic question. A modelling task can, however, have a meaningless, artificial question based on an authentic context (e.g. authentic data), which makes it a dressed-up word problem. Where both the task context and the question relate to the students’ current or future lives, we speak of a *relevant* question, distinguishing between student relevance (relevance from the students’ point of view) and relevance to life (relevance to the students’ future situations) (Greefrath et al. 2017).

Regarding the task context, we also include its representation. Representations can be texts, diagrams or pictures, which are static. A video can be played back, thus offering some interactivity. We can also imagine interactive animations that offer students the possibility to explore the situation, for example through sliders to manipulate variables.

### 3.3 Aspects of the Digital Tool or Medium Within ICT-Based Modelling Tasks

In this chapter, we use the term *digital tools and media* as shorthand for overlapping terms like *ICT*, *digital technology*, *digital environments*, *digital worlds*, *digital products* and so forth. There are some ambiguities in these terms. For example, a video is generally perceived as a medium, but it can also be a tool for a designer to explain a task context, or a product created by students to report on their modelling project. We distinguish *digital tools for students* to solve the task, like pocket or graphical calculators, CAS, DGS, spreadsheets, Wikipedia and so forth. We can also look at how the use of tools and media is regulated (*openness of tool use*): a task designer can encourage or restrict the students’ use of a certain digital tool or medium (“solve this task using CAS”). Digital tools and media are also available to designers, *teachers* and examiners, who can use tools for the presentation of a task, but also to administer students’ activities (logging answers) or for evaluation purposes. When a task is offered within a digital environment, there can be different *types of feedback*: a short response (right/wrong), or more elaborate feedback providing ‘an explanation about why a specific response was correct or not’ (Shute 2008, p. 160). The tool or medium can also allow a designer to frame the timing of the feedback (immediate or delayed).

### 3.4 Students’ Anticipated Activities and Task Regulations

Any task designer, teacher or examiner will anticipate certain *students’ activities* expected to be triggered by a given task. However, many mathematical modelling tasks are open towards approaches, towards the use of tools and media, or towards different interpretations of contexts or answers. This implies that designers, teachers and examiners cannot (and should not) fully foresee what students will do. Nevertheless, we included a subclass for students’ anticipated activities, in which the variation in students’ activities is rated, and we acknowledge that rating this quantitatively will be somewhat subjective.

A different subclass pertains to whether or not the task designer creates an *open* task, offering solution openness and/or answer openness. Maaß (2010) also included this subclass, but we elaborate it with digital learning environments in mind. *Open* tasks are those that allow, for example, multiple solutions (at different levels). Open tasks can be classified according to the clarity of the initial and final states and the clarity and ambiguity of the transformation. When a modelling task is offered with an answer field in a digital environment, as in the *Maypole Task* (Fig. 41.1), the mere presentation already announces that the task has little answer openness; nonetheless, the problem is open with regard to solution strategies. Some tasks can be classified as *(un)clear tasks*, which involves both a subjective and an objective component. The subjective component means that the perceived clarity depends on the students’ competencies and on the regulations that enable a student to gain further clarification. The objective component refers to whether task-specific information can only be tapped with limited accuracy, even by experts with the best tools and media, as with some Fermi problems (see Ärlebäck and Bergsten 2010).

The subclass of *task regulations* pertains to rules set by designers, teachers or examiners. One such regulation is whether or not group work is allowed, and whether students perform the work independently or may consult with others (including experts). Also, we can consider whether students have ample time to explore or be creative, or whether they are subject to a regime of time restrictions. When a task is used in a high-stakes test, there will be pressure on students to find the answer that an authority will judge as ‘correct’. A task can also be geared towards the application of a certain formal mathematical concept; often, such a task asks for theorems or algorithms that were recently taught in lessons. The *Maypole Task* is a typical task on applying Pythagoras’ theorem, although one could conceivably estimate the length of the ribbon based on experience, on a drawing or on a role play. In this subclass, we considered that the greater the students’ independence and ownership, the better the task aligns with the spirit of mathematical modelling.

## 4 Results

Using the criteria and descriptions presented in Sect. 41.3, the three example tasks, the *Maypole*, *Glider* and *Algal Bloom Tasks*, were rated by the authors within the scope of a qualitative research process. Based on the literature, we had reached a classification with five classes, of which the first, regarding mathematical topic and concepts, cannot be rated. The other four classes had theoretically derived subclasses, and these were ordinal and thus amenable to quantification. At the end of the process, the example tasks were rated for each subclass. In the few cases in which the raters disagreed, they discussed the issue until agreement was reached on a common, final rating. This resulted in values assigned to the example tasks for each of the above-mentioned criteria (see Table 41.1).

The *Maypole Task* mainly focuses on a few modelling activities, and the digital tools and media play a subordinate role, with little openness regarding approaches, tool use or possible answers. The *Glider Task* shows a higher overall potential in terms of both modelling and digital tools and media; its modelling properties are particularly noticeable. Finally, the *Algal Bloom Task*, which many in the ICTMA community will consider the only ‘real’ mathematical modelling task of the three, scores highly overall.

A visual way to represent the classification of an individual ICT-based modelling task is a net diagram. The lowest value of each scale lies at the centre of the diagram, so the potential of each example is directly apparent: a larger area indicates greater potential. The classes on the left generally express the use of digital media, and the classes on the right generally express the modelling potential (Fig. 41.3).
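The ‘larger area indicates greater potential’ reading can be made concrete. The following minimal Python sketch computes the polygon area that a rating profile traces on a net diagram with equally spaced axes; the rating lists are hypothetical illustrations, not the values actually assigned in Table 41.1.

```python
import math

def radar_area(ratings):
    """Area of the polygon a rating profile traces on a net (radar) diagram
    with equally spaced axes and the lowest value at the centre."""
    n = len(ratings)
    if n < 3:
        raise ValueError("a net diagram needs at least three axes")
    wedge = math.sin(2 * math.pi / n)  # area factor of each triangular slice
    return 0.5 * wedge * sum(ratings[i] * ratings[(i + 1) % n] for i in range(n))

# Hypothetical profiles over 13 ordinal subclasses (rated 0-4); see Table 41.1
# for the ratings actually assigned to the three example tasks.
maypole_like = [1, 1, 0, 1, 2, 1, 1, 1, 1, 0, 1, 1, 1]
glider_like = [3, 3, 2, 4, 3, 3, 2, 3, 4, 3, 3, 2, 3]
print(radar_area(glider_like) > radar_area(maypole_like))  # larger area for the stronger profile
```

Note that the area depends on the ordering of the subclasses around the diagram, so such a number can only support an indicative comparison, in line with the observation below that the criteria cannot be weighted unambiguously.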

The diagrams show that the modelling potential of the *Glider Task* is strong, whereas the *Maypole Task* is limited in every class. Due to space limitations, we could not include the diagram of the *Algal Bloom Task*, which was a nearly regular tridecagon.

## 5 Discussion, Conclusion, Recommendations

In applying the classification to the example tasks, we obtained a plausible evaluation of their quality. We see strengths in this approach, but acknowledge its limitations: further tasks would need to be evaluated by more raters to confirm the validity of the classification scheme and the reliability of the ratings. The three selected examples already show that there can be no unambiguous weighting of the different criteria, since the criteria for the modelling and the criteria for the digital tools and media describe different facets of the tasks, which cannot be compared directly to one another. One might observe that tasks with a high modelling potential seem to be characterised by a high degree of authenticity and relevance and by a variety of different modelling sub-competencies. On the other hand, there are interactive multimedia tasks that perform more strongly on certain aspects relating to digital tools (e.g. CAS or DGS) and digital media (e.g. video). Weaknesses in the modelling classes cannot be compensated by strengths in tool and media use, and vice versa. The classification system shows that all these task aspects are difficult to compare, and that one needs a multi-dimensional view in describing, comparing and evaluating such tasks.

Our classification system refines earlier classifications with respect to the use of digital tools and media. It also assists in analysing ICT-based mathematics tasks from the perspective of mathematical modelling education. A strength of the system is that it reveals possible ways to improve the quality of an ICT-based modelling task and how tasks can be improved by making suitable use of digital tools and media (like tasks embedded in virtual worlds). Our classification also shows that some tasks gain little from the use of tools and media, like the *Maypole Task* (see Fig. 41.1). Other modelling tasks could not exist without digital tools and media: in the *Glider Task* (see Fig. 41.2), the digital medium offers information that could not otherwise be given. Finally, we note that future developments in task design will undoubtedly require new criteria to extend the classification system.

## References

Ärlebäck, J. B., & Bergsten, C. (2010). On the use of realistic Fermi problems in introducing mathematical modelling in upper secondary mathematics. In R. Lesh, P. L. Galbraith, C. R. Haines, & A. Hurford (Eds.), *Modeling students’ mathematical modeling competencies* (pp. 597–609). New York: Springer.

Blomhøj, M., & Jensen, T. H. (2007). What’s all the fuss about competencies? In W. Blum, P. L. Galbraith, H.-W. Henn, & M. Niss (Eds.), *Modelling and applications in mathematics education: The 14th ICMI study* (pp. 45–56). New York: Springer.

Drijvers, P., Ball, L., Barzel, B., Heid, M. K., Cao, Y., & Maschietto, M. (2016). *Uses of technology in lower secondary mathematics education*. Cham: Springer.

Frejd, P. (2011). An investigation of mathematical modelling in the Swedish national course tests in mathematics. In M. Pytlak, T. Rowland, & E. Swoboda (Eds.), *Proceedings of the Seventh Congress of the European Society for Research in Mathematics Education (CERME 7)* (pp. 947–956). Rzeszów, Poland: University of Rzeszów & ERME.

Geiger, V., & Redmond, T. (2013). Designing mathematical modelling tasks in a technology rich secondary school context. In C. Margolinas (Ed.), *Task design in mathematics education* (Vol. 1, pp. 121–130). Oxford: ICMI, HAL.

Greefrath, G., Siller, H. S., & Ludwig, M. (2017). Modelling problems in German grammar school leaving examinations (Abitur) - Theory and practice. In T. Dooley & G. Gueudet (Eds.), *Proceedings of the Tenth Congress of the European Society for Research in Mathematics Education (CERME 10)* (pp. 932–939). Dublin, Ireland: DCU Institute of Education and ERME.

Maaß, K. (2010). Classification scheme for modelling tasks. *Journal für Mathematik-Didaktik, 31*(2), 285–311.

Molina-Toro, J. F., Rendón-Mesa, P. A., & Villa-Ochoa, J. (2019). Research trends in digital technologies and modeling in mathematics education. *EURASIA Journal of Mathematics, Science and Technology Education, 15*(8), 1–13.

Monaghan, J. (2016). Tools and mathematics in the real world. In J. Monaghan, L. Trouche, & J. M. Borwein (Eds.), *Tools and mathematics* (pp. 333–356). Berlin, Germany: Springer.

OECD. (2013). *PISA 2012 assessment and analytical framework: Mathematics, reading, science, problem solving and financial literacy*. Paris, France: OECD.

Rellensmann, J., & Schukajlow, S. (2017). Does students’ interest in a mathematical problem depend on the problem’s connection to reality? *ZDM—Mathematics Education, 49*(3), 367–378.

Shute, V. J. (2008). Focus on formative feedback. *Review of Educational Research, 78*(1), 153–189.

Vos, P. (2013). Assessment of modelling in mathematics examination papers: Ready-made models and reproductive mathematising. In G. A. Stillman, G. Kaiser, W. Blum, & J. P. Brown (Eds.), *Teaching mathematical modelling: Connecting to research and practice* (pp. 479–488). Dordrecht: Springer.

Vos, P. (2018). “How real people really need mathematics in the real world”—Authenticity in mathematics education. *Education Sciences, 8*(4), 195. https://doi.org/10.3390/educsci8040195.

Williams, J., & Goos, M. (2012). Modelling with mathematics and technologies. In M. A. Clements, A. Bishop, C. Keitel-Kreidt, J. Kilpatrick, & F. K. S. Leung (Eds.), *Third international handbook of mathematics education* (pp. 549–569). New York, NY: Springer.

## Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

## About this chapter

### Cite this chapter

Greefrath, G., Vos, P. (2021). Video-based Word Problems or Modelling Projects—Classifying ICT-based Modelling Tasks. In: Leung, F.K.S., Stillman, G.A., Kaiser, G., Wong, K.L. (eds) Mathematical Modelling Education in East and West. International Perspectives on the Teaching and Learning of Mathematical Modelling. Springer, Cham. https://doi.org/10.1007/978-3-030-66996-6_41
