## 1 Introduction

All over the world, mathematical modelling is entering mainstream mathematics education, not just in classroom activities, but also in curricula and assessments (Frejd 2011; Vos 2013). Simultaneously, education is embracing digital tools and media, and the combination of the two has led to a wide variety of mathematical modelling tasks (Drijvers et al. 2016). On the one hand, there are open-ended modelling research projects within technology-rich environments; on the other hand, there are tasks that are questionable to label as ‘modelling tasks’, yet allow students to use digital tools (e.g. CAS, DGS). In this chapter, we first explore the wide variety between these extremes. Then, we review existing classifications of modelling tasks and add aspects specific to ICT-based modelling tasks. The purpose of such a classification is to enable a sound evaluation of the quality of ICT-based modelling tasks. We validate the new classification by applying it to three example tasks. A visualisation based on this classification allows us to describe the strengths and weaknesses of a given ICT-based mathematical modelling task.

## 2 Examples of ICT-Based Modelling Tasks

The Glider Task (see Fig. 41.2) includes a movie showing the take-off of a glider plane, together with the displays of the altimeter, indicating the altitude, and the variometer, indicating the rate of climb or descent. Just like in the Maypole Task, this movie shows an authentic situation, as demonstrated by details such as the dirt on the windscreen and the tiny features on the horizon. The students are asked to compare the two displays. They can move the video playback back and forth to explore the situation. The openness regarding both the approach to the task and the final answer invites students’ discussion, which makes the task potentially suitable for group work. It is anticipated that students will recognise a connection between the two displays that will lead them to develop an intuitive and informal, yet meaningful understanding of the derivative function.
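
The connection students are expected to recognise can be made explicit (the notation here is ours, not part of the task): if $h(t)$ denotes the altitude shown on the altimeter at time $t$, then the variometer displays the instantaneous rate of change

$$
v(t) = h'(t) = \frac{\mathrm{d}h}{\mathrm{d}t},
$$

so a positive variometer reading corresponds to a rising altitude curve, and a reading of zero to a stationary point of the altitude.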

The Algal Bloom Task from Geiger and Redmond (2013) is a project-based task in a technology-rich environment. It starts from a large, authentic data set on the CO2 concentration in the Darling River, together with explanations about algae blooming, sunlight deprivation and the potential death of all life in the river. The question asks whether the present data are a cause for concern. This open-ended task allows for various approaches, does not target a single correct answer and is covered in two lessons, in which students work in pairs.

In this chapter, we will not discuss how digital tools and media shape and change communication, organisation, cognitive levels and other aspects of modelling activities. For this, we refer to recent research (e.g. Molina-Toro et al. 2019; Monaghan 2016; Williams and Goos 2012). Also, we will not study whether or not the use of digital tools and media within tasks leads to new modelling activities, more realistic contexts, more intense group work and so forth. Rather, we will focus exclusively on mathematical modelling tasks in which digital tools and media are integrated. Since we want to describe, compare and evaluate these tasks, we need criteria that capture important aspects across them. Our study was guided by the following questions: (1) Which criteria are suitable to describe and compare ICT-based modelling tasks? (2) How can we classify and evaluate the qualities of different ICT-based modelling tasks? To answer these, we studied the literature. After several rounds of adapting and improving, we formulated a classification system. We used the three tasks described above to validate the classification and to evaluate their qualities.

## 3 Classification of ICT-Based Modelling Tasks

A comprehensive classification tailored to modelling tasks was created by Maaß (2010). It caters for a wide variety of modelling tasks and gives us a base to build on (see below). However, it does not include the use of tools and media. For this, we draw on a description of digital aspects within modelling tasks by Geiger and Redmond (2013), although these authors only considered open, project-based tasks in rich digital environments, not less open tasks. So, we started from the classification system by Maaß (2010), restructured it and obtained five main classes. The first class pertains to the mathematics needed to solve the task, such as the topic (e.g. geometry) and the concepts (e.g. angles, perimeter). This class is fundamental to any modelling task. The other four classes are explained below. Since we aimed for a classification that would enable a comparison across tasks, as well as an evaluation of qualities, we focused on developing classes that could be rated for higher or lower quality. Only the first class, regarding mathematical topic and concepts, cannot be rated. A summary of the classes, subclasses and ratings is presented at the end of this chapter.

### 3.1 Modelling Tasks Without Considering Digital Tools and Media

Starting from Maaß (2010) and OECD (2013), we found classes for mathematical tasks describing the competencies required to solve a task (e.g. reasoning, representing). In some classifications of competencies, mathematical modelling is a subclass among other mathematical activities (e.g. Blomhøj and Jensen 2007). However, many mathematical activities can alternatively be perceived as sub-activities within mathematical modelling. In this chicken-and-egg dilemma, we chose the latter perspective, namely to view any given mathematical activity as potentially being a subclass of mathematical modelling, in particular as part of ‘working mathematically’. In our classification, we included this class, with a rating of 0–4 for the number of competencies needed to solve the task. Another class from Maaß (2010) distinguishes between holistic modelling (students undertake the whole process) and atomistic modelling (students undertake a partial process, like only setting up the real model). We adapted this class by rating the number of modelling activities in which students are asked to engage. Also, we included a class from Maaß (2010) regarding the information given in a task: superfluous (making for an overdetermined task), missing (underdetermined task), inconsistent (both over- and underdetermined) and matching.

In her classification, Maaß (2010) had three further classes, but these needed reconsideration when looking through the lens of a classification of ICT-based modelling tasks. One class was ‘nature of the relationship to reality’, which needed adaptation when considering virtual worlds, which have their own digital reality. The second class needing reconsideration was ‘type of representation’, which described texts and pictures, but not animations, video or other interactive representations. The third class pertained to openness (in solution methods), which we shall extend to openness to tools. We will return to these below.

### 3.2 Aspects of the Task Context Within ICT-Based Modelling Tasks

In this class, we assert that a modelling task always contains a context with some problem that needs to be tackled mathematically. A first subclass here is the reality reference of the task context, that is, the way the context is presented relative to the actual real world. For example, a task context can be designed as intentionally artificial (e.g. to simplify it for students). An artificial context can be perceived as a digital reality, as in games. If the task context is closer to the real world of humans, it can be realistic when it is experientially real and imaginable for students, even if it does not convincingly originate from real life. Where the presentation of the context contains evidence of its genuine existence, for example through a video, the task context, or parts of it, can be authentic (Vos 2018).

Regarding the task context, we also include its representation. This can be text, diagrams or pictures, which are static. A video can be played back, thus offering some interactivity. We can also imagine interactive animations that offer students the possibility to explore the situation, for example through sliders to manipulate variables.

### 3.3 Aspects of the Digital Tool or Medium Within ICT-Based Modelling Tasks

In this chapter, we use the term digital tools and media as shorthand for overlapping terms like ICT, digital technology, digital environments, digital worlds, digital products and so forth. There are some ambiguities in these terms. For example, a video is generally perceived as a medium, but it can also be a tool for a designer to explain a task context, or a product created by students to report on their modelling project. On the one hand, there are digital tools for students to solve the task, like pocket or graphical calculators, CAS, DGS, spreadsheets, Wikipedia and so forth. Here, we can also look at how the use of tools and media is regulated (openness of tool use): a task designer can encourage or restrict the students’ use of a certain digital tool or medium (“solve this task using CAS”). On the other hand, digital tools and media are also available to designers, teachers and examiners, who can use them for the presentation of a task, but also to administer students’ activities (logging answers) or for evaluation purposes. When a task is offered within a digital environment, there can be different types of feedback: a short response (right/wrong), or more elaborate feedback providing ‘an explanation about why a specific response was correct or not’ (Shute 2008, p. 160). The tool or medium can also allow a designer to frame the timing of the feedback (immediate or delayed).

### 3.4 Students’ Anticipated Activities and Task Regulations

Any task designer, teacher or examiner will anticipate certain student activities that a given task is expected to trigger. However, many mathematical modelling tasks are open towards approaches, towards the use of tools and media, or towards different interpretations of contexts or answers. This implies that designers, teachers and examiners cannot (and need not) fully foresee what students will do. Nevertheless, we included a subclass for students’ anticipated activities, in which the variation in students’ activities is rated, while acknowledging that rating this quantitatively is somewhat subjective.

The subclass of task regulations pertains to rules set by designers, teachers or examiners. One such regulation is whether or not group work is allowed and whether students perform the work independently or may consult others (including experts). Also, we can consider whether students have ample time to explore or be creative, or whether they are subject to a regime of time restrictions. When a task is used in a high-stakes test, there will be pressure on students to find the answer that an authority will judge as ‘correct’. A task can also be geared towards the application of a certain formal mathematical concept. Oftentimes, such a task asks for theorems or algorithms that were recently taught in lessons. The Maypole Task is a typical task on applying Pythagoras’ theorem, although one could conceivably estimate the length of the ribbon based on experience, on a drawing, or on a role play. In this subclass, we considered that the greater the students’ independence and ownership, the better the task aligns with the spirit of mathematical modelling.
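
To illustrate the intended formal route, a minimal Python sketch of the Pythagoras approach; the dimensions are hypothetical, since the task’s actual numbers are not reproduced here:

```python
import math

# Hypothetical dimensions -- the Maypole Task's actual values are not given here.
pole_height = 10.0    # metres, from the ribbon's attachment point down to the ground
ground_radius = 6.0   # metres, from the foot of the pole to where the ribbon is held

# The intended approach applies Pythagoras' theorem:
# the taut ribbon is the hypotenuse of a right triangle.
ribbon_length = math.hypot(pole_height, ground_radius)
print(round(ribbon_length, 2))  # → 11.66
```

As noted above, students could just as well estimate the length from experience, a scale drawing or a role play; the point of this subclass is how strongly the task steers them towards the formal route.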

## 4 Results

Using the criteria and descriptions presented in Sect. 41.3, the three example tasks (Maypole, Glider and Algal Bloom) were rated by the authors within the scope of a qualitative research process. Based on the literature, we had arrived at a classification with five classes, of which the first, regarding mathematical topic and concepts, cannot be rated. The other four classes had theoretically derived subclasses, and these were ordinal and thus quantifiable. At the end of the process, the example tasks were rated for each subclass. In the few cases where the raters disagreed, they discussed the issue until agreement was reached on a common, final rating. This resulted in values assigned to the example tasks for each of the above-mentioned criteria (see Table 41.1).

The Maypole task mainly focuses on a few modelling activities, and the digital tools and media play a subordinate role, with little openness regarding approaches, tool use or possible answers. The Glider task shows a higher overall potential in terms of both modelling and digital tools and media; its modelling properties are particularly noticeable. Finally, the Algal Bloom task, which many in the ICTMA community will consider the only ‘real’ mathematical modelling task of the three, scores highly overall.

A net diagram offers a visual way to represent the classification of an individual ICT-based modelling task. The lowest value of each scale lies at the centre of the diagram, so the potential of each example task is directly apparent: a larger enclosed area indicates greater potential. The classes on the left generally express the use of digital media, and the classes on the right generally express the modelling potential (Fig. 41.3).
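
The ‘larger area indicates greater potential’ reading can be made precise: with $n$ subclasses, rating $r_i$ is plotted on the $i$-th spoke at angle $2\pi i/n$, and adjacent spokes enclose a triangle of area $\tfrac{1}{2} r_i r_{i+1} \sin(2\pi/n)$. A minimal Python sketch, using hypothetical ratings rather than the actual values from Table 41.1:

```python
import math

def net_diagram_area(ratings):
    """Area of the polygon spanned by subclass ratings on a net diagram.

    Rating r_i is plotted at angle 2*pi*i/n; each pair of adjacent spokes
    encloses a triangle of area 0.5 * r_i * r_{i+1} * sin(2*pi/n).
    """
    n = len(ratings)
    angle = 2 * math.pi / n
    return 0.5 * sum(
        ratings[i] * ratings[(i + 1) % n] * math.sin(angle)
        for i in range(n)
    )

# Hypothetical ratings (0-4) across 13 subclasses, NOT the values of Table 41.1:
maypole = [1, 1, 0, 1, 2, 1, 0, 1, 1, 0, 1, 1, 1]
algal   = [4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4]

print(net_diagram_area(maypole) < net_diagram_area(algal))  # → True
```

The area is only a summary measure; the shape of the polygon still shows *where* a task is strong (e.g. left-hand digital classes versus right-hand modelling classes).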

The diagrams show that the modelling potential of the Glider task is strong, whereas the Maypole task is limited in every class. Due to space limitations, we could not include the diagram of the Algal Bloom task, which was a nearly regular tridecagon.