Key didactic aspects of mathematical modelling (see Sects. 1.1, 1.2 and 1.3), in particular the possibilities of characterising and acquiring modelling competence (see Sects. 1.1.3 and 1.4), were explained in the previous sections. Against this background, the important role that teachers play in the development of their students’ competences was highlighted and the professional competence required for this was outlined (see Sect. 2.2). In this context, different perspectives and conceptualisations were considered, taking into account two large-scale national and international empirical studies on teacher competences (see Sect. 2.2.2). The subsequent presentation focused on the competence model of the COACTIV study, which, together with the competence dimensions according to Borromeo Ferri and Blum (2010; see Sect. 2.3), formed the basis for the structural model of modelling-specific professional competence (Klock et al., 2019; Wess et al., 2021; see Sect. 2.4).

This theoretical account of the underlying forms and structures of knowledge is the prerequisite for capturing teachers’ specific knowledge and skills as well as the corresponding affective-motivational aspects. Until then, however, there had hardly been any preliminary work on a theory-based assessment of teachers’ modelling-specific competence in which an instrument was developed on a theoretical basis (Borromeo Ferri & Blum, 2010; Maaß & Gurlitt, 2011). In parallel to the structural model of professional competence for teaching mathematical modelling described in Sect. 2.4, a related test instrument (see Sect. 3.5) was therefore developed; its construction is described below.

1 Test Development

The operationalisation of the structural model followed the principles of rational and effective test construction (Bühner, 2011; Burisch, 1984; Downing & Haladyna, 2011). In this process, an instrument (in German; Klock & Wess, 2018) was developed to capture aspects of professional competence for teaching mathematical modelling. The item construction was based on the individual components of the structural model (see Sect. 2.4). For the first version of the instrument, four scales were designed for self-reported previous experiences (15 items), four scales for beliefs (32 items), two scales for self-efficacy expectations (24 items) and four scales for modelling-specific pedagogical content knowledge (103 items). Two of the latter scales are recorded using task examples with case-based text vignettes that address the underlying facets in different non-mathematical contexts and learning situations. All items in the test have a closed format: Likert scales are used to capture the self-reported previous experiences, beliefs and self-efficacy expectations, while combined true-false, multiple-choice and mapping tasks are used to capture the modelling-specific pedagogical content knowledge.

The items of this first test version were checked in a small sample (N = 8) as part of a qualitative pre-pilot with a subsequent discussion, the aim being to revise incomprehensible or imprecise items. The test was then quantitatively evaluated with a small sample (N = 66), and critical items were excluded on the basis of statistical parameters and didactic considerations. As a result, the scales were reduced to 15 items for the self-reported previous experiences, 16 items for beliefs, 24 items for self-efficacy expectations and 71 items for modelling-specific pedagogical content knowledge.

Finally, the instrument was comprehensively piloted with a random sample of pre-service teachers from several German universities (N = 156) in various courses (Klock & Wess, 2018). Excerpts from the related confirmatory factor analysis of the underlying competence model, as well as from a replication study with a larger sample (N = 349) confirming the assumed model structure, were presented in Sect. 2.4.4. For a more detailed account, please refer to the contributions by Klock et al. (2019) and Wess et al. (2021). In addition, a classification of the results of the replication study with respect to general standards in the form of quality criteria for quantitative test instruments can be found in Part 4 of this book.

The test duration including instructions is approximately 70 min, of which the maximum processing time is 60 min. The test is completed individually but can be administered in group settings. The test books are filled in anonymously; a personal code generated at the beginning of each test book makes it possible to link the test sheets of individual participants in studies with multiple measurement points.

To provide an insight into the test book for capturing professional competence for teaching mathematical modelling, the test items are described below and explained in detail using individual examples. All contents of the instrument were designed with reference to item formulation guidelines (Bühner, 2011; Impara & Foster, 2011). The complete instrument with all tasks can be found in the attached test book (see Sect. 3.5); the corresponding solutions are given in the Appendix.

2 Operationalisation of Test Items: First Test Part

The test begins with the generation of the personal code for the anonymised assignment of participants (e.g. in pre-post designs) and some brief questions on general information (gender, age, school-leaving examination grade, last mathematics grade, second subject and semester). In the first part of the test, this is followed by the items on self-reported previous experiences, beliefs and self-efficacy expectations regarding mathematical modelling, before the focus shifts to the individual facets of modelling-specific pedagogical content knowledge (see Sect. 3.3). The former are recorded using a five-point Likert scale with the categories “strongly disagree” (scored 1), “disagree” (scored 2), “neutral” (scored 3), “agree” (scored 4) and “strongly agree” (scored 5). Five points were chosen to provide participants with a middle category rather than forcing them to take a position on the statement, and to allow them to express their degree of agreement in a differentiated manner by means of a verbal gradation (Bühner, 2011; Reckase, 2000).
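As a minimal, hedged sketch of how the five verbal Likert categories described above might be coded numerically for analysis (the mapping follows the 1–5 scoring in the text; all names are illustrative and not part of the instrument):

```python
# Sketch: numeric coding of the five-point Likert scale described above.
LIKERT_SCORES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_response(label: str) -> int:
    """Return the numeric score (1-5) for a verbal Likert response."""
    return LIKERT_SCORES[label.strip().lower()]

print(code_response("agree"))  # -> 4
```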

2.1 Self-reported Previous Experiences in Mathematical Modelling

Short scales were used to capture the self-reported previous experiences in mathematical modelling, with three or six items each assessing self-evaluations of different areas of experience. For this purpose, an item pool was developed, which led to the formation of four scales after an exploratory factor analysis. Items 2.2, 2.4, 2.6, 2.8, 2.10 and 2.13 form the “Teaching and preparing for mathematical modelling” scale, items 2.1, 2.5 and 2.11 the “Treatment of mathematical modelling” scale, items 2.7, 2.12 and 2.15 the “Modelling tasks” scale and items 2.3, 2.9 and 2.14 the “Modelling in the classroom” scale. All items are positively worded, so no reverse-scoring is required. Sample items are given in Table 3.1.

Table 3.1 Sample items for self-reported previous experience in mathematical modelling

The “Treatment of mathematical modelling” scale measures the extent to which mathematical modelling has generally played a role in the pre-service secondary school teachers’ courses so far. No focus is placed on particular aspects, so both subject-specific and didactic courses are included in the assessment.

The “Teaching and preparing for mathematical modelling” scale measures the degree to which the pre-service teachers have been prepared, and feel prepared, for teaching mathematical modelling. It includes both items assessing the extent to which the development of students’ modelling competence has played a role in the courses completed so far and items assessing the participants’ own competence in this area.

The “Modelling tasks” scale records whether modelling tasks have already been worked on in previous courses. It gives an impression of the extent to which the pre-service teachers have gained experience with modelling tasks and acquired modelling competence themselves. Both subject-specific and didactic courses can contribute to the scale value.

A final scale, “Modelling in the classroom,” records whether the pre-service teachers already have experience with modelling in the classroom. It covers not only teaching experience gained at school and during internships but also extracurricular activities, such as tutoring.

2.2 Beliefs in Mathematical Modelling

In conceptualising beliefs about mathematical modelling, as described in Sect. 2.4.2, epistemological beliefs and beliefs about teaching and learning mathematics are distinguished (Woolfolk Hoy et al., 2006). In addition, beliefs about learning (Voss et al., 2013), that is, constructivist and transmissive beliefs, contribute to the operationalisation of beliefs about mathematical modelling. Beliefs about mathematical modelling are therefore operationalised with the help of two scales, a constructivist and a transmissive one, whereby the scale of constructivist-oriented beliefs can be divided into three theoretical sub-scales.

Epistemological beliefs generally refer to the structure and genesis of knowledge (Buehl & Alexander, 2001) and have been operationalised for mathematics teaching in terms of the formalism aspect, the application aspect, the process aspect and the schema aspect (see Sect. 2.4.2). Because of their reference to applications, items on the application aspect are suitable for operationalising epistemological beliefs about mathematical modelling. The items record the extent to which mathematical modelling is regarded as useful in everyday life or for society (see Table 3.2). This led to the scale “Beliefs for the application of mathematical modelling,” which consists of items 3.2, 3.5, 3.12 and 3.15. Item 3.5 is negatively worded and must be reverse-scored for scaling.

Table 3.2 Sample items on beliefs about mathematical modelling

Beliefs about teaching and learning mathematics are operationalised, with regard to mathematical modelling, in terms of teachers’ educational aims (see Sect. 2.4.2). The corresponding scale records the participants’ agreement with statements that grant mathematical modelling a legitimate place in mathematics education and consider it important to promote modelling competence (see Table 3.2). Items 3.1, 3.3, 3.9 and 3.13 form the “Mathematical modelling in the classroom” scale.

To record beliefs about learning in their constructivist and transmissive forms, existing items from Staub and Stern (2002) were used, as they had also been employed in the COACTIV study (Voss et al., 2013). Items on constructivist beliefs represent the view that students should discover their own ways of solving tasks, work independently and discuss their solution ideas (see Table 3.2). Items on transmissive beliefs represent the view that teachers should teach detailed procedures and provide solution schemes even for application tasks. These beliefs were included because constructivist views are associated with positive beliefs about modelling, whereas transmissive views tend to be accompanied by negative beliefs about modelling (Kuntze & Zöttl, 2008; Schwarz et al., 2008). The two types of beliefs cannot be understood as opposing poles of a single dimension; rather, they are separate, negatively correlated constructs, as empirically shown by Voss et al. (2013). For this reason, two scales were formed: the “Constructivist beliefs” scale is based on items 3.3, 3.8, 3.11 and 3.14, whereas the “Transmissive beliefs” scale is based on items 3.6, 3.7, 3.10 and 3.16.

2.3 Self-efficacy Expectations for Mathematical Modelling

The self-efficacy expectations for mathematical modelling are conceptualised in Sect. 2.4.3 on the basis of beliefs about one’s own effectiveness in diagnosing performance potential in mathematical modelling processes. For the operationalisation, items were developed that require a self-assessment of one’s own effectiveness in recognising students’ skills in the phases of the modelling process (see Sect. 1.1.2) on the basis of written results. Positively and negatively worded items were created for each phase.

In the factor analysis, two one-dimensional scales emerged; sample items are shown in Table 3.3. Items 4.1, 4.4, 4.9, 4.10, 4.13, 4.14, 4.15, 4.16, 4.17, 4.18, 4.21, 4.22 and 4.23, which relate to the modelling phases of simplifying/structuring, mathematising, interpreting and validating, form the “Self-efficacy expectations for mathematical modelling” scale, since these are specific phases with specific diagnostic requirements in the modelling process (see Sect. 2.4.3). Items 4.5, 4.7, 4.8, 4.11, 4.12, 4.19, 4.20 and 4.24, which concern the diagnosis of written results of mathematical work, form the “Self-efficacy expectations for mathematical work” scale, since they are empirically distinct from the items for the other phases of the modelling process. Items 4.13 to 4.24 are negatively worded and must be reverse-scored before scaling. Not all items in the test book were included in a scale: items 4.2, 4.3 and 4.6 could not be assigned by the factor analysis.
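A minimal, hedged sketch of the scoring just described: the 6 − x reversal follows from the 1–5 coding of the five-point scale, and items 4.13–4.24 are the negatively worded ones named above; function and variable names are illustrative, not taken from the instrument.

```python
# Sketch: reverse-score negatively worded items on the 1-5 scale and form a scale mean.
REVERSE_ITEMS = {f"4.{i}" for i in range(13, 25)}  # items 4.13-4.24

def score_item(item_id: str, raw: int) -> int:
    """Reverse-score a 1-5 response (6 - raw) if the item is negatively worded."""
    return 6 - raw if item_id in REVERSE_ITEMS else raw

def scale_mean(responses: dict, item_ids: list) -> float:
    """Mean of the (possibly reverse-scored) responses that form one scale."""
    scores = [score_item(i, responses[i]) for i in item_ids]
    return sum(scores) / len(scores)

# Example: "strongly agree" (5) on the negatively worded item 4.13 becomes 1.
print(score_item("4.13", 5))  # -> 1
```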

Table 3.3 Sample items for self-efficacy expectations for mathematical modelling

3 Operationalisation of Test Items: Second Test Part

In the second part of the test, the facets of modelling-specific pedagogical content knowledge are assessed using combined true-false, multiple-choice and mapping tasks (see Sect. 2.4.1). This approach provides an economical and adequate test of these facets by reducing processing, evaluation and solution time while at the same time reducing the probability of guessing correctly (Bühner, 2011; Ebel & Frisbie, 1991; Impara & Foster, 2011). Alternatively, the guessing probability could have been modelled as an additional parameter in the evaluation using a 3PL model. Owing to the lack of model validation tests and the lack of specific objectivity of the 3PL model, however, this approach was not pursued.
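For reference, the 3PL model mentioned above is, in its standard formulation (not part of the instrument description), a model of the probability of a correct response to item $i$ as a function of the latent ability $\theta$, with discrimination $a_i$, difficulty $b_i$ and a pseudo-guessing parameter $c_i$:

$$P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}$$

The parameter $c_i$ is what would have absorbed the guessing probability; as stated above, this route was not taken.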

Compared to closed answer formats, open or semi-open response formats would have required coding of each individual answer, which would have reduced the objectivity of the evaluation. Another test instrument currently being developed to assess competences for teaching mathematical modelling also uses closed item formats; in this context, Borromeo Ferri (2019) likewise describes the problems with open item formats mentioned above. The difficulty of dichotomous evaluation was addressed in all knowledge scales by a normative determination of the correct answers based on a multi-stage expert survey as well as by a theoretical foundation in current results of didactic research on mathematical modelling.

3.1 Knowledge about Modelling Tasks

To assess knowledge about modelling tasks, combined true-false items are used to record, among other things, the sub-facets characteristics, development and processing of modelling tasks. For this purpose, three items each are combined and evaluated together: a task is rated as correct only if all three items have been answered correctly. The corresponding scales consist of items 5.1.[1-4], 5.2.[1-4] and 5.3.[1-4].
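A minimal sketch of this all-or-nothing scoring rule (the answer key shown corresponds to the example in Fig. 3.1 discussed below; function and variable names are illustrative):

```python
# Sketch: a combined true-false task is scored 1 only if all three statements
# are answered correctly, otherwise 0.
def score_combined_tf(responses, key):
    """Dichotomous score for one combined true-false task (three statements)."""
    assert len(responses) == len(key) == 3
    return int(all(r == k for r, k in zip(responses, key)))

key_fig_3_1 = [True, True, False]  # key of the example in Fig. 3.1 (see below)
print(score_combined_tf([True, True, False], key_fig_3_1))  # -> 1 (correct)
print(score_combined_tf([True, True, True], key_fig_3_1))   # -> 0 (one error)
```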

The example shown (see Fig. 3.1) addresses basic characteristics of modelling tasks. Modelling tasks can be both overdetermined and underdetermined. An example of an overdetermined task is the Fire-brigade Task (Blum, 2011), which requires only some of the given information for its solution. The reverse is also possible, where the task does not contain all the information needed to solve it. An example of such an underdetermined task is the Lighthouse Task (Borromeo Ferri, 2010), where the missing information (such as the Earth’s radius) must be determined using everyday knowledge, estimates or research. The first two statements are therefore “true,” while the last statement is considered “false,” since openness is an essential feature of (good) modelling tasks (see Sect. 1.3).

Fig. 3.1 Combined multiple-choice sample item regarding knowledge about modelling tasks

Another sub-facet of knowledge about modelling tasks, the analysis and classification of modelling tasks with regard to an appropriate catalogue of criteria, can only be measured in conjunction with specific requirement situations that allow reality-related tasks to be classified. This is achieved by assigning or reordering tasks that are also used to assess knowledge about modelling processes and knowledge about interventions (see Sect. 3.3.3). This item format allows an economical assessment of knowledge structures, cause-effect relationships or abstraction skills with a low probability of guessing (Bühner, 2011). Specifically, items 8.1, 8.2, 8.3, 8.4 and 8.5 require four of the aforementioned modelling tasks to be analysed with regard to theoretically sound criteria for modelling tasks (see Sect. 1.3.3) and to be ranked accordingly. An assignment is considered correct if one of the two orderings determined by the multi-stage expert survey has been given. The example (see Fig. 3.2) deals with the criterion of openness: the modelling tasks presented are to be entered into the columns of the table in order of increasing openness from left to right. The orderings “(3)(2)(4)(1)” and “(3)(2)(1)(4)” are rated as correct and coded as 1, while the remaining 22 of the 24 possible orderings are rated as incorrect and coded as 0.
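A hedged sketch of the scoring of this ranking item, using the two accepted orderings stated above (the function name is illustrative):

```python
from itertools import permutations

# Sketch: exactly two of the 4! = 24 possible left-to-right orderings of the
# four tasks are accepted and coded as 1; all others are coded as 0.
ACCEPTED_ORDERINGS = {(3, 2, 4, 1), (3, 2, 1, 4)}

def score_ranking(ordering):
    """Return 1 if the given ordering matches one of the accepted orderings."""
    return int(tuple(ordering) in ACCEPTED_ORDERINGS)

# Sanity check: 2 of the 24 permutations are scored as correct.
print(sum(score_ranking(p) for p in permutations((1, 2, 3, 4))))  # -> 2
```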

Fig. 3.2 Sample item for assigning and/or reorganising regarding knowledge about modelling tasks

3.2 Knowledge about Concepts, Aims and Perspectives

Knowledge about concepts, aims and perspectives of mathematical modelling is assessed exclusively using multiple-choice items. These measure selected aspects of theoretical background knowledge, such as knowledge about modelling cycles and their suitability for different purposes, or about the different perspectives of research on mathematical modelling (Kaiser & Sriraman, 2006).

The example in Fig. 3.3 covers the sub-facet of modelling cycles. The task is scored as correct if the first answer option has been ticked, which explicitly refers to the modelling cycle of Leiss et al. (2010) from a cognitive-psychological perspective. The fourth statement takes a contrary position, and the situation model is, contrary to the third answer option, formed cognitively by the individual (Kaiser et al., 2015); these two options, as well as the second statement, which is realised, for example, in Schupp’s (1989) cycle, therefore serve as distractors of the item under consideration.

Fig. 3.3 Sample item for knowledge about concepts, aims and perspectives

3.3 Knowledge about Modelling Processes and Knowledge about Interventions

Knowledge about modelling processes and knowledge about interventions can only be measured in conjunction with presented requirement situations that allow a diagnosis and the evaluation of interventions in a situational context. There are two main ways to provide such a context: (1) the situation is illustrated by a video vignette, or (2) the situation is described by a text vignette. Since the analysis of video vignettes is perceived as more burdensome and is the cognitively more demanding medium because of the parallelism of the actions represented (Syring et al., 2015), the cognitively less demanding text vignettes are used here to present the requirement situations. An upstream general scenario (see Fig. 3.4) establishes a teaching context that, in addition to the specific requirement situation, provides general information on the social form, the students’ experience with modelling tasks and their level of performance, the processing time and previous interventions.

Fig. 3.4 General scenario for the requirement situations

The requirement situations are presented by means of modelling tasks and related text vignettes, so-called task vignettes, which depict the discussion of a small group of students while working on the task. There are six task vignettes in total; the contexts of five of the six modelling tasks were taken from the literature:

  • Traffic Jam (Maaß & Gurlitt, 2011),

  • Safe Victory (Blum et al., 2010),

  • Milk Carton (Böer, 2018),

  • Filling Up (Blum, 2011),

  • Container (Greefrath & Vorhölter, 2016).

All tasks were slightly modified for use in the test situation by replacing or explaining unclear terms; the core of the tasks, however, remained unchanged. The modelling tasks can be assigned to different grade levels, which are specified with each task; the test instrument includes tasks for grades six, eight, nine, ten and twelve. The modelling tasks used in the test instrument are of rather low complexity, since the use of more complex tasks would mean that the test score was no longer primarily determined by knowledge about modelling processes and knowledge about interventions but, to a substantial degree, by the participants’ own modelling competence. Participants must therefore be able to grasp the modelling tasks easily.

The Traffic Jam Task (Maaß & Gurlitt, 2011) is used below to illustrate the test items (see Fig. 3.5). For each modelling task, a student discussion was formulated that describes a typical difficulty in the modelling process.

Fig. 3.5 Task vignette for Traffic Jam (cf. Maaß & Gurlitt, 2011)

Knowledge about modelling processes

Knowledge about modelling processes is characterised, inter alia, by the identification of the modelling phase and the identification of the difficulty (see Sect. 2.4.1). As a consequence of identifying a difficulty, the teacher should reach a judgement in order to meet the requirement of adequate diagnostics (Heinrichs & Kaiser, 2018). This judgement is operationalised through the formulation of a support goal. As these three steps provide a diagnostic basis, they also operationalise important facets of teachers’ modelling-specific diagnostic competence (cf. Borromeo Ferri & Blum, 2010).

The facets shown are recorded using three items per task vignette. Since six task vignettes are included in the test instrument, the “Knowledge about modelling processes” scale consists of 18 items: items 7.[1-6].1, 7.[1-6].2 and 7.[1-6].7. These are multiple-choice items with four answer options each.

The first item type (see Fig. 3.6) asks participants to identify the modelling phase in which the learners in the conversation shown (see Fig. 3.5) are currently located. This is necessary because students working together on modelling tasks do not all have to be in the same modelling phase; they can simultaneously address several aspects that belong to different modelling phases.

Fig. 3.6 Sample item to identify the modelling phase

In the sample task vignette (see Fig. 3.5), statements can clearly be associated with the Simplify/Structure phase. The statements “We really need to know how many vehicles are stuck in the traffic jam” and “Yeah, we don’t know how long it takes for every vehicle” show that the students identify relevant aspects and missing information and thus structure the situation. The two statements “How do we calculate how long it takes?” and “We can divide the 20 km by the 6 hours, then we know how fast it would take” also point to first attempts at mathematisation. Since the statements relating to the Simplify/Structure phase predominate and even student 2 reacts to the impulses of his classmates, this phase can primarily be assigned to the work of the small group. The answer option “Mathematise” thus serves as a distractor that leads to a high item difficulty.

After identifying the modelling phase, participants must diagnose a potential difficulty (see Fig. 3.7). Since the Simplify/Structure phase has been identified, typical difficulties in forming a real model come into consideration. The only difficulty that applies here is “problems in making assumptions” (see Sect. 1.4.2). The statement “We do not have any more information” shows that the students are not used to making assumptions; instead, they calculate with the given data, which is not an appropriate approach to solving the problem.

Fig. 3.7 Sample item to identify the difficulty

A final step is to set a support goal for the small group (see Fig. 3.8). The students’ lack of ability or willingness to make assumptions suggests the support goal “Independent acquisition and evaluation of information.” To avoid sequence effects, the support-goal items were placed after the intervention items; this prevents participants from simply ticking the support goal that matches the diagnosis item they have just answered, a tendency reported by students in the qualitative pre-pilot (see Sect. 3.1). The chosen placement requires participants to work through and answer the intervention items first and only then to consider the support-goal item, so that premature answering is avoided.

Fig. 3.8 Sample item to determine the support goal

Knowledge about interventions

Knowledge about interventions is operationalised using four items per task vignette (see Fig. 3.9). Since six task vignettes are included in the test instrument, the scale consists of 24 items: items 7.[1-6].3, 7.[1-6].4, 7.[1-6].5 and 7.[1-6].6. The items consist of statements that represent potential interventions in the situation illustrated by the task vignette, and participants are asked to identify those interventions that are potentially appropriate for an independence-oriented development of modelling competence. The scale consists of true-false items with the two answer options “suitable” and “unsuitable,” one of which is scored as correct.

Fig. 3.9 Sample item for knowledge about interventions

Since the probability of guessing correctly is 50% with two response options, an additional “do not know” response option was provided. If participants do not know the solution, they can choose this alternative, which reduces the number of correct solutions obtained by guessing.
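As a brief worked illustration (our arithmetic, not from the source): under pure guessing between the two scored options, the expected number of correct responses across the 24 intervention items would be

$$E(\text{correct}) = 24 \cdot 0.5 = 12,$$

whereas every use of the “do not know” option removes an item from this guessing expectation, since it is not counted as a correct solution.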

Whether an intervention is appropriate or unsuitable is decided on the basis of the criteria for adaptive interventions (see Sect. 1.4.1). The first statement, “First, estimate how long a car is,” is considered not adaptive: although it fits the learners’ difficulty (making assumptions) in terms of content and method, it is not minimal, because it interferes strongly with the content of the solution process, and it is not independence-oriented, since the intervention is highly specific.

The second intervention, “First consider only part of the problem, e.g. how many cars are actually stuck in traffic,” can be evaluated as potentially adaptive. It fits the students’ difficulty in terms of content and method, it is minimal because it does not add any further information to the solution process, and it is independence-oriented because it is less directive: although it makes a proposal for further work, it does not specify how the number of vehicles is to be determined. The proposal to divide the problem into sub-problems is thus an example of a potentially adaptive strategic intervention. The other two statements are evaluated in the same way.

Now that the operationalisation of the test instrument has been presented, the following sections provide information on how to conduct the test (Sect. 3.4) and show the entire test book (Sect. 3.5), before the quality of the test instrument is described on the basis of various main and secondary criteria.

4 Information for Conducting the Test

The test can be conducted as an individual test. It is intended for pre-service teachers for secondary schools (general school, high school) with mathematics as one of their subjects. The test comprises a total of 126 items: 15 items on self-reported previous experiences, 16 items on beliefs, 24 items on self-efficacy expectations and 71 items on professional knowledge about teaching mathematical modelling. The test duration including instructions is approximately 70 min; the maximum processing time is 60 min. Each participant requires a test book and a pen; no aids are allowed. The test begins with the instructions given by the test administrator, which are provided on the first page of the test book. It must be ensured that all participants have a pen and the test book in front of them so that processing can start immediately after the instructions. No questions are answered during the processing time.

5 Test Book

Create your personal code according to the following schema

figure a

1. General Information

figure b

2. Previous Experiences

figure c

3. Beliefs

figure d

4. Self-efficacy

figure e
figure f

5. Knowledge about Modelling Tasks

figure g
figure h
figure i
figure j

6. Knowledge about Concepts, Aims and Perspectives

Please check the appropriate box (only one per task).

figure k
figure l
figure m
figure n
figure o
figure p

7. Knowledge about Modelling Processes and Interventions

Tasks and associated text vignettes that describe student conversations while working on modelling tasks are presented below. On the basis of the tasks and text vignettes, you are asked to make diagnoses, define support goals and derive appropriate interventions for these situations. The situations are characterised by the following framework conditions:

You are a teacher at a secondary school and your students at the specified grade level are working on the tasks in a small project in groups of three. The students have already gained experience with modelling tasks. The situations presented arise in the first half of the processing time. The students under consideration have an average level of performance for their grade level. You observe the students during the excerpts of the conversations shown. You have not yet intervened in the learning process.

7.1 Traffic Jam (9th Grade)

Traffic jams often occur at the beginning of the summer holidays. Christina has been stuck in a 20 km traffic jam for 6 hours. It is very warm and she is extremely thirsty. There is a rumour that a small truck is supposed to supply the people with water, but she has not received anything yet. How long will it take for the truck to supply all the people with water?

figure q
STUDENT 1: We really need to know how many vehicles are stuck in the traffic jam.

STUDENT 2: Huh? Right!

STUDENT 1: How do we calculate how long it takes? A lot of things are missing from the task!

STUDENT 3: Yeah, we don't know how long it takes for every vehicle.

STUDENT 2: It is a dumb task.

STUDENT 1: We can divide the 20 km by the 6 hours, then we know how fast it would take.

STUDENT 3: Exactly! We do not have any more information.

figure r
figure s

7.2 Stockpile Material (6th Grade)

From both sides of the national road L1081, a route is being constructed to bring the conical spoil heaps shown to the open-cast mine 5.5 km away. The 8.2 million m³ of stockpile material will be transported across the L1081. The entire fleet of transporters will then transport the stockpile material 16 h a day. Twelve months are planned for this transport work. To ensure the transport performance, the fleet will be expanded by 10 dump trucks, each with a payload of 96 tons.

Develop a model calculation for the transport of the stockpile material if 1 m³ of the material has a mass of approximately 2 tons and the transport has to be completed within one year.

figure t
STUDENT 1: We need to know how many dump trucks they need.

STUDENT 2: And we have to estimate how long they take to drive there.

STUDENT 1: And how long to unload ... and load.

STUDENT 3: But if there are multiple trucks, they cannot always be loaded directly.

STUDENT 2: Yeah, lots of things to consider. They do not work for 16 hours either, they have breaks, smoking breaks and such.

STUDENT 3: How do we get all this into one formula?

STUDENT 1: Boah, [leans back] no idea. It is way too hard.

figure u
figure v
figure w
figure x

7.3 Safe Victory (12th Grade)

These four dice are described by their nets.

Two players choose a die one after the other. After that, each player throws their die once. Whoever has the higher score wins. Develop a strategy with which the winning probability of the second player is the highest.

figure y

[Student 1 has previously calculated the expected values for each die:

\( E(A) = \tfrac{10}{3},\; E(B) = 3,\; E(C) = \tfrac{8}{3},\; E(D) = 3 \)]

STUDENT 1: Is this possible?

STUDENT 2: Yeah, if I take C, you have to take A, because it is the highest.

STUDENT 3: And if I take A, you can choose between B and D, because they are the same. Makes sense, right?

STUDENT 1: Exactly.

figure z
figure aa

7.4 Filling Up (10th Grade)

Mr. Stein lives in Trier, 20 km from the Luxembourg border. He drives his VW Golf to refuel in Luxembourg, where there is a fuel station just across the border. One litre of petrol costs only €1.05 here as compared to €1.20 in Trier.

Is the ride worth it for Mr. Stein?

figure ab
STUDENT 1: [Has previously carried out the following calculation:

$$x \cdot 0.15\,\tfrac{\text{€}}{\text{l}} = 2 \cdot 20\,\text{km} \cdot \tfrac{8\,\text{l}}{100\,\text{km}} \cdot 1.05\,\tfrac{\text{€}}{\text{l}} \;\Rightarrow\; x \approx 22.4\,\text{l}\,]$$

STUDENT 2: Strange, do you only have to fill up so little to make it worthwhile? That is very little. I would not have thought so. My father still takes canisters with him when he goes refuelling.

STUDENT 3: How much fuel goes into a car?

STUDENT 1: 50 litres, maybe?

STUDENT 3: Yes, that would be realistic. Then he would not even need to take one canister.

figure ad
figure ae

7.5 Milk Carton (12th Grade)

Not only for financial reasons, but also from an environmental point of view, it makes sense to consider what packaging should look like so that as little material as possible is used. The picture shows a commercial milk carton. What should the milk carton look like so that as little material as possible is used?

figure af

[The students have prepared the following calculation in advance:

$$V = 1\,\text{l} = a \cdot b \cdot c \;\Leftrightarrow\; a = \frac{1\,\text{l}}{b \cdot c}$$

$$O = 2ab + 2bc + 2ac = \frac{2\,\text{l}}{c} + 2bc + \frac{2\,\text{l}}{b}\,]$$

STUDENT 1: That is not possible now.

STUDENT 2: Why not? Just take the derivative and then set it to zero.

STUDENT 1: Yeah, with respect to what, b or c?

STUDENT 3: Mh, just go for b.

STUDENT 1: [calculates: \( O' = 2c - \frac{2\,\text{l}}{b^{2}} = 0 \)] And now? I still have the b and the c.

figure ag
figure ah

7.6 Container (8th Grade)

Containers are used on many construction sites to store construction goods or to collect construction waste. These containers have a special shape, which is intended to simplify loading and unloading. How much sand is in the container shown?

figure ai

STUDENT 1: There is exactly 7,160,000 cubic metres of sand in there. Is that true?

STUDENT 2: I guess you are right, you calculated that with the calculator.

STUDENT 1: Clearly. Then that is fine.

STUDENT 3: It is certainly right. I can present that.

figure aj
figure ak

8. Knowledge about Modelling Tasks

Please place the tasks “Container” (1), “Filling Up” (2), “Safe Victory” (3) and “Milk Carton” (4) in order with regard to the following criteria for modelling tasks. Note the numbers corresponding to the tasks in the table on the next page.

(1) Container

figure al

Containers are used on many construction sites to store construction goods or to collect construction waste. These containers have a special shape, which is intended to simplify loading and unloading. How much sand is in the container shown?

(2) Filling Up

figure am

Mr. Stein lives in Trier, 20 km from the Luxembourg border. He drives his VW Golf to refuel in Luxembourg, where there is a fuel station just across the border. One litre of petrol costs only €1.05 here as compared to €1.20 in Trier.

Is the ride worth it for Mr. Stein?

(3) Safe Victory

figure an

These four dice are described by their nets.

Two players choose a die one after the other. After that, each player throws their die once. Whoever has the higher score wins. Develop a strategy with which the winning probability of the second player is the highest.

(4) Milk Carton

figure ao

Every day, tons of packaging waste are generated in Germany. Not only for financial reasons, but also from an environmental point of view, it makes sense to consider what packaging should look like so that as little material as possible is used. The picture shows a commercial milk carton. What should the milk carton look like so that as little material as possible is used?

figure ap