ZDM Mathematics Education, Volume 44, Issue 3, pp 325–340

The conceptualisation of mathematics competencies in the international teacher education study TEDS-M

Authors

  • Martina Döhrmann, University of Vechta
  • Gabriele Kaiser, University of Hamburg
  • Sigrid Blömeke, Humboldt University Berlin
Original Article

DOI: 10.1007/s11858-012-0432-z

Cite this article as:
Döhrmann, M., Kaiser, G. & Blömeke, S. ZDM Mathematics Education (2012) 44: 325. doi:10.1007/s11858-012-0432-z

Abstract

The main aim of the international teacher education study Teacher Education and Development Study in Mathematics (TEDS-M), carried out under the auspices of the International Association for the Evaluation of Educational Achievement (IEA), was to understand how national policies and institutional practices influence the outcomes of mathematics teacher education. This paper reports on the definition of effective mathematics teacher education in TEDS-M, distinguishing between mathematics content knowledge and mathematics pedagogical content knowledge as essential cognitive components of mathematics teachers' professional competencies. These competence facets were implemented as proficiency tests based on extensive coordination and validation processes by experts from all participating countries. International acceptance of the tests was achieved, whereas, by necessity, national specifications had to be left out, as is common in comparative large-scale assessments. In this paper, the nature of the TEDS-M tests for the primary study is analysed and commented on in detail. The aims are to increase our understanding of mathematics content knowledge and mathematics pedagogical content knowledge, which are still fuzzy domains, to provide a substantive background for interpretations of the test results, and to examine whether some educational traditions may be more accurately reflected in the test items than others. For this purpose, several items that have been released by the IEA are presented and analysed in depth in order to substantiate the test design of TEDS-M. Our main conclusion is that the overall validity of the TEDS-M tests can be regarded as given, but that readers have to be aware of limitations, amongst others from a continental European point of view.

1 Introduction

The aim of the international comparative study Teacher Education and Development Study in Mathematics (TEDS-M), carried out from 2006 to 2009 under the auspices of the International Association for the Evaluation of Educational Achievement (IEA), was to understand how national policies and institutional practices influence the outcomes of mathematics teacher education.1 The international study was based on nationally representative samples of primary and lower secondary mathematics teachers from 15 countries. Due to the focus of this issue, this paper is limited to primary teacher education; the part of the TEDS-M study referring to secondary mathematics teacher education is thus not covered here.

The main research questions of TEDS-M were:

‘What is the level and depth of the mathematics and related teaching knowledge attained by prospective primary and lower secondary teachers? How does this knowledge vary across countries?’ (Tatto et al. 2008, p. 13)

In order to measure the effectiveness of mathematics teacher education, TEDS-M developed a conceptual model of mathematics teachers' professional competencies, whose promotion is the central goal of mathematics teacher education. Based on the approach by Shulman (1986), TEDS-M describes mathematics teachers' professional competencies as consisting of mathematics content knowledge (MCK), mathematics pedagogical content knowledge (MPCK) and general pedagogical knowledge (GPK) as essential cognitive components, complemented by personality traits and beliefs. This paper focuses on the knowledge components MCK and MPCK, which were implemented in TEDS-M as proficiency tests. The knowledge in these sub-domains of more than 13,000 future primary school teachers in their last year of teacher education was measured by a paper-and-pencil test. The theoretical background of the test is summarised in the framework of TEDS-M (Tatto et al. 2008), to which the following analyses refer.

The TEDS-M concept of teachers’ mathematical competencies was the result of a long and intense discussion among the participating countries, in which international acceptance was eventually accomplished. Our comments from a German point of view do not target this basic validity of the study, which is described in the paper by Senk et al. in this issue. We only point to some limitations important to consider when discussing the results.

In order to achieve international acceptance, national specifications of what we understand by "mathematics content knowledge" or by "mathematics pedagogical content knowledge" had—by necessity—to be left out. Further, an analysis of the theoretical framework of TEDS-M reveals that core decisions tied the understanding of teaching and learning processes slightly more closely to approaches predominantly taken in English-speaking countries and less to continental European traditions of subject-related reflection, called Fachdidaktik in German or didactique in French.

As Pepin (1999) pointed out, the emergence of research on teacher knowledge in a particular subject has in continental Europe led to the development of subject-related didactics, which describe the pedagogical transformation of disciplinary content into teaching content, taking into account the whole teaching-and-learning process. According to Pepin (1999), these continental traditions are based on educational-philosophical and theoretical reflections, and they include normative descriptions of teaching-and-learning processes. It is revealing that the development of subject-related didactics did not start until the end of the nineteenth century, within the transformation of teacher education in many European countries (Schneuwly 2011).

In English-speaking countries, these perspectives of the continental European debate on subject-related didactics can partly be found in the debates on curriculum theory or educational psychology (Kansanen 1999). However, reflections on the transformation of knowledge, that is, its student-related simplification into teaching knowledge, called elementarisation in Germany and central to the tradition of didactics, can hardly be found. The concept of elementarisation described by Ball and Bass (2000) appears similar, but focuses on the unpacking or decompressing of mathematical content as a main task of the teacher.

In contrast, the Anglo-American type of educational research has from the beginning been more outcome-based and thus to a large extent based on empirical studies that identify and determine influential factors (as predictors) of successful teaching and learning in order to understand the relationship between them. Broader normative, subject-related reflections were of lower importance. As Westbury (2000) pointed out, the dominant features of the US curriculum tradition were of an organisational nature, referring to schools as institutions in which teachers were expected to be agents of an optimal school system.

Kaiser (1999, 2002) described the understanding of mathematics and mathematics teaching in English-speaking countries as more algorithm-oriented. Conceptual understanding of mathematics, an understanding of mathematical structures, and argumentation and proof are of lower relevance than in the above-mentioned subject-oriented Fachdidaktik. According to Kaiser (1999) and Kaiser et al. (2006), such subject-oriented views on mathematics and its teaching are indicative of continental European mathematics traditions. (For an overall discussion of the European tradition of didactics and other central European concepts see Hudson and Meyer 2011.)

These differences in basic orientations of the countries participating in TEDS-M led to decisions about the objectives of the TEDS-M test, the considered knowledge domains and knowledge facets, and the item development, as will become apparent in our later descriptions of the TEDS-M proficiency test. The conceptual understanding of mathematics and an understanding of mathematical structures, as well as argumentation and proof and heuristic problem solving, were of slightly lower relevance in the MCK and MPCK tests than they would be in the tradition of Fachdidaktik (for a similar critique of the TEDS-M test from an East Asian perspective see the paper by Hsieh, Lin and Wang in this issue).

The following elaborations on the conceptual framework of TEDS-M and the nature of the tests focus on such differences. In this way, we intend to increase the understanding of mathematics content knowledge and mathematics pedagogical content knowledge, which are still fuzzy domains. The overall reliability, validity and credibility of the tests have already been demonstrated many times (Senk et al. in this issue; Blömeke et al. 2012; Blömeke et al. 2011). Now, we can look beyond what was accomplished in order to examine further research needs.

2 Teachers’ professional competencies as theoretical framework of TEDS-M

Teachers' professional tasks in everyday school life are extensive and manifold. However, teaching is the core task of teachers, and thus the development of teaching abilities internationally constitutes the main function of teacher education. Correspondingly, teaching abilities are described as the main objective in various educational documents all over the world (see the documents on teacher education by the German Standing Conference of the Ministers of Culture and Education, KMK 2004a; or the documents by the US National Council for Accreditation of Teacher Education, NCATE 2008), which are the starting point of the theoretical framework of TEDS-M. The teaching abilities—called 'professional competencies'—include cognitive as well as affective-motivational facets (Richardson 1996; Thompson 1992; Weinert 2001). According to Shulman (1986) and Bromme (1992), three domains of knowledge can be distinguished as the main cognitive components of mathematics teachers' professional competencies: MCK, MPCK and GPK.

In addition, beliefs and affective traits such as motivation, and also metacognitive abilities such as self-regulation, are indispensable parts of the professional competencies of teachers, as displayed in Fig. 1.
Fig. 1 Conceptual model of teachers' professional competencies

This framing of teachers' professional competencies is also reflected in the TEDS-M framework, where cognitive and affective-motivational facets of the future teachers' competencies were measured as criteria for effective teacher education. The future teachers' MCK and MPCK were assessed in every participating country of TEDS-M, as well as their subject-related beliefs and professional motivations. Germany, Taiwan and the USA also assessed GPK in a supplementary study. Metacognitive abilities, however, were not part of the TEDS-M surveys.

In commenting on the TEDS-M definition of professional competencies, one can point to the following benefits and limitations. Instruction is the core task of teachers all over the world; thus, their main activity is broadly covered. Furthermore, evidence in fact suggests that successful teaching depends on professional knowledge and teacher beliefs. Thus, the multidimensional nature of teacher competencies is taken into account.

However, the successful accomplishment of all teacher tasks requires skills that go beyond teaching alone. In addition to the organisation and planning of teaching and learning processes, teachers are responsible for the social education of students, cooperation with parents, student counselling, active participation in school development, and many other activities. Therefore, the standards on teacher education by the German Standing Conference of the Ministers of Culture and Education (KMK 2004a) extend the demands on teacher education beyond teaching-related competencies. Successful teacher education should thus advance communicative skills and impart strategies for preventing and overcoming conflicts. Furthermore, the standards demand knowledge about the legal conditions of school and education, skills for cooperation with colleagues, and awareness of stress management methods. The limitation of the TEDS-M framework to the teachers' core task of teaching does not reduce the relevance of these responsibilities of teacher education.

2.1 Conceptualisation of MCK

As TEDS-M is the first cross-national large-scale study on teacher education, the theoretical conceptualisation of MCK and MPCK, as well as the development of the proficiency tests, necessitated extensive work and an enormous amount of time before the realisation of the study. In 2002, representatives from the countries participating in TEDS-M therefore met for the first time to discuss their nationally and culturally shaped conceptions of the professional knowledge of mathematics teachers. The aim was to develop a collective cross-national core of MCK and MPCK that was approved by every participating country.

The result emerging from this process was a definition of MCK that predominantly focused on teachers' tasks rather than on normative—often implicit—curricular requirements. Thus, a teacher's mathematical knowledge was expected to cover, from a higher and reflective level, at least the mathematical content of the grades the teacher would teach. In addition, a teacher was considered to need to be able to integrate the educational content into the overall mathematical context as well as to connect the content to higher levels of education. The mathematical proficiency test was oriented towards these aspects. All items were categorised into levels of difficulty arising from the item's curricular level. In detail, the novice level of difficulty indicates mathematics content that is typically taught at the grades the future teacher will teach. The intermediate level of difficulty indicates content that is typically taught one or two grades beyond the highest grade the future teacher will teach and, finally, the advanced level of difficulty indicates content that is typically taught three or more years beyond the highest grade the future teacher will teach (Tatto et al. 2008, p. 37).
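To make this classification rule concrete, the following minimal sketch restates it in Python; the function name, the grade encoding and the example values are ours and purely illustrative, not part of the TEDS-M instruments.

def curricular_difficulty(content_grade: int, highest_grade_taught: int) -> str:
    """Classify an item's theoretical difficulty by its curricular distance,
    following the rule summarised above (Tatto et al. 2008, p. 37)."""
    distance = content_grade - highest_grade_taught
    if distance <= 0:
        return "novice"        # content of the grades the teacher will teach
    if distance <= 2:
        return "intermediate"  # one or two grades beyond
    return "advanced"          # three or more grades beyond


# e.g. content typically taught in grade 7 for a future teacher of grades 1-4:
print(curricular_difficulty(7, 4))  # -> "advanced"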

The operationalisation of the content domains was guided by the TIMSS framework as described in the TIMSS assessment framework (Mullis et al. 2008). Table 1 shows the four content domains assessed in the primary study of TEDS-M.
Table 1

Analytical description of the four content domains included in TEDS-M (according to Tatto et al. 2008, p. 36)

Number: whole numbers; fractions and decimals; number sentences; patterns and relationships; integers; ratios, proportions and percentages; irrational numbers; number theory

Geometry: geometric shapes; geometric measurement; location and movement

Algebra: patterns; algebraic expressions; equations/formulas and functions

Data: data organisation and representation; data reading and interpretation; chance
In order to accomplish their professional activities, teachers need cognitive skills in addition to content knowledge. These cognitive skills were included in the development of the mathematics test for TEDS-M. Hence, the three cognitive domains knowing, applying and reasoning (according to TIMSS) were specified in addition to the content domains (Tatto et al. 2008, p. 37). Together, the cognitive and the content domains constituted a heuristic tool for the item development.

The sub-domain knowing includes various abilities such as recalling definitions and properties, recognising and classifying geometrical objects or sets of numbers, carrying out algorithmic procedures, retrieving information given in graphs and tables, and using measuring instruments. The sub-domain applying refers to abilities such as selecting efficient operations, methods or strategies for solving problems, generating and applying appropriate models for routine problems, and representing information and data in diagrams and tables. Finally, the sub-domain reasoning includes the abilities to prove and reason mathematically and to analyse and characterise mathematical relations (Tatto et al. 2008, p. 37). Although it is not directly mentioned in the framework, the analysis and characterisation of mathematical relations involves the ability to describe and present them.

At the primary level, the TEDS-M assessment focused on the cognitive domain applying, followed by the domains knowing and reasoning.

The nature of the content domains covered in the TEDS-M test can be commented on from a critical stance as follows. The standard repertoire of mathematics education internationally includes three of these domains, namely number, algebra and geometry (see Schmidt et al. 1997; NCTM 2000; KMK 2003, 2004b). In TEDS-M, MCK is therefore mainly measured by items from these domains. In contrast, the topic area data and probability is very unevenly implemented in the mathematics curricula of schools and teacher education in the participating countries and in many countries does not belong to the core curriculum (see the curricular analyses of Schmidt et al. 1997; for the state of discussion at the end of the 1990s see Li and Wisenbaker 2008). However, a growing interest in this domain has become apparent in many countries due to its relevance for applications in everyday life and the sciences. For instance, the NCTM standards for teaching in the USA include "Data Analysis and Probability" throughout from kindergarten to college (NCTM 2000) and thus give this domain a prominent place. The educational standards for mathematics teaching in Germany also specify "data and probability" as a key domain for both primary school and lower secondary school examinations (KMK 2003, 2004b). As a consequence of this inconsistent state of discussion, the mathematical content in this domain was reduced in TEDS-M to basic ideas of the concept of probability and of data handling. Thus, it is incorporated only to a minor extent into the definition of MCK.

Another area the test left out was the use of technology, which nowadays constitutes an increasingly important part of teacher education and working life in many countries. However, it is difficult to simulate technology use in paper-and-pencil tests, so other test formats would be needed. Another aspect that is demanded, for example by the German standards of teacher education, but could not be captured by the test concerns the development of students' mathematical concepts. Understanding mathematics as a science that contains fundamental content-related structures and specific procedural methods constitutes an ambitious concept of mathematics education. Of course, these concepts about the nature of mathematics influence beliefs on mathematics, which were surveyed in another part of TEDS-M, but the assessment of professional competencies was not influenced by those concepts. Again, complex item formats would be needed to capture this, which go beyond the limitations of paper-and-pencil tests in large-scale assessment.

Of the three cognitive domains covered in the MCK part of the TEDS-M test, most emphasis was put on items addressing the domain applying. This not only provides a systematic connection to the general design of IEA studies but also connects TEDS-M to approaches of cognitive psychology. Special reference is made to the taxonomy of Anderson et al. (2001), who advanced Bloom's taxonomy of cognitive processes. Being able to apply mathematical knowledge is highly significant for the future teachers' tasks, whereas merely possessing declarative knowledge may be difficult to translate into practice (Gruber and Renkl 2000; Anderson et al. 2001).

As part of the domain reasoning, argumentation and proof were included in the TEDS-M test. However, their emphasis is narrower than in European traditions of argumentation and proof, and the items only seldom refer to differentiations developed in educational traditions from continental Europe (see Reid and Knipping 2010).

Modelling competencies are covered by the sub-domains reasoning and applying. "Generating an appropriate model, such as an equation or diagram, for solving a routine problem" (Tatto et al. 2008, p. 37) comes into the category applying, while solving "problems set in mathematical or real-life contexts where future teachers are unlikely to have encountered closely similar items" (Tatto et al. 2008, p. 38) falls into the category reasoning. However, cognitive skills for solving non-routine problems were not taken into account during test development. In particular, the validation of a result as an essential step of a modelling process is not mentioned in the description of the cognitive domains. Dealing with non-routine problems is generally rather extensive and time-consuming and is thus almost impossible to realise within a strictly timed test.

2.2 Conceptualisation of MPCK

Reaching a consensus about the essential knowledge and abilities that mathematics teachers should possess in the area of MPCK proved to be an even greater challenge in TEDS-M than for mathematics content knowledge. In this regard, theories and trends are affected even more strongly by educational traditions and culture. The conceptualisation of MPCK was oriented towards the teacher's core task of teaching. For TEDS-M, two sub-domains of MPCK were differentiated according to Shulman (1986) and Fan and Cheong (2002): curricular knowledge and knowledge of planning for mathematics teaching and learning, and knowledge of enacting mathematics for teaching and learning (Tatto et al. 2008, p. 38).

As specified in the framework, the sub-domain curricular knowledge and knowledge of planning for mathematics teaching and learning not only contains knowledge about the primary school mathematics curriculum, but also covers the abilities to identify the key ideas in learning programmes, to see connections within the curriculum, to establish appropriate learning goals and to know different assessment formats. Furthermore, this sub-domain refers to various abilities and skills that are essential to the concrete planning of mathematics lessons in primary school. This applies to selecting an adequate approach to mathematical ideas, choosing appropriate teaching methods, identifying different approaches for solving mathematical problems and predicting typical student responses for the purpose of choosing assessment formats. In order to interpret and evaluate students' mathematical solutions and arguments in school as well as to provide appropriate feedback, the abilities of analysing and diagnosing are necessary; these are assigned to the sub-domain of enacting mathematics for teaching and learning. In addition, this sub-domain contains the abilities to guide classroom discourse as well as to explain or represent mathematical concepts or procedures (Tatto et al. 2008, p. 39).

This theoretical frame guided the item development in the field of mathematics pedagogy. In the domain of curricular and planning knowledge, the tasks especially relate to identifying mathematical key ideas and conceptions in mathematical tasks and problems and to analysing a task's mathematical content with respect to the required prior knowledge and level of difficulty. Additionally, consequences for the planning of teaching arising from thematic changes in the organisation of the curriculum have to be identified. Further, the abilities to identify adequate approaches to mathematical ideas and to select appropriate methods for representing mathematical situations are demanded.

Furthermore, TEDS-M includes items in which tasks or problems given to students have to be analysed in terms of possible difficulties of understanding and likely student responses. The knowledge of enacting mathematics for teaching and learning was measured by tasks predominantly referring to analysing and evaluating students' mathematical solutions or arguments. The focus is on items referring to the already mentioned limited concept of reasoning and argumentation, in accordance with the content requirements of primary school mathematics.

Analogous to the items concerning MCK, the MPCK items were categorised into the three levels of theoretical difficulty, i.e. novice, intermediate and advanced.

Regarding the nature of MPCK covered in the TEDS-M test, one can state that both domains of MPCK characterise substantial knowledge and skills required to teach mathematics effectively. They can be described as an internationally accepted common core of MPCK that is universally required by future mathematics teachers. However, even this core largely corresponds to elements of mathematics pedagogy that are limited to the conveyance of mathematical ideas. Furthermore, the two foci “curricular knowledge and knowledge of planning” and “knowledge of enacting mathematics” are primarily in accordance with an orientation towards curriculum theory and educational psychology dominating in English-speaking countries, in contrast to continental European countries.

But even in the USA, current school standards for mathematics education go beyond conveying content. Thus, pupils are expected to acquire process-related competencies based on mathematical content. The standards developed by NCTM, which have influenced the discussion all over the world (see the standards compulsory in German schools since 2003), include competencies such as problem solving, reasoning and proof, connections (which additionally address applications to non-mathematical topics) and communication (NCTM 2000). Nowadays, in many countries, teachers are requested to teach with regard to the development of student competencies. The corresponding approach of Niss (2003) on competence-oriented mathematics education is widely accepted. However, the extent of the future teachers' capability to support students' acquisition of process-related competencies is only marginally surveyed in TEDS-M, and the development of competencies such as modelling skills is completely left out of consideration.

National characteristics of MPCK from individual participating countries had, of course, to be excluded as well, because of cultural boundaries and dependence on educational traditions. For example, scaffolding measures, which support students at different ability levels or from different ethnic or linguistic backgrounds, play a prominent role in German mathematics teacher education, but are not covered by the TEDS-M test. Hsieh, Lin and Wang (in this issue) describe several topics, important in Taiwanese mathematics education, which could not be considered in TEDS-M and which may explain the difference between the high achievements of Taiwanese participants in TEDS-M and their unsatisfactory achievements in national Taiwanese tests. Likewise, neither theoretical knowledge about the development of mathematical knowledge at preschool age nor knowledge about research in mathematics pedagogy was an object of the TEDS-M study.

3 Test design and example analysis of items

3.1 Test design

The instrument development in TEDS-M was guided by and based on the preparatory study "Mathematics Teaching in the 21st Century" (MT21), an independent six-country study which aimed at developing central conceptualisations of professional knowledge for prospective mathematics teachers and its measurement. Amongst other things, MT21 produced items for the TEDS-M test measuring the knowledge of future lower secondary teachers in mathematics, mathematics pedagogy and general pedagogy (Schmidt et al. 2011; for a detailed description of the framework and the instruments used see Blömeke et al. 2008).

Because MT21 focused on prospective teachers for the secondary level, new items for the assessment of future primary school teachers had to be developed, involving the national research teams as well as the international project management of TEDS-M. Two American studies in particular provided a basis for the item development, namely the study "Knowing Mathematics for Teaching Algebra" (KAT) of Michigan State University (Ferrini-Mundy et al. 2005) and "Learning Mathematics for Teaching" (LMT) of the University of Michigan (Hill et al. 2008).

The adapted items and the newly developed items were translated and retranslated in all participating countries. Subsequently, a review process was performed by the international project management examining whether international standards were maintained. The translated items were simultaneously tested for adequacy, correctness and clarity of wording by experts in mathematics and mathematics pedagogy in each respective country.

In 2007, the items underwent extensive field testing in eleven countries. Items were only carried into the main study if their psychometric properties proved to be suitable with regard to descriptive statistics. Exploratory and confirmatory factor analyses were applied as well. The time for the testing of MCK and MPCK was limited to 60 min. To enable the use of item response theory (IRT) methods and in order to report reliable measures for the sub-domains, it was decided to apply a rotated booklet design. For the primary study, five blocks were used in a balanced incomplete block design. The numbers of items for the sub-domains of MCK and MPCK were nearly uniformly distributed across the booklets. The items were designed in three question formats: multiple-choice (MC), complex multiple-choice (CMC), and open constructed-response (CR) requesting a short, independently formulated answer. For the scoring of the open constructed-response items, detailed coding manuals were developed to assure uniformity. Further, extensive training courses were arranged, first internationally and subsequently nationally in every participating country, to make the TEDS-M staff familiar with the manuals. Double coding was adopted in order to check the uniformity of coding.
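As an illustration of how a rotated booklet design distributes item blocks, the following sketch enumerates one standard balanced incomplete block arrangement of five blocks in which every pair of blocks appears together exactly once; the block labels and the two-blocks-per-booklet layout are our assumptions for illustration only, not the documented TEDS-M booklet structure (see Tatto et al. 2008).

from itertools import combinations

# Illustrative balanced incomplete block arrangement: five item blocks,
# each booklet containing two blocks, every pair occurring exactly once.
blocks = ["B1", "B2", "B3", "B4", "B5"]
booklets = list(combinations(blocks, 2))  # C(5, 2) = 10 two-block booklets

for i, booklet in enumerate(booklets, start=1):
    print(f"Booklet {i:2d}: {booklet[0]} + {booklet[1]}")

# Each block appears in 4 of the 10 booklets, so block-level results can be
# linked across booklets (e.g. via IRT scaling) although no respondent
# answers all items.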

Regarding the nature of the TEDS-M test design, it can be stated that MCK was measured by 73 items and MPCK by 33 items. This difference in item numbers represents an imbalanced focus of the testing, which led to quite imbalanced information being derived from the test. Although several of the European countries agreed that the measurement of MPCK should be the core of the study, as it had been in MT21 (Schmidt et al. 2011), the International Study Centre decided to focus on MCK. Thus, the objective was to report several facets of MCK rather than of MPCK.

3.2 Example item analysis

In order to provide insight into the nature of the TEDS-M test, selected items—featuring special aspects of the items used—and their specific requirements are presented in the following detailed item analyses, which are partially based on ACER documents. The provision of background information on the items, the percentage correct frequencies2 as indicators of the countries' range of proficiency, and our analysis are intended to help the reader to reflect on the commentaries we offer. The complete set of 34 TEDS-M primary items released by the IEA, together with coding guides, is available on the ACER TEDS-M website (http://www.acer.edu.au/research/projects/iea-teacher-education-development-study-teds-m/).

3.2.1 MCK item examples

The first two tasks are both related to the MCK domain algebra and require cognitive skills in the category of applying (ACER 2011, p. 34 and p. 20). They demonstrate in particular the differences between the levels of difficulty.

The task 'pattern of matchsticks' (Fig. 2) was classified as being at the novice level of difficulty since its mathematical content might be implemented in primary school using a hands-on or iconic approach. Pre-algebraic skills are required in order to solve the given task by identifying the regularity of the geometrical pattern and its continuation. The required arithmetic operations are easy, and minor calculation errors can be detected and revised owing to the predefined multiple-choice answers. On average, 75 % of the future primary school teachers marked the correct answer '33' (B). Thus, the task was relatively easy from an empirical point of view as well.
Fig. 2 TEDS-M example measuring MCK in the domain algebra (Source: ACER 2011, p. 34)

The percentage correct range of this item is remarkable, since 94 % of future teachers from Taiwan were capable of solving the task correctly, but this only applied to 12 % (!) of the future teachers from Georgia who predominantly selected option E. We assume that the participants who chose the wrong answer of 42 counted the first figure’s number of matchsticks, which is 6. Given the fact that each subsequent figure pictures another additional square, they may have added 4 (instead of 3) matchsticks for each following figure (6 + 9 × 4 = 42).
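The two calculations mentioned above can be made explicit as follows; this is a minimal sketch based on our reading of the released item (6 matchsticks in the first figure, 3 additional matchsticks per added square, and, as the calculation 6 + 9 × 4 suggests, the tenth figure being asked for).

# Reconstruction of the correct count and the distractor discussed above.
first_figure = 6          # matchsticks in the first figure
steps = 9                 # from the first to the tenth figure (our assumption)

correct = first_figure + steps * 3     # each added square shares one side: +3 matchsticks
distractor = first_figure + steps * 4  # counting 4 matchsticks per added square

print(correct, distractor)  # 33 42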

At first glance, the task 'pattern of seats' (Fig. 3) appears to be similar to the task 'pattern of matchsticks', but in addition to recognising the regularity and its continuation it explicitly asks for the formulation of an algebraic expression for the pictured numerical sequence. Furthermore, the item was given in an open response format, without answer options. The term '2n + 2' was rated as the correct answer, as was each equivalent term such as '2(n + 1)' and '(n × 2) + 2' (Australian Council for Educational Research for the TEDS-M International Study Center 2011, p. 21).
Fig. 3 TEDS-M example measuring MCK in the domain algebra (Source: ACER 2011, p. 20)

Because of its thematic reference to lower secondary school mathematics, the item was classified as being at the intermediate level of difficulty. For this item, 50 % of the participants succeeded in formulating an appropriate term. Again, the percentage correct frequencies of the participating countries covered the full range, from 4 % in Georgia to 93 % in Singapore.
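As a small aside, the equivalence of the accepted answers quoted above can be checked symbolically; the sketch below assumes the sympy library is available and only illustrates that all accepted terms simplify to 2n + 2.

import sympy as sp  # assumed available; used only to check algebraic equivalence

n = sp.symbols("n")
reference = 2 * n + 2
accepted = [2 * (n + 1), n * 2 + 2]

# Every accepted answer from the scoring guide should simplify to 2n + 2.
print(all(sp.simplify(term - reference) == 0 for term in accepted))  # True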

Solving the task 'Venn diagrams on quadrangles' (Fig. 4) from the domain geometry requires knowledge about subset relations between types of quadrangles. It is necessary to know that squares are special rectangles and special rhombuses, and that all three of these types of quadrangles are parallelograms. In addition, the future primary school teachers are requested to identify the representation of these relations among the given Venn diagrams. The item's level of difficulty was classified in advance as novice, due to its content-related reference to primary school geometry. The correct answer is 'Mia' (C), which was on average given by 60 % of the study participants. The percentage correct frequency was lowest in Germany with 38 % and highest in Taiwan, reaching 89 %. That less than half of the future teachers from Germany were able to solve this item correctly might indicate a rather low understanding of geometrical relations or uncertainty in dealing with the types of figures pictured. However, another possible explanation for the high empirical difficulty might be the inadequate translation of the term 'rhombuses' into the German word 'Rhomben', since this term is less common than the synonymous German term 'Rauten'.
Fig. 4 TEDS-M example measuring MCK in the domain geometry (Source: ACER 2011, p. 8)

Mathematical knowledge in the domain number was required in order to solve the task 'irrational numbers' (Fig. 5), which was given in a complex multiple-choice format. A priori, the task was classified as being at the advanced level of difficulty, because it requires the knowledge that irrational numbers are defined as real numbers that cannot be expressed as fractions of two integers. Among the given response options, only π is an irrational number (item A). A correct classification was given by 74 % of the study participants on average, indicating a rather low complexity as well as great familiarity with the number π. The percentage correct frequency in Georgia (37 %) was even below the guessing probability, while 89 % of the future teachers from Taiwan solved the task correctly.
Fig. 5 TEDS-M example measuring MCK in the domain number (Source: ACER 2011, p. 31)

Item B was solved correctly by almost every future primary teacher (average 89 %; range 53 % in Georgia—99 % in Taiwan). The percentage correct frequency for item C was on average 69 % (range 53 % in Georgia—95 % in Singapore). Item D proved to be of greatest empirical difficulty since more than half of the study participants (59 %) were not capable of answering it correctly. The number of incorrect answers was especially high in Botswana (84 %) while it was only 24 % in Thailand.

The task "game of dice" (Fig. 6) originates from the mathematical domain data and the cognitive domain applying. The winning chances of two teenagers who play the game have to be compared. An extremely time-consuming way to find the correct answer is to count every favourable and every possible outcome of the two dice, whereas a faster way is offered by plausibility considerations, which require a deeper understanding. Josie has the greater chance of winning (B), because there are more ways to obtain the differences 0, 1 and 2 than the differences 3, 4 and 5 from the results of the two dice.
Fig. 6 TEDS-M example measuring MCK in the domain data (Source: ACER 2011, p. 4)

This task requires the application of knowledge about computations with probability and was internationally rated as being at the advanced level of difficulty, although in some countries, such as Germany, the subject matter is generally taught only at the beginning of secondary school. The testing revealed a high degree of empirical difficulty as well. The correct option B was on average chosen by only 29 % of the study participants, ranging from merely 5 % in Georgia to 51 % of the future teachers from Taiwan.

On average, 33 % of the prospective teachers claimed that Josie and Farid have equal chances of winning (option A). Presumably, equal probabilities for the possible differences of two dice scores were erroneously inferred from the assumption that every elementary event is equiprobable according to Laplace or classical probability. The chances of winning are mistakenly assumed to be equal, because Josie and Farid both selected three of the possible differences.

Finally, 69 % of the participants from Georgia as well as 30 to 40 % of the prospective teachers from a number of other countries believed that it is impossible to conclude who has the greater chance of winning (option D). These respondents presumably concluded that, because the outcome of a random experiment such as throwing dice cannot be predicted, the chances of winning cannot be compared either. The prospective teachers marking this option either held an insufficient understanding of the function and possibilities of data and probability, or they misunderstood the term "chance" and thought they were expected to predict the outcome of the dice-throwing.
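The counting argument behind the correct answer can be verified by a simple enumeration of all outcomes; the sketch below assumes, as described above, two standard six-sided dice with Josie winning on the differences 0, 1 and 2 and Farid on the differences 3, 4 and 5.

from itertools import product

# Exhaustive count over the 36 equally likely outcomes of two dice.
josie = farid = 0
for a, b in product(range(1, 7), repeat=2):
    if abs(a - b) <= 2:   # differences 0, 1, 2 -> Josie wins
        josie += 1
    else:                 # differences 3, 4, 5 -> Farid wins
        farid += 1

print(josie, farid)            # 24 12
print(josie / 36, farid / 36)  # 0.666... vs 0.333...: Josie's chances are twice as high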

Summarising the analysis of the MCK items, we can point out that the items refer to standard topics from the primary and secondary school levels. However, the geometry item (Fig. 4) in particular demonstrates that a novice-level classification does not imply that primary school students would be able to answer the item correctly: comprehensive and profound knowledge is required to solve an item that merely refers thematically to primary school content. The TEDS-M test for the primary school level did not include items requiring higher mathematical knowledge of the kind taught in university courses.

Only 5 of the 73 MCK items are assigned to the sub-domain data, with the presented item (Fig. 6) being the most challenging. Another item requires calculating the probabilities of events in a Laplace experiment, while the others require the interpretation of diagrams. The empirical difficulty of these four items was rather low. This low level of demand, together with the relatively small number of items, also shows the minor importance of the sub-domain data in the MCK test. However, a further four items assigned to the sub-domain data were used in measuring MPCK.

The task 'pattern of seats' (Fig. 3) contains characteristics of modelling competencies, such as generating an appropriate model for solving a routine problem (sub-domain applying). There are several other items in the test that require the translation of a verbally given context into a mathematical term and, conversely, the interpretation of a term in its real-world context. Apart from these abilities, the test does not measure modelling competencies.

3.2.2 MPCK item examples

The task 'fuel consumption' (Fig. 7) demonstrates the combination of MCK and MPCK measured in one task consisting of two items. Both items belong to the content domain number, and they were assigned to the intermediate level of difficulty prior to the testing. The first, mathematical part (a) requires cognitive abilities in the field of applying. The correct answer to the question is '8.0', which can be derived by using proportionality arguments or the rule of three. On average, the item was solved correctly by 78 % of the study participants, empirically indicating a rather basic level of difficulty.
Fig. 7 TEDS-M example measuring MCK in the domain number (a) and MPCK in the domain curricular knowledge and knowledge of planning for mathematics teaching and learning (Source: ACER 2011, p. 9)

The second part of the question, concerning MPCK, requires knowledge from the field of planning instruction and is related to the sub-domain curricular knowledge and knowledge of planning for mathematics teaching and learning. Respondents are expected to identify difficulties primary school learners have in calculating with decimal numbers, in this case the division of 2.4 by 30 (or by 3), and subsequently to simplify the question in a constructed-response format. A possible correct answer from the scoring guide (ACER 2011, p. 11) was: 'A machine uses 3 litres of fuel for every 30 hours of operation. How many litres of fuel will the machine use in 100 hours if its fuel consumption remains constant?'

Simplifying the calculation while keeping the decimal number 2.4 was also rated as correct if the overall calculation became easier, for example: 'A machine uses 2.4 litres of fuel for every 50 h of operation. How many litres of fuel will the machine use in 100 h if its fuel consumption remains constant?' If the new problem varied contextually but still required a simplified calculation, the answer was accepted as well.

The proportion of future teachers within the 15 participating countries who solved the task correctly ranged between 18 % in Georgia and 82 % in Singapore. On average, 55 % of the future teachers’ answers were rated as appropriate.
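For part (a), the rule-of-three calculation can be written out explicitly; the original figures (2.4 litres of fuel for every 30 hours, asked for 100 hours) are our reconstruction from the simplified variants quoted above, and the sketch only restates the proportional reasoning.

# Part (a): proportional reasoning with the reconstructed original numbers.
litres_per_hour = 2.4 / 30        # 0.08 litres per hour of operation
print(litres_per_hour * 100)      # 8.0 litres in 100 hours

# One accepted simplification replaces 2.4 by 3, which makes the division easier:
print(3 / 30 * 100)               # 10.0 litres in 100 hours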

The task 'pattern of teeth' (Fig. 8) requires an interpretation as well as a comparison of the given diagrams. It was assigned to the MPCK domain enacting mathematics for teaching and learning and was classified as being at the intermediate level of difficulty. The future teachers were expected to provide an answer that specified both a similarity and a significant difference between the given diagrams. Stating that both diagrams show equivalent data is an example of an accepted similarity, as is stating that both diagrams are pictograms. Stating that Mary arranged people according to the number of their lost teeth, as opposed to Sally, who listed each person separately, exemplifies an accepted difference. An answer was classified as partly correct if just one common feature or one difference was mentioned. On average, 30 % of the respondents completely solved the task, while 37 % reached a partial solution. The range of complete answers was between 4 % in Georgia and 73 % in Taiwan.
Fig. 8 TEDS-M example measuring MPCK in the domain enacting mathematics for teaching and learning (Source: ACER 2011, p. 24)

The task 'introducing length measurement' (Fig. 9) was used in TEDS-M in order to assess MPCK in the domain curricular knowledge and knowledge of planning for mathematics teaching and learning. In advance, it was classified as being at the advanced level of difficulty; it requires the prospective teachers to analyse the described teaching method and to specify two reasons that justify it.
Fig. 9 TEDS-M example measuring MPCK in the domain curricular knowledge and knowledge of planning for mathematics teaching and learning (Source: ACER 2011, p. 41)

Answers were accepted as correct if the reasoning included two of the following three arguments in favour of the chosen type of introduction:
  • Supporting an understanding of what 'measurement' actually is (using the given objects as a unit enables an understanding of the fundamental idea of measurement as the comparison of an unknown length with an established unit).

  • Identifying the need for standard units (i.e. the use of a non-standard unit results in differences of measured values and can show the need for a standard unit).

  • Choosing the most appropriate unit (using objects of different lengths fosters an examination of which unit/object is the most appropriate for measuring a given length).

Naming only one of the given aspects was classified as partly correct, while non-cognitive criteria such as motivating the students were not accepted because of their lack of specificity. Empirically, the task appeared to be extremely challenging, since merely 9 % of the future primary teachers succeeded completely in solving the task by specifying two reasons (range from 2 % in Georgia to 19 % in Singapore). Less than half of the study participants (40 %) named one of the three reasons listed above. The first reason was named most frequently (by 26 % of the future teachers), presumably because the fundamental idea of measurement immediately suggests it. Germany was the only participating country in which the future teachers most frequently named the second option, which is not unexpected, as the need for standard units is emphasised in the subject-related didactical literature (see Padberg 1997).

Summarising the analysis of the MPCK items, the three described tasks illustrate different facets of the MPCK concept of TEDS-M. Item (b) of the task 'fuel consumption' (Fig. 7) demands an analysis of the mathematical content of the posed problem with respect to the prior knowledge required of primary children and an adaptation of the problem. Analysing tasks in this way and evaluating their appropriateness for specific classroom situations is part of teachers' daily routine. The same applies to analysing and interpreting students' solutions, as demanded by the example 'pattern of teeth' (Fig. 8). The task 'introducing length measurement' (Fig. 9) surveys knowledge about mathematical conceptions concerning measurement, which can also be assigned to the substantial knowledge and skills required to teach mathematics effectively. German didactical teacher education conveys such mathematical conceptions to future teachers. This is a suitable example for clarifying the distinct priorities of German subject-related didactics. Subject-related didactics (Fachdidaktik) not only deals with mathematical conceptions but is also concerned with the students' corresponding learning processes and their constructive support. For instance, the 'didactical gradation' developed by Radatz and Schipper (1983) should be mentioned in the context of the introduction of the concept of length measurement, since it constitutes a relevant part of the didactical training in mathematics teacher education at many German universities. The didactical gradation proposes a model for developing the concept of quantities in nine stages, beginning with initial experiences in playful situations. Subsequent stages include directly comparing different representations of quantities, indirectly comparing them with arbitrary units of measurement, and recognising the invariance of quantities.

4 Concluding remarks

Overall, the MCK and the MPCK of future teachers were successfully conceptualised and efficiently surveyed through the proficiency tests of TEDS-M. The items measured knowledge in the domains as defined a priori, and they were suitable for identifying different proficiency levels of future primary school teachers from various countries. Therefore, we can confirm the reliability and validity of the tests from an international point of view.

A great challenge in the development and design of the test was the separation of the two domains mathematics content knowledge and mathematics pedagogical content knowledge. It is impossible to construct disjoint sub-domains, because the solution of an item in the domain MPCK generally requires MCK.

A few items that are classified by TEDS-M as belonging to the sub-domain MPCK at first glance seem rather to require MCK. In the following we show the difficulty of distinguishing MPCK from MCK and their inseparable linkage. Due to the difficulty of separating the domains, it has to be decided case by case whether an item set within a teaching context refers to MPCK or only to MCK. Two sample items will be discussed, which are both embedded in a teaching context but which measure different knowledge.

The first example item to be discussed is displayed in Fig. 6 and is taken from the mathematical domain data. The item is embedded in a classroom context and requires the evaluation of the correctness of given answers. According to the framework, the requirement 'Analysing or evaluating students' mathematical solutions or arguments' (p. 39) belongs to the MPCK sub-dimension 'Enacting Mathematics for Teaching and Learning'. However, the correct solution of the item merely requires mathematical knowledge, because, for example, neither the prior knowledge of the students nor their solution approaches needs to be analysed or taken into account. Therefore, this item was defined as measuring MCK, although it is embedded in a teaching context.

In contrast to the item described, there are items based on teaching situations which are defined as measuring MPCK due to their reference to classroom activities. The following example uses a released item taken from the study for future secondary teachers, which requires the evaluation of students' answers. This item was adapted from a study by Healy and Hoyles (1998) and used with their permission in TEDS-M (Fig. 10).
Fig. 10 TEDS-M example measuring MPCK in the domain enacting mathematics for teaching and learning (Source: ACER 2011, p. 12)

This item refers to the mathematical domain number and the cognitive domain reasoning, and obviously cannot be solved without mathematical knowledge. From a mathematical point of view, an argument that 6 = 2 × 3 is a prime factorisation, which cannot be further reduced, would be expected. However, mathematical knowledge is not sufficient in order to assess whether a specific kind of proof is accepted as valid in school. The embedding of the task in a classroom context is not artificial, but has substantive consequences for the correct assessment of the given proofs. From a university mathematical point of view, none of the described statements would be accepted as a valid proof—arguments concerning the prime factorisation would be expected. However, proofs acceptable at the level of university mathematics cannot simply be transferred into mathematics teaching. In order to justify which statement can be accepted in mathematics teaching as a valid argument or proof, mathematics didactical reflections are necessary, for example knowledge about different kinds of arguments or proofs such as formal proofs, pre-formal proofs or generic proofs (for an overview see Reid and Knipping 2010). Based on this kind of knowledge, Kate's answer would be evaluated as a valid and coherent proof formulated in a pre-formal language, known as "content-related argumentation" in German didactics (Blum and Kirsch 1991; Wittmann and Müller 1988). Leon's and Maria's answers cannot be accepted as valid, but for different reasons. Leon's argument is only example-based, without referring to a generalisable core as would be necessary for a generic proof (Mason and Pimm 1984), and resembles the empirical argumentation common in Anglo-Saxon classrooms (see the empirical study by Kaiser 1999). Maria's argument is correct at the beginning, before the error occurs. Hers is the most formal argument of the three, which might lead many test takers to think that it must be valid. To summarise, these kinds of assessments need a deep understanding in the area of MPCK, based on a sound knowledge of MCK as well.

As described in this paper, the TEDS-M items—validly capturing the international core of teachers' professional competencies—could not cover the entire spectrum of the MCK and MPCK of future primary teachers, which varies due to cultural and traditional differences among the participating countries. In addition, the orientation of the theoretical framework and the test development towards a pragmatic conception of teaching and learning, predominant in English-speaking countries, means that other theoretical conceptions, for example those common in continental Europe, were taken into account to a slightly lesser extent. Items referring to the continental European tradition of didactics, with its focus on subject-related reflections, would refer more strongly to the topic area of argumentation and proof, which is not of high importance within the TEDS-M test. Continuing this strand of the debate, such missing items would refer to the different functions of proof within teaching-and-learning processes and would address the various classification systems of proofs in European mathematics didactics, such as pre-formal and formal proof, taking into account different levels of formality and content-related reflections. From our national view in Germany, the role of 'Anschauung', insufficiently translated as visualisation or imagination, which plays a special role in German didactics on proof—in contrast, for example, to the French debate (see Knipping 2008)—would allow interesting open items on argumentation and proof, in which the role and function of imagination or visualisation could be reflected on by the future teachers. However, such national specifics are difficult to cover in comparative large-scale assessments.

Another important kind of item missing from the TEDS-M test concerns concept development and concept introduction, which are at the heart of the German subject-related didactics (so-called Stoffdidaktik) but have links to the other European traditions of didactics. Possible items could deal with different basic ideas of central mathematical concepts, such as number, percentages or fractions, and different ways to introduce them. It is an interesting side remark that the basic idea of percentage and its different constituents are labelled in German with distinct notions that are not known in English and not translatable. Such limitations are a strong plea for additional national studies on teaching and teacher education—anchored on the international TEDS-M scales in order to keep these as benchmarks, but designed to reflect national peculiarities.

Furthermore, ideas for possible items refer to the diagnosis of students’ errors based on a detailed subject-related analysis of cognitive barriers, which would overcome the more organisational orientation of the American curriculum debate as described at the beginning of this paper (for possible examples see Schwarz et al. 2008). More open items would allow per se more insight into the professional knowledge of future teachers and would give the chance to display the richness of various cultural traditions from all over the world.

The ambitious work carried out by TEDS-M is to be continued with further studies on the conceptualisation and testing of the professional knowledge of future teachers. A few studies are planned or already under way, with varying aims. For example, a group of US, German and Taiwanese researchers (to which the authors of this paper belong) follows up the tested cohort in their first year of teaching, evaluating the development of their professional knowledge. Based on a theoretical model describing the development of teacher expertise (Blömeke 2002), the study evaluates their professional competence in activity-oriented settings using short (3–5 min) video clips in which classroom situations are shown. After observing the classroom situation, the participants are requested to answer several questions with different levels of demand. On the one hand, Likert rating scales are applied to assess the teachers' perceptive and observational qualities. On the other hand, open question formats are used in order to assess the teachers' analytical abilities and their abilities to anticipate possible learning progressions and suggest teaching alternatives. A special question format focuses on areas such as adequately continuing a given teaching situation, formulating subsequent homework, or providing cognitively stimulating exercises.

For example, the following question originates from one of the videos showing a third-grade class of a German primary school. The video focuses on a boy called Tim with suspected dyscalculia. During the video sequences, Tim is working on a weekly schedule in mathematics containing several subtraction tasks while the teacher offers individual help with the aid of Dienes blocks. The scene ends with Tim solving subtraction tasks algorithmically. Thus, the video sequence offers several indicators arguing for and against the diagnosis of dyscalculia. The teachers are therefore asked to answer the following questions:

Tim is suspected to have dyscalculia. But this diagnosis is still arguable and has to be based on indicators that are predominantly ambiguous.

Describe three clearly identifiable indicators from the video that support this diagnosis.

However, the situation is often not clear-cut. Describe two situations which support the assumption that Tim does not have dyscalculia.

These questions require the participants to recollect and reflect on Tim's approaches and solutions to the mathematical tasks. Initially, indicators have to be perceived as relevant information about Tim's abilities and misconceptions. Clearly identifiable indicators that can be observed in the video and might support the diagnosis of dyscalculia include, for example, counting-based calculation methods or using fingers for calculations, mixing up left and right, and not making use of the structure of the material. By contrast, there are several incidents in which Tim's approaches show age-appropriate mathematical abilities, including unbundling the Dienes rods while subtracting and bundling the remaining blocks in the process of counting (we thank Jessica Benthien for this idea for a video clip).

With this approach, we aim to evaluate the professional knowledge of teachers in a realistic setting that allows the participants to show their teaching competence and their competence in promoting learning processes, based on careful observation of classroom situations and judgement of their adequacy.

Footnotes
1

The international costs of TEDS-M were funded by the IEA, the National Science Foundation (REC 0514431) and the participating countries. In Germany, the German Research Foundation funded TEDS-M (DFG, BL 548/3-1). The instruments are copyrighted by the TEDS-M International Study Center at MSU (ISC). The views expressed in this paper are those of the authors and do not necessarily reflect the views of the IEA, the ISC, the participating countries or the funding agencies.

 
2

The calculations of the percentage frequencies are based on the international TEDS-M data set version 3.0, provided by the IEA.

 

Copyright information

© FIZ Karlsruhe 2012