## Abstract

This study argues that the works of philosopher Jürgen Habermas can provide useful directions for mathematics education research on statistical literacy. Recent studies on the critical demands posed by statistical information in media highlight the importance of the communicative component of statistical literacy, which involves students’ ability to react to statistical information. By adapting Habermas’ construct of communicative rationality into a framework for statistical literacy, a novel analytical tool is presented that can provide theoretical as well as in-depth empirical insights into students’ communication about statistical information. Central to the framework are the four validity claims of comprehensibility, truth, truthfulness, and rightness, which interlocutors need to address to engage in statistical communication. The empirical usefulness of the framework is shown by presenting the results of a study that examined Grade 5 students’ responses to fictional arguments about the decline of Arctic sea ice. The Habermas-based framework not only reveals that complex evaluations of statistical arguments can take place even in Grade 5 but also shows that students’ evaluations vary greatly. Empirical results include a content-specific differentiation of validity claims through inductively identified sub-categories as well as a description of differences in the students’ uses of validity claims.


## 1 Introduction

Due to the wave of news reporting during the COVID pandemic, researchers in mathematics education have focused their attention on the ways mathematics and statistics are used in media items (Aguilar & Castaneda, 2021; Engledowl & Weiland, 2021; Gal & Geiger, 2022) or official government reports (Aguilar & Castaneda, 2021; da Silva et al., 2021). Such research highlights the increased critical demands that emerge from the fact that statistical information in media items is embedded in argumentation in a social world. The main construct of such research is statistical literacy, for which Gal’s (2002) classical definition states that it consists of two interrelated components: “(a) people’s ability to *interpret and critically evaluate* statistical information, data-related arguments, or stochastic phenomena [… and] (b) people’s ability to *discuss or communicate* their reactions to such statistical information” (Gal, 2002, pp. 2–3). In this paper, these two components will be referred to as the (a) *interpretative component* and the (b) *communicative component* of statistical literacy. Many studies are primarily concerned with the interpretative component, for example, by focusing on the demands for understanding media reports (Aguilar & Castaneda, 2021) or by investigating students’ ability to understand statistical problems and their reasoning for problem solutions (Callingham & Watson, 2017). Only a few scholars explicitly refer to the communicative component of statistical literacy. One example is Weiland (2017), who relates the communicative component to the notion of “writing the world with statistics” (Weiland, 2017, p. 39) when presenting the results of a statistical investigation.

This paper argues that the communicative aspect of statistical literacy requires a deeper conceptualization in order to specify the knowledge that citizens need to react to a media landscape of complex and politically oriented argumentation building on statistical information. To conceptualize such a view on the communicative component of statistical literacy, this paper draws on the works of Habermas (1981, 1984, 1998). In mathematics education research, Habermas’ work has already been gainfully employed for the content of mathematical proof and argumentation (Guala & Boero, 2017; Morselli & Boero, 2011; Zhuang & Conner, 2022). This study demonstrates the empirical utility of such an approach to the communicative component of statistical literacy with a report on students’ reactions to statistical information on climate change in Grade 5. The Habermas perspective reveals the richness of students’ use of validity claims as well as the differences between students’ communicative reactions, which sketches a possible path towards developing the communicative component of students’ statistical literacy.

## 2 Habermas’ rationality construct

### 2.1 Validity claims for communicative rationality

In his influential work *Theory of Communicative Action* (Habermas, 1981) as well as in related works (Habermas, 1984, 1998), Habermas develops the foundations of a social theory building on the notion of *communicative action*, in which interlocutors coordinate their action based on a discursively established shared understanding of their lifeworld. Habermas conceptualizes rationality as an individual’s ability to provide reasons for action, which is central to his whole theory:

The rationality of a person is proportionate to his expressing himself rationally and to his ability to give account for his expressions in a reflexive stance. […] We say that he not only behaves rationally but is himself rational if he can give account for his orientation toward validity claims. (Habermas, 1998, p. 310)

The term “validity claim” is used by Habermas to distinguish between different grounds for evaluation of why a statement, a plan for action, or a speech act might be true or acceptable. These validity claims are drawn from different *core structures* of rationality: knowledge, purposive activity, and communication (Habermas, 1998). In discourse, all three core structures are intertwined. Nevertheless, each core structure comes with its own type of justification which Habermas conceptualizes through different types of rationality. Knowledge is evaluated in terms of *epistemic rationality*, which is related to the propositional nature of knowledge. A person draws on epistemic rationality to provide reasons why a statement (e.g., the Pythagorean theorem) is true. Purposive activity relies on *teleological rationality*. Validity claims behind rational plans for action justify the expected effectiveness of these plans (e.g., why a specific method of proof for the Pythagorean theorem is used, see Morselli & Boero, 2011). Finally, communication is evaluated based on *communicative rationality*:

communicative rationality is expressed in the unifying force of speech oriented towards reaching understanding, which secures for the participating speakers an intersubjectively shared lifeworld, thereby securing at the same time the horizon within which everyone can refer to one and the same objective world. (Habermas, 1998, p. 315)

Habermas (1981, 1998) specifies three validity claims for communicative rationality which a hearer employs in accepting or rejecting speech acts produced by a speaker: (1) *Truth*, in which a statement is evaluated based on its relation to the external objective world: truth is a relation between sentences and the reality about which we make statements (Habermas, 1984). (2) *Truthfulness*, which relates the statement to the internal, subjective world of the speaker: a speaker is truthful when he deceives neither himself nor others (Habermas, 1984). (3) *Rightness*, which refers to “norms and commands that are recognized in an intersubjectively shared social world” (Habermas, 1998, p. 317). In his earlier works, Habermas also recognizes (4) *comprehensibility* as a validity claim: a statement is considered comprehensible if it is grammatically and pragmatically well-formed. Comprehensibility is an internal relation between symbolic statements and the related system of rules according to which our statements are formed (Habermas, 1984). In his later works, Habermas considers comprehensibility to be a precondition for questioning validity claims rather than a validity claim itself. However, as it proves empirically useful, this study follows approaches by other researchers (e.g., Cukier et al., 2004) and adopts comprehensibility as a validity claim for its theoretical framework.

### 2.2 Habermas in mathematics education research

In mathematics education research, Habermas’ work is utilized in discussions of mathematical proof and argumentation (Boero & Planas, 2014; Guala & Boero, 2017; Zhuang & Conner, 2022). Morselli and Boero (2011) argue for the usefulness of Habermas’ construct:

In a long-term research perspective, we think that Habermas’ construct is a promising analytic instrument in mathematics education because it connects the individual and the social by taking into account the epistemic requirements of ‘mathematical truth’ in a given cultural context and the ways of ascertaining and communicating it by means of suitable linguistic tools. (Morselli & Boero, 2011, p. 454)

In their work, Habermas’ types of rationality are used to conceptualize different perspectives on mathematical proof, for example, the “epistemic validity” versus the “problem-solving character” of proof (Morselli & Boero, 2011, p. 458), which are related to epistemic rationality and teleological rationality, respectively. Differentiating between these types of rationality allows the authors to gain in-depth insights into students’ proof and argumentation processes.

As Boero and Planas (2014) point out, “Habermas does not deal with the educational problems related to rational behaviors in the classroom” (Boero & Planas, 2014, p. 3). Therefore, researchers in mathematics education have made their own adaptations of Habermas’ work. For example, Zhuang and Conner (2022) combine Habermas’ types of rationality with Toulmin’s argumentation model to analyze teachers’ questioning strategies for engaging students in collaborative argumentation. They find that teachers’ questions are especially effective if they combine elements of multiple types of rationality. Guala and Boero (2017) adapt the types of rationality as an explicit professionalization content for teachers by designing a “rationality tool” that teachers use to analyze their students’ proofs. They show how the tool gradually improves the precision and depth of the teachers’ analyses. Urhan and Bülbül (2023) explain students’ difficulties with proving in terms of an unsupported switch from epistemic rationality to teleological rationality. Goizueta (2014) argues that in educational settings, validity claims are evaluated within social and epistemological classroom environments and proposes “validity conditions” as a construct to describe the constraints underlying such evaluations.

As this brief sketch of related research shows, the potential observed by Morselli and Boero (2011) has proven fruitful for the domain of mathematical proof. This study aims to fulfill this promise for a different research domain, namely statistical literacy. As content-specific adaptations have been shown to be both necessary and useful (Boero & Planas, 2014), such theoretical work is undertaken in the following section.

## 3 Communicative rationality for statistical literacy

### 3.1 Mathematics and communication

Habermas’ central focus on communication resonates with research in mathematics education that has conceptualized the role of mathematics for communication. Fischer (1988) considers mathematics—including statistics—to be simultaneously a “system of mass-communication” and a “means of communication” (Fischer, 1988, p. 28). As means, mathematics provides “materialized communication about non-material matters” (Fischer, 1988, p. 28) in that it provides the symbols for grasping intangible and abstract relations between objects and for making them communicable and manipulable. As a system, mathematics consists of concepts that follow their own set of rules to which users of mathematics have to submit. People are simultaneously subjects and objects of mathematics, using mathematics as a means for communication while being shaped by the system’s rules (Fischer, 1993). Since this system of communication is the same for everyone, mathematics connects the individual and society, which allows citizens to better understand their problems and each other through acts of “communicative self-reflection” (Fischer, 1993, p. 212).

Jablonka (2003) emphasizes the role of mathematics for communication by noting that “communication by means of mathematical language, interpretation of statements that contain quantitative arguments, and critical evaluation of mathematical models are all essential to an emancipatory mathematical literacy” (Jablonka, 2003, p. 79). Notably, mathematical literacy requires “an awareness of applications that affect society, and to develop a consciousness of the limits of reliability of mathematical models” (Jablonka, 2003, p. 89). While Jablonka (2003) explicitly refers to mathematical literacy here, these requirements resonate well with the knowledge bases required for statistical literacy (Gal, 2002). The difference between communication with mathematical language and interpretation of statements could be described under Habermas’ perspective as a difference between types of rationality: Interpretation involves understanding what is meant by a statement and why it might hold, which draws on epistemic rationality. Communication involves convincing, persuading, or coordinating with others, which draws on communicative rationality. Although communication revolves around the interpretation of statements, Habermas’ perspective reveals that each also draws on a different kind of rationality.

These general observations become concrete in the high critical demands placed on readers of statistical information in online media items (Gal & Geiger, 2022). In Fischer’s terms, the character of mathematics and statistics as simultaneously a system and a means of communication both enables and forms public discourse on socially relevant issues like the COVID pandemic or the climate crisis. An interlocutor who wants to evaluate validity claims such as the truth or truthfulness of a data-based argument requires knowledge of the uses and limits of mathematical and statistical models.

### 3.2 Statistical literacy as mediator between individual and society

Recent studies on statistical literacy have begun to focus on the use of statistical information in media and public discourse (Aguilar & Castaneda, 2021; da Silva et al., 2021; Engledowl & Weiland, 2021; Gal & Geiger, 2022; Jablonka & Bergsten, 2021; Kollosche & Meyerhöfer, 2021). Although they do not directly reference Habermas’ ideas, these studies situate statistical literacy as a mediator between individual and society, a relation that Morselli and Boero (2011) claim Habermas’ rationality construct is particularly well-suited to analyze.

Aguilar and Castaneda (2021) aim to “identify the mathematical competencies needed to decode the official information on the development of the pandemic” (Aguilar & Castaneda, 2021, p. 228). This metaphor of “decoding” communication relates to understanding the world of mathematical and statistical language, which resonates with the validity claim of *comprehensibility*. Kollosche and Meyerhöfer (2021) show that the statistical procedures behind seemingly simple measures such as “current COVID cases” or “deaths of COVID” are obfuscated in the reports and incomprehensible to an audience of laypersons. This problematization of the understandability of measures can be considered a problematization of *comprehensibility*. Da Silva and colleagues (2021) show how engaging with charts on the Brazilian COVID situation “allows for a deeper understanding of how Brazilian inequality and regional diversity can explain the behavior of high mortality rates due to COVID-19 in the country” (da Silva et al., 2021, p. 276). Here, a connection between the statistical charts and the external objective world of inequality and regional diversity in Brazil is pointed out, providing a possible example of the evaluation of the validity claim of *truth*. In contrast, Jablonka and Bergsten (2021) show how politicians justify policy decisions without explicating the connection of the statistics to the policy. Under a Habermas perspective, these justification strategies are able to establish policies while questions surrounding *truth* remain unclear. Engledowl and Weiland (2021) illustrate how government reports on COVID can be intentionally misleading according to the governing party’s political goals. They propose that educators should engage students in questions surrounding the subjective purpose of the reporting agency and the political environment. This proposal can be considered a call for the importance of evaluating the *truthfulness* as well as the *rightness* of claims.

Such considerations resonate well with Weiland’s (2017) conceptualization of critical statistical literacy, which draws attention to the need to articulate one’s own political or social context as well as one’s intentions when reporting the results of a statistical investigation. Under a communicative rationality perspective, this articulation of context is an action that provides insights into one’s own subjective world and social world, which allows an interlocutor to better judge the *truthfulness* and *rightness* of an argument. In this way, Weiland’s (2017) conceptualization of critical statistical literacy can be understood as a call for statistical literacy to be interpreted not only in terms of *comprehensibility* and *truth* but also in terms of *truthfulness* and *rightness*.

### 3.3 The framework of communicative rationality for statistical literacy

As outlined earlier, Gal (2002) describes the communicative component of statistical literacy as referring to “people’s ability to *discuss or communicate* their reactions” (Gal, 2002, p. 3) to statistical information and data-based arguments. Under Habermas’ perspective, statistical communication refers to a process in which two interlocutors reciprocally ascertain each other of an intersubjectively shared world and coordinate their actions using statistical means. These means are produced in a system of communication (Fischer, 1988) which defines what can be expressed with statistics and how statistical arguments are formed. The way an interlocutor can address validity claims in statistical communication is also shaped by this system of communication. *Communicative rationality for statistical literacy* requires attention to content-specific aspects of statistical literacy that influence an individual’s ability to address validity claims.

For specifying statistical literacy, Gal (2002) uses worry questions like “where did the data come from?” or “are there alternative interpretations?” (Gal, 2002, p. 16). Similarly, Habermas (1984) uses problematizing questions for specifying ways to address validity claims. *Comprehensibility* is addressed through questions like “What do you mean?”, *truth* through “Is it truly the way you say?”. *Rightness* is addressed through “Are you allowed to do this?”, *truthfulness* through “Is he deceiving me?” (Habermas, 1984, pp. 107–108). These questions focus on a statement and different “world references” (Habermas, 1981, p. 439): the *objective world* for truth, the *social world* for rightness, and the *subjective world* for truthfulness. To this, a *language world* is added in this study for comprehensibility.
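For illustration only, the correspondence between validity claims, world references, and Habermas’ problematizing questions described above can be written as a small lookup structure. This is a schematic sketch of the mapping in the text, not a reproduction of Table 1, and the dictionary name is purely illustrative:

```python
# Schematic mapping of the four validity claims to their world
# references and example problematizing questions (Habermas, 1984,
# pp. 107-108), as described in the text above.
VALIDITY_CLAIMS = {
    "comprehensibility": {
        "world": "language world",
        "worry_question": "What do you mean?",
    },
    "truth": {
        "world": "objective world",
        "worry_question": "Is it truly the way you say?",
    },
    "truthfulness": {
        "world": "subjective world",
        "worry_question": "Is he deceiving me?",
    },
    "rightness": {
        "world": "social world",
        "worry_question": "Are you allowed to do this?",
    },
}
```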

Some problems emerge when adapting these world references for statistical literacy. First, Habermas specifies *truth* as a reference to an objective world, meaning “the totality of entities concerning which true propositions are possible” (Habermas, 1998, p. 310). Mathematical truth, on the other hand, can be conceived in different ways (Hersh, 1997): a *Platonist* view conceptualizes true mathematical statements as explicating relations between ideal mathematical objects, which resonates with the *truth* of Habermas’ objective world. However, a *Formalist* view conceptualizes true mathematical statements as well-formed symbolic expressions according to the “system of communication” (Fischer, 1988), which resonates with the *comprehensibility* of the language world. A second problem emerges concerning *rightness*. For Habermas (1998), *truth* is a validity claim for assertoric speech acts (e.g., statements, claims) which are validated by referring to the objective world. *Rightness* is reserved for regulative speech acts (e.g., commands, requests) which are validated by referring to the social world. However, as Weiland (2017) points out, data-based arguments can be expressions of power by authorities who own the necessary data and control the framing of the statement. In order to challenge such statements, citizens need to consider the social world, thus creating the theoretical need for allowing assertoric speech acts to be addressed under the validity claim of rightness.

Table 1 provides this study’s *framework of communicative rationality for statistical literacy*. The worry questions are based on the questions provided by Gal (2002) but expanded for the validity claims of truthfulness and rightness. As outlined above, some of these ideas depart from Habermas’ definitions. To include a Formalist view on mathematics, the validity claim of *comprehensibility* is adapted as referring to mathematical syntactical rules. The idea of well-formedness is also expanded to the representation of statistical arguments: statistically well-formed arguments provide the information necessary for comprehending the sampling method or for understanding the underlying models. The “language world” here therefore encompasses natural language and symbolic expressions as well as generally accepted rules on how statistical arguments should be produced (e.g., providing all information, creating informative graphs, using an acceptable sample size). *Truth* relates to the “objective world”, which can be mathematics as well as phenomena described by experience or by data. The validity claim of *truthfulness* gains importance for statistical literacy because the use of mathematics and statistics is not value-free but driven by subjective reasons (Ernest, 2018; Porter, 1995). The validity claim of *rightness* provides a way of criticizing seemingly objective statistical claims: when questioning statistics as a vehicle for projecting power (Weiland, 2017), an interlocutor who argues the illegitimacy of a claim needs to address social norms or ethical standards.

## 4 Research question

So far, this study has argued that the communicative component of statistical literacy needs additional elaboration. By drawing on general theoretical constructs of Habermas (1981, 1984, 1998) and relating them to research in mathematics education (Fischer, 1988; Gal, 2002; Gal & Geiger, 2022; Jablonka, 2003; Weiland, 2017), a content-specific framework of communicative rationality for statistical literacy was developed that adapts and expands Habermas’ constructs. As research on students’ processes of proof and argumentation shows, Habermas’ constructs can be fruitfully employed for theoretical systematization as well as for differentiated empirical analysis (Boero & Planas, 2014; Morselli & Boero, 2011). The brief overview of validity claims investigated in studies on statistical literacy (see Sect. 3.2) gives a first impression of the framework’s potential for highlighting central differences between studies’ approaches to statistical literacy. The following sections argue for the empirical usefulness of the framework of communicative rationality for statistical literacy, guided by the following research question:

RQ: What kind of validity claims are addressed by students when interpreting and communicating about statistical arguments?

## 5 Method

This study is situated within the context of the larger *cli.math* design research project with a focus on learning processes (Prediger et al., 2015). Whereas the overall project aims at identifying and refining design principles for developing statistical literacy (Büscher, 2022a, 2024), this study focuses on the theorizing work of the project. As the above considerations on the underdeveloped theoretical status of the communicative component of statistical literacy show, topic-specific design research projects often need to develop theoretical constructs that allow the formulation of what-questions for research (Prediger, 2024). Such theorizing work can be characterized as a process of substantiation of more general theoretical ideas and subsequent refinement that progresses through integration of theoretical and empirical steps of analysis (Prediger, 2024). The following sections will illustrate the developed framework’s empirical foundation that enabled the substantiation of theoretical ideas.

In the project, a digital learning environment was designed that provides content for a teaching unit of 45 to 60 min on statistical literacy with a special focus on the communicative component of statistical literacy. In it, students progress through three different sections called “worlds” (not related to the “world references” from the framework in Table 1), each focusing on a different type of activity. In the *story world*, students develop their context knowledge of the phenomenon of Arctic sea ice. Through a card-collecting mechanic, they collect information and claims like “the Arctic sea ice is declining” or “there is no need to panic” that should be investigated. In the *data world*, students explore the data and take ‘data snapshots’ that prove or refute the claims under investigation (for more details on the design see Büscher, 2024).

The focus on communicative rationality is provided in the *argument world* (Fig. 1). Here, the students see a data snapshot from a fictitious student. In this study, the students were asked to provide an evaluation of the fictitious student Mei’s argument (Fig. 1, bottom left). This argument shows a bar graph of the time series of monthly Arctic sea ice minimum extent for 2 years. The months following the low values during the Arctic summer are highlighted to support the argument that the ice recovers over each winter. This is taken as an argument for the claim that there is no cause for concern.

At this point in the digital learning environment, the students have had the opportunity to engage with a much longer time series of data in the data world, observing that since 1980, the Arctic sea ice extent has in fact declined dramatically (see Büscher, 2024). The anticipated difficulty of this activity lies in the evaluation of Mei’s argument, which is not factually wrong but provides a distorted view of Arctic sea ice decline due to its short time frame. The activity is composed of two phases: in Phase 1, the students evaluate Mei’s argument without any further guidance from the learning environment. In Phase 2, a previously hidden list of prompts that provides structured criteria for evaluating arguments is revealed to the students (Fig. 1, bottom right). The students are asked to rate Mei’s argument using the criteria and to provide written explanations of their reasoning. The criteria in this list were a tentative first attempt at providing scaffolds for evaluation and were not directly based on the framework of communicative rationality for statistical literacy, although they can loosely be assigned to different validity claims (e.g., “correctness” of numbers can be taken as referring to comprehensibility or truth, “fairness” to rightness or truthfulness).

### 5.1 Participants and data collection

Semi-structured interviews were carried out in June 2022 with 24 Grade 5 students of a public academic-track secondary school in a German low-income metropolitan area, who all attended the same mathematics class. Participation was voluntary, and interviews were conducted with all 24 students who volunteered. Participants were assigned to pairs according to the students’ choice, resulting in 12 pairs of students who were mostly used to working with each other. This method of sampling was chosen to increase the likelihood of active and engaging discussion between the two participants of each pair during the interviews. Prior knowledge of mathematics, statistics, or Arctic sea ice was not controlled. As data-based argumentation and critical reflection on data-based arguments are not part of the German standardized curriculum until later grades, the students can be expected to have had little prior knowledge of these contents.

In the interviews, the students were provided with a laptop running the digital learning environment and were asked to engage with the learning environment as well as to explain their reasoning. The author acted as the interviewer, aiming to strike a balance between giving the students space for formulating their own answers and eliciting or challenging their explanations and arguments. Each interview lasted approximately 40 min, during which the students worked on all three worlds of the digital learning environment. Video data from the interviews was captured using a camera as well as screen recordings of the digital learning environment. Of the 12 interviews, 2 had to be excluded from the analysis because no actual activity in the argument world took place. For this study, all utterances of students and interviewers as well as relevant gestures within the argument world sections of the videos were transcribed. The resulting 10 transcripts were then analyzed with regard to the students’ addressed validity claims.

### 5.2 Data analysis

For substantiating the general theoretical considerations, a category-generating approach was chosen that combines deductive and inductive steps of data analysis in the style of qualitative content analysis (Kuckartz, 2012). The first difficulty encountered in the analysis was that addressed validity claims could not be attributed to single turns due to the students’ interaction. Additionally, the intensity of discussion and rapidness of turn changes varied greatly between cases, making comparisons difficult. In response to these problems, two different types of units were identified for the analysis: coding units captured the occurrence of addressed validity claims in the transcripts, and units of analysis were introduced to subsume different coding units in order to provide a unifying structure for comparing different interviews. The analysis adapted an approach used by Schoenfeld (2018) to segment the data into units of analysis of comparable length. In this study, units of analysis were constructed based on the following rules: (a) units of analysis consist of segments of the transcript; (b) the first unit starts with the first turn of a student in the argument world; (c) the current minute of the time stamp of this first turn is noted (e.g., the turn starts at 27:05, so 27 is noted); (d) each unit ends before the first turn that begins in the subsequent minute (e.g., turn #247 begins at 27:55 and ends at 28:03, so it is included in this unit, and a new unit of analysis starts at 28:03 with turn #248). This procedure resulted in sequences of units of analysis of variable length of around 1 min. One advantage of this procedure is that no single turn has to be divided into two units of analysis. The decision to use units of about 1-min length was based on practical reasons, as it provided a reasonable granularity for highlighting empirical differences and similarities. The arbitrariness and variability of unit length need to be taken into account when interpreting differences between units of analysis. In total, the procedure resulted in 77 units of analysis for the 10 cases, with a median of 7 units per interview.
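Rules (a) to (d) amount to grouping turns by the minute in which they begin, so that no turn is ever split across two units. A minimal sketch of this segmentation, assuming a transcript is represented as a list of (start time in seconds, turn id) pairs; this data format is a hypothetical illustration, not the project’s actual tooling:

```python
def segment_into_units(turns):
    """Group transcript turns into units of analysis of roughly one minute.

    turns: list of (start_seconds, turn_id) pairs in temporal order,
    beginning with the first student turn in the argument world.
    Each turn is assigned to the unit of the minute in which it begins,
    so no single turn is ever divided between two units (rule d).
    """
    units = []
    current_minute = None
    for start_seconds, turn_id in turns:
        minute = start_seconds // 60      # rule (c): note the current minute
        if minute != current_minute:      # a turn starting in a later minute
            units.append([])              # opens a new unit of analysis
            current_minute = minute
        units[-1].append(turn_id)
    return units

# The example from rule (d): turn #247 begins at 27:55 (1675 s) and thus
# still belongs to the minute-27 unit even though it ends at 28:03;
# turn #248 begins at 28:03 (1683 s) and opens a new unit.
print(segment_into_units([(1625, 246), (1675, 247), (1683, 248)]))
# → [[246, 247], [248]]
```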

Coding units consisted of whole sentences as well as sentence fragments in the interaction of the students within each unit of analysis. Each coding unit was assigned to a single validity claim so that each unit of analysis consisted of disjunctive coding units. Based on the students’ world references in their evaluations (Table 1), a first coding scheme was developed. After coding 25% and 50% of the data, the coding scheme was iteratively refined by determining coding rules and anchoring examples. At this point, the coding units were compared and contrasted to inductively identify sub-categories within the coded validity claims. The whole analysis was conducted by a core team of two researchers; coding inconsistencies were discussed with colleagues, and the coding scheme was consensually validated. After coding approximately 75% of the data, no additional changes were considered necessary. Theoretical saturation was declared, and the whole data set was coded using the developed coding scheme. After coding, 70 of the 77 units of analysis contained coding units coded as validity claims, while no validity claims could be identified in 7 units of analysis. These 7 units of analysis consisted of prolonged silence or off-topic conversation.

## 6 Results

The inductive category generation resulted in a differentiation and topic-specific elaboration of the general construct of validity claims, revealing differences in students’ evaluations of statistical information (Sect. 6.1) as well as differences between students (Sect. 6.2). In all presented cases, the students react to the fictitious student Mei’s argument (Fig. 1).

### 6.1 Sub-categories of addressed validity claims

This section illustrates the identified sub-categories for addressing validity claims. Examples are provided by showing coding units, and an overview of the units of analysis is provided afterwards (Table 2).

#### 6.1.1 Comprehensibility: coherence, statistical rules, design

The validity claim of comprehensibility is concerned with the rules of language, mathematics, and statistics. Three sub-categories were identified. One example for the first sub-category of coherence is given by student Nicole:

N: Well, I think the argument is very good. It fits to the pictures.

This example is coded as the validity claim of comprehensibility, as it focuses exclusively on the elements of the argument without relation to any external world. The identified sub-category is the *coherence* of the argument, which is evaluated positively. Other students employ the same sub-category, but arrive at different evaluations:

C: If you look at the diagram above, I would not say that the argument is correct […] because you can clearly see that in January, it’s just a little bit higher than in May 2020.

Here, Cedrik disagrees with Mei’s claim that the ice did not change, basing his disagreement on Mei’s own diagram, which he observes actually shows a slight decline from January to May. Thus, he argues that Mei’s claim is not coherent with her own diagram.

A little bit later, a different sub-category is addressed by Cedrik:

C: And, well, with only two years you cannot really say that it’s correct. You got to have a value of something like a few decades.

This is an example of the sub-category of *statistical rules*, as Cedrik refers to the statistical rule that claims must be supported by a suitable quantity of data. Notably, he does not refer to specific data known to him (which would make this an example of the validity claim of truth) but articulates a general rule of requiring a few decades of data before making claims.

The last sub-category is again illustrated by Nicole:

N: [The argument isn’t good] because you cannot see the numbers very well.

Nicole comes to a negative evaluation because in Mei’s diagram, the numbers are hard to read. The related sub-category is *design*, which concerns the readability and general appeal of the presented graphics and text.

#### 6.1.2 Truth: phenomenon and data

The validity claim of truth relates the argument to an external objective world. In the interviews, two sub-categories could be identified.

H: I think what she says is correct, that the ice, when the warmth decreases, that it gets more.

G: Yes.

H: But I wouldn’t say that it always was this way, because climate change is increasing and therefore, I would say that, like, in earlier months there was less ice, like in 1980.

In this excerpt, students Gina and Hannah provide a differentiated evaluation. On the one hand, Hannah agrees with Mei’s argument by elaborating a causal mechanism of Arctic ice, that a decrease in temperature should lead to an increase in Arctic sea ice. Yet, she also observes that “climate change is increasing,” which is another piece of context knowledge about Arctic sea ice. In both instances, she draws on unmathematized context knowledge, which is subsumed under the sub-category *phenomenon*. This excerpt shows that two conflicting evaluations under the same sub-category can exist alongside each other. Immediately afterwards, however, another sub-category comes into play, as Hannah relates the rising climate change to specific data of 1980. This is data she investigated earlier in the interview in the data world. Since she relates Mei’s argument to an external data set, this is an example of the sub-category *data.*

#### 6.1.3 Truthfulness

Truthfulness was the validity claim least addressed in the interviews, and no sub-categories could reliably be identified due to the low occurrence of this validity claim (see Table 2). The following examples show how this validity claim was addressed:

F: Because, well, she isn’t being personal, like saying that she dislikes it, she just says what she means.

Although Franziska has no access to Mei’s subjective world and has no information other than the argument at hand, she nevertheless reacts to a hypothesized subjective world by referring to Mei’s intentions and beliefs. She claims that Mei holds no specific negative feelings and that the argument mirrors her actual viewpoint of the phenomenon of Arctic sea ice without being skewed by malicious intent, meaning that Mei is *truthful* in her argument. Sven makes a similar point:

S: She basically thinks that she is correct. She looked at it and thought ‘it is rising again, so this has to be correct’ […] Well, but no, it doesn’t. So it’s more like imagination.

Sven personally holds that the Arctic sea ice is not on the way to recovery. While he thus rejects Mei’s argument on the basis of *truth*, he still positively evaluates the *truthfulness* of her argument. He observes that Mei’s argument might be subjectively rational, merely skewed by her imagination. Because of this, he concludes that there is no manipulative intent behind Mei’s argument.

#### 6.1.4 Rightness: importance and discourse

For the validity claim of rightness, two sub-categories could be identified. The first sub-category is illustrated by student Franziska in the early stages of the argument world:

F: Well, she says that when a little bit of ice melts, that’s no cause for concern and that it rises again after it falls. But it could be that when it gets less ice, that it gets broken somehow […] and when it’s always warm and cold, warm and cold, then that’s not good for the animals that live there, and maybe not for the ice, because it’s not used to this.

Franziska does not argue the comprehensibility, truth, or truthfulness of Mei’s claim but relates it to the world of norms: Mei’s conclusion that there is no cause for concern is incorrect, because the implications of melting ice are not tenable for Arctic wildlife, meaning it is not *right*. Because the rightness of the argument is grounded in knowledge of the importance of the argument for the context of the Arctic, the sub-category for such evaluations is identified as *importance*.

The different sub-category of *discourse* is illustrated by student Gina:

G: You could rate it a ‘2’, because it’s a really important thing […], because if you would submit this, that it’s always been this way, then people would believe it. And then you would have put something wrong out into the world and that’s no good.

Gina’s point here is not only that Mei’s argument is wrong according to the validity claim of truth. Instead, she sketches the undesirable implications of releasing such an untrue argument into the public: people would start to believe something that is untrue, which is wrong. Because Gina frames this in terms of social norms, her evaluation concerns the *rightness* of Mei’s claim. In contrast to the sub-category of importance, however, she does not draw on the relevance of the argument for Arctic nature but considers its implications for the public sphere and for the formation of knowledge and beliefs based on this data-based argument. This sub-category is identified as relating rightness to the public *discourse.*

#### 6.1.5 Overview of addressed validity claims

Across the cases, the students addressed various validity claims (Table 2). Comprehensibility and truth provide the majority of addressed validity claims, as 42 (comprehensibility) and 38 (truth) of the 70 units of analysis contained these validity claims. Rightness occurred more rarely in 13 units of analysis but was still present in more than half of the cases (6 out of 10). Truthfulness is addressed in only 4 of 10 cases and even here only rarely (6 units of analysis in total).

It can also be observed that the change from Phase 1 to Phase 2 influenced the students’ use of validity claims. Evaluations of comprehensibility grew from 13 to 29, making this the most addressed validity claim, whereas the counts for the other validity claims mostly stayed the same. Differences can nevertheless be observed in the sub-categories: for comprehensibility, an increase occurred in the sub-categories of design and statistical rules; the focus for truth in Phase 2 shifted from the phenomenon to data; and rightness saw an increase in considerations of discourse.

However, Table 2 presents aggregate values that do not allow insights into individual change between Phase 1 and Phase 2. The following section shows that, at the individual level, more differences are visible between cases and between first reactions and prompt-supported evaluations.

### 6.2 Comparison of students’ use of validity claims

The different cases showed substantial differences in addressed validity claims. The following example presents a single unit of analysis from students Cedrik and Dominik, who evaluate Mei’s argument based on the scaffold criterion of “The argument is convincing.”

C: Well, but think again. It basically is a scam. But it’s not forged.

D: Yes.

C: And just how it’s written there, it’s believable.

D: But it’s just a small timeframe, and like….

C: It’s just….

D: Yeah.

I: What do you mean when you say ‘it’s a scam but not forged’?

C: It’s a scam, because it’s too short a timeframe. Yes, it feels like a scam to me. Because, if you would give it to a ‘relatively uneducated person’, then they would believe it.

This is an example of a very complex evaluation simultaneously combining different validity claims. Cedrik assumes a malicious intent behind Mei’s argument and criticizes her *truthfulness* by calling her argument a “scam.” Both students see a problem with the short timeframe, which violates rules for the creation of arguments and is thus a problem of *comprehensibility* with the sub-category of *statistical rules*. Finally, Cedrik argues that this argument poses a danger to society, as people without much background knowledge would believe this wrong argument. Thus, he criticizes the *rightness* of the argument due to its impact on *discourse.*

However, such a differentiated and complex evaluation that combines several validity claims within a single unit of analysis could only rarely be observed in the data. Figure 2 provides a comparison between three pairs of students’ addressed validity claims by providing a process view of the whole interviews. These three pairs were chosen for illustrative purposes to show the possible differences between the students’ use of validity claims. For each pair, the rows illustrate the subsequent units of analysis with their coded categories and sub-categories. Cedrik and Dominik’s example above can be found in Fig. 2 as unit of analysis #4. Unit of analysis #4 was the most complex one for Cedrik and Dominik with three different addressed validity claims. Other units of analysis addressed just one or two validity claims. It can also be observed that a change took place from Phase 1, where Cedrik and Dominik addressed only comprehensibility, to Phase 2, where their other validity claims appear and their evaluations also grow more complex. Over the course of the whole process, they addressed every validity claim, exhibiting the potential for a very differentiated evaluation of arguments.

On the other hand, Maya and Nicole show very little complexity in their evaluations: most of their units of analysis consist of a single validity claim (#1, #3, #4, #5, #7). Emily and Franziska represent a middle ground of mixed complexity. There are also differences in the changes from Phase 1 to Phase 2. Whereas Maya and Nicole address truth and comprehensibility throughout both phases, Emily and Franziska address truth and rightness in Phase 1, to which they add comprehensibility and truthfulness in Phase 2. The changing sub-categories show a pattern already observed in Table 2, from a focus on phenomenon for truth to statistical rules. A comparison with Cedrik and Dominik also reveals differences in Phase 1: Cedrik and Dominik and Emily and Franziska focus on very different validity claims in this phase. This comparison shows that the prompts in Phase 2 can indeed change the validity claims and sub-categories addressed by the students, but not in a very predictable manner.

## 7 Discussion

The aim of the analysis was to show the empirical usefulness of the framework of communicative rationality for statistical literacy. The research question asked which validity claims are addressed by students. To answer it, a coding scheme was developed (Table 3). This coding scheme is a major result of the theorizing work and can be employed in further research. Using the coding scheme, the analysis shows a richness of validity claims in the students’ communication. Across the different cases, all validity claims can be found in the data, although rightness and truthfulness appear more rarely. The inductively generated sub-categories show that the validity claims are addressed in different ways. The comparison of the cases shows that the addressed validity claims vary greatly between students: some students address all validity claims, sometimes multiple validity claims simultaneously, whereas other students provide less complete and less complex evaluations.

While validity claims of truth and comprehensibility provide the majority of addressed validity claims, it is important to note the occurrence of the validity claim of rightness in more than half of the cases, indicating that these Grade 5 students are able to relate statistics to social norms, which is an important part of critical statistical literacy (Weiland, 2017). This could be considered surprising, as such critical considerations are not a common element of mathematics classrooms. Analyses of textbooks show that critical contexts are employed only rarely in statistics (Weiland, 2019), as are activities that question arguments to uncover alternative models and missing context information (Büscher, 2022b). One explanation could be the context of climate change used in the learning environment. Stephan and colleagues (2021) show that the context has a direct impact on students’ critical mathematical consciousness. The context seems to have a critique-supporting function, as it provides the background knowledge needed to reflect on the relation between the statistical argument at hand and the larger phenomenon in which it is situated. Indeed, some students in the interviews made a direct connection to fake news on climate change encountered on social media, showing their experience with engaging with the rightness of an argument based on its impact on discourse.

As Morselli and Boero (2011) assert when introducing their adaption of Habermas’ theory for proof and argumentation, “it is necessary to show how [the new analytical tool] can be useful in describing and interpreting students’ behavior, orienting and supporting teachers’ educational choices, or suggesting new research developments” (Morselli & Boero, 2011, p. 461). As shown, the framework of communicative rationality for statistical literacy reveals a complexity in students’ evaluations of arguments that would otherwise be missed. The framework shows that it does make a difference whether an argument is rejected based on issues of comprehensibility, truth, truthfulness, or rightness. Acknowledging these differences has practical consequences: if teachers strive for statistical literacy, they should be able to distinguish between evaluations of different validity claims. This is especially important for the case of critical statistical literacy (Weiland, 2017), where a possible confusion of truth, comprehensibility, and rightness might hinder the ability to challenge power structures through statistics. The developed framework can help as an analytical tool for noticing, similar to the “rationality tool” for proof described by Guala and Boero (2017).

Although the effects of the design principles were not the focus of this study, the framework of communicative rationality for statistical literacy also reveals differences between the two phases of the argument world. The engagement with the provided scaffolds did seem to increase the richness and complexity of validity claims addressed by the students. Subsequent cycles of design could use this as a starting point to refine the design of the learning environment. One possibility informed by the framework could be the construction of fictitious students’ arguments in such a way that specific validity claims are put into focus. This would allow teachers to facilitate classroom discussion around arguments that are, for example, true but not comprehensible, or truthful but not right.

Regarding possible research directions, it can first be observed that the framework of communicative rationality for statistical literacy makes it possible to highlight differences in theoretical focus. Under this perspective, some studies on statistical literacy appear more concerned with comprehensibility (Aguilar & Castaneda, 2021; Kollosche & Meyerhöfer, 2021), while others focus on rightness (Engledowl & Weiland, 2021). Where Gal (2002) provides a thorough list of possible worry questions, the framework of communicative rationality for statistical literacy can provide structure: some questions are concerned with truth, others with comprehensibility, and only few with truthfulness or rightness. Most importantly, Habermas’ theory opens up a new perspective on communication. Whereas communication in research on statistical literacy is commonly understood in the sense of “communicating results” of a statistical investigation (e.g., Weiland, 2017), Habermas emphasizes communication as a situation between interlocutors who reach a shared understanding of the world by giving account of validity claims, or who come to the justified conclusion that they disagree. Research should then investigate how students can be supported in developing the ability to engage in such communication using statistical means. The developed and empirically grounded framework of communicative rationality for statistical literacy (Table 3) could be a suitable analytical tool for this task.

## 8 Conclusion

Statistical literacy consists of two components (Gal, 2002): an interpretative component that is often the focus of research (e.g., Aguilar & Castaneda, 2021; Callingham & Watson, 2017) and a communicative component that requires additional elaboration. To this end, this study developed the framework of communicative rationality for statistical literacy, which adapts the works of Habermas (1981, 1984, 1998). The theoretical and empirical usefulness of the framework was demonstrated by illustrating its potential for systematizing research on statistical literacy as well as by reporting empirical results of a study on Grade 5 students’ evaluations of statistical arguments.

Important limitations concern this study’s methods of data collection and data analysis as well as its research focus. Because prior knowledge was not controlled, this study can provide no insights into how it influenced the observed differences in students’ use of validity claims. For example, students with higher statistical knowledge might find it easier to switch to a reflective stance on the rightness of arguments. Additionally, as context knowledge plays an important role, the students’ different evaluations might result from differences in context knowledge. As all students were sampled from a single mathematics class, the classroom culture, which depends on specific teachers, could also have shaped the students’ habits of criticizing. Regarding data analysis, the developed method only allows identifying the presence of validity claims for pairs of students in discrete units of time; no claims can be made about student authorship or the quality of reflection. Finally, the study did not investigate the actual processes of two interlocutors communicating and finding a shared understanding but investigated students’ ability to address validity claims when presented with a person’s statistical argument. It might be worthwhile for further research to broaden this focus.

Overall, this study contributes to the growing body of research on statistical literacy concerned with the use of statistics in society (Gal & Geiger, 2022; Weiland, 2017). The framework of communicative rationality for statistical literacy can help to emphasize a central distinction that is important for challenging power structures embedded in statistical arguments: that statements can be *comprehensible* and *true* while still being *not truthful* or *not right*. The empirical results show that students differ in their ability to challenge the rightness of a claim and might need to be made aware of the differences between truth and rightness, but also that students as early as Grade 5 show promising starting points for addressing a complex interplay of different validity claims. This study thus attempted to show that Habermas’ constructs, which have already inspired research on mathematical proof (Morselli & Boero, 2011), can also provide the substantial theoretical background needed for research on the communicative component of statistical literacy. The topic-specific adaptation and empirical substantiation resulted in a coding scheme (Table 3) that can serve as a starting point for researchers to give attention to this important part of statistical literacy.

## Data Availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

## References

Aguilar, M. S., & Castaneda, A. (2021). What mathematical competencies does a citizen need to interpret Mexico’s official information about the COVID-19 pandemic? *Educational Studies in Mathematics*, *108*, 227–248. https://doi.org/10.1007/s10649-021-10082-9

Boero, P., & Planas, N. (2014). Habermas’ construct of rational behavior in mathematics education: New advances and research questions. In P. Liljedahl, C. Nicol, S. Oesterle, & D. Allan (Eds.), *Proceedings of the joint meeting of PME 38 and PME-NA 36* (Vol. 1, pp. 205–235). PME.

Büscher, C. (2022a). Design principles for developing statistical literacy in middle schools. *Statistics Education Research Journal*, *21*(1). https://doi.org/10.52041/serj.v21i1.80

Büscher, C. (2022b). Learning opportunities for statistical literacy in German middle school mathematics textbooks. In J. Hodgen, E. Geraniou, G. Bolondi, & F. Ferretti (Eds.), *Proceedings of the 12th Congress of the European Society for Research in Mathematics Education* (pp. 845–852). Free University of Bozen-Bolzano and ERME.

Büscher, C. (2024). Design principles for developing statistical literacy by integrating data, models, and context in a digital learning environment. In S. Podworny, D. Frischemeier, M. Dvir, & D. Ben-Zvi (Eds.), *Reasoning with data models and modeling in the big data era* (pp. 49–60). Universität Paderborn.

Callingham, R., & Watson, J. M. (2017). The development of statistical literacy at school. *Statistics Education Research Journal*, *16*(1), 181–201. https://doi.org/10.52041/serj.v16i1.223

Cukier, W., Bauer, R., & Middleton, C. (2004). Applying Habermas’ validity claims as a standard for critical discourse analysis. In B. Kaplan, D. P. Truex, D. Wastell, A. T. Wood-Harper, & J. I. DeGross (Eds.), *Information systems research* (pp. 233–258). Springer. https://doi.org/10.1007/1-4020-8095-6_14

da Silva, A. S., Barbosa, M. T. S., De Souza Velasque, L., Alves, D. S. B., & Magalhães, M. N. (2021). The COVID-19 epidemic in Brazil: How statistics education may contribute to unravel the reality behind the charts. *Educational Studies in Mathematics*, *108*, 269–289. https://doi.org/10.1007/s10649-021-10112-6

Engledowl, C., & Weiland, T. (2021). Data (mis)representation and COVID-19: Leveraging misleading data visualizations for developing statistical literacy across grades 6–16. *Journal of Statistics and Data Science Education*, *29*(2), 160–164. https://doi.org/10.1080/26939169.2021.1915215

Ernest, P. (2018). The ethics of mathematics: Is mathematics harmful? In P. Ernest (Ed.), *The philosophy of mathematics education today* (pp. 187–216). Springer. https://doi.org/10.1007/978-3-319-77760-3_12

Fischer, R. (1988). Didactics, mathematics, and communication. *For the Learning of Mathematics*, *8*(2), 20–30.

Fischer, R. (1993). Mathematics and social change. In S. P. Restivo, J. P. van Bendegem, & R. Fischer (Eds.), *Math worlds* (pp. 197–218). State University of New York.

Gal, I. (2002). Adults’ statistical literacy: Meanings, components, responsibilities. *International Statistical Review*, *70*(1), 1–25. https://doi.org/10.1111/j.1751-5823.2002.tb00336.x

Gal, I., & Geiger, V. (2022). Welcome to the era of vague news: A study of the demands of statistical and mathematical products in the COVID-19 pandemic media. *Educational Studies in Mathematics*, *111*, 5–28. https://doi.org/10.1007/s10649-022-10151-7

Goizueta, M. (2014). The emergence of validity conditions in the secondary mathematics classroom: Linking social and epistemic perspectives. In P. Liljedahl, C. Nicol, S. Oesterle, & D. Allan (Eds.), *Proceedings of the joint meeting of PME 38 and PME-NA 36* (Vol. 1, pp. 213–218). PME.

Guala, E., & Boero, P. (2017). Cultural analysis of mathematical content in teacher education: The case of elementary arithmetic theorems. *Educational Studies in Mathematics*, *96*(2), 207–227. https://doi.org/10.1007/s10649-017-9767-2

Habermas, J. (1981). *Theorie des kommunikativen Handelns. Band 1: Handlungsrationalität und gesellschaftliche Rationalisierung* [Theory of communicative action, volume one: Reason and the rationalization of society]. Suhrkamp.

Habermas, J. (1984). *Vorstudien und Ergänzungen zur Theorie des kommunikativen Handelns* [Preliminary studies and additions to the theory of communicative action]. Suhrkamp.

Habermas, J. (1998). Some further clarifications of the concept of communicative rationality. In J. Habermas & M. Cooke (Eds.), *On the pragmatics of communication* (pp. 307–342). MIT Press.

Hersh, R. (1997). *What is mathematics, really?* Oxford University Press.

Jablonka, E. (2003). Mathematical literacy. In A. J. Bishop, M. A. Clements, C. Keitel, J. Kilpatrick, & F. K. S. Leung (Eds.), *Second international handbook of mathematics education* (pp. 75–102). Springer. https://doi.org/10.1007/978-94-010-0273-8_4

Jablonka, E., & Bergsten, C. (2021). Numbers don’t speak for themselves: Strategies of using numbers in public policy discourse. *Educational Studies in Mathematics*, *108*, 579–596. https://doi.org/10.1007/s10649-021-10059-8

Kollosche, D., & Meyerhöfer, W. (2021). COVID-19, mathematics education, and the evaluation of expert knowledge. *Educational Studies in Mathematics*, *108*, 401–417. https://doi.org/10.1007/s10649-021-10097-2

Kuckartz, U. (2012). *Qualitative Inhaltsanalyse: Methoden, Praxis, Computerunterstützung* [Qualitative content analysis: Methods, practice, computer support]. Beltz Juventa.

Morselli, F., & Boero, P. (2011). Using Habermas’ theory of rationality to gain insight into students’ understanding of algebraic language. In J. Cai & E. Knuth (Eds.), *Early algebraization* (pp. 453–481). Springer. https://doi.org/10.1007/978-3-642-17735-4_24

Porter, T. M. (1995). *Trust in numbers: The pursuit of objectivity in science and public life*. Princeton University Press.

Prediger, S. (2024). Conjecturing is not all: Theorizing in design research by refining and connecting categorial, descriptive, and explanatory theory elements. *Educational Design Research*, *8*(1), Article 60. https://doi.org/10.15460/eder.8.1.2120

Prediger, S., Gravemeijer, K., & Confrey, J. (2015). Design research with a focus on learning processes: An overview on achievements and challenges. *ZDM-Mathematics Education*, *47*(6), 877–891. https://doi.org/10.1007/s11858-015-0722-3

Schoenfeld, A. H. (2018). Video analyses for research and professional development: The teaching for robust understanding (TRU) framework. *ZDM-Mathematics Education*, *50*(3), 491–506. https://doi.org/10.1007/s11858-017-0908-y

Stephan, M., Register, J., Reinke, L., Robinson, C., Pugalenthi, P., & Pugalee, D. (2021). People use math as a weapon: Critical mathematics consciousness in the time of COVID-19. *Educational Studies in Mathematics*, *108*, 513–532. https://doi.org/10.1007/s10649-021-10062-z

Urhan, S., & Bülbül, A. (2023). Habermas’ construct of rationality in the analysis of the mathematical problem-solving process. *Educational Studies in Mathematics*, *112*, 175–197. https://doi.org/10.1007/s10649-022-10188-8

Weiland, T. (2017). Problematizing statistical literacy: An intersection of critical and statistical literacies. *Educational Studies in Mathematics*, *96*(1), 33–47. https://doi.org/10.1007/s10649-017-9764-5

Weiland, T. (2019). The contextualized situations constructed for the use of statistics by school mathematics textbooks. *Statistics Education Research Journal*, *18*(2). https://doi.org/10.52041/serj.v18i2.138

Zhuang, Y., & Conner, A. (2022). Teachers’ use of rational questioning strategies to promote student participation in collective argumentation. *Educational Studies in Mathematics*, *111*, 345–365. https://doi.org/10.1007/s10649-022-10160-6

## Funding

Open Access funding enabled and organized by Projekt DEAL. No funding was received for conducting this study.

## Ethics declarations

### Conflict of interest

The author declares no competing interests.

## Additional information

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Büscher, C. Adapting Habermas’ construct of communicative rationality into a framework for analyzing students’ statistical literacy.
*Educ Stud Math* (2024). https://doi.org/10.1007/s10649-024-10325-5
