Collective Argumentation in Integrated Contexts: A Typology of Warrants Contributed in Mathematics and Coding Arguments

It is important to understand how students reason in K-12 integrated STEM settings to better prepare teachers to engage their students in integrated STEM tasks. To understand the reasoning that occurs in these settings, we used the lens of collective argumentation, specifically attending to the types of warrants elementary students and their teachers provided and accepted in integrated STEM contexts and how teachers supported students in providing these warrants. We watched 103 h of classroom instruction from 10 elementary school teachers and analyzed warrants that occurred in arguments in mathematics, coding, and integrated contexts to develop a typology of warrants contributed in mathematics and coding arguments. We found that these students made their warrants explicit the majority of the time, regardless of the teacher’s presence or absence. When teachers were present, they supported argumentation in various ways; however, they offered less support in integrated contexts. Additionally, we found students relied more on visual observations in coding contexts than in mathematics or integrated contexts, where they often provided warrants based on procedures required to accomplish a task. These findings have implications for improving integrated STEM instruction through engaging students in argumentation.


Introduction
Teaching and learning in integrated STEM is a complex endeavor, requiring teachers and students to coordinate ideas from multiple disciplines and reason appropriately with them. Reasoning has been identified by researchers and policymakers across disciplines as an essential focus in K-12 learning (e.g., K-12 Computer Science Framework, 2016; National Governors Association Center for Best Practices and Council of Chief State School Officers [NGABP&CCSSO], 2010; National Research Council, 2013). Reasoning and communicating one's reasoning are important to both mathematics and coding (constructing a computer program) at the school level. In mathematics, reasoning is necessary if students are to understand mathematics as more than a collection of procedures and facts (Cuoco et al., 1996; Goldenberg, 1996; Kilpatrick et al., 2001). The Standards for Mathematical Practice (NGABP&CCSSO, 2010) highlight the importance of reasoning abstractly and quantitatively and constructing and critiquing arguments. In computer science, communication, including justification of processes and solutions, is one of the seven core practices (K-12 Computer Science Framework, 2016). Requiring the communication of reasoning and justification for choices involved in coding helps move students away from the trial and error approach commonly used by novice programmers (Lye & Koh, 2014). Collective argumentation is a potentially useful way to highlight reasoning in instruction that integrates mathematics and coding. This paper emerged from a professional development project in which an interdisciplinary team worked with teachers of students in grades 3-5 to integrate coding into mathematics and science instruction with a focus on collective argumentation. The goal of our research is to understand how argumentation supports this integrated instruction in elementary classrooms.
In this paper, we report a subset of our research, focusing on the reasoning that teachers and students used in arguments that emerged in teachers' classrooms when mathematics or coding were the focus of instruction. Knowing the ways teachers and students reason in these settings can prepare teachers to engage their students in productive arguments in these integrated STEM settings.

Relevant Literature
Integrated STEM

STEM (science, technology, engineering, and mathematics) integration is defined in several ways. Most researchers define STEM as including the learning of concepts that span multiple disciplines (e.g., Honey et al., 2014; Sanders, 2009; Shaughnessy, 2013). Policymakers argue that the four STEM disciplines should not be taught in isolation (e.g., STEM Task Force Report, 2014). Although colloquial understandings of STEM may include the teaching of any of the disciplines of science, technology, engineering, and mathematics individually (see, e.g., Ring et al., 2017), a definition of integrated STEM must include at least two disciplines, and some definitions require the integration of all four disciplines (Bybee, 2013). Vasquez et al. (2013) and English (2016) described increasing levels of STEM integration when incorporating at least two STEM disciplines. A multidisciplinary STEM integration approach includes the learning of concepts and skills in some subset of the four STEM disciplines separately but with a common theme. In an interdisciplinary approach to STEM integration, "closely linked concepts and skills are learned from two or more disciplines with the aim of deepening knowledge and skills" (English, 2016, p. 2). At the highest level of integration, a transdisciplinary approach, students learn concepts and skills from two or more of the four STEM disciplines and apply these concepts and skills to real-world problems. Although researchers and policymakers use varying definitions of STEM and STEM education, their conceptualizations of STEM share features of the multidisciplinary, interdisciplinary, and transdisciplinary approaches described by Vasquez et al. (2013) and English (2016). Our conceptualization and implementation of integrated STEM in this project, as reported by Zhuang et al. (2022), involves both interdisciplinary and multidisciplinary instruction of coding, mathematics, science, and argumentation with the "contextualized problem-solving experiences" (p. 66) provided by educational robots. In this paper, we examine reasoning within one aspect of integrated STEM: the integration of mathematics and coding in teacher-designed tasks.
Although coding and robotics have been used in some mathematics teaching since the 1980s (e.g., Clements et al., 2001; Highfield, 2015; Hoyles & Noss, 1992; Miller, 2019) and some studies have demonstrated mathematics learning via coding and programmable robots, many teachers at the elementary level have not been prepared to teach coding or robotics, nor do they have experience with integrating coding into other disciplines. Much of the literature addressing the integration of mathematics and coding examines how certain mathematics topics can be taught through coding or robotics (e.g., geometry through LOGO, Hoyles & Noss, 1992; patterns through Scratch, Miller, 2019).
One of the complexities of integrated STEM is the different epistemologies inherent within each discipline (see Slavit et al., 2021, for an overview of these complexities). How scientists establish claims, and the certainty with which claims are established in science, differs greatly from how claims are established in mathematics (Conner & Kittleson, 2009). In coding, pragmatic questions of how efficient a program is and whether or not "it works" seem to govern how claims are established. This, in turn, differs from an inductive approach to establishing claims in science and the deductive establishment of certain claims in mathematics. There is limited research on the kinds of reasoning used by students and the supports used by teachers in integrated STEM settings. This study aims to address this gap in the research related to argumentation in mathematics, coding, and integrated contexts.

Collective Argumentation and Argumentation in STEM
To examine the reasoning that occurs in classrooms, we use the lens of collective argumentation, which we define as teachers and students working together to establish or reject claims. Collective argumentation has been studied extensively in mathematics education (e.g., Civil & Hunter, 2015; Conner & Singletary, 2021; Horn, 2008; Kosko et al., 2014; Krummheuer, 1995; Yackel & Cobb, 1996), and it has been found to support student sense-making, reasoning, and learning in the mathematics classroom (Forman et al., 1998; Krummheuer, 1995, 2007; Yackel, 2002). Like many researchers in mathematics education, we build on Krummheuer's (1995) interpretation of Toulmin's (1958/2003) conceptualization of arguments in multiple fields, which focuses on the structure of arguments as containing claims, data, warrants, backings, qualifiers, and rebuttals. Although argumentation has not been widely used or studied in the teaching and learning of coding, we propose that teaching using argumentation in this context will promote a more structured approach to coding, moving students away from trial and error approaches (Conner et al., 2020).
Collective argumentation has also been a topic of study in science education (e.g., Erduran et al., 2004; Jiménez-Aleixandre, 2000; Osborne et al., 2004; Sampson & Clark, 2008; Sandoval et al., 2019), but research on argumentation in integrated STEM education is relatively new. Some empirical studies involving argumentation in integrated STEM focus on students' motivation and attitude toward STEM. For example, Dönmez et al. (2022) purposefully integrated Toulmin's (1958/2003) model for an argument into a new STEM curriculum to investigate the impact of argumentation-based STEM activities on students' STEM motivation (i.e., willingness to participate in STEM activities). They found that students were able to use Toulmin's model (i.e., data, warrant, claim) to solve a real-world problem and that the argumentation-based STEM activity had a positive effect on students' STEM motivation. Similarly, Smyrnaiou et al. (2015) found that engaging students in an argumentative/debate approach to solve real-world problems had a positive impact on students' attitudes toward STEM sciences. In addition, Smyrnaiou et al. (2015) argued that providing students with opportunities to collaborate with scientists and other classmates as they construct and present arguments can enhance students' learning of scientific concepts.
One affordance of studying collective argumentation is access to the reasoning students and teachers provide and accept within classrooms. A few researchers have explored reasoning within argumentation in STEM contexts. Mathis et al. (2017) explored how argumentation was incorporated into STEM integration units designed and written by elementary and middle school science teachers. After analyzing the curricular units, the researchers found multiple opportunities for students to engage in argumentation, such as when students were required to explain their design process to others. They distinguished between scientific argumentation and argumentation in an engineering context and proposed the term evidence-based reasoning (EBR) to describe engineering arguments. They also emphasized the importance of students supporting their claims with reasoning. Similarly, Slavit et al. (2021) proposed that focusing on students' claims and reasoning is a promising avenue to explore students' thinking at the intersection of the different STEM disciplines. They introduced a new construct, STEM Ways of Thinking, to focus on the cognitive activity of students while engaging in interdisciplinary STEM activities. These ideas illuminate the importance of argumentation as a form of reasoning; our study provides a finer-grained examination of students' and teachers' reasoning within argumentation by focusing on warrants.

Examining Reasoning Through Warrants
To understand the specific kinds of reasons students and teachers give and accept in elementary STEM contexts, we attend to warrants, one component of Toulmin's (1958/2003) model for argumentation, that occur in arguments during mathematics, coding, and integrated contexts (see, e.g., Cole et al., 2012; Conner et al., 2014a, b). According to Toulmin (1958/2003), a warrant in argumentation serves as a "bridge" between data and claim (p. 91), where data are the facts taken as evidence and the claim is the statement being established. Although some researchers have chosen to focus on types of reasoning (such as inductive, deductive, analogical, or abductive, see, e.g., Conner et al., 2014a), we have chosen to examine the kinds of reasons given for the claims, that is, individual warrants, rather than interpreting the types of reasoning in the overall argument. These warrants can illuminate the ideas on which students and teachers base their reasoning in disciplinary classroom discourse.
Warrants, as part of argumentation, have been studied in mathematics education and in STEM education, albeit sometimes labeled as "reasoning" rather than the Toulmin-specific "warrant" (Slavit et al., 2021). Weber et al. (2008) drew attention to the importance of warrants within discussion, finding evidence of learning within the argumentation when the warrant was the object of students' discussion and attention. Slavit et al. (2021) described the importance of reasoning (and claims) in their STEM Ways of Thinking. They emphasized the multiplicity of conceptualizations of STEM and suggested that focusing on students' claims and reasoning provided a way to examine students' ways of thinking in interdisciplinary ways rather than focusing primarily on one discipline's practices to the exclusion of others.
Teachers are well-positioned to play an important role in supporting students as they develop mathematical arguments (Yackel, 2002), including their claims and reasoning. Teachers can support argumentation by directly contributing components of an argument, asking questions that prompt student contributions, or responding to student contributions (Conner et al., 2014b). These various supportive actions often lead to students making their reasoning (warrants) explicit in mathematics classrooms (Gómez Marchant et al., 2021). Therefore, it is important to understand how supportive actions by the teacher impact the reasoning that occurs in the classroom. In this paper, we examine both the kinds of reasons teachers and students provide and accept within arguments and how teachers support students in making their reasoning explicit in mathematics, coding, and integrated contexts.

Frameworks
To understand how students reasoned and how teachers supported reasoning in mathematics and coding arguments in elementary classrooms, we used two frameworks as described in the following sections. Toulmin's (1958/2003) structure of an argument includes data, claims, warrants, qualifiers, rebuttals, and backings. Conner (2008) expanded this structure to include the contributor(s) of each component. Although entire arguments can help us understand the reasoning given by students (and teachers) in the classroom, a deeper look at the warrants can provide insight into the kinds of reasons students and teachers give and accept within collective argumentation. Toulmin (1958/2003) argued that what is accepted as valid for each component of an argument is discipline dependent, so identifying the types of warrants provided across different disciplines can help us understand and describe the kinds of reasons that students and teachers provide in various contexts. Various researchers have used different kinds of categories to classify warrants. Inglis et al. (2007) classified warrants as inductive, structural-intuitive, and deductive. Nardi et al. (2012) classified their warrants according to a "range of influences (epistemological, pedagogical, curricular, professional, and personal)" (p. 160). Conner (2012) classified warrants based on an inductive analysis of the content of warrants, describing warrant types including pattern noticing, interpreting a theorem, and using a specific procedure like a calculation. This framework arose from analyzing episodes of argumentation in a high school algebra and geometry classroom. We chose to adapt this framework to analyze the content of warrants contributed across multiple contexts: mathematics and coding. Conner's initial framework identified 29 types of warrants that were collapsed into ten major categories.
Because our data included episodes of argumentation from elementary school classrooms with lessons focusing on mathematics and coding, we adapted and expanded Conner's (2012) framework to analyze warrants in these new contexts. Our framework identified 34 types of warrants that we collapsed into six major categories: authority, interpretation, knowledge, method, unformalized knowledge, and inspection (See Table 7 in Appendix).

Teacher Support for Collective Argumentation (TSCA) Framework
As outlined in the TSCA framework (Conner et al., 2014b), teachers support collective argumentation in three distinct ways: directly contributing a component of the argument, asking questions that prompt argument components, and using other supportive actions. A direct contribution occurs when a teacher makes a verbal statement, writes something on the board, or presents a task to contribute a particular component (e.g., data, warrant, claim, rebuttal, qualifier) to an argument. When the teacher requests an action or information that results in a student contributing a component of an argument, they are supporting collective argumentation by asking a question. Conner et al. (2014b) identified five categories of questions: requesting a factual answer, requesting an idea, requesting a method, requesting elaboration, and requesting evaluation. Sometimes, teachers support collective argumentation with statements or actions that are neither direct contributions nor questions but respond to a student's contribution of a component of an argument. Conner et al. (2014b) defined these different kinds of statements or actions as other supportive actions. They identified five categories of other supportive actions: directing, promoting, evaluating, informing, and repeating.
In this study, we used both frameworks to analyze the types of warrants provided by students and teachers in conjunction with how teachers support collective argumentation; more specifically, how teachers supported students' reasoning (i.e., warrants) in arguments. Our research questions addressed the kinds of warrants that were contributed most frequently, patterns in how teachers used questions and other supportive actions to prompt and respond to warrants, and how teachers supported warrants in different contexts (i.e., mathematics, coding, integrated) and across different class settings (i.e., whole class, small group).

Background and Participants
The data analyzed came from a larger study that involved 32 elementary school teachers from a rural school district in the Southeastern United States. All 32 teachers, across two cohorts, participated in a semester-long professional development course that introduced various block-based coding platforms and discussed the use of collective argumentation across multiple disciplines. The semester following the course, teachers were recruited for classroom observations and coaching sessions. All teachers who responded positively and who were teaching in elementary classrooms during the focus semester were selected (one volunteer moved to middle school teaching and was not selected). The classroom observations of these ten teachers are the focus of this analysis. At the beginning of the study, these ten elementary teachers' years of experience ranged from 4 to 21 years. Eight of the ten teachers taught all subject areas, with four of them teaching gifted or advanced content. The remaining two teachers taught more specialized content: one taught only STEAM (including the Arts in STEM) content while the other taught only mathematics, science, and writing content. Students in participating classrooms were in grades 3 through 5. Participating teachers chose the topics for the observed lessons, focusing on integrating coding into regular instruction and using argumentation in the classroom. Research team members co-planned and co-taught some of the lessons, as requested by teachers.

Analysis
Our research team watched approximately 103 h of classroom instruction and identified all episodes of argumentation. Because collective argumentation occurs in both small group and whole class settings and our teachers used both class structures extensively in their teaching, we included arguments in both small group and whole class settings. To ensure similar amounts of time were analyzed across teachers, we randomly selected episodes of argumentation within classrooms. All whole class arguments were selected for analysis; small group arguments from each teacher's classroom were randomly selected until 10 min with a teacher present and 5 min without a teacher present had been selected. Two researchers diagrammed each argument independently using Conner's (2008) modified diagram structure and compared the resulting diagrams. Discrepancies were brought to the research team, who reviewed and compared the diagrams until consensus was reached. Our research team diagrammed a total of 221 arguments and inserted all information for each argument, including components of arguments and teachers' support for argumentation, into a spreadsheet. For this study, the authors analyzed 592 warrants from 158 of those arguments, focusing only on the arguments that involved primarily coding, primarily mathematics, or both coding and mathematics. We analyzed a total of 188 min of argumentation from 10 teachers' classrooms, including approximately 78 min of arguments in whole class settings, 78 min of arguments during small group settings with a teacher present, and 32 min of arguments during small group settings without a teacher present.

Disciplinary Contexts
We categorized the context that was associated with each argument as mathematics, coding, or an integration of mathematics and coding. When categorizing the context, the argument was the unit of analysis. Arguments that were identified as occurring within a mathematics context included primarily mathematical ideas, such as drawing a polygon with specific characteristics. In comparison, arguments that occurred within a coding context included primarily coding ideas (including writing pseudo-code), such as coding a robot to travel a certain number of steps. When contexts included both mathematical and coding ideas in such a way that it was difficult to separate the mathematics and the coding, we identified these contexts as integrated. An example of an integrated context is coding a robot to travel the path of a rectangle with a given perimeter. For the rest of the analysis, the warrant was the unit of analysis.
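The integrated example above couples a mathematical computation (finding the unknown side from the perimeter formula) with a coding plan (a sequence of drive and turn actions). A minimal sketch of that coupling, using hypothetical `drive`/`turn` move tuples rather than the block-based code students actually wrote:

```python
def rectangle_moves(perimeter, width):
    """Plan a robot's path around a rectangle with a given perimeter.

    The missing side length comes from the perimeter formula
    P = 2 * (length + width); the move list alternates driving one
    side with a 90-degree turn. (Illustrative sketch only, not any
    classroom platform's actual blocks.)
    """
    length = perimeter / 2 - width
    moves = []
    for side in (length, width, length, width):
        moves.append(("drive", side))  # travel one side of the rectangle
        moves.append(("turn", 90))     # pivot to face the next side
    return length, moves

# A rectangle with perimeter 20 and width 4 has side length 6.
length, moves = rectangle_moves(perimeter=20, width=4)
```

In tasks like this, the mathematics (solving for the side length) and the coding (ordering the moves) are difficult to separate, which is what led us to code such arguments as integrated.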

Warrants: Level of Explicitness
When a member of the collective (a student or a teacher) communicated their reasoning (either verbally or in writing), these utterances were identified as explicit warrants. When members of the collective did not make their reasoning explicit, we inferred their reasoning from the context, and we identified these warrants as either implicit or hybrid. In this section, we describe the new construct of hybrid warrants and restate the definition of implicit warrants as used by Conner et al. (2014b) and others. Implicit warrants involve a high level of inference from researchers because the reasoning for a claim is not explicitly stated or necessarily accepted by the group (see, e.g., Conner et al., 2014b). This may occur because a warrant is not explicitly requested or because there is an assumption of shared understanding of reasoning (see Rasmussen & Stephan, 2008). For example, when programming their robot to travel through various stations on a map of the Oregon Trail, a group of students claimed they needed to use a 0.5-s time delay to make the robot travel from Station 7 to Station 8. Just prior, this group of students had successfully programmed their robot to travel from Station 6 to Station 7 using a time delay of 0.5 s. We inferred students chose 0.5 s for their time delay because "0.5 s worked previously and the distance from Station 7 to Station 8 looked similar on the map to the distance from Station 6 to Station 7" (Gloria Lesson 2).
As we analyzed the warrants in arguments within coding and integrated contexts, we found instances where the reasoning was not made explicit, but did seem to be collectively accepted, often based on something visual that was observed by the whole group rather than reasoning that had been accepted in the class. Because these instances did not fit within our constructs of explicit warrants or implicit warrants, we developed the new construct of hybrid warrants. Hybrid warrants were developed during analysis to account for situations in which a group observed something (such as the robot stopping short of an intended position) and seemed to make their claim based on that observation without verbally describing that observation as the warrant for their claim. Although both implicit and hybrid warrants involve some inference from the researchers, hybrid warrants involve less inference than implicit warrants because hybrid warrants involve situations where the group collectively observed something that seemed to be "obvious" to those contributing to the argument. In other words, because "everyone" saw what happened, participants might not have felt it necessary to verbalize what they saw in the form of a warrant for the argument. Hybrid warrants often included something visual that the students and/or teachers observed; this may not have been visible to the researchers on the video recordings. When we recorded hybrid warrants, we described what the students and/or teacher observed and our interpretation of how their observation linked the claim to the data (i.e., their reasoning). For example, in the same Oregon Trail map lesson described above, students claimed that the time delay they used for their robot to travel from the starting point to Station 1 failed. Although the students did not explicitly verbalize their reasoning for their claim, the group collectively observed their robot implement their code and claimed their choice for the time delay failed. 
Later in the episode of argumentation, students were successful in programming the robot to reach its destination by increasing the time delay. Understanding and recreating the whole argument enabled us to infer that students' reasoning for their earlier claim was because the robot stopped short of its intended destination. Therefore, the hybrid warrant included what the students observed, "Students observed Roborobo travel for 0.5 s" and our interpretation of their observation, "The Roborobo must have stopped short of Station 1" (Gloria Lesson 2).
In this analysis, all three authors first classified all warrants (explicit, implicit, and hybrid) from a subset of the data according to the framework proposed by Conner (2012) from her work in high school algebra and geometry classrooms. As we coded, we made necessary adaptations to the framework due to the context of our data-mathematics and coding arguments in elementary classrooms. Then we independently coded and compared codes until we reached consensus on the classifications of warrants in the rest of the data. In the rare occurrence that a warrant seemed to fit two categories, we chose the category into which it fit best rather than double-coding the warrant. Using this adapted framework, we examined the categories and types of explicit, implicit, and hybrid warrants contributed by students and teachers to make sense of the kinds of reasons that students and teachers used and accepted in arguments addressing mathematics, coding, and integrated contexts. Our findings focus primarily on explicit warrants, but we provide some indications of the categories of hybrid and implicit warrants as well.
Our findings begin with a description of our typology of warrants. Then we discuss how we coordinated the TSCA framework with our typology of warrants, including an examination of teacher support of warrants across contexts. Lastly, we compare teacher support of warrants across whole group and small group settings.

Categories and Types of Warrants
Through our iterative coding of warrant types, we identified 34 types of warrants, which we then collapsed into 6 categories (see Table 1). In the following sections, we describe each of the categories of warrants and provide examples of several types of warrants within each category. In addition, we discuss the frequencies with which these warrant categories occurred in our data and the distribution of warrants across contexts.

Descriptions of Warrant Categories
The largest category of explicit warrants was Inspection. Warrants identified in the Inspection category involved reasoning that related to something the students/teachers could see, such as what a robot did (Observation), or describing what a robot might do if a particular idea was included in the code (Visualization). Inspection warrants also included Visual cues, such as pointing to a block or chip on the screen. The final type of Inspection warrant, Observation with quantification, occurred when a student or teacher contributed a warrant that involved noticing an aspect of the activity and then used mathematical terms or ideas to describe it. For example, a student justified why they needed to cut the time delay in half for their robot by offering the warrant, "One second got us two times too far" (Sarah Lesson 2). Rather than simply stating the robot traveled too far, the student quantified the amount of distance the robot traveled too far.
Method, Unformalized knowledge, Interpretation, and Knowledge contained similar numbers of warrants, with the number of Method warrants slightly higher than the others. Many explicit warrants that were categorized as Method involved reasoning attributed to a specific calculation (Procedure-calculation), such as explaining that "6 times 4 is 24" (Gloria Lesson 1). Other warrants in the Method category attributed reasoning to a general procedure (Procedure-general), such as when a student justified how he rounded 739 to the hundreds place by explaining the general procedure for rounding, "[Because you look at the number] to the right [of the seven and] if it's four or less, you let it rest. Five or more you raise the score" (Katy Lesson 1). Also in the Method category were warrants involving several theorems or ideas (Synthesis) or patterns (Pattern noticing, Patterning). An example of Pattern noticing occurred when students were examining data points on a graph and noted that the longer the time the robot traveled, the further the distance it went "because like on the graph, it is like at 5 seconds, it is just a little distance. Ten seconds, it is a little bit a higher distance. At 15 seconds, it is even higher" (Erica Lesson 3). Sometimes, students provided a rationale along with their calculation (Calculation-why), and other times students changed their code arbitrarily (Trial and error) or reasoned based on exhausting all possibilities (Exhaustion).

Unformalized knowledge and Knowledge warrants are related in that they involved reasoning from ideas the teachers and students "know." However, Unformalized knowledge involved ideas that had not been established formally or collectively in the class. For example, a group of students based their reasoning on their understanding of decimals (Informal understanding) when they explained, "We thought [0.10 s] would be more than 0.5 [s] because it was ten" (Sarah Lesson 1). Warrants involving Number sense or Previous experience were also included in the Unformalized knowledge category.
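The general rounding procedure quoted earlier ("four or less, you let it rest; five or more you raise the score") can be written out directly. The function name and structure are ours, added only to make the recited rule concrete:

```python
def round_to_hundreds(n):
    """Round a positive whole number to the nearest hundred using the
    rule the student recited: look at the digit to the right of the
    hundreds place; 4 or less rounds down, 5 or more rounds up."""
    tens_digit = (n // 10) % 10
    hundreds_floor = (n // 100) * 100
    if tens_digit <= 4:
        return hundreds_floor          # "let it rest"
    return hundreds_floor + 100        # "raise the score"

rounded = round_to_hundreds(739)  # the tens digit is 3, so 739 rounds down to 700
```

Stating the rule in this general form, rather than only performing the single calculation, is what marks the warrant as Procedure-general rather than Procedure-calculation.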
The Knowledge category included warrants in which students or teachers used Definitions, Theorems, or what they knew about the robotics interface or programming language (Platform). Several of our participants used the block-based coding platform Rogic to program a robot called Roborobo, which includes separate motors for each of the two wheels. When reminding a group member to use the motors chip in Rogic to program the Roborobo to turn, the student provided the warrant, "That's how you turn [in Rogic]…you have to turn off one motor and then [Roborobo will] turn" (Gloria Lesson 2). Because this student based their reasoning on how the programming language of Rogic works, we classified this warrant as Platform. Ideas that had been established previously in the class (Previous result) and other ideas that had been established prior to the class (Prior knowledge) were also included in the Knowledge category.
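The "turn off one motor" warrant matches standard differential-drive kinematics: a two-wheeled robot's turning rate depends on the difference between its wheel speeds, so stopping one wheel makes the robot pivot. A sketch with assumed wheel speeds and track width (this is not the Rogic API, just the underlying geometry):

```python
def angular_velocity(v_left, v_right, track_width):
    """Turning rate (rad/s) of a two-wheeled differential-drive robot:
    zero when both wheels move at the same speed, nonzero when the
    speeds differ. Speeds are in m/s; track_width is the wheel
    separation in m (all values here are assumed for illustration)."""
    return (v_right - v_left) / track_width

straight = angular_velocity(v_left=0.2, v_right=0.2, track_width=0.1)  # equal speeds: no turn
pivot = angular_velocity(v_left=0.0, v_right=0.2, track_width=0.1)     # left motor off: robot turns
```

The student's Platform warrant thus encodes, in the vocabulary of Rogic's motors chip, a fact about how any differential-drive robot turns.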
When students went beyond stating something they knew by interpreting or applying their knowledge, we categorized these warrants as Interpretation. For instance, sometimes, students made inferences about what should be done based on the problem statement (Interpretation of problem). Students tasked with coding the water cycle in a block-based coding platform called Scratch justified their code by explaining, "[The water is] supposed to go down and then at the end you want it to go up [because we're doing] evaporation here and precipitation [here]" (Olivia Lesson 1). Other warrants categorized as Interpretation were those in which students reasoned about the appropriate uses of a definition (Interpretation of definition) or described what a piece of code meant or would make the robot do (Interpretation of written code). These, and other similar warrants, such as Interpretation of observation and Interpretation of results, were categorized as Interpretation.
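The students' Interpretation-of-problem warrant maps the science content onto sprite motion: the droplet should move down for precipitation and up for evaporation. A minimal Python sketch of that mapping (not actual Scratch blocks; names and step size are hypothetical) looks like this:

```python
def next_y(y: int, phase: str, step: int = 10) -> int:
    """Move a water-droplet sprite down during precipitation and up during
    evaporation, mirroring the students' interpretation of the task."""
    if phase == "precipitation":
        return y - step   # water falls
    if phase == "evaporation":
        return y + step   # water rises
    return y              # other phases leave the height unchanged
```

Here `next_y(100, "precipitation")` yields 90 (the sprite descends), while `next_y(90, "evaporation")` yields 100 (the sprite ascends), matching the justification the students gave.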
Very few explicit warrants were categorized as Authority, in which a claim was justified based on a convention (Mathematical convention, Classroom convention, Coding convention), information provided in the problem (Given), or reasoning attributed to the teacher or some other non-student entity (External authority). For example, a student claimed that an acute angle drawn on a piece of grid paper was a 45-degree angle and gave the reason "because I think my dad told me, when you do this, when you split a square it's a 45-degree angle" (Olivia Lesson 2). This low frequency of Authority warrants indicates that students rarely attributed their reasoning to some authority figure or convention.
When we analyzed all warrants (explicit, hybrid, and implicit), the distribution of categories was similar to that of only the explicit warrants (see Table 1), with Inspection warrants still occurring most frequently (37%) and Authority warrants occurring least frequently (6%). In general, when students did not make their reasoning explicit and thus the reasoning was inferred (hybrid and implicit warrants), we inferred the same types of warrants the students and teachers had been verbalizing. That is, our inferences were consistent with the observed classroom activity.

Warrants Across Contexts
When we analyzed the warrants across contexts (mathematics, coding, and integrated), the distribution of categories of warrants differed, as shown in Table 2. Looking across the three contexts, the highest percentage of warrants were categorized as Inspection in both coding and integrated contexts. We hypothesize that Inspection warrants were the highest category in coding and integrated contexts because coding physical robots and coding sprites on a screen are inherently visual activities. Warrants categorized as Method made up the highest percentage in mathematics contexts. Relying on calculations (Procedure-calculation) and patterns (Pattern noticing), both categorized as Method, is appropriate for elementary grade students in mathematical arguments. Because integrated contexts include both mathematical and coding ideas, it makes sense for the second highest category of warrants in this context to be Method. Of importance, of all Method warrants (65 total), only one warrant was identified as Trial and error, which means students were reasoning through changing their code, rather than modifying it arbitrarily, an idea that was a focus of the professional development in which these teachers had participated.

Coordinating the TSCA Framework With our Typology of Warrants
According to the TSCA framework (Conner et al., 2014b), teachers support argumentation by directly contributing argument components, asking questions that prompt argument components, and responding to argument components with other supportive actions. In the following sub-sections, we describe these supportive actions and the categories of warrants they supported in more detail.

Direct Contributions of Warrants
We observed each of these kinds of supports for warrants in our study; teachers directly contributed 27 warrants, and they co-constructed 26 warrants with students (see Table 3). This was out of a total of 372 warrants that were made explicit in these classes. Teachers contributed (and contributed to) similar kinds of warrants as their students, although the proportions of kinds of warrants varied somewhat. Given the low number of warrants contributed by the teachers, we cannot draw many conclusions about these numbers, but a few interesting observations can be made. The proportion of Knowledge warrants was higher for teachers than students, with 37% of teachers' warrants falling in this category (9% of students' warrants were categorized as Knowledge, see Table 3). Teachers also contributed no warrants categorized as Unformalized knowledge and co-constructed only one warrant in this category with their students (16% of students' warrants were in this category). In the remainder of this section, numbers refer to explicit warrants in arguments for which the teacher was present and exclude arguments in small group settings when the teacher was attending to other groups.

Example of TSCA Framework Analysis
In addition to directly contributing components of arguments, teachers supported argumentation by asking questions and using other supportive actions. For example, in a coding context in Hope's class (Hope Lesson 3), students were looking at their code which included two 90-degree angles and claimed it would be simpler if they used a different distance and angle. As shown in Table 4, Hope used a question and other supportive actions to support the students' contributions of the Authority warrant (so coded because it addressed a Coding convention that privileges efficiency).

Questions Supporting Warrants
Teachers asked questions across a wide range of categories that prompted warrants. Teachers asked 166 questions that prompted 128 warrants (when teachers asked questions to prompt warrants, they often asked multiple questions that prompted one warrant). For example, when working in an integrated context, Sarah's students were tasked with coding a robot to travel around a fraction of the area of a square meter (Sarah Lesson 2). When the students tried to use their measurements from a previous task that used inches and feet, Sarah asked multiple questions as shown in Table 5. In this example, Sarah prompted a Knowledge warrant by requesting a factual answer, "What did I say about the square? What size the square was?" Sarah responded to the student and then requested another factual answer, "What unit makes up a meter?" She then asked the student to compare two units by requesting an idea, "If we're comparing inches and meters, are those the same type of unit?" The most prominent category of questions that prompted students to contribute warrants was Requesting elaboration (48% of questions that prompted warrants were in this category). These are questions that ask students to elaborate on some idea, statement, or diagram, including questions that ask students to explain, interpret, or justify something. For example, teachers frequently asked questions like "How did you come up with that prediction?" (Neal Lesson 1) or "How do you know?" (Erica Lesson 3) that prompted students to contribute warrants. Questions that requested elaboration prompted warrants in all categories and across mathematics, coding, and integrated contexts. When we examined teachers' questions by context, we found teachers prompted warrants with questions with about the same frequency in mathematics contexts and coding contexts (68 questions prompted warrants in coding contexts; 69 questions prompted warrants in mathematics contexts).
We observed fewer questions prompting warrants in integrated contexts, with teachers asking 29 questions that prompted warrants in these contexts. One could attribute this to the fact that there were fewer warrants in integrated contexts overall; however, when compared to the number of explicit warrants in each context, we see that a lower proportion of warrants were prompted by questions in integrated contexts than in primarily coding or primarily mathematics contexts. That is, 30% of warrants (24 of 80) were prompted by questions in integrated contexts, while 45% of warrants (54 of 121) in coding contexts were prompted by questions and 48% of warrants (50 of 105) in mathematics contexts were prompted by questions.
We examined the proportion of warrants to which teachers contributed, either by directly contributing or by asking a question that prompted the warrant. In a primarily coding context, students contributed 45% of the explicit warrants without teacher intervention. In a primarily mathematics context, students contributed 36% of the explicit warrants without teacher intervention. In integrated contexts, students contributed 65% of the explicit warrants without teacher intervention.

Other Supportive Actions Responding to Warrants
Teachers' responses to students' reasoning (warrants) with verbal or nonverbal actions are classified as other supportive actions according to the TSCA framework (Conner et al., 2014b). When looking at the categories of other supportive actions, we found that teachers most frequently used Evaluating actions, in which they validated or verified a student's contribution. This occurred in each context as well as in combination with other supportive actions. Teachers supported warrants with other supportive actions 170 times, supporting 117 warrants. Note that sometimes, teachers engaged in multiple supportive actions for one warrant. For example, when students were investigating the relationship between distance and time in a mathematics context (Erica Lesson 3), a student (Student 4) contributed a claim, and Erica supported the argumentation by both asking a question to elicit a warrant (contributed by Student 5) and using other supportive actions. In this example, Erica summarized the student's contribution, "So you could think about the amount of time it takes you to travel to things," and then validated the answer, "That's a perfect example." She used other supportive actions from the Repeating and Informing categories to support the student's warrant.
Teachers used other supportive actions to support warrants contributed by students, co-constructed by teachers and students together, contributed by the teachers themselves, and even some hybrid warrants (by directing student attention to something observable). When a teacher supported their own contribution of a warrant, the support was often in the Directing or Repeating categories: they often supported by pointing at aspects of a figure or diagram on the board or by writing something on the board as they spoke. Support for co-constructed warrants was often in the Repeating category; support for students' warrants ranged across all categories of other supportive actions.
When we examined teachers' other supportive actions by context, we found teachers responded with other supportive actions to warrants in mathematics contexts most frequently, with 94 actions supporting 62 warrants (out of the 105 warrants contributed when the teacher was present). In coding contexts, teachers responded 58 times to 42 warrants (121 warrants were contributed in coding contexts). We found it surprising that the number of other supportive actions in integrated contexts was only 18, responding to 13 warrants (out of the 80 warrants contributed in this context). The smaller total number of questions and other supportive actions in integrated contexts may reflect the teachers' unfamiliarity with the kinds of tasks or arguments encountered in these contexts. This will be addressed more fully in the discussion section.

Comparing Teacher Support of Warrants Across Settings
The arguments in this analysis were categorized as occurring in small group with a teacher present, small group without a teacher present, or whole class (in which the teacher was always present). Our analysis included approximately 78 min of arguments in whole class settings and approximately 78 min of arguments in small groups when the teacher was present, and 32 min of arguments in small groups when the teacher was not present.
In whole class settings, warrants were explicitly contributed (student, teacher, co-constructed) approximately 66% of the time (see Table 6). Teachers directly contributed or co-constructed with students 16% of the warrants that occurred in whole class arguments, compared with only approximately 6% of the warrants in small group arguments. The higher frequency of teacher contributed and co-constructed warrants in whole class settings is unsurprising because teachers may feel the need to make warrants explicit in whole class settings to ensure all students understand the reasoning behind claims. Teachers also asked more questions in whole-class discussions (103) than in small group (64), prompting contributions of more warrants in whole class settings (77) than when they were present in small group settings (51). In general, teachers contributed to more warrants, requested more warrants, and asked more questions per warrant in whole class settings. Similarly, with respect to other supportive actions, teachers responded to more warrants with more supportive actions in whole class (117 other supportive actions to 78 warrants) than when they were present in small group settings (51 other supportive actions to 38 warrants).
In small group settings with a teacher present, warrants were explicitly contributed 60% of the time (student, teacher, co-constructed), with only 6% of the total warrants being contributed by the teacher or co-constructed (see Table 6). In small group settings with a teacher present, the majority of the warrants were made explicit, most often by the students (students contributed 90% of the explicit warrants). Although teachers asked comparatively fewer questions in small group settings than in whole class, teachers prompted warrants with questions from the same categories but not with the same frequency. The most frequently asked categories of questions in both small group settings and whole class were Requesting a factual answer and Requesting elaboration, but Requesting a factual answer questions were observed more frequently in whole class settings and Requesting elaboration questions were more frequent in small group settings. The categories of warrants in small group settings tended more toward Inspection and Interpretation; there were more Knowledge and Unformalized knowledge warrants in whole class settings. Speculation about the potential relationship between the questions asked and the warrants elicited in these settings is beyond the scope of this paper.

Similarly, the majority (61%) of warrants were made explicit in arguments that occurred in small group settings without a teacher present. This similarity was surprising because we anticipated students would make their warrants explicit more often when the teacher was present to support the argument, especially in this case where teachers were focused on implementing and supporting collective argumentation in their classrooms. However, in these classrooms, warrants were made explicit in small groups about as often with or without a teacher present.
Of note, students were less likely to make their warrants explicit if those warrants involved the observation of an event by the entire small group. The greatest percentage of hybrid warrants (16%) occurred in small group arguments without a teacher present. Students seemed to assume that because all students in a group observed the same event (often a robot traveling a certain distance), all students in that group had a shared understanding of the reasoning for their claim. Hybrid warrants were less common in small groups when the teacher was present (6%). This is unsurprising because teachers, when present, are likely to ask questions to prompt students to make their reasoning from observation explicit.

Discussion and Implications
The larger project from which this study developed had a goal of helping elementary teachers integrate coding into their everyday classroom instruction, allowing more students access to the skills and reasoning required in the twenty-first century. The research team proposed that using collective argumentation, which many teachers were already using in mathematics and science, to teach coding would result in less trial and error coding by students and more integration of coding into daily instruction by teachers. We expected that teachers would support the reasoning of students in similar ways across all these contexts; however, we were unsure of how students would justify their claims, specifically in coding contexts. Our typology of warrants is a significant contribution to the literature because it provides insights into the kinds of reasons and justifications contributed by students in their arguments.
We speculated that because the coding activities in this study often involved mathematics, students might base their reasoning on similar things in mathematics, coding, and integrated contexts. However, focusing on warrants allowed us to see the differences in reasoning occurring across contexts, as suggested by Slavit et al. (2021). In this study, students did not necessarily base their reasoning on similar things across all three contexts. For example, students contributed a considerable number of Method warrants in mathematics and integrated contexts, but not in coding. In coding, they contributed more Inspection warrants. It seems that if mathematics is not intentionally incorporated into coding activities, students will rely more on their observations and visualizations than on mathematical calculations, methods, or procedures. Therefore, if teachers want to engage students in mathematical reasoning while engaging students in coding, mathematics needs to be purposefully incorporated into coding activities.
Teachers may also need additional support in learning how to support argumentation in these integrated tasks. When designing the professional development course for the teachers in this study, we hoped that teachers would use their experiences supporting argumentation in mathematics and science to similarly support argumentation in teaching coding. The results show that the elementary teachers in this study did, in fact, support student contributions of warrants in coding arguments in similar ways to how they supported warrants in mathematics arguments, which aligns with what Gómez Marchant et al. (2021) found for secondary teachers. However, the teachers in this study did not offer as much support for argumentation that occurred in integrated contexts. The integration of coding into other disciplines in the elementary classroom is a way to engage more elementary students in coding and better prepare them for the technological world. Therefore, it is important that teachers can take the strategies they use in teaching each discipline and use them in integrated contexts as well. The results of this study suggest that teachers may not translate these supports from mathematics and science to integrated contexts naturally. It is possible that teachers are not as confident with the integrated tasks, which require simultaneously coordinating the goals and ideas of two (or more) disciplines, including a discipline such as coding, which is new to many teachers.
Despite the observed differences in teachers' support for argumentation, students in this study consistently made their reasoning explicit in mathematics, coding, and integrated contexts, with or without the teacher present. We expected, given our focus on teacher support for argumentation, that when teachers were present, students would make their warrants explicit. We did not anticipate the frequency of explicit warrant contributions when the teacher was not present. It is possible that the teachers in this study had already established the classroom norm of providing justification for claims made in mathematics contexts and students continued to adhere to these norms in coding and integrated contexts. However, there were certain situations without a teacher present in which students seemed to think they did not need to make their reasoning explicit. These situations were seen in the case of hybrid warrants, where students observed something, such as their robot carrying out their code, and seemed to make their claim based on that observation without explicitly voicing their reasoning. It is possible that in these small group settings, students felt that the reasoning was clear to the other members of their group because they all observed the same thing.
It is worth noting that in these coding activities, students often justified their claims about their code based on a variety of things, including observations and visualizations. They were regularly engaging in systematic modification of their code based on these justifications, not just engaging in trial and error as many novice coders do. This result demonstrates the possible positive impacts of using argumentation in the teaching of coding.

Limitations
The results of this study are limited by methodological choices. This study involved a small number of teachers, all of whom participated in a semester-long professional development course about argumentation. One goal of this course was helping teachers support argumentation to help students engage in systematic modification, rather than trial and error, in coding activities. Therefore, it is likely that these teachers were knowledgeable about argumentation and intentional in their support during arguments. There may also be fewer Trial and error warrants because this was one specific intention of this work. Although these results may have limited generalizability, this study does provide evidence that, in classrooms wherein argumentation is a focus, teachers and students regularly provide explicit reasons for their claims across mathematics, coding, and integrated tasks. In future investigations, it may be productive to look for relationships between kinds of warrants used within the same argument or in similar arguments.

Conclusion
In conclusion, this study reveals promise in engaging students in argumentation in the learning of coding, especially if students are familiar with argumentation in other contexts. However, our findings regarding support for reasoning in integrated tasks suggest that supporting students in argumentation in STEM contexts is difficult for teachers. Additional research is necessary to examine how to better support teachers in creating integrated tasks and in supporting students in argumentation in integrated tasks. From a research perspective, this study presents a framework that can be used as a starting point for examining warrants across several disciplines, which may provide insights into warrants contributed in integrated STEM contexts. Knowing what kinds of warrants to expect can assist teachers in anticipating their students' responses and supporting their students' engagement in collective argumentation in these integrated contexts. This future work has the potential to improve integrated STEM instruction.
Journal for STEM Education Research (2023) 6:275-301

Knowledge: Reasoning from ideas the teachers and students know or have learned, with an assumption that these ideas are shared or had been established in the class
  Definition: Invoking given mathematical terminology and properties thereof (including tautologies)
  Platform: Justification based upon how the programming language is constructed or an interpretation of how the programming language works
  Previous result: A result that was collectively established within the research window (may be an individual's misremembering of a result)
  Prior knowledge: A result that was established prior to the research window
  Theorem: A mathematical justification based upon an externally constructed theorem or property

Method: Reasoning based on a procedure for accomplishing a task
  Calculation-why: Providing a rationale for performing a calculation (along with the calculation in some cases)
  Exhaustion: All of the possibilities have been exhausted or we found everything
  Pattern noticing: Inferring/creating a pattern from examples
  Patterning: Reasoning made by using a previously established (not necessarily formally) pattern (using the pattern)
  Procedure-calculation: A mathematical process or set of steps that produces a solution to a specific problem (includes counting)
  Procedure-general: A procedure that describes the method to perform an operation or produce a required result (in general terms, not a specific case)
  Synthesis: Reasoning statement based on several theorems or ideas
  Trial and error: Choosing to change code arbitrarily