Introduction

Mechanistic reasoning is central to cognition in STEM domains. To reason mechanistically is to seek to explain a phenomenon by identifying its underlying entities and their properties, activities, and cause-effect relationships (Russ et al., 2008). Characterized by philosophers of science as an important part of scientific and technical thinking (Machamer et al., 2000), mechanistic reasoning enables humans to produce predictive models of both natural and designed systems.

The science education research community has devoted efforts to exploring what students’ mechanistic reasoning looks like and to developing principles for learning experiences that better support mechanistic reasoning development (Dickes et al., 2016; Grotzer & Basca, 2003; Krist et al., 2019). Researchers have studied mechanistic reasoning among science learners at many levels. For example, kindergarteners explained floating and sinking behavior by talking about force as an entity that interacts with objects placed in water (Louca & Papademetri-Kachrimani, 2012); middle-school students identified scent particles and their motion as central to how an orange can be smelled at a distance (Wilkerson-Jerde et al., 2015); and graduate students in physics explained solar panels by reasoning about how cells within a panel must be arranged to enable current flow (Jones, 2015). These studies show that engaging in mechanistic reasoning is part of students’ successful participation in the practices of science—itself an important form of science learning. They also illustrate how to notice mechanistic reasoning among varied forms of student discourse and representation and how to encourage and develop it within science learning environments.

Studies of engineering design expertise (e.g., Bucciarelli, 1994; Cross, 2004) overlap in some ways with work on mechanistic reasoning in science education. In particular, Gero’s function-behavior-structure (FBS) framework conceptualizes engineering designers’ thinking as a series of “transformations” back and forth from design requirements to designed structures (Gero, 1990; Gero & Milovanovic, 2021). Mechanistic reasoning is not explicitly named as part of the FBS framework, but it is a type of thinking that underlies what Gero calls “analysis”—transforming a designed structure into an expected behavior—and “evaluation”—transforming an observed behavior into the structure that caused it.

While scrutiny of professional and undergraduate engineering design expertise has revealed evidence of mechanistic reasoning, less is known about what mechanistic reasoning looks like in K-12 engineering learning experiences. With more information about how students express mechanistic reasoning about their own engineering designs, engineering educators could better assess and strengthen students’ understanding of how and why a design functions as it does. Pedagogical content knowledge about the role that mechanistic reasoning plays in successful pre-college engineering design could be an important tool to help students learn engineering. In summary, there is a need for close examination of mechanistic reasoning in K-12 engineering education. In this study, we tackle the elementary school level and the specific curricular context of community-connected design challenges. Previous literature on children’s mechanistic reasoning about engineering solutions has mostly focused on how children explain highly structured, pre-existing mechanical systems such as gear trains and linkages (Bolger et al., 2012). Yet there is growing interest in engineering design experiences in which elementary students create their own design solutions for local community contexts (Chiu et al., 2021; Jordan et al., 2021; Tan et al., 2019). Important questions remain about how the specific features of community contexts influence student opportunities for engineering design practice and reasoning. Community-connected design challenges are often sociotechnical in nature and offer a myriad of contextual details that students might choose to attend to, instead of or in addition to physical cause-and-effect relationships (Benavides et al., 2023; Topçu & Wendell, 2023). In this study, we examine five different community contexts and explore whether students’ mechanistic reasoning about their design solutions can be compared across contexts.

Background and Framework

Reasoning in Elementary School Engineering Design

When thinking about how to apply depictions of mechanistic reasoning to pre-college engineering education, we can begin with what researchers have already documented about how children reason about engineered systems. For example, when supported by adults, older elementary students can look at simple mechanical systems, such as gear trains and linkages, and describe causally how the elements affect each other’s motion (Bolger et al., 2012; Weinberg, 2017). When working with other children to design and build their own working devices or structures, children engage in different kinds of reasoning throughout the entire engineering design process and within different disciplinary practices (Cunningham & Kelly, 2017). During problem-scoping, children can use functional reasoning to think about how user needs translate into specific criteria and constraints (Watkins et al., 2014). While they are constructing a prototype, they can carry out trial-and-error experimentation to figure out how to resolve a failure (Jordan & McDaniel, 2014) or make balanced trade-offs that satisfy different dimensions of a design problem, such as competing technical and economic requirements (Goldstein et al., 2021). And they can use evidence-based reasoning to make and communicate reflective decisions about what design ideas to pursue and how to iterate on them (Wendell et al., 2017). The information sources they use for evidence include design briefs, math and science lessons, and material investigations (McFadden & Roehrig, 2019; Siverling et al., 2021). In summary, the literature shows that children use a variety of approaches to reason about engineered systems, but it does not yet provide a cohesive, detailed framework that can help us intentionally support mechanistic reasoning in engineering learning experiences.

Framework for Mechanistic Reasoning in Elementary School Engineering Design

In order to explore children’s mechanistic reasoning in engineering, we have adapted frameworks proposed in the science education literature by Russ et al. (2008) and Krist et al. (2019). Drawing from depictions of mechanistic reasoning by philosophers of science (Machamer et al., 2000), Russ et al. (2008) developed a coding scheme for systematic analysis of the substance of mechanistic reasoning in young students’ science inquiry. Their scheme includes seven hierarchical categories suggesting that mechanistic reasoning is evident when students describe the target phenomenon (category 1), identify the set-up conditions for the phenomenon (category 2), identify the entities that play a role in producing the phenomenon (category 3), identify the properties, activities, and organization of those entities that affect the outcome of the phenomenon (categories 4 through 6), and finally chain the current state of the entities backward to what happened previously or forward to what will happen next (category 7). The higher the category, the stronger the evidence of mechanistic reasoning by students.

While Russ et al. (2008) focused on classroom discussion in physical science, Krist et al. (2019) examined mechanistic reasoning in students’ written explanations in multiple science content areas. Their approach made use of the elements of mechanistic accounts proposed by Russ et al., the structure-behavior-function framework developed by Hmelo-Silver and Pfeffer (2004), and Wilensky and Resnick’s (1999) idea of thinking about complex systems “in levels.” Synthesizing this prior work, Krist et al. (2019) proposed that science students produce mechanistic reasoning by applying three epistemic heuristics—that is, ideas about how to guide one’s intellectual work in science. These heuristics include (1) considering what occurs at the scalar level below the level of the observed phenomenon, (2) identifying and characterizing the relevant elements at that lower level, and (3) coordinating those elements over space and/or time to see whether and how they give rise to the observed phenomenon (p. 175). In summary, Russ et al.’s (2008) framework foregrounds the distinct elements of a mechanistic account (what learners say about a phenomenon’s entities and their characteristics and actions), whereas Krist et al.’s (2019) framework foregrounds scalar levels (how learners describe what is happening at a scale other than that of the observed phenomenon).

Both of these frameworks were designed to characterize mechanistic reasoning during classroom scientific inquiry—when students seek to explain phenomena. Yet we needed a tool to characterize mechanistic reasoning during classroom engineering design—when students seek to create functional artifacts. We used the frameworks developed by Russ et al. (2008) and Krist et al. (2019) as launching points because they were tailored for analysis of student talk and work specifically in K-8 classrooms. Their specific focus on students’ causal explanations of systems and phenomena aligns well with our interest in children’s reasoning about why their engineering design solutions function as desired, or not.

Previously, we analyzed the mechanistic reasoning expressed spontaneously in elementary students’ small-group conversations while they built and tested engineering design prototypes (Wendell et al., in preparation). In that work, we found that with an adapted subset of Russ et al.’s seven levels and Krist et al.’s three heuristics, we could fully describe the ways that engineering students used mechanistic reasoning to discuss their design ideas, artifacts, test results, and plans for iteration. Adapting Russ et al.’s seven levels, we saw that in engineering conversations, (a) identifying the target phenomenon co-occurred with identifying set-up conditions when students identified the target design performance; (b) identifying entities occurred when students named key design components; (c) identifying properties, organization, and activities of entities co-occurred when students described key factors of those design components; and (d) chaining occurred when students recognized causal relationships between different factors. We also saw (e) that Krist et al.’s third heuristic captured something about engineering mechanistic reasoning that Russ et al.’s seven levels did not—students’ linking from the components of a design to the overall design performance (which we considered to be the target phenomenon). The major result of this previous analysis was a five-part framework for mechanistic reasoning in elementary engineering design, shown in Table 1.

Table 1 Framework for mechanistic reasoning in elementary school engineering design

Research Purpose and Question

We have written elsewhere about the mechanistic reasoning that students used “in progress” while making engineering design decisions (Wendell et al., in press). In this report, we shift to characterizing elementary students’ use of mechanistic reasoning in “final design” accounts. We ask: how do elementary students use mechanistic reasoning when describing and explaining their design prototypes at the conclusion of five different community-connected engineering units?

Methods

This study occurred within a larger project in which researchers and school district partners developed and studied community-connected, integrated science and engineering curriculum units to support diverse 3rd–5th grade students’ science and engineering ideas, practices, and attitudes. In each of the five units, students prototype, test, and iterate on functional solutions to a design challenge rooted in the students’ local community. They also explore scientific explanations of the phenomena and mechanisms related to the challenge. The context for each design challenge is a real situation in a real location suggested by classroom teachers based on their knowledge of the community. These locations include a school playground, town reservoir, regional entertainment center, train track construction site, and intersection of a local highway with a migratory bird route. Table 2 summarizes the design goal, prototype materials, and testing procedure for the design challenge in each of the five units. Figure 1 shows sample student prototypes for each of the five community-connected design challenges.

Table 2 Goals, materials, and testing procedures for the design challenges in the community-connected curriculum units
Fig. 1 Prototypes for each of the five community-connected design challenges

Data Collection and Participants

For this qualitative descriptive study, we focus on interview data collected after each of five community-connected curriculum units: accessible playground design (3rd grade, N = 8, district A, schools 1 and 2), displaced animal relocation design (3rd grade, N = 10, district A, school 1), migration stopover site design (4th grade, N = 4, district A, school 2), retaining wall design (4th grade, N = 13, district B, school 1), and water filter design (5th grade, N = 9, district A, school 3). District A is a suburban public school district that is home to a linguistically and culturally diverse population with approximately 45% economically disadvantaged households, 40% students whose first language is not English, and a majority Latinx population (approx. 55% Latinx, 35% White, 5% Black, 5% multiracial, 1% Asian). District B is an urban public school district also home to a linguistically and culturally diverse population with approximately 70% economically disadvantaged households, 45% students whose first language is not English, and a racially and ethnically diverse population (approx. 45% Latinx, 30% Black, 15% White, 10% Asian, 2% multiracial).

In the 20-minute interviews, students were shown a photo of the artifact they constructed to solve the community-connected design problem as well as a photo of an alternative solution, which they were told was constructed by students at another school. They were then prompted to (a) describe and explain their team’s final design solution, (b) compare it to the alternative solution, and (c) evaluate how well it connected to the real-life design problem. Our goal in using this stimulated-recall interview approach was to find out more about how students had made design decisions and how they justified these decisions to people outside their design team. We were exploring what kinds of reasoning they used to express their decisions and justifications and whether mechanistic reasoning in particular was one of the reasoning tools they used.

Data Analysis

To make sense of the data, we applied a deductive data analysis approach (Bingham & Witkowsky, 2022; Creswell & Plano Clark, 2007). In deductive coding, the researcher creates codes before beginning analysis and then works through the data to see whether and how it fits those codes (Bingham & Witkowsky, 2022). We used a priori elements of mechanistic reasoning generated in light of previous research (e.g., Krist et al., 2019; Russ et al., 2008). We coded the interview transcripts for four of the elements of mechanistic reasoning shown in Table 1: naming entities, describing entity factors, connecting entity factors, and linking up to design performance. We did not code for identifying target performance because the interviewer reminded the student of the design goal in the interview prompts. The two researchers examined the mechanistic reasoning elements according to the analytical framework in Table 1: the first author coded all transcripts, and the second author read all transcripts to review every code application. Across all transcripts, we disagreed on 7 mechanistic reasoning elements out of all the elements students revealed, and we resolved each disagreement through discussion to reach a consensus code. Because both authors analyzed all transcripts, we did not report an inter-rater reliability value.

Some of the elements of mechanistic reasoning are prerequisites for other elements. Describing entity factors could only occur after naming entities. Likewise, linking up to design performance could only occur after entities had been named. Connecting entity factors could only occur after entities had been named and their factors had been described. This nesting of elements means that some codes necessarily occurred together; any student talk coded for connecting entity factors also included naming entities and describing entity factors. For example, for the playground unit, Tessa named the entities of wheelchairs and swings and described the entity factor of wheelchair weight. She connected entity factors by explaining that the weight of the wheelchair (factor #1) on a particular location of the swing was the reason for the flipping over (factor #2) of the swing.
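To make this prerequisite structure explicit, the following sketch shows one way the nesting constraints among the four codes could be represented and checked programmatically. It is a minimal illustration only, not part of the study’s analysis procedure, and all identifiers (e.g., PREREQUISITES, check_nesting) are hypothetical.

```python
# Minimal, hypothetical sketch of the nesting constraints among the four coded
# elements of mechanistic reasoning; not part of the study's actual analysis.

# Prerequisite structure described in the text: describing factors presupposes
# naming entities; linking up presupposes naming entities; connecting factors
# presupposes both naming entities and describing factors.
PREREQUISITES = {
    "describing_entity_factors": {"naming_entities"},
    "linking_up_to_performance": {"naming_entities"},
    "connecting_entity_factors": {"naming_entities", "describing_entity_factors"},
}


def check_nesting(applied_codes):
    """Return descriptions of any codes whose prerequisites are missing."""
    violations = []
    for code in applied_codes:
        missing = PREREQUISITES.get(code, set()) - applied_codes
        if missing:
            violations.append(f"{code} lacks prerequisite(s): {sorted(missing)}")
    return violations


# Example: a segment like Tessa's playground explanation carries all three
# lower-level codes, so no nesting violations are reported.
tessa_segment = {"naming_entities", "describing_entity_factors", "connecting_entity_factors"}
print(check_nesting(tessa_segment))  # -> []
```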

Findings

The proportion of students who used elements of mechanistic reasoning when describing and explaining design solutions to community-connected problems is given in Table 3. In the interviews after all five curriculum units, when describing and explaining design solutions, the majority of students used all four of the elements of mechanistic reasoning included in our coding scheme. As shown in Table 3, all students named entities and described entity factors for the design solutions for all five community contexts. For three of the contexts (playground, displaced animals, stopover sites), some students described the design artifacts without expressing connections between entity factors and/or the way factors linked up to the design performance.

Table 3 Proportion of students who used elements of mechanistic reasoning when describing and explaining design solutions to community-connected problems

The rest of the findings are presented in three parts.

In Part I, we introduce each curriculum context and show how students engaged in naming entities and describing entity factors to point out and characterize the important components in the design prototypes. We provide representative examples for these two aspects of mechanistic reasoning for all five community design contexts so that the reader has a sense of the range of entities and factors that students identified.

In Part II, we analyze students’ linking up to performance across the curriculum contexts and illustrate it with one representative example from each curriculum context.

In Part III, we analyze students’ connecting of entity factors across the curriculum contexts and illustrate it with one representative example from each curriculum context.

Part I: Naming Entities and Describing Entity Factors

All students in all units named entities and described entity factors. Students named entities to identify what they perceived as the major components of design artifacts. Students described entity factors when they identified the characteristics or actions of a component that mattered to the design’s functioning. Examples of entity factors include shape, size, texture, orientation, location, and motion. For each unit, examples of naming entities and describing entity factors are given below (Table 4).

Table 4 Examples of naming entities and describing entity factors (annotated with bolding to point out the entity or factor)

Water Filters

After the water filter unit, Melissa and Raven named entities and described entity factors to compare their teams’ design solutions with an alternative solution shown by the interviewer. To point out a key difference between the two filters, Melissa named the entity of “net stuff” used to remove pollutants. Raven accounted for a weakness in her team’s design (a little unsturdy) by naming an entity (tape) and describing its factors—the actions of getting wet and slipping off the components it was holding together.

Retaining Walls

For the retaining wall unit, Julio named the entities of “toothpicks” and “sticky notes” used to create their wall design. For the same unit, Dani talked about properties of the napkin (not holding the sand) and the aluminum foil (holding the sand) for filling the holes in their design.

Stopover Sites

After the stopover sites unit, Josiah named the entity of “sticks” used to make a roof for the animals. Jimmy mentioned that they stuck “toothpicks from the bottom” so that they came out of the turf roof in order to provide light for the animals.

Displaced Animals

For the displaced animals unit, Beth named the entities of “tape,” “foam,” “paper,” and “cardboard” in order to create their design. Rohan described entity factors as he talked about “being stable” as a necessary characteristic of the pillars in their bridge design.

Playground

After the playground unit, Tessa named entities in order to identify components—“metal and stuff”—that would have made her team’s design function even better. Penny described several entity factors that caused trouble with their design. She talked about the moisture content, location, and sticking action of the clay.

Part II: Linking Up to Performance

Students linked up to performance when they pointed out explicitly that a particular entity or factor played a role in the performance of the design artifact as a whole. For each unit, examples of linking up to performance are given in Table 5.

Table 5 Examples of linking up to performance across the units

Design Contexts Where All Students Linked Up to Performance

For the water filters, retaining walls, and stopover sites units, all students used linking up to performance at least once as a component of mechanistic reasoning. The design contexts in these units all involved blocking particles or letting particles flow in some way, and these performance goals may have been easier for students to link to elements of the design prototypes. For example (see Table 5), after the water filter unit, Raven linked the entity of “straws” and the factor of their orientation (“facing that way”) up to the performance of blocking the beads.

Design Contexts Where Fewer Students Linked Up to Performance

For the playground unit, 63% of the interviewed students revealed the linking up to performance element at least once, and for the displaced animals unit, 60% of the students did so. Because the displaced animals and playground units included loosely specified testing procedures, as opposed to the highly specified procedures in the other units, some students might not have seen a need to explain the role a particular entity or factor played in the performance of the design artifact overall. For example, after the displaced animals unit, the interviewer asked Beth how her team’s design solved the problem. Beth’s response provided a narrative of what their truck design would accomplish without pointing out specific design components or the success or failure of a design test. She described: “Um, animals need a new home because their home might have been destroyed. So they can go into the truck somehow. Maybe we can lure them in with food, cause we have food in the truck. And so they can go into the truck and we can let them go off into a safe habitat.”

Part III: Connecting Entity Factors Across the Units

Students connected entity factors to give a cause-and-effect explanation relating the characteristics or actions of multiple design components. Making connections between factors involved explaining how or why one characteristic, action, or component influenced another, all at a level below the overall design performance. For each unit, examples of connecting entity factors are given in Table 6.

Table 6 Examples of connecting entity factors across the units

Design Contexts Where All Students Connected Entity Factors

For the water filters and retaining walls units, all students revealed connecting entity factors at least once as a component of mechanistic reasoning. For the water filter unit (see Table 6), Raven connected one entity factor (sponges at the top of the design) to another factor (stopping oil and glitter) by explaining that using sponges was the reason for stopping oil and glitter in their design.

Design Contexts Where Fewer Students Connected Entity Factors

For the playground, displaced animal, and stopover sites units, the use of connecting entity factors was less common among the students. In these three units, the tests of students’ design solutions involved hypothetical interactions with people and animals (in the displaced animal and playground units) or interactions with non-tangible physical entities such as sound and light (in the stopover sites unit). Since the students’ prototypes did not interact with tangible entities like water or sand, students less often discussed patterns of physical cause and effect when telling the interviewer about their designs. However, they did use connecting entity factors to explain how their design met secondary goals. For example, John connected entity factors to explain why the roof of their stopover site design was tilting and what might make it collapse. He did not discuss the cause-and-effect relationships involved in trying to reduce the stopover site’s sound and light levels, which are non-tangible entities.

Discussion

All students named entities and described entity factors for the design solutions for all five community contexts. For three of the contexts (playground, displaced animals, stopover sites), some students described the design artifacts without expressing connections between entity factors or the way factors linked up to the design performance. This result is interesting because these students had already done the prerequisite work of identifying key parts of their design. In a sense, they were prepared for reasoning at the level of linking or connecting. We speculate that differences between the design challenges may explain why all the students clearly named and described entities in all the units, but in some units, some students did not go on to linking up to performance or connecting entity factors.

Linking Up to Performance

The curriculum units that included highly specified testing procedures and that required the design construction to interact with tangible entities (like the water filter and retaining wall units) were more conducive to the full set of mechanistic reasoning elements. Students may have linked up to the overall design performance more consistently after the water filters, retaining walls, and stopover sites units because highly specified testing occurred frequently and at discrete moments in time in these units. Students could therefore focus on the effects of particular entities and factors on the success or failure of their design. The test outcomes for these units were readily observable (either by eye or with a measurement device), and it was clear to students what counted as a successful test result—the blocking of beads, sand, or light and sound. Specific testing procedures are often present in effective elementary engineering curriculum units (Cunningham et al., 2020; Purzer et al., 2022). For example, in an engineering design challenge developed by Dankenbring et al. (2014), elementary science students are tasked with designing an effective compost column that will produce better compost for crop fertilization. Students can easily see what constitutes a successful test result thanks to the highly visible testing procedures (whether the soil is dry, wet, or moist, and what its average temperature is), which can be carried out by eye or with a measurement tool. During the design construction, students also interact with tangible entities (grass, leaves, fruit, twigs, red worms, cheesecloth, vegetables, and bread). Though Dankenbring et al. did not explore the relationships between the use of tangible entities and mechanistic reasoning, they concluded that students are able to test their ideas about scientific phenomena and see any shortcomings in their understanding firsthand as they design, construct, and test their prototypes.

By contrast, the playground and displaced animals units included more open-ended instructions about how to test designs. In these units, students had much more choice about what materials and procedures to use in testing their designs. Therefore, linking up from a single entity or factor to the overall design performance required more interpretation and effort.

Connecting Entity Factors

The findings also showed that connecting entity factors occurred less often than naming entities and describing entity factors. We can also speculate on why students connected entity factors most consistently in the water filter and retaining wall units. In these units, tests of students’ design solutions focused on concrete interaction with specific elements of the natural environment, namely water or sand. In the other three units, although the design challenges were based on real situations in real community contexts, the tests of students’ design solutions involved hypothetical interactions with people and animals (in the displaced animals and playground units) or interactions with non-tangible physical entities such as sound and light (in the stopover sites unit), so students less often discussed patterns of physical cause and effect when telling the interviewer about their designs. When constructing and testing filters and retaining walls, students could touch and see the water and sand and observe how these natural elements interacted with each entity they had chosen to include in their design solution. Because these interactions were more concrete, it was likely easier for students to observe cause-and-effect connections between entity factors. Schellinger et al. (2021) stated that tinkering with physical materials may promote students’ curiosities and support their engagement in further exploring and sense-making around scientific phenomena; they also reported that physical materials may spark students’ wonderment and inquiry and help students gain familiarity with the materials. In the more abstract and hypothetical scenarios used in the playground, displaced animals, and stopover site units, it was more difficult for students to see patterns of cause and effect.

In summary, we did not find evidence in this study that the particular community site or problem itself influenced student reasoning, though future studies should still explore this possibility. Instead, our findings suggest that opportunities to reason mechanistically were influenced by the specificity of testing procedures and the concreteness of design tasks. These results can be related to scholarship on the nature of engineering. Pleasants and Olson (2019), building on Karataş et al. (2016), proposed nine disciplinary features of engineering. Of these nine, the feature “specifications, constraints, and goals” is most aligned with the aspects of our curriculum units that likely hindered or helped with mechanistic reasoning. Testing procedures are an aspect of design specifications, and they operationalize design goals.

The finding that students did not always connect entity factors and link from entities and factors up to design performance is consistent with our related research on students’ use of mechanistic reasoning during team design conversations (Wendell et al., in preparation). There, we found that students often needed instructor prompting to express causal thinking about relationships among design components and between components and their overall design. Findings from both studies imply that connecting entity factors when explaining an engineering design is not a spontaneous strategy for all elementary students. This result is consistent with findings about students’ mechanistic reasoning in science. Russ et al. (2008) put “chaining”—which involves making causal connections—at the most sophisticated end of their mechanistic reasoning framework, and Krist et al. (2019) showed that some students need substantial support to make claims about entities at a scalar level below the phenomenon. Tang et al. (2020) also reported that elementary school students predominantly revealed pattern-seeking in science classrooms instead of mechanistic reasoning.

Implications

Our findings have implications for the choices educators make about engineering design curriculum. When educators wish to create opportunities for elementary students to use mechanistic reasoning during engineering design processes, they should consider choosing engineering design contexts that include interaction with tangible entities, clearly testable design tasks, and limited reliance on hypothetical scenarios. When the engineering unit contexts have these characteristics, students can be more focused on how the elements of their design relate to design performance and more attentive to relationships between design entities and factors. Greater use of hypothetical situations, less tangible materials, and more ambiguous design tests might offer rich contexts for mechanistic reasoning among high school or college students, but they may be less effective for elementary students, who, like the sample in the present study, are mostly in the concrete operational stage (roughly ages 7–11).

We can also suggest that if students have had experiences at home or in other non-classroom environments trying to solve a problem similar to the one posed in an engineering lesson, they may reveal more sophisticated mechanistic reasoning elements such as linking up to performance and connecting entity factors. For example, for the water filter unit, many students stated that they had experience with filtering water in their out-of-school lives (e.g., one student, Raven, had observed her father changing a water filter). Students’ familiarity with the context of the water filter unit might have facilitated their connections between entities and factors and enabled more sophisticated mechanistic reasoning elements. By contrast, students interviewed about the displaced animals and playground design tasks did not report having previous experiences solving similar problems in their everyday lives. Of course, all students have familiarity with playgrounds and animals, but these students did not perceive themselves as having previously worked on problem-solving related to playgrounds and animals—that is, they had not previously tried to design stable playground structures or systems to relocate animals.

Conclusion and Future Research

Our findings suggest that particular features of the design tasks used in the community-connected engineering curricula influenced students’ opportunities for mechanistic reasoning about their design solutions. Each curriculum unit featured a different community problem (e.g., need for playground equipment, need to filter reservoir inflow), and each problem afforded different design task features (e.g., loosely vs. highly specified testing procedures; concrete vs. hypothetical interactions with entities external to the design solution). We can thus claim that children’s mechanistic reasoning can be compared across different community-connected engineering curriculum units. However, future research is needed to confirm which characteristics of community contexts specifically support or hinder aspects of mechanistic reasoning in engineering learning.

Methodologically, this study demonstrates that the five-part framework derived from Krist et al. (2019) and Russ et al. (2008) works just as well for analyzing student interviews conducted after an engineering design experience as it does for analyzing student discourse during the building and testing of engineering design prototypes. We recommend future work using this framework for different age groups of students and different engineering learning contexts in order to improve its usability and validity in engineering education research. Teacher professional development is another context where this approach to characterizing children’s engineering reasoning may be useful. Definitions and examples of these five elements of mechanistic reasoning could be a tool for professional learning communities where teachers gather to look at students’ engineering work (Mangiante & Gabriele-Black, 2020). The elements could support teachers in noticing, interpreting, and deciding how to respond to students’ engineering thinking (Watkins & Portsmore, 2022).

In this study, we were limited by examining only five curriculum units and by eliciting student reasoning only after the units had concluded. Another limitation is that we only explored curriculum units in which the engineering design instruction occurred after the science inquiry instruction. It is possible that students would use mechanistic reasoning more or differently if the curriculum sequence featured engineering design tasks before inquiry into scientific phenomena. Finally, our sample consisted of students from different grade levels who may have different previous experiences with engineering, and this grade-level variability is a limitation when we compare engineering design contexts. Future work is needed to disentangle the influence of design context from the influence of curriculum sequencing, students’ grade level, and students’ previous experience with engineering design.