Journal of Mathematics Teacher Education, Volume 19, Issue 1, pp 7–32

Exploring how teacher-related factors relate to student achievement in learning advanced algebra in technology-enhanced classrooms

Abstract

In this study, we examine the relationship between contextual variables related to teachers and student performance in Advanced Algebra classrooms in the USA. The data were gathered from a cluster-randomized study on the effects of SimCalc MathWorlds®, a curricular and technological intervention replacing part of the Algebra 2 curriculum, on student learning of Algebra 2 content. Conditional measures (teacher background characteristics) and instructional measures (self-reported instructional preferences, stances, and classroom practices) were subjected to a variety of empirical analyses to discern their relationship to student learning. We examined both the overall effect of teacher contextual variables on student learning and the specific effect of SimCalc on both teacher instructional measures and student performance. There is evidence that teachers who use the SimCalc curriculum value classroom communication, deep understanding of mathematical concepts, and support for both routine and non-routine problems.

Keywords

Algebra · Student achievement · Dynamic representations · Connectivity · Teacher contextual variables

Rationale

Success in advanced algebra is a strong predictor of college readiness (Conley 2007; National Center for Education Statistics 2006). In this study, we examine teacher factors that correlate with student success in high school algebra in a sample of schools in Massachusetts, USA. We examine the relationship between teacher characteristics, instructional approaches, and student mastery of essential concepts in “Algebra 2”.

Instructional approaches (what the teacher does) and teacher quality (measurable elements of the teacher’s background, preparation, or experience) play an important role in student achievement. Our view of instructional approaches conforms closely to the way Hiebert et al. (1997) describe the “role of the teacher”: instructional approaches are what the teacher does to engage students—the tasks, the environment, and the discourse. Throughout this article, “student achievement” refers to conceptual growth inferred from the performance measures. Effective teachers are those who are associated with high student achievement. In general, effective mathematics teachers elicit stronger student performance on measures of mathematics (Darling-Hammond 1999; Rockoff 2004; Wenglinsky 2000), provide greater access to mathematics for underserved populations (Akiba et al. 2007; Heck 2007; Pianta et al. 2007), and create greater opportunities for college attendance and success (Alliance for Excellent Education 2002; Peske and Haycock 2006).

There is a strong correlation between measures of teacher quality and student performance (Darling-Hammond 1999; Alliance for Excellent Education 2002). Goldhaber and Anthony (2007) and Sanders and Rivers (1996) propose that teacher quality is closely linked with student performance. Moreover, studies suggest that teacher quality plays a critical role in explaining discrepancies in mathematics performance between high and low poverty schools (Akiba et al. 2007; Clotfelter et al. 2006, 2007). The National Center for Education Statistics (2006) reported that high poverty schools had the highest percentage of teachers with fewer than 5 years of teaching experience—a key predictor of teacher quality (Clotfelter et al. 2007). Lack of teaching experience may be a factor accounting for lower mathematics achievement in high poverty schools nationally.

Access to high-quality mathematics teachers in high school is important for success in mathematics. Completion of Algebra 2 (a school curriculum subject for advanced algebra in the USA) is a strong predictor of college readiness (Choy et al. 2000; Virginia Department of Education 2012; Klopfenstein and Thomas 2009). Although having an effective mathematics teacher predicts greater success in college, and despite recent US policy efforts to ensure highly qualified teachers for all students, not all students have access to effective mathematics teachers. The inequitable distribution of high-quality mathematics teachers may be at the heart of the achievement gap (Peske and Haycock 2006). Effective teachers help students to develop a more long-lasting, conceptual understanding of mathematics (Boaler 2013; Ma 1999), and those students are more likely to enter and finish college (Conley 2007).

Effective teachers are those who develop strong knowledge of teaching, content, and how their students learn (Darling-Hammond et al. 2005). This type of knowledge is developed in teacher preparation programs and through in-service professional development. Our work explored growth in teacher knowledge of instructional approaches, particularly with regard to classroom communication.

Technology offers innovative and effective ways for transforming practice and shifting mathematical communication within the classroom (Ares et al. 2009; Goos et al. 2003). The use of multiple mathematical representations (graphs, charts) facilitated by graphing calculators provides objects or artifacts to support conjecture and reflection (Papert and Harel 1991).

Technology can provide a means for professional growth as teachers develop new pedagogical identities through an appropriation of technology in the classroom (Goos 2005). In a longitudinal study investigating teachers’ adoption of technology for the classroom, Hennessy et al. (2005) described a “pedagogical evolution” (p. 186) as teachers incorporated technology into their practices. The authors reported teacher accounts of using technology to support, enhance, and extend existing classroom practice that included exploiting the role of technology in facilitating inquiry in the classroom, or utilizing dynamic representations afforded by technology to increase student access to difficult mathematical and scientific concepts. The authors describe a shift in teachers’ practice and thinking during this evolution.

Our work is situated within such research and examines the transformative role of a particular type of technology-enhanced learning environment that supports communication of mathematical ideas across a diverse population of teachers and students through a randomized control experiment. Our aim is to inform teacher growth and effectiveness in in-service populations.

Literature review

Our study explored the relative importance of factors that have been shown to influence student mathematics understanding and performance. Building on past work, we investigated the strength of the relationship of conditional measures (teacher contextual factors) and instructional measures (teacher practices and focus of instruction) with student performance. Our goal was to understand the relationships between teacher conditional and instructional measures and student performance, in the context of this current study of SimCalc as a replacement for Algebra 2 curriculum.

Conditional measures for predicting student success

Conditional measures such as teacher experience, background/education, and the size of the class being taught have been found to have some predictive value for student success. Years of teaching experience, though extensively studied, demonstrate a weak correlation with student performance (Goldhaber 2002; Goldhaber and Anthony 2007). Several studies found significant differences between teachers’ efficacy early in their careers and after many years of experience. These studies suggest that, on average, teachers are more effective after 5 years in the classroom (Coleman et al. 1966; Klitgaard and Hall 1974; Murnane and Phillips 1981).

While there is some disagreement about the effects of class size reduction generally (Mishel and Rothstein 2002), reducing class size seems to be more predictive of higher performance for some students, particularly those of low socioeconomic status (Finn et al. 2003; Hoxby 2000). Finn et al. (2003) proposed that this difference is largely due to differences in engagement (learning behavior and pro-social behavior). Teacher quality may, however, exert a greater influence on student outcomes than reducing class size (Mishel and Rothstein 2002).

Teacher background characteristics such as certification status and appropriate degree(s) in the field taught have a significant positive correlation with student outcomes (Darling-Hammond 1999; Hawk et al. 1985). These characteristics are thought to be proxies for significant pedagogical and content knowledge, respectively. Teachers’ mathematical content knowledge for teaching has been shown to have a strong association with student achievement (Ball and Bass 2002; Ball et al. 2001; Fennema and Franke 1992; Hill et al. 2005).

Instructional measures for predicting student success

An integral element of the SimCalc learning environment is the necessity of frequent mathematical discourse about the multiple mathematical representations that are created in the software. Such communication grows organically as needed in response to sophisticated mathematical concepts and is essential to SimCalc pedagogy. Hence, research on student and teacher mathematical discourse is particularly relevant. In addition, our work with SimCalc is situated within a rich literature on the use of technology to enhance or improve instruction.

The quality of student discourse has an important impact on student learning (Hegedus and Penuel 2008; Huang et al. 2005; Hunt et al. 2002; Khisty 2002). The technology itself plays an important role in facilitating meaningful communication in the mathematics classroom. Goos et al. (2003) suggested that graphing calculators used in the classroom could facilitate communication and sharing of knowledge in both private and public settings. Additionally, the authors found that when teachers invited students to share their work publicly via the overhead projector, the technology became a discourse tool that mediated whole-class discussion rather than a presentation device.

Staples (2007) outlined a tripartite model of the teacher’s role in supporting collaborative inquiry: (1) supporting students in making contributions; (2) establishing and monitoring a common ground; and (3) guiding the mathematics. Staples analyzed and described the teacher moves and skills that bring about productive whole-class discussion in a reform-based mathematics classroom while actively involving students in the collaborative process, organizing a collaborative learning environment, and promoting and extending students’ mathematical understanding; this work informed our overarching design principles. Ares et al. (2009) found that the use of technology in a connected classroom environment facilitates students’ agency and power over their learning and gives them an opportunity to communicate about and participate in powerful mathematical ideas.

More generally, numerous studies support the assertion that instructional technology impacts student performance, classroom climate, and attitudes of teachers and students about learning (Becker et al. 1999; Christensen 1997; Ertmer 2005; Means 2010; Pierson 2001; Pitler et al. 2012). Means (2010) points out that technology was most successfully utilized when it was part of a greater vision for approaching instruction. This included overt principal support and the ability and disposition of teachers using technology to collaborate and reflect on its use. Bitner and Bitner (2002) reported that the use of technology can have positive impacts on students’ school experience but, to be successful, implementers must provide extensive professional development for teachers and explore new administrative structures. Although schools in our study exhibited a variety of school and classroom structures, all teachers in the study received additional professional development with SimCalc and had regular opportunities to gather and reflect on their experiences.

Instructional variables other than those associated with technology have also demonstrated an impact on student learning. Pedagogy that focuses on individual or group learning has been found to have positive effects relative to teacher demonstration. Boaler (2013) and Kuh (2001, 2003) found that more student time on task was predictive of greater student learning. Prince (2004), Ryan and Patrick (2001), and Turner et al. (2003) found that students expressed strong preferences for active learning over lecture. Research also suggests that a pedagogy emphasizing mathematical meaning and application produces superior student learning in both academic and applied mathematical settings (Boaler 2013; Cobb et al. 2000).

Given this literature base, we wanted to investigate whether these contextual variables matter in evaluating whether a technology-enhanced classroom implementation can impact student achievement, as measured on a content-specific assessment of procedural and conceptual learning, and, if not, whether disaggregating or triangulating the data with other implementation-related factors could account for such student achievement.

In particular, we aim to illustrate that technology design principles based on improving the communication of fundamental mathematical practices can provide evidence of how teachers’ practices, and changes over time in their dispositions toward certain mathematical practices, help explain student achievement.

Prior research regarding SimCalc (which is the chosen technology platform we report on here) and related technology-enhanced learning environments has demonstrated, through efficacy and scaled randomized trials (Hegedus and Roschelle 2013; Roschelle et al. 2010), that such implementation can impact a wide variety of teachers in diverse settings to enhance students’ learning in core mathematical areas. Key features of such studies were the integration of technology, curriculum, and teacher professional development. In particular, the core technological ingredients include:
  1. Representational infrastructure, which provides new ways for students to express, visualize, compute, and interact with mathematical objects,

  2. Display infrastructure, which allows for both private (e.g., on a handheld) and public (e.g., projected) views of mathematical representations, and

  3. Connectivity infrastructure, which allows for rapid communication of mathematical objects among classroom participants and supports operations that distribute, collect, and aggregate student work. (Roschelle et al. 2010, p. 236)

The design principles of this work were also based upon giving teachers flexibility, through the technology-enhanced learning environment and the teacher materials, in an integrated approach: how teachers can sustain and support the discussion of mathematical ideas, how each student is represented, and how to compare student contributions through the affordances of the environment so as to support dialogic inquiry regarding group consensus of mathematical reasoning around functions and their representations.

We build on this literature, which highlights teacher contextual variables and instructional practices utilizing technology in the classroom as critical to impacting student learning outcomes. In this investigation utilizing SimCalc in the classroom, we provide empirical results to inform in-service teachers and teacher educators of how student-centered instructional practices, supported by technology in this type of learning environment incorporating multiple representations with enhanced communication infrastructures, can affect student learning outcomes.

Research questions

In using a technology-enhanced curriculum in contrast to existing instructional practices:
  1. What contextual factors related to teachers can potentially explain student achievement?

  2. What relationship do changes in teachers’ self-reported instructional practices and mathematical focus have with changes in their students’ achievement?

Our first question focuses on the specific teacher conditional measures outlined in the literature review that are related to student achievement in learning mathematics, including how many years a teacher has taught, class size, mathematical background, and total time spent covering the curriculum. The latter factor often varies in implementation studies and could be an important factor in producing larger learning gains; it is often controlled for in such studies (Roschelle et al. 2010), as we have done here.

Our second question examines whether changes in teacher instructional measures can explain student achievement on Algebra 2 concepts, including quadratic and exponential functions and relationships. In particular, we examine teacher dispositions toward developing communication techniques inside their classrooms; we believe such dispositions are significantly afforded by the technology-enhanced learning environment described earlier. In addition, we utilize other instruments to analyze relationships between student performance and teachers’ performance goals, as well as a longitudinal visual analysis, using sparklines (Tufte 2006), of class objectives and how they change over time.

Design and implementation of SimCalc

The SimCalc software addresses content through dynamic representations and student participation via classroom connectivity. It runs as a stand-alone application on the TI-83+/84+ family of graphing calculators, as a cross-platform Java application, and in combination over networks such as the TI-Navigator learning system. These systems allow students to submit functions from their calculators, which are aggregated for public display and execution via the Java application. Our activity structures are at the core of our innovation. They are designed to increase understanding of nonlinear functions and variation by allowing students to be intimately involved with the mathematical objects that they create.

Sample activity: exploring quadratic functions through linear velocity functions

This sample activity is the third activity in the second unit of materials and continues to introduce students to quadratic functions by examining the attributes of the function in terms of a runner moving with linearly varying speed. The activity uses the connectivity features of SimCalc to create a family of quadratic functions via varying velocity (or rate) graphs, with the aim of creating a systematically varying family of quadratic functions that differ from an original target function according to students’ group numbers. These functions are created by focusing students on the corresponding velocity graphs and having them use those, along with the table and the motion, to understand the attributes of a quadratic function. The expectation for teachers, as presented in the curriculum materials and in the professional development sessions, is that they will show the various representations over time, starting with the motion, rather than all representations at once, and that a discussion will explore what is going on in each representation. Further, teachers are expected to use the information within each mathematical representation to answer the question, “what should we expect to see?” in a new mathematical representation that has not yet been displayed (see Fig. 1). With this discussion format, students are expected to use what they have done in their group, and the task, to think more generally about all groups in the classroom. For example, after a whole-class discussion about the collection of motions, students are asked to make conjectures and predictions about the collection of functions in a velocity graph representation. Furthermore, for each representation, a dialogue takes place about whether the class submissions are what was expected, about whom each runner represents, and about how students determined this.
Fig. 1

Various representations (a world, b velocity graph, c position graph, and d table) of group work across a class

Suggested dialogue prompts are provided in the materials to encourage the comparison of work across representations and the comparison of work across student group contributions. A few examples of dialogue prompts across representations include:
  • What will the position versus time graphs of these velocity functions look like?

  • Can we determine the final position of the runners at the end of the motion, 5 s, based on their velocity graphs?

  • Can we determine where the runners will meet in the world based on their velocity graphs?

A few examples of dialogue prompts across groups include:
  • How much faster do group 4’s runners travel compared to group 1’s runners?

  • What is varying between groups in the velocity graph? In the position graph? In a table? In the velocity function expression?

Our curriculum focuses on multiple representations of quadratic functions including interpreting graphical forms (e.g., parabolas) and algebraic procedures (e.g., factoring) as well as exponential functions including properties and ways of modeling. Our materials focus on general skills for manipulating function expressions, solving systems of nonlinear equations and conceptual tasks to develop generalized understanding, and applications of parameterized families of functions. The complete curriculum intervention replaces approximately 3–6 weeks (depending on the length of class time) of existing materials (30 lessons or 1,500 min of instructional time) and can be downloaded freely at http://www.kaputcenter.umassd.edu/products/curriculum_new/algebra2/ and http://www.kaputcenter.umassd.edu/products/curriculum_new/.

In essence, using SimCalc allowed teachers flexible support in utilizing many representational and communicational affordances. Utilizing and re-framing Roschelle et al. (2010), these include the ability to:
  • Set up a classroom register and cluster students into mathematically meaningful group structures,

  • Distribute a configured document to students, giving them a particular setup but with the ability to make slight, mathematically meaningful alterations as needed (e.g., change the expected start point based on the calculator-based assignment),

  • Control which representations can be shown at any time (for example, velocity over time, or a particular group; all are displayed above),

  • Collect all work into a private cache at will for public display and discussion,

  • Hide or show various student contributions, and representations of such contributions that had not been accessible to students for the purpose of the activity but are available to the teacher in the computer software, and

  • Control the main display of any particular student device to focus the attention of the whole class on a particular construction, or enable a construction to be edited live in front of the class at the will of the teacher.

Curriculum materials and teacher support

The curriculum materials are divided between teacher materials and student activity sheets. For the teacher, this included the activity materials, problem sheets, answer keys, and dialogue boxes with suggested questions to focus the classroom discourse on linking multiple representational features and generalization. These questions were developed by analyzing SimCalc classrooms from previous work where we observed questions that sustained discussion or helped focus students’ attention on the relevant topic or concept. Expert SimCalc teacher leaders and department chairpersons were important partners in the activity design in selecting core topics, skills, and concepts based on their relative importance and range of difficulty. These key partners also provided input in determining the length of the curriculum package, the format of the materials, and necessary technical support. The materials were developed and piloted in eight classrooms prior to the main implementation. The materials were revised following this pilot study in consultation with this expert group of teacher leaders.

Teachers met weekly with the research team for professional development over 6 weeks. Each session lasted approximately 2 h, with some follow-up questions addressed via email/phone while teachers were implementing the curriculum. To ease future dissemination, we kept the total training time manageable (approximately 15 h). Our main focus for these weekly professional development meetings was how to implement the technology and the associated curriculum materials; in particular, we focused on the dialogue prompt boxes in the teacher materials that support communication of ideas as an integrated package. We also presented several classroom videos of SimCalc being used in prior work and discussed the types of student interactions, teacher responses, technology use, and questioning strategies enacted.

Research design

Experimental design

We investigated our research questions by conducting a cluster-randomized trial in six high schools in Massachusetts, USA, of varying academic achievement levels. The cluster was the classroom and hence the unit of randomization. Given that several sections of Algebra 2 were taught in our participating schools, with some teachers teaching more than one section, we used this unit to increase our pool for assignment. A power analysis indicated that we needed 28 clusters to achieve power = .80 when ρ = .10, δ = .40, and n = 25, with the remaining variability assumed to be at the student level for both treatment and control.
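For readers who wish to reproduce this calculation, the following is a minimal sketch (our illustration, not the authors’ code) of the standard power computation for a two-arm cluster-randomized design, in which the design effect 1 + (n − 1)ρ deflates the total sample size:

```python
# Approximate power for a two-arm cluster-randomized trial.
# J = clusters, n = students per cluster, rho = intraclass correlation,
# delta = standardized effect size. Assumes a two-sided z-test.
from scipy.stats import norm

def crt_power(J=28, n=25, rho=0.10, delta=0.40, alpha=0.05):
    deff = 1 + (n - 1) * rho           # design effect of clustering
    n_eff = J * n / deff               # effective total sample size
    se = (4 / n_eff) ** 0.5            # SE of the standardized mean difference
    return norm.cdf(delta / se - norm.ppf(1 - alpha / 2))

print(round(crt_power(), 2))  # ~0.82 under these assumptions
```

With J = 28 clusters of n = 25 students and ρ = .10, this simple approximation is consistent with the reported target of power = .80.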

We used the same pre/posttest measurement to evaluate differences in learning core algebra concepts and skills with respect to various student- and class-level variables. At the teacher level, we used the same pre/postinstrument to measure self-reported classroom practices and daily logs to address both research questions.

Sample and demographics

Our sample region offered a diversity of school districts including urban, suburban, and rural settings with a wide range of proficiency levels as evaluated by the Massachusetts State Education department in the USA. Six school districts agreed to participate in our main study. The participants for the study were high school Algebra 2 students (15–17 years old) and their teachers. Each school district had two SimCalc classes and two non-SimCalc classes, which were randomly assigned. The non-SimCalc classes continued using their district adopted classroom materials while the SimCalc classes replaced portions of their text with the SimCalc materials. In total, 606 students agreed to participate in the study.

Implementation design

The study was designed as a cluster-randomized control trial where students were nested within classes (clusters). This allowed us to compare results within a cluster (class) and across clusters (classes). In the summer prior to the intervention, the mathematics department chairpersons of the participating high schools supplied a list of all teachers who had agreed to participate in the study and their assigned Algebra 2 (or equivalent) classes throughout the year. From a total sample of 45 eligible classes, 28 classrooms (clusters) were selected to participate in the study. We randomly formed pairs of classes (ensuring each pair came from one school) and randomly labeled one class in each pair red and the other black. With 14 pairs selected, we flipped a coin to determine whether red denoted treatment or control (black denoting the other condition).
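A hypothetical sketch of this pair-randomization scheme follows; the class identifiers are invented for illustration:

```python
# Pair-randomization as described above: classes are paired within schools,
# one member of each pair is labeled red and the other black at random,
# and a single coin flip maps red/black to treatment/control for all pairs.
import random

def assign_conditions(pairs, seed=None):
    """pairs: list of (class_a, class_b) tuples, each pair from one school."""
    rng = random.Random(seed)
    labeled = [rng.sample(pair, 2) for pair in pairs]  # (red, black) per pair
    red_is_treatment = rng.random() < 0.5              # one coin flip overall
    assignment = {}
    for red, black in labeled:
        assignment[red] = "treatment" if red_is_treatment else "control"
        assignment[black] = "control" if red_is_treatment else "treatment"
    return assignment

print(assign_conditions([("S1-C1", "S1-C2"), ("S2-C1", "S2-C2")], seed=7))
```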

Treatment teachers implemented the SimCalc Algebra 2 curriculum, which replaced the regular activities and curriculum but covered identical topics. The curriculum was designed to be implemented in 25 h. Teachers implemented the curriculum at a variety of points during the year, and for various continuous stretches of days, due to several factors: whether they were teaching in the Fall or Spring semester, whether they were on a block (90 min) or period (45 min) schedule, or whether they were in a vocational setting (where classes cycle between 9 days in academic classrooms and 9 days in their shop). Even though the actual class time in self-reported minutes varied across participating teachers, the mean time did not differ significantly from the expected total intervention time of 1,500 min. Control teachers continued to teach their existing Algebra 2 curriculum.

We met with participating treatment and control teachers separately to outline the purpose of our program, our expected outcomes, and participant requirements. In these initial meetings, we explained to both sets of teachers what their participation required, including:
  • Attend a project orientation meeting,

  • Collect student and parent consent forms,

  • Administer a content test immediately before and immediately after the designated topics, as aligned with their mapped curriculum,

  • Complete a teacher background survey developed by the UMass Donahue Institute (one time only per year),

  • Complete a teacher survey on instructional practices at the start and end of the intervention, and

  • Complete a daily log for every class in which they followed the curriculum as mapped out.

Both groups of teachers were paid $250 to complete these tasks.

In addition, treatment teachers were required to:
  • Complete all professional development sessions, and

  • Participate in a postintervention interview.

Treatment teachers were paid an additional $500 for this, based on the required district rate for 12 h of professional development. We did not pay treatment teachers extra for preparation or incentivize them in any way; classrooms were randomly assigned, and teachers had agreed to these conditions up front, with an equal chance of being assigned to a treatment or control classroom. Stipends were paid on completion of the intervention and delivery of the completed set of forms outlined above, and were calculated on expected hours of time spent at a state-mandated hourly rate. Treatment teachers received all materials and equipment (where necessary) at the orientation meeting.

Assessment design and development

We used and developed a variety of instruments to evaluate impact of the intervention on student learning and related factors. Our primary instrument was an Algebra 2 content test that was developed through an iterative method and was composed of conceptual and procedural tasks aligned with the curriculum that was being replaced. The test focused on quadratic and exponential functions, their properties and attributes across different representations, factoring, identifying patterns of quadratic functions and representing them symbolically, interpreting quadratic and exponential relationships, solving and identifying solutions of quadratic functions, and systems of quadratic functions.

This student mathematics content test was developed and piloted the previous year as a pre–posttest around the Algebra 2 intervention. To construct the test, we followed the principled assessment design approach (Mislevy et al. 2003). For details about the creation of the mathematics content test utilizing this approach, see the technical report: http://www.kaputcenter.umassd.edu/products/technical_reports/.

The Algebra 2 content test consists of 19 items for a total of 22 points: 18 multiple-choice items and 1 open-response item. The content test and rubric are published in the appendix of technical report #5 (see link above). Students in both treatment and control classes completed the mathematics content test at the start and end of the intervention (and at similar content times for control classrooms). The assessment is composed of standardized test items to measure students’ mathematical ability and problem-solving skills before and after the intervention across three content categories, determined theoretically after the test was compiled: multiple representation items (8), graphical interpretation items (5), and procedural/computational items (6). Of the 19 items on the content test, 9 are designated as conceptually simple (M1) and 10 as conceptually complex (M2), as in other SimCalc studies (see Roschelle and Shechtman 2013). The M1 category refers to items that are typically one-step problems and conceptually simple; an example M1 task asks students to read a specific value in a table or on a graph. The M2 category refers to items that are conceptually more difficult and complex, often multi-step. Specific details on the instrument construction and validation can be found in technical report #5, see http://www.kaputcenter.umassd.edu/products/technical_reports/.

The Teacher Instructional Practices (TIP) survey used was a modified Horizons 2000 National Survey of Science and Mathematics Education Mathematics Questionnaire. The 2000 National Survey of Science and Mathematics Education was prepared with support from the National Science Foundation. A total of 5,728 science and mathematics teachers in schools across the United States participated in this survey, which had a response rate of 74 % (Horizons Research, Inc. 2002). From the Horizons 2000 National Survey, we included the following items in the Teacher Instructional Practices survey:
  • Question 1: teacher opinion on statements about students, learning, and teaching,

  • Question 2: familiarity with NCTM standards,

  • Question 3: preparedness to teach,

  • Question 15: factual information about class meeting hours,

  • Questions 16 and 17: ability of the students in the class relative to other students,

  • Question 19: emphasis on various student objectives for the course,

  • Question 20: how often various instructional practices occur in the class,

  • Question 21: how often students take part in various class activities,

  • Question 22: how often students use computers/calculators to do various activities,

  • Question 25: amount of homework assigned in a given week,

  • Question 27b: percentage of the textbook “covered” in the course,

  • Question 27c: teacher rating of the quality of the textbook, and

  • Question 29: most common classroom practices (which we modified).

Some items from the Horizons survey were deleted, such as demographic items and background items related to teaching, because we either gathered that information on a background questionnaire or it was not relevant to our pool of teachers. Additional items related to teacher use of and comfort with technology in the classroom were added to our modified Horizons survey. We changed section headings and instructions where necessary to relate explicitly to the items we included. Question 29 was modified to identify, in general, the three most common practices in the teacher’s mathematics classroom. The whole instrument was piloted online via Survey Monkey with 57 respondents to check for clarity of language and completion time. Treatment and control teachers participating in this research project completed the Teacher Instructional Practices survey at the start of the school year and after completing the intervention.

The Horizons Research group used factor analysis to create several composite variables from items on the survey that represent important constructs related to mathematics education. Mean composite scores were used to report the status of high school mathematics teaching (Horizons Research, Inc. 2002). In the analysis of our modified Teacher Instructional Practices (TIP) survey, we created four of the same composite scores: mathematics reasoning objectives (α = .757), use of traditional teaching practices (α = .779), use of strategies to develop students’ abilities to communicate ideas (α = .790), and use of calculators/computers (α = .780). Each of these constructs (TIP factors) had high reliability with our data set (Cronbach’s coefficient alpha reported above), similar to the reliabilities Horizons reported for their entire sample. From these composite scores, we could measure the change in teachers’ instructional practices in each experimental group on each construct, since teachers completed the survey at the start of the school year and again after completing the intervention.
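For readers unfamiliar with the reliability statistic, the following minimal sketch (our illustration, with hypothetical response data) computes Cronbach’s alpha for a composite of survey items:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scored survey responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: 8 teachers answering a 4-item composite on a 5-point scale
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(8, 4)).astype(float)
print(round(cronbach_alpha(responses), 3))
```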

A daily log instrument was constructed for both experimental groups, with additional items added to the treatment teacher form that related to specific aspects of implementing the SimCalc program. We report here on the overlapping part of each instrument, including the focus on various forms of mathematical problem-solving behaviors. We also use these logs in our sparkline analysis, and time spent covering the prescribed curriculum in both conditions is used as a covariate in our modeling later on.

Several of the items on the teacher classroom daily log came from, or were adapted from, the daily teacher log administered in the SimCalc Texas Scale-Up Project (Roschelle and Shechtman 2013). An example of an adapted item from the Texas Scale-Up Project used in our analysis is the teacher’s report of the extent of class focus on various topics, such as how to reason across multiple representations, how to construct an algebraic expression, and how to construct or interpret graphs. Each of these items was answered daily on a 5-point scale from ‘not a focus at all’ to ‘a major focus’.

Another set of items used in our analysis came from the work of Porter (2002) in which teachers report on the extent of the class focus on the following performance goals: (a) memorizing facts, definitions, and formulas, (b) perform procedures/solve routine problems, (c) communicate understanding of concepts, (d) solve non-routine problems/make connections, and (e) conjecture, generalize, or prove. Each of these items was responded to daily on a 5-point scale of ‘not a focus at all’ to ‘a major focus’. Porter (2002) developed these five categories with the aim of creating uniform descriptors of topics and categories of cognitive demand. These categories of cognitive demand “distinguish what it is about a specific topic that a student is expected to know or be able to do” (Porter 2002, p. 12). Thus, with this question, teachers are reporting on their classroom focus of each particular category of cognitive demand.

Additional items were adapted from a daily reflective journal administered by the authors in previous SimCalc projects. Some items were included to track factual information about the class each day, such as activity completed, technology used, and curricular materials used. In addition to factual information about the time, activity, and materials used, the teacher daily log aimed to provide a teacher reported account of the class’ focus on specific topics, percentage of time spent on SimCalc and non-SimCalc activities, and the percentage of class time spent on various activities such as whole-class lecture, teacher demonstration, whole-class discussion, individual student work, student pair work, and student small group work. The teacher daily log also aimed to gather teacher reported information on whether any of the suggested teacher dialogue boxes were utilized in class.

This daily log was piloted with non-participating high school teachers to gauge completion time and readability. A similar log for the control teachers was also created. This log included several identical questions about focus on specific topics, performance goals, and amount of time spent preparing the lesson. It also tracked the mathematical activity and percentage of class time spent on various activities such as whole-class lecture, teacher demonstration, whole-class discussion, individual student work, student pair work, and student small group work. Teachers completed this log every day that their mathematics class met between and including the days when administering the content pretest and the content posttest. The log was estimated to take between 10 and 15 min to complete. All instruments can be downloaded here: http://www.kaputcenter.umassd.edu/products/instruments/.

Analytical framework and models

Given that our research design was a cluster-randomized trial, the most applicable form of analysis was Hierarchical Linear Modeling (HLM; Snijders and Bosker 1999). The aim of HLM was to explore whether nested explanatory variables can significantly predict the dependent variable (student gain scores) and how much variance the random effects can account for; we tabulate such effects in our results. Including random effects allows for unexplained between-group variability that can be explained by group-level variables (related to classrooms, our unit of randomization). For example, the success of a student within a particular class may derive from interactions with other students sharing the same class environment, from the effect of sharing the same teacher, or from numerous other classroom-level conditions. Standard OLS techniques aggregate the classroom effect and do not distinguish the effect of clustering; they overestimate effects and underestimate errors (Hegedus et al. 2013b).

Our pre-intervention power analysis also assumed this form of analysis. Our unit of randomization was the classroom, and our unit of analysis was student outcomes on a content test; such units often differ in a cluster-randomized trial. Given that sufficient numbers of classrooms were assigned to the different conditions, and given the small number of schools, we expected to conduct two-level modeling with random effects, where Level 1 comprised student-level measures and Level 2 comprised classroom-level covariates and contextual variables. Subsequent inspection of the intraclass correlation coefficients supports such modeling. Our data did not support three-level modeling (including a school level). Our general model accounted for variation at Level 1 and Level 2 to include class (teacher)-level effects, cross-level interactions, and forced interactions between two variables at the same level (such as an experimental variable), as well as random effects. Most of the models used one Level 1 and/or one Level 2 covariate with interaction effects, but sometimes we used more than one covariate at each level, as generally outlined below:

Level 1:
$$Y_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + R_{ij}$$
Level 2:
$$\beta_{0j} = \gamma_{00} + \gamma_{01} Z_{1j} + \gamma_{02} Z_{2j} + \gamma_{03} Z_{1j} \times Z_{2j} + \gamma_{04} Z_{3j} + \gamma_{05} Z_{1j} \times Z_{3j} + \cdots + U_{0j}$$
$$\beta_{1j} = \gamma_{10}$$
(where $X_{ij}$ denotes a student-level covariate when one is included)

The outcome measure (Y) is student performance as measured by a difference score from pre to post on our Algebra 2 content instrument. We prefer difference scores over posttest scores as an outcome variable because an analysis of difference scores addresses questions about improvement from pre- to posttest, and the gain score is an unbiased estimate of the treatment effect (Maris 1998). For all of our models, $Z_{1j}$ is the treatment variable, which is dichotomous (0: control, 1: treatment) at Level 2. Other Level 2 variables include teacher contextual variables and self-reported instructional practice variables. The model assumes independent and identically distributed group and individual effects, i.e., $U_{0j} \sim (0, \tau_{00})$ and $R_{ij} \sim (0, \sigma^2)$. Unexplained group effects are controlled by similar mechanisms across all groups and operate independently between groups (Snijders and Bosker 1999). All continuous Level 2 variables are group mean centered.
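To make the model concrete, here is a hedged sketch (not the authors’ analysis code) of fitting the simplest version of this two-level model, a random-intercept regression of gain scores on the class-level treatment indicator, using the statsmodels MixedLM routine with simulated data:

```python
# gain_ij = gamma00 + gamma01 * treat_j + U_0j + R_ij
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for j in range(10):                        # 10 hypothetical classes
    treat = j % 2                          # alternating assignment, illustration only
    u_j = rng.normal(0, 1.3)               # class-level random intercept U_0j
    for _ in range(20):                    # 20 students per class
        gain = 0.5 + 1.2 * treat + u_j + rng.normal(0, 3.0)  # residual R_ij
        rows.append({"gain": gain, "treat": treat, "class_id": j})
df = pd.DataFrame(rows)

result = smf.mixedlm("gain ~ treat", df, groups=df["class_id"]).fit()
print(result.summary())                    # fixed effect of treat, tau, sigma^2
```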

Results

In total, 606 students and 16 teachers agreed to participate in the study; three students were removed because they participated in both a control and a treatment class during the year. We could not collect complete data for 6.1 % (37) of our students because they missed either the pretest or the posttest. Our final sample (treatment = 298; control = 268 students) includes only students who took a pretest and the corresponding identical posttest; from these two scores we created a gain score, which is used as our main outcome measure.

Descriptive statistics

Our sample student demographics are gender (male: 50.3 %, female: 49.7 %), socioeconomic status (Received Free or Reduced Lunch: 22.2 %, Denied: 77.8 %), and ethnicity (Hispanic: 2.8 %, African American: 8.8 %, Asian: 2.2 %, White: 72.1 %, Other: 14.1 %). There were 16 control teachers and 15 treatment teachers in the study. There were no significant differences between experimental groups for class size (control: M = 22.63, SD = 4.573; treatment: M = 21.67, SD = 5.589), years teaching (control: M = 14.19, SD = 9.432; treatment: M = 13.47, SD = 6.424), whether the teacher had a bachelors degree in mathematics or not (control: M = .39, SD = .500; treatment: M = .40, SD = .507), content coverage (control: M = 2,168.19 min, SD = 607.317; treatment: M = 1,836.73 min, SD = 315.999), pre-TIP factor 1 (control: M = .88, SD = .086; treatment: M = .84, SD = .083), pre-TIP factor 2 (control: M = .81, SD = .062; treatment: M = .86, SD = .092), pre-TIP factor 3 (control: M = .71, SD = .146; treatment: M = .72, SD = .104), and pre-TIP factor 4 (control: M = .43, SD = .148; treatment: M = .39, SD = .132).

Main effects of treatment

The main effect of the SimCalc treatment condition was statistically significant (see Table 1), with a medium effect size (Cohen’s d = .385). Gains occurred across both experimental conditions but were larger and more frequent for the SimCalc classes. Students in the two experimental conditions did not differ significantly on the pretest. Hegedus et al. (2013a) offer a full account of student performance according to various sub-populations and mathematical skills. We build on this work and previous SimCalc work in various states and countries to investigate what contextual factors and teacher instructional practices can account for such learning at the classroom level (Hegedus and Roschelle 2013).
Table 1

Algebra 2 main study effect

            N    Pretest        Posttest       Gain           Relative gain     Effect size
                 Mean   SD      Mean   SD      Mean   SD      Mean        SD
Control     268  9.43   3.448   9.91   4.180   .48    3.370   .11 (11 %)  .470  .385*** [.120, .650]
Treatment   298  9.68   3.806   11.40  4.334   1.72   3.084   .25 (25 %)  .488

*** p < .001
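As a check on the reported effect size, the following minimal sketch (our illustration) reproduces Cohen’s d from the gain means and standard deviations in Table 1 using a pooled standard deviation:

```python
# Cohen's d on gain scores: difference in mean gains over the pooled SD.
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Treatment gain: M = 1.72, SD = 3.084, n = 298
# Control gain:   M = 0.48, SD = 3.370, n = 268
print(round(cohens_d(1.72, 3.084, 298, 0.48, 3.370, 268), 3))  # 0.385
```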

The unconditional model showed a significant intraclass correlation coefficient (ICC), which verified that modeling the data with two levels was necessary (ρ = .19). A basic HLM was first conducted with no Level 1 covariate, just the treatment variable at Level 2, and random effects. We first report these positive student-level results in Table 2 as context for presenting our teacher-level factors. We modeled these in order to disaggregate the student achievement data and investigate answers to our primary research questions with respect to teachers and their self-reported practices.
Table 2

Main effect of treatment on student achievement

Main study (Level 1 n = 566; Level 2 N = 31); main effect is treatment condition

Model                                  Value       p      SE
Unconditional model
  Intercept                            .92**       .003   .285
  Level 2 variance (τ)                 2.05
  Residual variance for Level 1 (σ²)   8.98
  χ²                                   145.01***   .000
  ICC                                  .186
Treatment
  Main effect                          1.35*       .013   .510
  Intercept                            .27         .570   .463
  Level 2 variance (τ)                 1.65
  Residual variance for Level 1 (σ²)   8.98
  χ²                                   119.94***   .000

* p < .05; ** p < .01; *** p < .001

Teacher conditional measures (contextual factors)

Given the specific student-level achievement we observed in our overall main effect model, we investigated specific teacher-related variables that have corresponded to student achievement in previous research (see Table 3). Each of these was modeled as a Level 2 (teacher/class) variable in our HLM analyses, hence testing whether such contextual variables can explain student achievement when accounting for differences within and across classrooms. We also forced an interaction effect between each variable (group mean centered) and the dichotomous experimental variable (0: control, 1: treatment).
Table 3

Teacher contextual variables

Main study (Level 1 n = 566; Level 2 N = 31); main effect is treatment condition

Model                                  Value       p      SE
Class size (Model 2)
  Main effect (TREAT)                  1.37**      .009   .478
  Main effect (class size)             .16†        .080   .087
  Class size × TREAT interaction       −.09        .372   .10
  Intercept                            .35         .414   .421
  Level 2 variance (τ)                 1.44
  Residual variance for Level 1 (σ²)   8.99
  χ²                                   98.34***    .000
Years teaching (Model 2)
  Main effect (TREAT)                  1.35**      .012   .501
  Main effect (years teaching)         .04         .439   .048
  Years teaching × TREAT interaction   −.03        .588   .056
  Intercept                            .27         .557   .452
  Level 2 variance (τ)                 1.74
  Residual variance for Level 1 (σ²)   8.98
  χ²                                   115.59***   .000
Math bachelors degree (Model 2)
  Main effect (TREAT)                  1.36*       .041   .636
  Main effect (math bachelors)         1.45†       .074   .781
  Math bachelors × TREAT interaction   −.11        .898   .828
  Intercept                            −.29        .642   .621
  Level 2 variance (τ)                 1.19
  Residual variance for Level 1 (σ²)   9.00
  χ²                                   84.37***    .000
Math content coverage (Model 2)
  Main effect (TREAT)                  1.44**      .011   .524
  Main effect (content coverage)       .0001       .869   .001
  Coverage × TREAT interaction         .0001       .673   .001
  Intercept                            .25         .610   .481
  Level 2 variance (τ)                 1.81
  Residual variance for Level 1 (σ²)   8.98
  χ²                                   119.64***   .000

† p < .10; * p < .05; ** p < .01; *** p < .001

Given some variation in class sizes across both experimental conditions (control: M = 22.63, SD = 4.573; treatment: M = 21.67, SD = 5.589), class size only marginally predicted learning gains (p = .08), and not for any particular experimental group.

All teachers in our study, except one, had more than 5 years of experience; the range was 5–37 years. We expected this, given that Algebra 2 is often an upper-level high school class, or an honors class for 10th grade, and is typically taught by more experienced teachers. This was evident in our sample for teachers in both experimental conditions, but we found that years of teaching was not predictive of student achievement overall or for either experimental group.

Approximately 50 % of teachers in both experimental conditions had a Bachelors degree in Mathematics. Having such a degree only marginally predicted student success (p = .074), and qualifications were not predictive of student achievement overall or for either experimental group.

Content coverage was measured in total minutes spent on the curriculum, as reported in the teacher logs (rather than days on curriculum activities), for both treatment and control classes; it was not a predictor of student achievement.

In summary, our results (see Table 3) illustrate that, in our cluster-randomized trial, none of the traditional teacher-related factors (class size, years of teaching, qualifications, time on content) predicted student achievement on core algebra skills and conceptual understanding. We now examine specific instructional practices to explore potential correlates with student achievement; the motivation for doing so returns us to our core design principles of technology-enhanced communication and mathematical discourse.

Instructional measures

In order to investigate further the potential reasons for such student achievement, we utilized various instruments outlined above including the teacher instructional practices survey and the classroom activity log completed by all teachers on a daily basis. We use HLM, standard correlational analysis, and other visual tools to present further evidence.

We calculated four factor scores (TIP factors) for each teacher using their responses to the TIP survey items outlined previously, obtaining a pre and post set of scores. These were used to create change-in-instructional-practices variables (post-TIP survey scores minus pre-TIP survey scores for the four factors) that were entered into our HLM as Level 2 covariates (see Table 4). We also forced interactions between these variables and the dichotomous experimental variable (0: control, 1: treatment) at Level 2. This more complex model still illustrated a significant treatment effect (p = .045). TIP factor 3 was a significant predictor overall, and an even larger difference was found for the SimCalc treatment group at the class level (p = .008). This third factor, which we describe as “use of strategies to develop students’ abilities to communicate ideas”, predicts student achievement for SimCalc students in a strong and significant way when accounting for classroom variation (β = 23.68). This confirms that changes in teachers’ instructional practices toward student-centered communication were an important factor for success. The result illustrates two things: teachers’ instructional practices regarding types of discourse can change and can be related to student achievement, and such discourse is a key design principle of our technology-enhanced learning environment. We return to this idea in the discussion section. It is also important to note that use of traditional teaching practices and methods was not a significant predictor of student achievement, either overall or for either experimental group.
Table 4

Main effect of instructional measures on student achievement

Main study (Level 1 n = 566; Level 2 N = 31); main effect is treatment condition

Model                                  Value       p      SE
TIP factors (Model 2)
  Main effect (TREAT)                  1.58*       .045   .746
  Main effect (TIP factor 1)           .18         .979   6.526
  Main effect (TIP factor 2)           −11.55      .467   15.601
  Main effect (TIP factor 3)           −16.36*     .049   7.834
  Main effect (TIP factor 4)           −.99        .858   5.449
  TIP factor 1 × TREAT interaction     −2.70       .696   6.834
  TIP factor 2 × TREAT interaction     9.27        .564   15.806
  TIP factor 3 × TREAT interaction     23.68**     .008   7.979
  TIP factor 4 × TREAT interaction     .62         .913   5.530
  Intercept                            −.03        .962   .711
  Level 2 variance (τ)                 1.98
  Residual variance for Level 1 (σ²)   8.98
  χ²                                   97.77***    .000

* p < .05; ** p < .01; *** p < .001

Item 9 of our daily classroom activity log from the work of Porter (2002) asked teachers: To what extent did you and your class focus on the following performance goals for students?
  1. Memorize facts,

  2. Perform procedures/solve routine problems,

  3. Communicate understanding of concepts,

  4. Solve non-routine problems, and

  5. Conjecture, generalize, or prove.
Teachers rated each item on a 5-point Likert scale (from ‘not a focus at all’ to ‘a major focus’). Scales were normalized, and a mean score per class was calculated for each performance goal. Table 5 presents the correlation matrix, for treatment and control classrooms, between student achievement (difference scores) and the class-mean focus on each of the five performance goals. The split in how these performance goals correlate significantly with change in student achievement across the two experimental groups is quite striking. Control teachers’ reports of their focus on traditional performance goals are significantly correlated with student achievement, including memorizing facts (p < .001) and performing procedures/solving routine problems (p < .001), whereas these are not significant factors for the treatment teachers. Moreover, treatment teachers’ reports of their focus on non-standard performance goals are significantly correlated with student achievement, including solving non-routine problems (p = .049) and conjecturing, generalizing, or proving (p = .006).
Table 5

Pearson correlation of performance goals and student achievement

                                                Treatment difference score   Control difference score
I. Memorize facts                               −.096                        .202**
II. Perform procedures/solve routine problems   −.056                        .252***
III. Communicate understanding of concepts      .102                         .279***
IV. Solve non-routine problems                  .114*                        .029
V. Conjecture, generalize, or prove             .158**                       .006

Treatment n = 298; control n = 268

*** p < .001; ** p < .01; * p < .05; † p < .10
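The correlations in Table 5 are plain Pearson coefficients computed separately per experimental group; a minimal sketch (our illustration, with invented data rather than the study data) of the computation is:

```python
# Pearson r between student gain scores and the class-level mean focus on a
# performance goal, computed separately for each experimental group.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
gains = rng.normal(1.7, 3.1, size=298)       # hypothetical treatment gains
focus = 0.5 + 0.02 * gains + rng.normal(0, 0.1, size=298)  # hypothetical focus

r, p = pearsonr(gains, focus)
print(f"r = {r:.3f}, p = {p:.3f}")
```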

Finally, we examined visual representations of teachers’ reports on the extent to which they focused on these performance goals over time and other more specific mathematical practices. We used sparklines to generate high-density visual plots of all teachers aggregated by experimental conditions that provide evidence of how these two groups of teachers changed their practices over time. These plots offer an overall trend for the aggregate groups (black: treatment teachers, grey: control teachers) with end point markers to offer some range of scale. Teachers were asked to rate their focus on such activities using a 5-point Likert scale (from ‘not a focus at all’ to ‘a major focus’).

Most notably, aggregate trend lines for the following classroom practices illustrate overall trends in teachers’ foci, which triangulate in part with the statistical results above. These sparklines condense thousands of data points into a visual representation of the overall population, disaggregated by experimental condition and continuous over time at the frequency of the log entries. We present these illustrations as qualitative descriptions of the similarity/dissimilarity and convergence/divergence of the experimental groups, providing evidence of evolving teacher practices rather than pointwise serial estimates.
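A sparkline of the kind shown in Figs. 2–9 can be produced with standard plotting tools; the sketch below, using Python’s matplotlib, draws each condition’s daily mean rating as a small, axis-free line with end-point markers (black for treatment, grey for control). The data layout and column names are assumptions for illustration only.

```python
# Minimal sparkline sketch in the spirit of Figs. 2-9: per-day mean
# rating for each condition as a bare trend line with endpoint markers.
# File and column names (day, treat, communicate) are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd

logs = pd.read_csv("daily_logs.csv")  # hypothetical teacher log data

def sparkline(ax, series, color):
    """Draw one condition's daily mean as a bare line with end markers."""
    ax.plot(series.index, series.values, color=color, linewidth=1)
    ax.plot(series.index[[0, -1]], series.values[[0, -1]], "o",
            color=color, markersize=3)
    ax.axis("off")  # sparklines omit axes; endpoint markers convey scale

fig, ax = plt.subplots(figsize=(3, 0.5))
for treat, color in [(1, "black"), (0, "grey")]:
    daily = logs[logs["treat"] == treat].groupby("day")["communicate"].mean()
    sparkline(ax, daily, color)
fig.savefig("sparkline_communicate.png", dpi=300, bbox_inches="tight")
```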

The first three log items illustrate similarity (Figs. 2, 3, 4). Memorize facts, definitions, and formulas: control group teachers generally maintain a focus on this practice, but the two groups are broadly similar and converge toward the end (see Fig. 2).
Fig. 2

Sparkline graphic for ‘memorize facts, definitions, formulas’ for treatment (black) and control (grey)

Fig. 3

Sparkline graphic for ‘perform procedures’ for treatment (black) and control (grey)

Fig. 4

Sparkline graphic for ‘construct algebraic expressions’ for treatment (black) and control (grey)

Perform procedures: consistently the same for both experimental groups over time with some very brief deviation toward the end of the log period (see Fig. 3).

Construct algebraic expressions: consistently the same for both experimental groups over time with some very brief deviation toward the end of the log period (see Fig. 4).

The next two log items illustrate divergence (Figs. 5, 6). Reason across multiple representations: both experimental groups start off similarly and then diverge over time, with increasing focus among SimCalc teachers and decreasing focus among control teachers (see Fig. 5).
Fig. 5

Sparkline graphic for ‘reason across multiple representations’ for treatment (black) and control (grey)

Fig. 6

Sparkline graphic for ‘make connections or comparisons across two or more functions’ for treatment (black) and control (grey)

Make connections or comparisons across two or more functions: both experimental groups follow a similar pattern, starting together, but the SimCalc teacher group is translated higher for the latter three quarters of the log period and ends notably higher (see Fig. 6).

The last three log items illustrate strong dissimilarity (Figs. 7, 8, 9). Communicate understanding of concepts: the SimCalc teachers are consistently higher than the control group teachers throughout the intervention (see Fig. 7).
Fig. 7

Sparkline graphic for ‘communicate understanding of concepts’ for treatment (black) and control (grey)

Fig. 8

Sparkline graphic for ‘solve non-routine problems’ for treatment (black) and control (grey)

Fig. 9

Sparkline graphic for ‘conjecture, generalize, prove’ for treatment (black) and control (grey)

Solve non-routine problems: the SimCalc teachers are consistently higher than the control group teachers throughout the intervention (see Fig. 8).

Conjecture, generalize, and prove: the SimCalc teachers are consistently higher than the control group teachers throughout the intervention (see Fig. 9).

With respect to Porter’s classroom performance goals, the graphs showing weak dissimilarity relate to “memorizing facts and definitions” and “performing procedures/solving routine problems”. These goals appear to be important and consistently pursued in both experimental groups.

In contrast, Porter’s other three classroom performance goals, “communicate understanding of concepts”, “solve non-routine problems/make connections”, and “conjecture, generalize, or prove”, all show strong longitudinal dissimilarity, which we triangulate with the evidence on student achievement in such technological environments presented earlier.

Porter (2002) reports that the content of instruction, the intersection of teacher-reported classroom performance goals and the mathematical content to be taught, is a powerful predictor of gains in student achievement. He further argues that if content were used as a control variable, as we have done in this SimCalc work, researchers would undoubtedly be better able to identify the specific pedagogical practices (i.e., classroom performance goals) that contribute to achievement gains. Porter acknowledges the importance of this finding and adds that it should be replicated.

Discussion

This study has investigated the teacher-related factors that can account for differences in student achievement over time with respect to exposure to a technology-enhanced learning environment and a curriculum that exploits the available representation and communication infrastructure. In addressing our first research question, we found no evidence that traditional conditional measures could explain differences in student achievement across our experimental conditions. Unlike in other studies of teacher quality variables, class size (Mishel and Rothstein 2002), years of teaching (Rockoff 2004), teacher qualifications (Clotfelter et al. 2007), and time spent on content (Cavanagh 2006) were not significant predictors of student achievement. Given these findings, it was important to investigate which teacher-related instructional measures could explain such variation, and to explore potential differences in pedagogical practices, or dispositions toward such practices, between our experimental groups (treatment teachers vs. control teachers) in order to address our second research question.

The SimCalc learning environment supports many of the approaches highlighted earlier in the literature as characteristic of effective teaching, e.g., group learning versus teacher demonstration and active versus passive learning. SimCalc makes the group structure a mathematical parameter of how students work as part of a public aggregate (e.g., a family of functions). But can changes in teachers’ instructional practices be a significant factor?

Our results show that memorization of facts and a focus on procedural tasks remain relevant for both groups; hence, treatment teachers are not privileging non-routine problem-solving or other practices at the expense of fundamental skills. Treatment teachers do continue to focus on these classroom practices. Similarly, reasoning across multiple representations and making connections or comparisons across two or more functions is predominantly a focus for both experimental groups at the start of the intervention. However, the groups diverge during the last quarter of the intervention period, when treatment teachers focus more on these practices than control teachers do. It is the practices and performance goals centered on the communication of ideas and on advanced mathematical thinking, such as solving non-routine problems and conjecturing, generalizing, or proving, that become strong indicators of dissimilar practices and significant predictors of student achievement. A major design principle of the SimCalc program is to support the communication of ideas within and across groups, which concurs with these results. In summary, we wish to highlight two major findings:
  • Such instructional practices can change over time, and that change can predict student achievement.

  • These results come from a cluster-randomized study conducted across a wide variety of educational settings, a design that addresses confounding variables related to teacher selection and background.

These findings have implications for several areas: (1) technology development and implementation in school settings, (2) teacher development, and (3) future research. We conclude by outlining these implications.

First, the SimCalc program of work is based on design principles that integrate suites of technologies and curriculum to exploit the affordances of the available representation and communication infrastructure. In addition, professional development related to the implementation of such integrated suites must address these affordances explicitly. SimCalc activity structures begin with operational forms of thinking, exploring how manipulations of actions within a dynamic representation result in corresponding changes in other representations, including animations. The affordances of a communication infrastructure allow aggregated student work to be publicly displayed, within the same representational infrastructure, on the teacher’s computer. The shift from private to public work is sustained by this infrastructure, with the aim of moving toward symbolic forms of thinking that are established at the group level. Such design elements can be specifically related to activity structures, goals and expectations, teacher questioning strategies, and pedagogical routines.

The results of our work can potentially inform teacher preparation programs, including in-service education and professional development more generally. The SimCalc PD sessions and curriculum materials focus on questioning strategies related to reasoning across multiple representations and to the communication of ideas within and across the parameterized space of the classroom. Given that the impact on student achievement was triangulated with sustained use of such performance goals, we suggest that in-service education programs explicitly outline and discuss these affordances alongside specific pedagogical strategies. In such an environment, students are mathematically identifiable, which can affect their motivation and their access to the underlying mathematical structure.

It was evident that dispositions toward such relevant practices and performance goals changed over time. There was a trend toward an increase in performance goals that were higher in cognitive demand (see the earlier discussion of Porter 2002). There was also a trend toward an increase in instructional practices that exploited the affordances of the technology environment to increase student access to difficult mathematical concepts, such as reasoning across representations. This was a significant contributor to student achievement. Hence, our recommendation is for teachers to be open to changes in their practice; it is not sufficient to learn how to use and effectively implement a technology-enhanced environment through a one-time professional development institute; teachers must actually experience change over time. This entails risk for teacher educators and educational leaders, but our research provides evidence that such an approach is necessary, given that traditional conditional measures and contextual factors are not sufficient to effect change in student achievement over time.

Finally, in this cluster-randomized study we have, through an analysis of instructional measures, included the element of classroom communication that is essential to the SimCalc pedagogy. At the scope of a cluster-randomized study, a fine-grained analysis of teacher questions, or of daily classroom student discourse, would require time and resources not available to educational researchers. Our analysis of instructional measures was therefore twofold: we created a hierarchical linear model to analyze changes-in-instructional-practices variables and their effects on student learning gains in each experimental group, while accounting for student-level and class-level variation; and we examined, through visual representations, teachers’ reports over time of the extent to which they focused on specific performance goals and mathematical practices. The latter is teacher-reported data from a classroom log completed daily, providing a snapshot of classroom activity with respect to the specific performance goals and mathematical practices. In collecting this information, our design recognizes both the pre-to-post changes in student achievement and the importance of classroom changes-in-instructional-practices over time. This gave us a way to maintain the scope needed for a cluster-randomized study while still gathering and analyzing data on day-to-day instructional practices and mathematical performance goals as reported by the classroom teacher. An implication of this work is an approach in which the scope required of a cluster-randomized trial can still incorporate a more microscopic analysis of day-to-day classroom activity, something future research will likely further develop and refine. Our longitudinal analysis of qualitative descriptions of the similarity/dissimilarity and convergence/divergence of the experimental groups could be expanded on by researchers investigating student performance outcomes in large-scale or efficacy studies who wish to triangulate classroom performance goals and mathematical practices longitudinally.

We also wish to acknowledge a limitation of this work related to the student population. While we have outlined some important factors on which the participants vary, this study does not include a high percentage of students eligible for free and reduced-price lunch: less than one-quarter of the total sample. In designing the research study, we invited additional schools with higher percentages of such students to participate; these schools declined to partner in the work. This is a substantial challenge in educational research, and one that can be overcome only gradually, through building partnerships and clearly communicating expectations and previous research findings.

Additionally, this work took place in a state that ranks highly in the USA both in per-pupil spending and in educational outcomes as reflected in National Assessment of Educational Progress scores. However, several of the schools in this study are low performing and do not meet the progress and performance measures required by the state of Massachusetts under national standards. This points to future research in which instructional measures, such as those reported here, are part of the analysis of teaching and learning with technology in which communication, active learning, and group work are essential components of the pedagogy and the learning experience.

Footnotes

  1. Coverage has been centered and is based on minutes (from length of class and number of daily logs).


Acknowledgments

This material is based upon work supported by the Institute of Education Sciences at the US Department of Education under Grant R305B070430. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Institute of Education Sciences.

References

  1. Akiba, M., LeTendre, G. K., & Scribner, J. P. (2007). Teacher quality, opportunity gap, and national achievement in 46 countries. Educational Researcher, 36(7), 369–387.
  2. Alliance for Excellent Education. (2002). Every child a graduate: A framework for an excellent education for all middle and high school students, by Joftus, S. E. Washington, DC: Author.
  3. Ares, N., Stroup, W. M., & Schademan, A. R. (2009). The power of mediating artifacts in group-level development of mathematical discourses. Cognition and Instruction, 27(1), 1–24.
  4. Ball, D. L., & Bass, H. (2002). Toward a practice-based theory of mathematical knowledge for teaching. In E. Simmt & B. Davis (Eds.), Proceedings of the annual meeting of the Canadian Mathematics Education Study Group (pp. 3–14). Kingston: CMESG.
  5. Ball, D. L., Lubienski, S. T., & Mewborn, D. S. (2001). Research on teaching mathematics: The unsolved problem of teachers’ mathematical knowledge. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 433–456). New York: Macmillan.
  6. Becker, H., Ravitz, J., & Wong, Y. (1999). Teacher and teacher-directed student use of computers and software (Report No. 3). Irvine, CA: Center for Research on Information Technology and Organizations (CRITO). Retrieved from http://www.crito.uci.edu/tlc/html/findings.Html
  7. Bitner, N., & Bitner, J. (2002). Integrating technology into the classroom: Eight keys to success. Journal of Technology and Teacher Education, 10(1), 95–100.
  8. Boaler, J. (2013). Experiencing school mathematics: Traditional and reform approaches to teaching and their impact on student learning. Mahwah, NJ: Lawrence Erlbaum Associates.
  9. Cavanagh, S. (2006). Students double-dosing on reading and math. Education Week, 25(40), 1–12.
  10. Choy, S. P., Horn, L. J., Nuñez, A. M., & Chen, X. (2000). Transition to college: What helps at-risk students and students whose parents did not attend college. New Directions for Institutional Research, 2000(107), 45–63. doi:10.1002/ir.10704.
  11. Christensen, R. R. (1997). Effect of technology integration education on the attitudes of teachers and their students. Ph.D. thesis, University of North Texas, USA.
  12. Clotfelter, C. T., Ladd, H. F., & Vigdor, J. L. (2007). Teacher credentials and student achievement: Longitudinal analysis with student fixed effects. Economics of Education Review, 26(6), 673–682.
  13. Clotfelter, C., Ladd, H. F., Vigdor, J., & Wheeler, J. (2006). High-poverty schools and the distribution of teachers and principals. North Carolina Law Review, 85(5), 1345–1380.
  14. Cobb, P., Yackel, E., & McClain, K. (2000). Symbolizing and communicating in mathematics classrooms: Perspectives on discourse, tools, and instructional design. Mahwah, NJ: Lawrence Erlbaum Associates.
  15. Coleman, J. S., Campbell, E. Q., Hobson, C., McPartland, J., Mood, A. M., Weinfeld, F. D., & York, R. L. (1966). Equality of educational opportunity (pp. 1–32). Washington, DC: US Government Printing Office.
  16. Conley, D. T. (2007). The challenge of college readiness. Educational Leadership, 64(7), 23–29.
  17. Darling-Hammond, L. (1999). Teacher quality and student achievement: A review of state policy evidence. Seattle, WA: Center for the Study of Teaching and Policy, University of Washington.
  18. Darling-Hammond, L., Bransford, J., & LePage, P. (2005). Introduction. In L. Darling-Hammond & J. Bransford (Eds.), Preparing teachers for a changing world: What teachers should learn and be able to do (pp. 1–39). San Francisco, CA: Jossey-Bass.
  19. Ertmer, P. A. (2005). Teacher pedagogical beliefs: The final frontier in our quest for technology integration? Educational Technology Research and Development, 53(4), 25–39.
  20. Fennema, E., & Franke, M. L. (1992). Teachers’ knowledge and its impact. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning: A project of the National Council of Teachers of Mathematics (pp. 147–164). New York, NY: Macmillan.
  21. Finn, J. D., Pannozzo, G. M., & Achilles, C. M. (2003). The “why’s” of class size: Student behavior in small classes. Review of Educational Research, 73(3), 321–368.
  22. Goldhaber, D. (2002). The mystery of good teaching. Education Next, 2(1), 50–55.
  23. Goldhaber, D., & Anthony, E. (2007). Can teacher quality be effectively assessed? National board certification as a signal of effective teaching. The Review of Economics and Statistics, 89(1), 134–150.
  24. Goos, M. (2005). A sociocultural analysis of the development of pre-service and beginning teachers’ pedagogical identities as users of technology. Journal of Mathematics Teacher Education, 8(1), 35–59.
  25. Goos, M., Galbraith, P., Renshaw, P., & Geiger, V. (2003). Perspectives on technology mediated learning in secondary school mathematics classrooms. The Journal of Mathematical Behavior, 22(1), 73–89.
  26. Hawk, P., Coble, C. R., & Swanson, M. (1985). Certification: It does matter. Journal of Teacher Education, 36(3), 13–15.
  27. Heck, R. H. (2007). Examining the relationship between teacher quality as an organizational property of schools and students’ achievement and growth rates. Educational Administration Quarterly, 43(4), 399–432.
  28. Hegedus, S. J., Dalton, S. K., & Tapper, J. (2013a). The impact of technology-enhanced curriculum on learning advanced algebra in American high school classrooms. Manuscript submitted for publication.
  29. Hegedus, S. J., & Penuel, W. R. (2008). Studying new forms of participation and identity in mathematics classrooms with integrated communication and representational infrastructures. Educational Studies in Mathematics, 68(2), 171–183.
  30. Hegedus, S., & Roschelle, J. (Eds.). (2013). Democratizing access to important mathematics through dynamic representations: Contributions and visions from the SimCalc research program. Berlin: Springer.
  31. Hegedus, S., Tapper, J., Dalton, S., & Sloane, F. (2013b). HLM in cluster-randomised trials – measuring efficacy across diverse populations of learners. Research in Mathematics Education, 15(2), 177–188.
  32. Hennessy, S., Ruthven, K., & Brindley, S. (2005). Teacher perspectives on integrating ICT into subject teaching: Commitment, constraints, caution, and change. Journal of Curriculum Studies, 37(2), 155–192.
  33. Hiebert, J., Carpenter, T. P., Fennema, E., Fuson, K. C., Wearne, D., Murray, H., & Human, P. (1997). Making sense: Teaching and learning mathematics with understanding. Portsmouth, NH: Heinemann.
  34. Hill, H. C., Rowan, B., & Ball, D. L. (2005). Effects of teachers’ mathematical knowledge for teaching on student achievement. American Educational Research Journal, 42(2), 371–406.
  35. Horizons Research, Inc. (2002). The status of high school mathematics teaching, by D. Whittington. Chapel Hill, NC: Author.
  36. Hoxby, C. M. (2000). The effects of class size on student achievement: New evidence from population variation. The Quarterly Journal of Economics, 115(4), 1239–1285.
  37. Huang, J., Normandia, B., & Greer, S. (2005). Communicating mathematically: Comparison of knowledge structures in teacher and student discourse in a secondary math classroom. Communication Education, 54(1), 34–51.
  38. Hunt, P., Soto, G., Maier, J., Müller, E., & Goetz, L. (2002). Collaborative teaming to support students with augmentative and alternative communication needs in general education classrooms. Augmentative and Alternative Communication, 18(1), 20–35.
  39. Khisty, L. L. (2002). Pedagogic discourse and equity in mathematics: When teachers’ talk matters. Mathematics Education Research Journal, 14(3), 154–168.
  40. Klitgaard, R. E., & Hall, G. R. (1974). Are there unusually effective schools? Journal of Human Resources, 10(3), 90–106.
  41. Klopfenstein, K., & Thomas, M. K. (2009). The link between advanced placement experience and early college success. Southern Economic Journal, 75(3), 873–891.
  42. Kuh, G. D. (2001). Assessing what really matters to student learning inside the National Survey of Student Engagement. Change: The Magazine of Higher Learning, 33(3), 10–17.
  43. Kuh, G. D. (2003). What we’re learning about student engagement from NSSE: Benchmarks for effective educational practices. Change: The Magazine of Higher Learning, 35(2), 24–32.
  44. Ma, L. (1999). Knowing and teaching elementary mathematics: Teachers’ understanding of fundamental mathematics in China and the United States. Mahwah, NJ: Lawrence Erlbaum Associates.
  45. Maris, E. (1998). Covariance adjustment versus gain scores—revisited. Psychological Methods, 3(3), 309–327.
  46. Means, B. (2010). Technology and education change: Focus on student learning. Journal of Research on Technology in Education, 42(3), 285–307.
  47. Mishel, L., & Rothstein, R. (2002). The class size debate. Washington, DC: Economic Policy Institute.
  48. Mislevy, R. J., Steinberg, L. S., Almond, R. G., Haertel, G. D., & Penuel, W. R. (2003). Improving educational assessment. In B. Means & G. D. Haertel (Eds.), Evaluating educational technology: Effective research designs for improving learning (pp. 149–180). New York, NY: Teachers College Press.
  49. Murnane, R. J., & Phillips, B. R. (1981). Learning by doing, vintage, and selection: Three pieces of the puzzle relating teaching experience and teaching performance. Economics of Education Review, 1(4), 453–465.
  50. National Center for Education Statistics. (2006). The condition of education 2006 (NCES 2006-071), by Rooney, P., Hussar, W., Planty, M., Choy, S., Hampden-Thompson, G., Provasnik, S., & Fox, M. A. Washington, DC: U.S. Government Printing Office.
  51. Papert, S., & Harel, I. (1991). Situating constructionism. In I. Harel & S. Papert (Eds.), Constructionism (pp. 1–11). Norwood, NJ: Ablex Publishing Corporation.
  52. Peske, H., & Haycock, K. (2006). Teaching inequality: How poor and minority students are shortchanged on teacher quality. Washington, DC: The Education Trust.
  53. Pianta, R. C., Belsky, J., Houts, R., & Morrison, F. (2007). Opportunities to learn in America’s elementary classrooms. Science, 315(5820), 1795–1796. doi:10.1126/science.1139719.
  54. Pierson, M. E. (2001). Technology integration practice as a function of pedagogical expertise. Journal of Research on Computing in Education, 33(4), 413–430.
  55. Pitler, H., Hubbell, E. R., & Kuhn, M. (2012). Using technology with classroom instruction that works. Alexandria, VA: ASCD.
  56. Porter, A. C. (2002). Measuring the content of instruction: Uses in research and practice. Educational Researcher, 31(7), 3–14. doi:10.3102/0013189X031007003.
  57. Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223–231.
  58. Rockoff, J. E. (2004). The impact of individual teachers on student achievement: Evidence from panel data. The American Economic Review, 94(2), 247–252.
  59. Roschelle, J., & Shechtman, N. (2013). SimCalc at scale: Three studies examine the integration of technology, curriculum, and professional development for advancing middle school mathematics. In S. J. Hegedus & J. Roschelle (Eds.), Democratizing access to important mathematics through dynamic representations: Contributions and visions from the SimCalc research program (pp. 125–143). Netherlands: Springer.
  60. Roschelle, J., Shechtman, N., Tatar, D., Hegedus, S., Hopkins, B., Empson, S., Knudsen, J., & Gallagher, L. (2010). Integration of technology, curriculum, and professional development for advancing middle school mathematics: Three large-scale studies. American Educational Research Journal, 47(4), 833–878.
  61. Ryan, A. M., & Patrick, H. (2001). The classroom social environment and changes in adolescents’ motivation and engagement during middle school. American Educational Research Journal, 38(2), 437–460.
  62. Sanders, W. L., & Rivers, J. C. (1996). Cumulative and residual effects of teachers on future student academic achievement. Knoxville, TN: University of Tennessee Value-Added Research and Assessment Center.
  63. Snijders, T. A. B., & Bosker, R. J. (1999). Multilevel analysis: An introduction to basic and advanced multilevel modeling. London: Sage.
  64. Staples, M. (2007). Supporting whole-class collaborative inquiry in a secondary mathematics classroom. Cognition and Instruction, 25(2–3), 161–217.
  65. Tufte, E. (2006). Beautiful evidence. Cheshire, CT: Graphics Press.
  66. Turner, J. C., Meyer, D. K., Midgley, C., & Patrick, H. (2003). Teacher discourse and sixth graders’ reported affect and achievement behaviors in two high-mastery/high-performance mathematics classrooms. The Elementary School Journal, 103(4), 357–382.
  67. Virginia Department of Education. (2012). High school predictors of college readiness: Determinants of high school graduates’ enrollment and successful completion of first-year mathematics and English college courses in Virginia, by Jonas, D., Dougherty, C., Herrera, A. W., LaTurner, J., Garland, M., & Ware, A. Richmond, VA: Author.
  68. Wenglinsky, H. (2000). How teaching matters: Bringing the classroom back into discussions of teacher quality. Princeton, NJ: Policy Information Center, Educational Testing Service.

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. Kaput Center for Research and Innovation in STEM Education, University of Massachusetts Dartmouth, Fairhaven, USA
  2. University of Hartford, West Hartford, USA
  3. Southern Connecticut State University, New Haven, USA
