Introduction

Traditional models of reading such as the Simple View of Reading (Gough and Tunmer 1986; Hoover and Gough 1990) and the Modified Cognitive Model (McKenna and Stahl 2009) are frequently used to explain factors influencing student reading comprehension. However, student reading comprehension goes beyond language comprehension, decoding skills, and strategy use. The Component Model of Reading (Aaron et al. 2008) posits that literacy instruction is couched in a context where psychological and ecological components also greatly influence literacy acquisition as adults and children interact within classrooms, schools, and the community, thereby shaping learning outcomes. School and community culture, classroom environment, teacher and administrator knowledge, instructional materials, and instructional methods all contribute to the ecological environment. The ecological environment described by Aaron et al. (2008) is important to understanding the implementation of reading interventions in schools because it not only describes what is taking place but also sheds light on the complex factors contributing to learning outcomes.

Against this complex backdrop, children’s lack of literacy has been declared a crisis because many children in the elementary grades fail to grasp the basic reading and writing skills (Fleishman 2008) that can sustain them into the upper grades, professional careers, and informed citizenry. Results from state, national, and international assessments of literacy present a sobering picture of poor reading and writing scores at many grade levels. One example of such poor performance is the National Assessment of Educational Progress (NAEP 2015) in the USA, where approximately one-third of children are not able to read and write proficiently at grade level. In this manuscript, we focus on factors that contribute to these outcomes by synthesizing findings from two studies: a large-scale cluster randomized controlled study on reading in high-poverty schools and a design and development project for a teacher-led, computer-supported persuasive writing intervention. We gathered data from teachers and administrators during the implementation of both projects and carefully documented artifacts related to the literacy curricula used in the participating schools, professional development (PD) plans, and curriculum decision-making by administrators. Figure 1 shows the complex factors we studied using these data sources. The findings of this synthesis may be of use to researchers and practitioners as they consider the rollout of interventions and the fidelity-of-implementation factors that frequently affect student literacy outcomes. We organize the manuscript around the etiology of teacher knowledge and instructional practices and around school-related contextual factors for reading and writing at the upper elementary grade levels. We conclude with how these factors affected teacher practices and present supporting evidence from the data collected.

Fig. 1 Ecological puzzle of factors associated with literacy outcomes in schools

Background about elementary literacy today

Reading and writing are complex and continuously evolving skills that require the fluid application of low- and high-level strategies. Early readers focus on decoding and fluency, while upper elementary grade readers are required to comprehend text in English language arts (ELA) and the content areas. As evidenced by high-stakes assessments within the USA, upper elementary grade students continue to struggle with the foundational skill of comprehension. In raw numbers, this translates to millions of fourth-grade children failing to master reading. These children cannot understand their textbooks, cannot engage in reading for enjoyment, and may develop negative attitudes towards reading that can affect their futures. If a child is struggling to read in the upper elementary grades, it is unlikely that the child will spontaneously become a strong reader through developmental change alone. It is more plausible that good instruction is the best antidote to poor reading.

Similarly, early writing focuses on mechanical factors such as handwriting, spelling, and grammar. At the upper elementary grade levels, writing focuses on the coherent expression of ideas in three main genres: persuasive, informative, and narrative. Writing skills tested at the upper elementary grades show an even more sobering picture, with many students unable to write at grade level. Since written expression is the foremost means of communicating one’s knowledge in school, a child who cannot effectively use writing to communicate what they have learned is in jeopardy of failing. Without careful instruction in the classroom, it is unlikely that students will overcome writing deficits. Complicating the picture, many classroom teachers devote very little instructional time to this important skill, further contributing to declines in writing skills and attitudes towards writing.

Many researchers have studied the factors that contribute to these poor literacy learning outcomes in the elementary grades (Binks-Cantrell, Washburn, Joshi, & Hougen 2012; Brindle, Graham, Harris, & Hebert 2016; Pressley et al. 1998). Pressley et al. (1998) observed ELA classrooms and noted that teachers did not use many comprehension strategies at the upper elementary grade levels. Binks-Cantrell et al. (2012) reported that teachers lacked pre-service preparation in early reading skills and thus were unable to present evidence-based instruction in their classrooms. Within the writing domain, Brindle et al. (2016) administered a survey to 157 upper elementary grade teachers and found that 75% of respondents reported receiving inadequate pre-service preparation for teaching writing. Teachers also reported devoting less than 15 min each day to writing instruction in their classrooms.

Etiology of teacher knowledge and instructional practices in elementary grade literacy

Teachers appear to shoulder the bulk of the burden when it comes to presenting evidence-based literacy instruction in their classrooms, and researchers frequently focus on teachers’ pre-service preparation as one source of the challenges (Binks-Cantrell et al. 2012; Darling-Hammond et al. 2009; McKenna and Parenti 2017). However, teachers do not function in independent silos and rarely have the flexibility or autonomy to develop their own lessons or instruction without guidance from a textbook, school curricula, school-based PD, and/or support from their peers and school leaders. The focus of this analysis is on the instructional practices that we documented during two recent research studies and on the contextual factors surrounding that instruction. In both studies, teacher knowledge was gathered from surveys, administrator factors were gathered through structured interviews, and instructional practices were corroborated through artifact analysis of school textbooks and PD plans.

The term etiology is frequently used in the medical field to identify possible causes of health conditions. In this manuscript, we use the term to identify possible causes of the literacy problems, particularly by reviewing and documenting factors that influence teacher knowledge about evidence-based practices. The following research questions guide the data analysis and results presented in this manuscript:

  1. What types of pre-service and in-service instruction have teachers received about evidence-based practices related to reading comprehension and writing?

  2. What evidence-based practices are used in upper elementary grade classrooms participating in a reading efficacy and writing design study?

  3. What preparation have administrators received about evidence-based literacy practices?

  4. How is classroom instruction aligned with district-level curricula and professional development practices?

Teacher preparation, knowledge, and use of reading comprehension strategies

Research design and procedure

A two-wave, 2-year data collection was conducted to study the efficacy of a web-based intelligent tutoring system designed to teach children how to use the text structure strategy, the Intelligent Tutoring System for the Structure Strategy (ITSS; Wijekumar et al. 2013). This cluster randomized controlled trial focused on improving reading comprehension for fourth- and fifth-grade children attending high-poverty schools. The intervention presented the text structure strategy to students through the software, and students were supported by trained classroom teachers. The structure strategy has theoretical and empirical support and focuses students’ attention on using signaling words to classify text structures (comparison, cause and effect, problem and solution, description, sequence), selecting important ideas from the text scaffolded by the text structure, generating a main idea (again using the text structure), making inferences, and integrating new information with prior knowledge in a logical memory structure (Meyer et al. 2010). This approach has been tested and found to be efficacious with rural and suburban students at fourth grade (Wijekumar et al. 2012), fifth grade (Wijekumar et al. 2014), and seventh grade (Wijekumar et al. 2017).

Prior to the intervention, teachers received 2 days of PD followed by four to six in-school coaching and modeling sessions within their classrooms. The PD sessions covered the text structure strategy, its integration into classroom practices, and the use of the ITSS software. This approach was developed to encourage teachers to use the evidence-based text structure strategy within their classroom instruction in addition to the software. The PD was also designed to address the critical needs of schools serving economically disadvantaged students and the many challenges they face. Lesson plans were prepared showing how comprehension instruction with the text structure strategy could be integrated with the textbook’s lesson scope and sequence.

The intervention teachers completed their surveys prior to the PD sessions, and the control group teachers completed their surveys during the study orientation session. Teachers participated in focus group interviews during the PD sessions. All observations were conducted approximately 12 weeks into the school year in both intervention and control classrooms.

Participants

Within the ITSS high-poverty efficacy study with fourth- and fifth-grade classrooms, 280 teachers completed surveys about their backgrounds, pre-service preparation, and the reading comprehension strategies they used. We also conducted interviews with approximately 10% of the teachers from all eight school districts participating in the study. Teachers averaged 13.2 years of experience, and approximately 28% held advanced degrees; all others reported bachelor’s degrees only. These teachers taught in schools where over 80% of students were eligible for free or reduced-price lunch and over 92% of students were from minority backgrounds.

Measures

During the study, we used a teacher survey, focus group meetings, and classroom observations to gather data about teacher knowledge and instructional practices related to reading. A six-item survey was administered to all participating teachers during the PD sessions. The first part of the survey gathered demographic information about the participants. The second part included open-ended questions focusing on the reading comprehension strategies used and on identifying text structures. All surveys were transcribed and coded based on keywords related to reading comprehension strategies (e.g., summarizing, read-aloud) and the five text structures used in the intervention (i.e., comparison or compare/contrast, cause and effect, problem and solution, sequence, and description).
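
As a concrete illustration of this keyword-based coding step, the brief sketch below shows one way open-ended responses could be flagged for strategy and text structure keywords. The keyword lists and function names are illustrative assumptions, not the project’s actual coding scheme.

```python
# Minimal sketch of keyword-based coding of open-ended survey responses.
# The keyword lists below are partial and illustrative, not the full scheme.
COMPREHENSION_STRATEGIES = {"summarizing", "read-aloud", "vocabulary"}
TEXT_STRUCTURES = {"comparison", "compare/contrast", "cause and effect",
                   "problem and solution", "sequence", "description"}

def code_response(response: str) -> dict:
    """Flag which strategy and text structure keywords appear in one response."""
    text = response.lower()
    return {
        "strategies": sorted(k for k in COMPREHENSION_STRATEGIES if k in text),
        "text_structures": sorted(k for k in TEXT_STRUCTURES if k in text),
    }

# Example: a response naming one strategy and one text structure
print(code_response("We do read-aloud and sometimes cause and effect charts."))
# {'strategies': ['read-aloud'], 'text_structures': ['cause and effect']}
```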

Twenty-four focus group meetings were held with approximately 43% of the teachers to gather further information about their ELA practices. The discussions were guided by the following questions:

  1. What are the reading strategies you use in the classroom?

  2. What are the resources you use to teach reading?

  3. How much professional development did you receive about reading strategies or the curriculum during the past 2 years?

  4. What strategies do you find useful? And what strategies have you discarded?

  5. Do you provide differentiated instruction in your classroom?

  6. What data sources do you rely on for assessing your students’ reading?

  7. What resources do you utilize to match instruction to the needs of your students?

Finally, we conducted observations in approximately 90% of the classrooms (both intervention and control) to document time spent on instructional practices and classroom organization. The observations lasted approximately 30 min each and were conducted by two trained research analysts using a tablet-based observation system that recorded, every 90 s, what instruction was being delivered at that moment. The system totaled the time on each skill automatically.
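
To illustrate how such momentary time samples translate into per-skill totals, the sketch below aggregates 90-second samples into minutes of instruction per skill. The record format and skill labels are illustrative assumptions, not the actual observation system.

```python
# Minimal sketch: convert momentary time samples (one label per 90-s interval)
# into minutes of observed instruction per skill.
from collections import Counter

SAMPLE_INTERVAL_SECONDS = 90

def total_time_per_skill(samples: list[str]) -> dict[str, float]:
    """Return minutes per skill, assuming each sample covers one 90-s interval."""
    counts = Counter(samples)
    return {skill: n * SAMPLE_INTERVAL_SECONDS / 60 for skill, n in counts.items()}

# Example: a 30-min observation yields 20 samples
observation = ["vocabulary"] * 8 + ["read-aloud"] * 7 + ["main idea"] * 5
print(total_time_per_skill(observation))
# {'vocabulary': 12.0, 'read-aloud': 10.5, 'main idea': 7.5}
```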

Results

Regarding pre-service preparation, all teachers reported feeling confident in their ability to deliver reading instruction at the elementary grade levels. When asked to report the reading comprehension strategies they used, the most common practices included reading aloud, vocabulary, and discussions. With the exception of vocabulary instruction, none of the other reported strategies or foci reflected the recommendations of the National Reading Panel (2000). When asked about text structures taught, 99% of respondents reported using text structures, but when asked to list the text structures, 54% listed only one and fewer than 30% correctly listed more than one. Over 80% of the teachers reported other textual elements (e.g., narrative, summary) as text structures. These responses signal a lack of nuanced understanding of reading comprehension strategies. Almost 98% of teachers reported following the basal reading textbook series without deviating from the plan.

Several notable findings emerged during the focus group sessions. First, 105 teachers within two large school districts reported using a strategy called “beginning-middle-end” for finding the main idea of a passage. Other variations for the main idea included “who, what, when, where, but.” When asked how they used text structures such as cause and effect, teachers reported asking students to name the text structure and sometimes completing a graphic organizer at the conclusion of the lesson if time permitted. This pattern was verified during the classroom observations, where teachers followed the reading textbook in a linear fashion with limited deviation (Beerwinkle et al. 2018).

The same school district, which had over 20 schools participating in the study, used the Texas Journeys (Baumann et al. 2011) reading textbook, which organized instruction around “skill of the week” units. These units contained reading skills taught in isolation through selected reading passages. A thorough review of this and other textbooks showed that each skill (e.g., main idea) was taught approximately 8% of the time (Beerwinkle et al. 2018). Students did not regularly engage in writing the main idea for the text and instead focused on the specific skill for the week. The textbook series was adopted 3 years prior to the current efficacy study, and no PD connected to the curriculum was available for the teachers.

During the focus group meetings, almost all teachers reported following the textbook scope and sequence of lessons as prescribed. Teachers voiced concern that administrators would frown upon any deviations from the textbook lessons and felt that, given oppressive accountability pressures, the prescribed worksheets were easier to implement and track.

Teachers reported that the school district used two additional software tools related to literacy and that teachers were required to schedule time in the computer lab for students to use them. This was in addition to time on the ITSS software used as part of the research study. Administrators and teachers had received PD on how to use the reporting features of these other software tools and regularly ran time-on-task and score reports from the computer logs. Teachers reported that administrators regularly checked whether students had spent time using these software tools and ran analytic reports showing mastery of different literacy skills (e.g., vocabulary). These time-on-task and score reports were also shared with district-level administrators.

All teachers reported that their schools regularly conducted benchmark or progress monitoring tests, with frequency ranging from every 6 weeks to every 12 weeks during the academic year. All teachers received data showing how many of their students performed at the below-basic, basic, and mastery levels on these tests, which reported students’ performance in key areas of reading. Unfortunately, there was little assistance for teachers in matching evidence-based practices to the identified needs of the students.

All teachers reported meeting with peers and administrators regularly after the benchmark tests to discuss student performance. Our research team attended at least one such meeting at every participating school to observe what was discussed. The observations confirmed that student performance in each area of literacy was discussed at great length. About 30% of schools had a wall of data where teachers posted each student’s performance in green (mastery), yellow (basic), and red (below basic); after every administration of the assessment, student cards were moved around on the wall. Teachers discussed student performance during these meetings and shared their thoughts about why students scored below basic on particular skills. Most of the discussions revolved around challenges with the assessments used and general challenges related to special education students. During the observed meetings and the structured interviews, teachers sought ideas from their colleagues for interventions that might help struggling readers. Notably, none of the observed discussions showed teachers asking whether the proposed instructional remedies had supporting evidence.

Although the teacher discussions surrounding the benchmarks did not show a strong focus on the use of evidence-based practices, they elicited related information about factors that are not under the control of teachers but that affect the culture and dynamics of classrooms and schools. The following factors about benchmark tests were identified through these interviews and observations. First, the benchmarks were usually end-of-year high-stakes assessments administered to students throughout the academic year. The appropriateness of administering an end-of-year exam early in the year did not appear to have been considered, nor did the match between the scope and sequence of instruction and what was measured on the assessments. Further, students’ feelings about repeated testing and poor performance did not appear to be given consideration. These benchmark tests often replace end-of-unit tests tailored to what students have learned during the recent instructional period. The benchmarks, in contrast, do not account for variations in the scope and sequence of skills taught and may negatively affect students’ motivation and efficacy because students are tested on materials and skills not yet taught. Even though teachers were concerned about these factors, their concerns did not change the district’s policies or practices related to the benchmarks.

The benchmarks produce reports on student performance, and teachers and administrators appeared to stress over underperforming students. However, none of the participating schools had a systematic plan to address deficiencies in student knowledge based on the scores, or a list of evidence-based practices matched to specific student deficiencies. Teacher feedback during the interviews and our observations of the benchmark discussion meetings showed that there were no standardized procedures for matching the needs of students with evidence-based practices.

Teacher preparation, knowledge, and use of writing strategies

Research design and procedure

A series of design studies and an underpowered randomized controlled study were conducted during the development of We-Write, a teacher-led, computer-supported persuasive writing intervention (Wijekumar et al. 2016). We-Write was designed to teach upper elementary grade students how to write persuasive essays. Three design studies were conducted to gather data on the usability of the We-Write teacher-led lessons. Prior to the beginning of each study, teachers received 1 day of professional development delivered by team members. During this PD session, the team gathered data about teacher demographics and practices. The culmination of the We-Write project was an underpowered randomized controlled study with 12 classrooms, and data were gathered from the teachers participating in that study as well.

We-Write was developed based on the sound instructional practices refined through many years of research on the self-regulated strategy development model for writing (SRSD; Harris 1980; Harris et al. 2008). SRSD-based instruction presents genre-specific knowledge, provides mnemonics to reduce working memory strain, and actively promotes efficacy towards writing. Six recursive stages of instruction are used to encourage each student to achieve success in writing persuasively. SRSD has been successful and is deemed an evidence-based practice (cf. Baker et al. 2009; Graham and Perin 2007; Graham et al. 2015).

All participating teachers received 2 days of professional development delivered by the research team. During that time, teachers received information about the theoretical foundations and refinement of the SRSD approach to writing and were shown scripted lessons for each stage of the intervention. Teachers also learned how to use data from their classroom students to make instructional decisions across the six stages of the SRSD model. Additional coaching and modeling sessions were provided within the classroom. All teachers completed the surveys prior to the beginning of the PD sessions, and focus group interviews were conducted during a break in the PD sessions.

Participants

Twenty-one teachers participated in the design and usability studies conducted during the development of the We-Write intervention. These teachers worked at three elementary schools that served approximately 62% minority students and 70% economically disadvantaged students. Eleven teachers from two suburban schools and one private school participated in the pilot study. In one of these schools, over 90% of the children were economically disadvantaged and 75% were minority students; the other schools reported serving, on average, 5% minority students and 23% economically disadvantaged students. The three public schools had an average enrollment of 727 students.

Measures

Within the writing studies, teachers completed surveys, participated in focus groups, and were observed during the implementation of the intervention. The same surveys were administered during the design studies and the pilot studies. The focus was on gathering demographic data about the teachers and information about their literacy practices related to writing.

Results

All teachers responding to the survey stated that they had not received any pre-service instruction on teaching writing. Over 90% of teachers reported using planning and composing strategies infrequently during their language arts classes. Writing instruction was also scarce during observations of classroom practices, with fewer than 10% of classrooms engaging in any writing-related activities during the observed time. Most teachers showcased spelling and grammar worksheets as part of their writing instruction.

Two of the eight participating schools had adopted a writing program referred to as the Collins writing program shortly before the We-Write intervention. Teachers received 1 day of training related to the Collins program and were required to maintain reports about how much writing was being done in the classroom by tracking sentence, paragraph, and passage writing for each student. The Collins program was designed to encourage more writing by acknowledging any and all writing activities happening within each classroom. Little to no constructive instruction was provided to the children about how to write as part of this implementation.

All teachers were confused about how to integrate the We-Write intervention into their classrooms while also meeting the requirements of the Collins program adopted prior to the academic year. Thus, the research team had to schedule meetings with school administrators to resolve the issues and ensure that teachers were able to use the We-Write intervention without interruption. The We-Write intervention focused on the six stages of the SRSD model developed by Drs. Harris and Graham (Harris et al. 2008), a framework with extensive empirical support. However, teachers and administrators were aware of neither the empirical evidence supporting SRSD nor the lack of evidence for the Collins approach.

Summary of results from reading and writing studies

Results from both the reading efficacy and writing studies show that teachers reported focusing primarily on the scope and sequence of instruction dictated by the textbooks. The surveys and observations showed that few evidence-based practices were being used by teachers for reading and writing in the participating schools.

School context

We gathered data about school leadership, textbooks, and school PD portfolios during both the reading efficacy and We-Write research studies to identify factors that may be contributing to the literacy outcomes within these studies. The school leadership data were gathered using structured interviews, and content analyses were used to review the school PD portfolios and the textbooks. To shed light on the ecological context of both studies, we next present a summary of each data collection effort and its findings.

Administrator knowledge and styles—reading and writing studies

We interviewed every administrator within the reading study and found that 34% had previous experience as reading teachers. All principals participating in the study had been out of the classroom for over 10 years. All administrators noted that they signed up for the research study because reading comprehension was a high priority for their schools. They all noted that text structures were mentioned in all state-level standards and felt they had a good grasp of them. All administrators reported complex cultures and climates within their schools. We cataloged administrator comments and found that the most common themes related to student performance on high-stakes assessments, special education challenges, and teacher morale.

Further, we gathered data about administrator knowledge during three district-level meetings with 12 intervention school principals participating in the reading study, approximately 12 weeks into the academic year. All attendees were leaders of intervention schools and had attended the PD sessions provided for the teachers. For this data collection, we played three video clips from classrooms showing reading comprehension instruction or presented sample reading lessons live. The 5- to 8-min videos had been developed at non-participating schools during a prior project, and the live instructional models depicted the same scenarios as the video clips. Each principal was given a Principal’s Checklist (excerpt shown in Fig. 2) and asked to note whether the teacher in the video or live example displayed the skills from the text structure intervention. Only one of the 12 administrators correctly noted that the three models did not present main idea, inference, and elaboration scaffolding using the text structure. This is a very important distinction between the evidence-based text structure strategy and traditional teaching methods. The fact that 11 of the 12 principals did not catch these nuanced implementation factors was troubling. If they did not notice the important differences in the three models, it is reasonable to conclude that they would have been unable to provide feedback or guidance to teachers during routine classroom walkthroughs. They may also have been unable to gauge whether this intervention differed from others available from publishers.

Fig. 2 Excerpt from principal’s checklist of implementation fidelity

During the interviews, principals noted that they received numerous intervention options from district-level administrators and from developers outside the school. They were also constantly making decisions about matching students’ needs, identified through benchmarks, with interventions that might improve student performance. When asked about the decision-making process, none of the administrators cited research evidence. Instead, district-level references, peer recommendations, and regional educational center recommendations were the most influential in adoption decisions.

During the We-Write intervention design and pilot studies, we interviewed school administrators twice. Each interview was structured and gathered data about the interventions currently used in the schools and the administrators’ interest in writing. None of the administrators reported receiving any writing-related training at any time during their pre-service preparation. All of the administrators expressed interest in writing instruction only for the grades in which high-stakes assessments were administered (e.g., fifth grade in the states that followed the Common Core State Standards) and were reluctant to invest instructional time in writing at other grade levels. All administrators were keen on writing instruction focused on mechanics such as grammar and were not aware of evidence-based writing practices. Notably, the schools that had recently adopted the Collins writing approach reported that they purchased the intervention based on references received from a neighboring school district superintendent; they had not reviewed any research about the approach prior to purchase.

Reading textbook analysis

We conducted a thorough review of the ELA textbooks used by the participating schools. Table 1 presents a summary of the fifth-grade textbook skills compiled from the complete data set reported in Beerwinkle et al. (2018). Key findings include a lack of regular and sustained student practice in key literacy skills such as writing a main idea, the teaching of text structures as a skill independent and separate from main ideas and summaries, and a lack of explicit directions for writing a main idea. Some of the textbooks also contradicted the instruction delivered by ITSS during the efficacy study. For example, Scott Foresman Reading Street instructs students that the main idea is the most important idea about a topic, while giving students no clear instruction on how to decide when an idea is important. Additionally, the Reading Street series informs students that authors may state the main idea in a single sentence at the beginning, middle, or end of a text. This leaves students looking for a single sentence as the main idea rather than synthesizing the information presented as a whole. In contrast, ITSS teaches students to use the text structure to guide the selection of important ideas and to use a sentence stem to scaffold writing the main idea. When a student identifies the text structure of a passage as cause and effect, the student uses the sentence stem “the cause is _____ and the effect is _____” to generate the main idea.

Table 1 Summary of key literacy skills reported in fifth-grade textbooks (adapted from Beerwinkle, Wijekumar, Walpole, & Aguis, 2018)

Beerwinkle et al. (2018) found that, at the fifth-grade level, both the Scott Foresman Reading Street (Afflerbach et al. 2011a, b) series and the Texas Journeys (Baumann et al. 2011) series addressed main idea in only two lessons. Similarly, summary was taught in only 10% of lessons in the Scott Foresman Reading Street series and 16% in the Texas Journeys series. These numbers stand in stark contrast to the text structure strategy, which teaches students to write a main idea and summary for each text they read. Following the textbook’s skill-of-the-week approach, students are unlikely to get the frequent main idea practice needed for mastery.

Text structure is also minimally covered in both textbook series (Beerwinkle et al. 2018). The Texas Journeys (Baumann et al. 2011) series focused specifically on cause and effect in 12% of lessons, compare and contrast in 8% of lessons, and sequence in 12% of lessons. The Scott Foresman Reading Street (Afflerbach et al. 2011a, b) series addressed cause and effect, compare and contrast, and sequence in 10% of lessons each. This paucity of coverage contradicts the text structure strategy, which teaches students to identify the text structure of each text they read. Further, both textbook series treat text structure as a minor feature of the text and do not make explicit to students how understanding the structure of a text can help them gain a deeper understanding of what they have read.

Additionally, both textbook series addressed critical skills such as drawing conclusions, generalizing, and making inferences in only 8–16% of lessons (Beerwinkle et al. 2018). These skills are essential for students to understand what they read beyond the surface level. The text structure strategy teaches students to use the structure of the text to facilitate making inferences, drawing conclusions, and developing generalizations. However, given the limited number of lessons covering text structure as well as deeper reading skills, students are unlikely to master these skills.

Analysis of writing curricula

During the We-Write design studies, we reviewed the textbooks and other writing curriculum materials offered to the teachers. The textbooks presented vocabulary, spelling, and grammar worksheets for students and contained only about two lessons on writing short answers and essays that focused on planning with a graphic organizer, writing, and revising. Most of the teachers reported practicing writing with students starting a month before the state tests. One of the participating schools had invested in purchasing professional development for a writing program called the Collins writing approach (Collins 2018), and teachers had received PD about the method. The research link on the program’s website does not present any research studies that meet What Works Clearinghouse standards.

Professional development offerings from schools

The research team gathered the PD lists available to teachers from all participating schools. Most of the lists were compiled by the district based on administrator input and offerings from the regional educational service centers (in Texas) and intermediate units (in other participating states). We conducted a thorough review of the portfolio of offerings in every participating district and school and found that approximately 60% were targeted at state-level standards and/or changes in high-stakes assessments. Another 25% focused on special education services. Approximately 15% were devoted to curricula, but these mostly related to new textbook adoptions and offerings from publishers. The publisher offerings were all based on newly published books and focused on selling those books. None of the offered PD sessions (over 50 across the participating districts) addressed evidence-based practices as defined by the What Works Clearinghouse, and no research studies published in refereed journals were cited in any of the materials presented.

The teacher PD offered by the school districts during the study years focused on new state standards, dyslexia, and administrative requirements. The intervention-related PD sessions were mostly delivered by publishers advertising their new books, and most involved authors presenting information about their books. Again, the publishers and authors cited some research in support of their work, but none of the curricula or interventions had been evaluated in rigorous research studies.

Discussion

This synthesis documented the complex ecological context of literacy instruction within upper elementary classrooms participating in two research studies. Our goal was to seek possible causes for teacher knowledge and instructional practices in reading and writing classrooms. The picture that emerged shows that while teachers may not have received appropriate pre-service preparation and infrequently used evidence-based practices in their classrooms, the school context does little to remediate the situation. This is one of few studies in which administrator knowledge and actions have been documented, and the findings show the important role administrators play in changing teacher practice. Textbooks, assessments, and administrator decision-making also contribute to the lack of support for teachers, which may in turn contribute to poor learning outcomes for students. Until all these factors are addressed, it is unlikely that teachers by themselves will have the authority and autonomy to make the changes necessary to substantially improve their knowledge and instructional practices.

As noted, the teacher knowledge reported by Beerwinkle et al. (2018) shows many similarities to previous studies of teacher knowledge. Binks-Cantrell et al. (2012) and Piasta et al. (2009) found that low teacher knowledge was connected to low student knowledge. Teachers in the current efficacy study had very low knowledge of text structures, with fewer than 8% of teachers able to name three text structures. Given this lack of text structure knowledge, Beerwinkle et al. (2018) point out that teachers may not be able to adequately model the text structure strategy and may rely on incorrect textbook identifications of text structure.

School-level PD plans, textbooks, and administrator knowledge about guiding the implementation of evidence-based practices in literacy classrooms all show major challenges. Joshi et al. (2009) reviewed 17 college-level textbooks and found that 13 presented instructional components addressing all five recommendations of the National Reading Panel (2000). Unfortunately, the same review showed that less than 10% of the total coverage was devoted to important reading skills such as phonics and vocabulary. In this study, we reviewed elementary grade textbooks and found similar patterns and deficiencies. The lack of adequate information in these textbooks may affect student outcomes because teachers report using the scope and sequence of lessons without any changes.

Regarding administrator knowledge, the reading literature does not often address these challenges, and many research teams focus more on teachers. In this report, we showcase the important role that school leaders play in guiding the implementation of evidence-based practices, monitoring the fidelity of implementation, and providing regular feedback to teachers. School leaders can also play an important role in selecting evidence-based practices to solve the problems facing students and in ensuring that professional development sessions meet the needs of their teachers.

When these findings are interpreted in the ecological context of the participating schools, teacher pre-service preparation appears to be only one of many complex factors that contribute to the lack of strong evidence-based literacy instruction in the classrooms we studied. Given the lack of systematic support structures within schools, the lack of PD that produces strong and sustainable change, and the frequently changing interventions reported above, it is not surprising that teachers rarely implement evidence-based practices in ELA classrooms and lack the knowledge necessary to do so. This, in turn, affects student outcomes in reading and writing. Changing the direction of student scores requires more than focusing on teacher preparation and teacher practices in the classroom; the complete ecological context needs to be carefully studied and modified.

This research has limitations due to the research methods employed and the number of participants. Future studies may focus on large-scale surveys of teachers and administrators to gather data about reading, similar to those conducted by Brindle et al. (2016). The focus should also be expanded to gather data about curriculum decision-making at the school and district levels. Researching that process is important but requires researchers either to be participant observers or to have access to notes from decision-making meetings. Ideally, researchers will focus on how administrators find evidence-based practices and what factors influence their decisions. Social network analysis has been used to identify some factors that influence teacher decision-making and may be of use in this type of study.

Conclusion

Teachers are navigating a minefield of contradictory classroom and ecological factors and instructional practices against a backdrop of their own pre-service preparation and personal experiences in reading and writing at the upper elementary grade levels. School-level leaders and districts should carefully review their literacy plans and implementation guidelines to ensure consistency, stability, and, most importantly, a focus on evidence-based practices.