Today’s educators face an increasingly complex student population. Inclusive classrooms are composed of students with various learning differences and disabilities (Subban, 2006; Tomlinson, 2000). When it comes to reading, students may struggle in a variety of ways and for a number of reasons. As such, teachers need to identify students with reading difficulties accurately and efficiently and then differentiate instruction to meet each student's individual needs. Many teachers are turning to technology for instructional support, but the effects of technology-based interventions are not well understood. In this study, we sought to understand what works for whom in technology-based literacy instruction. To accomplish this goal, we followed a three-step line of inquiry. First, we classified students into four reader profiles based on scores from a readily available, commonly used progress monitoring tool (aimsweb©). Second, we examined the relative progress of reader profiles on a technology-based reading program (Lexia® Core5® Reading).
Third, we evaluated whether individual reader profiles made gains on the progress monitoring tool at the end of the school year.
Identifying reader profiles
Multiple cognitive and linguistic abilities are required for successful reading. These abilities generally contribute to one of two primary skills: word reading or linguistic comprehension (Catts, Hogan, & Fey, 2003; Oakhill, Cain, & Bryant, 2003). Accurate word reading relies on knowledge of letter-sound relationships and the integration of information about phonology, orthography, and morphology. Linguistic comprehension is the product of foundational skills such as vocabulary and syntax as well as discourse-level processing such as inferencing, comprehension monitoring, and story grammar knowledge (Language and Reading Research Consortium (LARRC) & Chiu, 2018). In summary, skilled readers must be able to decode words as well as derive meaning from words and connected text.
Poor readers might struggle because of difficulties with word reading, comprehension, or both. The Simple View of Reading (Gough & Tunmer, 1986; Hoover & Gough, 1990) is an empirically supported framework commonly used to guide assessment and subgrouping of readers based on their individual strengths and weaknesses (Aaron, Joshi, & Williams, 1999; Catts, Hogan, & Fey, 2003). Within this framework, students may present with concordant or discordant abilities. Concordant abilities are most common: students with good word reading and good comprehension are considered typical readers, whereas students who struggle with both word reading and comprehension are labeled as having a mixed deficit. Students with the mixed deficit profile often have more severe reading difficulties than their peers with a single deficit (Catts, Hogan, & Fey, 2003). Studies have also identified students with deficits in a single component of reading, that is, discordant reader profiles. Some students struggle with word reading but have a relative strength in comprehension. These students are often labeled poor decoders and may even fit the criteria for dyslexia (Catts, Hogan, & Fey, 2003; Elwér, Keenan, Olson, Byrne, & Samuelsson, 2013; Hagtvet, 2003; Petscher, Justice, & Hogan, 2018). On the other hand, students who have relatively good word reading but poor comprehension are labeled poor comprehenders. These students can read accurately and fluently, but they have difficulty extracting meaning from text.
Many studies necessarily focus on a single reader profile (e.g., Nation, Clarke, Marshall, & Durand, 2004) or recruit a specific number of students for each reader profile (e.g., Cutting, Materek, Cole, Levine, & Mahone, 2009), while few studies identify and examine all reader profiles within a single sample. Shankweiler and colleagues (1999) were among the first to use the Simple View of Reading to classify their full sample of students into each reader profile. Their clinical sample included 361 students in second grade or above, aged 7.5–9.5 years. Students were classified according to performance on eight different assessments of word reading and reading comprehension: composite scores were computed for each construct of interest, and then cut-scores and a "buffer zone" were used to determine group assignment. Students whose scores fell in the buffer zone were not assigned to a reader profile. This method lessened the influence of individual variation and measurement error for students who performed at or near the boundary between subgroups, thus alleviating some of the disadvantages associated with using arbitrary cut-scores (Dwyer, 1996; Shankweiler et al., 1999). This method resulted in the following reader profile proportions: 44% had mixed deficits, 11% were poor decoders, 6% were poor comprehenders, and 39% were typical readers (Shankweiler et al., 1999). Leach, Scarborough, and Rescorla (2003) used a similar methodology with a population-based sample of 289 students in fourth and fifth grade. They found the following reader profile proportions: 16% had mixed deficits, 17% were poor decoders, 7% were poor comprehenders, and 59% were typical readers.
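To make this classification procedure concrete, the sketch below implements cut-score classification with a buffer zone in Python. It is a minimal illustration only: the cut-score and buffer width are hypothetical placeholders, not the values used by Shankweiler et al. (1999).

```python
# Illustrative cut-score classification with a "buffer zone," in the
# spirit of Shankweiler et al. (1999). The cut-score and buffer width
# below are hypothetical placeholders, not the study's actual values.

def classify_reader(word_reading, comprehension, cut=85.0, buffer=5.0):
    """Assign a reader profile from two composite standard scores.

    Scores within `buffer` points of the cut-score fall in the buffer
    zone; such students are left unclassified, reducing the influence
    of measurement error near the subgroup boundary.
    """
    def status(score):
        if score < cut - buffer:
            return "poor"
        if score > cut + buffer:
            return "good"
        return "buffer"

    wr, comp = status(word_reading), status(comprehension)
    if "buffer" in (wr, comp):
        return "unclassified"  # excluded from profile analyses
    return {
        ("good", "good"): "typical reader",
        ("poor", "poor"): "mixed deficit",
        ("poor", "good"): "poor decoder",
        ("good", "poor"): "poor comprehender",
    }[(wr, comp)]

print(classify_reader(word_reading=78, comprehension=102))  # poor decoder
```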
The relative proportion of these profiles is fairly consistent across studies, even when more advanced statistical analyses are employed and different languages are examined (Lerkkanen, Rasku-Puttonen, Aunola, & Nurmi, 2004; Torppa et al., 2007). For example, Torppa et al. (2007) used mixture modeling to categorize 1,750 Finnish-speaking students into reader profiles according to word reading and reading comprehension abilities. In their sample, 15% had mixed deficits, 28% were poor decoders (i.e., they were slow readers in Finnish), 11% were poor comprehenders, and 46% were typical readers (Torppa et al., 2007). In general, typical readers comprise the largest or most common profile, while either poor decoders or students with mixed deficits are the next largest group. Poor decoders and poor comprehenders are typically the smallest or least common reader profiles. Minor variations in this pattern may occur due to differences in study sample, age or grade range, language, and, importantly, profile selection criteria (Aaron et al., 1999).
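For readers unfamiliar with this approach, the sketch below illustrates the general idea of mixture modeling by fitting a four-component Gaussian mixture to simulated two-dimensional scores. Torppa et al.'s (2007) actual model and data differed; scikit-learn is an assumed dependency here.

```python
# Rough illustration of reader-profile discovery via mixture modeling.
# Torppa et al. (2007) used a different model and real longitudinal data;
# here we simply fit a four-component Gaussian mixture to simulated
# word reading and reading comprehension scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated standard scores: column 0 = word reading, column 1 = comprehension
scores = rng.normal(loc=100, scale=15, size=(1750, 2))

gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
labels = gmm.predict(scores)

# Interpret each component by its mean profile (e.g., low word reading
# with average comprehension would suggest a "poor decoder" component).
for k, mean in enumerate(gmm.means_):
    print(f"component {k}: means={mean.round(1)}, share={np.mean(labels == k):.0%}")
```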
Unlike the studies described here, schools have limited capacity to identify reader profiles by administering a large assessment battery or running advanced statistical tests due to constraints on staff availability or expertise, instructional time, and/or budget. In-depth diagnostic assessments are typically reserved for students who demonstrate significant difficulty with reading that cannot be remediated within a classroom setting (see Gersten et al., 2008, for an overview of multi-tier interventions). There is a need to determine whether more practical methods can be used to screen large populations and identify reader profiles. The aimsweb assessment system is a standardized measure of reading ability that is commonly used by teachers for progress monitoring at the beginning, middle, and end of the school year. It contains two norm-referenced subtests of word reading and reading comprehension abilities but does not contain any subtests that measure oral language (e.g., listening comprehension, vocabulary, or syntax). Therefore, aimsweb can be a useful screening tool, but it is not comprehensive enough to replace diagnostic assessment of reading and language abilities.
Notably, past studies of poor reader subgroups, many of which subscribe to the Simple View of Reading, differ in whether they measure listening comprehension (e.g., Catts, Hogan, & Fey, 2003; Elwér et al., 2013) or reading comprehension (e.g., Catts, Adlof, & Weismer, 2006; Leach et al., 2003; Nation et al., 2004; Shankweiler et al., 1999; Torppa et al., 2007), depending on the study aims, research design, and participant sample. Listening comprehension is not synonymous with reading comprehension; however, by third grade, the majority of individual differences in reading comprehension are explained by individual differences in listening comprehension (Cain, 2015; Catts, Hogan, & Adlof, 2005; Catts et al., 2006; Hagtvet, 2003; Justice, Mashburn, & Petscher, 2013; LARRC, 2015; Nation et al., 2004). Although reading comprehension measures provide less specific diagnostic information, they are highly relevant to educational performance. They also fit within the constraints of school settings because they are often available for group administration. In comparison, few listening comprehension measures are available for group administration, and the available measures are often more difficult and time-consuming to score than reading comprehension measures. As such, listening comprehension measures are not routinely administered in the school setting (Hendricks, Adlof, Alonzo, Fox, & Hogan, 2019).
In the present study, we examined an existing dataset comprising a large, population-based sample. We used scores from the aimsweb word reading and reading comprehension subtests and applied cut-points from previous studies (Adlof et al., 2006; Catts et al., 2006) to categorize students into one of four reader profiles: mixed deficits, poor decoders, poor comprehenders, and typical readers. We hypothesized that the reader profiles captured through aimsweb would be similar in proportion to those found in past studies.
Instructing reader profiles
Decades of research and nationally commissioned reports have converged on the importance of explicit and systematic reading instruction that emphasizes phonemic awareness, phonics, fluency, and comprehension (e.g., National Reading Panel, 2000). Unfortunately, there is still debate about how best to provide such comprehensive and effective reading programs (e.g., Foorman, Breier, & Fletcher, 2003). Differentiated instruction is the practice of modifying the focus or format of instruction to meet the needs of individual students; it is typically implemented in small groups and employs frequent progress monitoring (Foorman et al., 2003; Subban, 2006; Tomlinson et al., 2003). Many teachers and schools are turning to a blended learning model to differentiate instruction and meet the needs of all students (Staker & Horn, 2012). Blended learning incorporates both educational technologies and traditional, offline teaching methods. The technology-based portion of blended learning programs can often support the most time-consuming aspects of differentiated instruction, such as data collection and ongoing progress monitoring. Educational technologies can also provide a systematic scope and sequence for reading skill acquisition as well as targets for additional offline instruction.
One technology-based intervention that appears to be both comprehensive and effective is the Lexia Core5 Reading (Core5) program (Prescott, Bundschuh, Kazakoff, & Macaruso, 2017; Schechter, Macaruso, Kazakoff, & Brooke, 2015). Core5 was designed to accelerate mastery of reading skills for students of all abilities in preschool through fifth grade. The online component of Core5 contains systematic, personalized, and engaging instructional activities. The offline component of Core5 utilizes student performance data to create detailed progress reports linked to targeted resources for teacher-led lessons or independent paper-and-pencil activities. The present study examines the online portion of Core5, which contains 18 levels corresponding to specific grade-level material: preschool (Level 1), kindergarten (Levels 2–5), first grade (Levels 6–9), second grade (Levels 10–12), third grade (Levels 13–14), fourth grade (Levels 15–16), and fifth grade (Levels 17–18). The content in each level is aligned with the Common Core State Standards. The program activities are organized into six strands of reading skills: Phonological Awareness, Phonics, Structural Analysis, Automaticity/Fluency, Vocabulary, and Comprehension.
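This level-to-grade correspondence amounts to a simple lookup. The sketch below is purely illustrative of the structure just described, not an artifact of the program itself.

```python
# Level-to-grade mapping for Core5's online component, as described above.
LEVEL_TO_GRADE = {1: "preschool"}
LEVEL_TO_GRADE.update({lv: "kindergarten" for lv in range(2, 6)})  # Levels 2-5
LEVEL_TO_GRADE.update({lv: "grade 1" for lv in range(6, 10)})      # Levels 6-9
LEVEL_TO_GRADE.update({lv: "grade 2" for lv in range(10, 13)})     # Levels 10-12
LEVEL_TO_GRADE.update({lv: "grade 3" for lv in range(13, 15)})     # Levels 13-14
LEVEL_TO_GRADE.update({lv: "grade 4" for lv in range(15, 17)})     # Levels 15-16
LEVEL_TO_GRADE.update({lv: "grade 5" for lv in range(17, 19)})     # Levels 17-18

print(LEVEL_TO_GRADE[14])  # grade 3
```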
There are two key program features that personalize the learning experience for each student: auto placement and instructional branching. All students begin Core5 by taking an auto placement test. This brief screening tool determines the appropriate Start Level for each student based on reading ability; thus, students may begin Core5 by working on material that is below, at, or above their current grade level. Students then work on units within activities at the designated Core5 level until they achieve 90–100% accuracy, at which point they advance to the next level. When students have difficulty in a given unit of Core5, an instructional branching feature automatically differentiates task presentation. Figure 1 provides a screenshot of each instructional mode for an example unit. All students begin the program in the standard instructional mode. If students make 1–2 errors in a unit, they are automatically moved to the guided practice mode, which presents fewer stimuli and more structure. If students are successful, they return to the standard mode. If students continue to make errors, they are moved to the direct instruction mode, which explicitly teaches the skill targeted by the activity (e.g., multiple meanings) and addresses the type of error the student made (e.g., bat).
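This branching logic can be read as a small state machine. The sketch below paraphrases the transitions described above; the exact thresholds and the transition after direct instruction are our simplifying assumptions, not the program's published rules.

```python
# Schematic of Core5's instructional branching, written as a state
# machine. Transitions are paraphrased from the text; the program's
# internal rules are assumed to be more detailed than this sketch.

def next_mode(mode, errors_in_unit):
    """Return the instructional mode for the next attempt at a unit."""
    if mode == "standard":
        # 1-2 errors triggers the more structured guided practice mode
        return "guided_practice" if errors_in_unit > 0 else "standard"
    if mode == "guided_practice":
        # success returns the student to standard; continued errors
        # escalate to explicit, direct instruction of the target skill
        return "standard" if errors_in_unit == 0 else "direct_instruction"
    # After direct instruction, we assume the student retries in standard mode.
    return "standard"

mode = "standard"
for errors in [2, 1, 0]:  # one student's errors across successive attempts
    mode = next_mode(mode, errors)
    print(mode)           # guided_practice, direct_instruction, standard
```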
To our knowledge, no study has explicitly examined the effects of Core5 across reader profiles. In this study, we compared the performance of reader profiles on Core5 over one school year. Performance was gauged by three key measures: accuracy in the standard mode, time spent in the guided practice mode, and time spent in the direct instruction mode. Typical readers served as the comparison group for the impaired reader profiles. We hypothesized that typical readers would achieve greater accuracy in the standard mode of Core5 while spending less time in the guided practice and direct instruction modes. In contrast, students with mixed deficits might achieve lower accuracy in the standard mode compared to the other profiles. We predicted that students with discrepant reading skill profiles would achieve lower accuracy in the standard mode compared to their typical peers, but only in their area of weakness. For example, poor decoders might achieve lower accuracy than typical readers on Phonics activities. Another possibility, however, is that these students could achieve accuracy similar to that of typical readers if time in the guided practice or direct instruction modes facilitates performance in the standard mode. Using the same example, poor decoders might demonstrate equal accuracy during Phonics activities if they spend significantly more time in the guided practice mode of Phonics instruction.
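A bare-bones version of such a between-profile comparison, controlling for Core5 start level as in our second research question below, might look like the following. The data are simulated, the column names are hypothetical, and pandas and statsmodels are assumed dependencies; the study's actual analyses were more involved.

```python
# Minimal sketch: compare standard-mode accuracy across reader profiles
# while controlling for Core5 start level (an ANCOVA-style linear model).
# Data are simulated; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "profile": rng.choice(["typical", "mixed", "decoder", "comprehender"], n),
    "start_level": rng.integers(6, 15, n),
})
df["accuracy"] = (70 + 2 * df["start_level"]
                  + (df["profile"] == "typical") * 5   # simulated group effect
                  + rng.normal(0, 4, n))

model = smf.ols("accuracy ~ C(profile) + start_level", data=df).fit()
print(model.summary())
```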
Proximal and distal measures of reading performance
Reading performance can be assessed using proximal or distal measures. The terms proximal and distal typically refer to the relative distance between two objects. In assessment, distance reflects how closely a measure is associated with the mechanism of change: a proximal measure is close or highly related, and a distal measure is more separated. If the mechanism of change is a particular program of literacy instruction, the proximal measures are the embedded program metrics, and the distal measures are outside assessments. In other words, proximal measures tell us about performance in the reading program, but we need distal measures to determine whether any changes transfer to a more ecologically valid context. Regarding Core5 metrics, we anticipated that significant between-group differences in fall performance variables would disappear by spring, which would indicate improvement. If such a pattern of change occurred proximally, we might also see gains on the more distal reading assessment, aimsweb.
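As a minimal sketch of this analytic logic, assuming simulated data and SciPy as a dependency, one could test whether a fall between-group difference on a Core5 metric is still present in spring:

```python
# Sketch of the proximal-change logic described above: test whether a
# fall between-group difference on a Core5 performance variable persists
# in spring. Data are simulated so that the groups converge by spring.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
profiles = ["typical", "mixed", "poor_decoder", "poor_comprehender"]
fall = {p: rng.normal(mu, 5, 40) for p, mu in zip(profiles, [90, 70, 78, 82])}
spring = {p: rng.normal(88, 5, 40) for p in profiles}  # groups converge

for season, data in [("fall", fall), ("spring", spring)]:
    stat, pval = f_oneway(*data.values())  # one-way ANOVA across profiles
    print(f"{season}: F={stat:.1f}, p={pval:.3f}")
```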
In summary, the goal of this paper was to determine what works for whom in technology-based literacy instruction. The investigation was organized around three key questions. First, can scores from a readily available, commonly used progress monitoring tool (aimsweb) be used to classify third-grade students into four reader profiles commensurate with past research? Second, on average, do students in the four reader profiles perform differently from one another on a technology-based reading program (Lexia Core5 Reading) in word reading and comprehension activities, when controlling for program start level? Third, how does each reader profile perform on aimsweb in the spring compared to the fall? Results of this study will contribute to practical and efficient assessment and intervention for students with reading difficulties.