
1 A Brief Overview of the School Involved in the Reading Success Project

The Australian school involved in the Reading Success project identified its assessment and intervention practices for reading as a major area for school improvement. At the time of completing the project, there were 674 students (50% males/50% females) enrolled at the school. These enrolments included 6% of students who identified as Indigenous and 31% of students who had a language background other than English. The Index of Community Socio-Educational Advantage (ICSEA) scale (Australian Curriculum, Assessment and Reporting Authority [ACARA], n.d.) was used as an indicator of the demographic composition of the school. ICSEA is a scale developed by ACARA to provide an indication of the level of educational advantage of students attending a school. An overall ICSEA value is calculated for each school based on factors including parents’ occupation, parents’ education, the school’s geographical location, and the proportion of Indigenous students. The average ICSEA score (or benchmark) is 1000, with scores falling between approximately 500 (representing students from extremely disadvantaged backgrounds) and 1300 (representing students from more advantaged backgrounds). The school involved in this project had an ICSEA value of 1005, suggesting its level of educational advantage was consistent with the average benchmark across Australia. However, it should be noted that, according to the Australian Early Development Census 2015 data (https://www.aedc.gov.au/), the proportion of children attending the school who were considered developmentally vulnerable in language and cognitive skills was double the reported percentage of developmentally vulnerable children residing in the school’s geographic region.

In Australia, the National Assessment Program—Literacy and Numeracy (NAPLAN) is a nationwide assessment process undertaken in all Australian schools with students in Years 3, 5, 7, and 9 (Australian Government, 2016). The assessment is designed to test the types of skills that are essential for every child to progress through school and life, including skills in reading, writing, spelling, grammar and punctuation, and numeracy. Students are provided with individual results explaining their performance on NAPLAN; however, schools also gain information relating to their overall school performance. Through using ICSEA, schools are able to compare their performance to that of other schools with similar educational advantage values. For the school in the current project, the NAPLAN reading results showed that, on average, students performed similarly over time to students attending comparable schools, despite the school facing many unique challenges.

The Reading Success project included all key stakeholders involved in the teaching of reading in the school setting, resulting in an interdisciplinary approach to both the identification of reading difficulties and the provision of targeted intervention for students at risk of or experiencing difficulties in reading. As part of this collaborative approach, qualitative data and reports obtained from the school leadership team, teachers, and students were included. Reading achievement data collected from two cohorts of students at different stages of their reading development will be reported on in this book:

The Learning to Read stage Students in the ‘learning to read’ phase of development included students in Year 1 (i.e. their second year of formal schooling). Parents of 94% (93 out of 99 students) of the overall Year 1 cohort provided consent for their child to be involved in this project, providing a representative sample of all students in Year 1 at this school. Of these students, 46% were male, and 32% spoke a language other than English in the home environment (E/ALD) as determined based on parent report data.

The Reading to Learn stage Students in the ‘reading to learn’ phase of development included students in Year 4 (i.e. their fifth year of formal schooling). A total of 83% (78 out of 94 students) of the overall Year 4 cohort was involved in this project, providing a representative sample of all students in Year 4 at this school. Of these students, 47% were male, and 24% had E/ALD as determined based on parent report data.

The specific data collection methods for both the qualitative aspect of this project and the data pertaining to reading achievement are described in further detail below.

2 Stakeholder Interviews

To obtain information about the educators’ perspectives on their school, as well as their opinions about the enablers and/or inhibitors of the school’s reading programmes and approaches to the teaching of reading, a number of qualitative data sets were collected to ensure richness of the data (Lambert & Loiselle, 2008):

a productive iterative process whereby an initial model of the phenomenon guided the exploration of individual accounts and successive individual data further enriched the conceptualisation of the phenomenon; identification of the individual and contextual circumstances surrounding the phenomenon, which added to the interpretation of the structure of the phenomenon; and convergence of the central characteristics of the phenomenon across focus groups and individual interviews, which enhanced trustworthiness of findings. (p. 228)

Therefore, the qualitative data included in the Reading Success project were collected through interviews and/or focus groups with teachers (at the beginning and at the end of the project), interviews with the leadership team, and a student focus group as well as individual student interviews. Information was also collected about the reading programmes the school implemented over the course of the project. In addition, school planning documents and policies were included in the data set.

Educators The interview questions for teachers and leaders focused first on their demographics—how long they had been teaching, what their qualifications were, and what year levels they had taught previously, including any other teaching experiences. The remaining questions explored the educators’ perspectives on their school, as well as what they felt were the enablers and/or inhibitors of the school’s reading programmes and approaches to the teaching of reading. The post-interviews also investigated how staff members felt about the Reading Success project, including its impact on student learning. Prompt questions for both the pre- and post-interviews can be found in Appendix 1. The questions were based on a previous project conducted by Barton and McKay (2014, 2016a, 2016b) that explored another school’s approach to teaching reading in the secondary context.

Students The student focus group and subsequent interviews involved students in the reading to learn cohort (i.e. Year 4 students). Three of these students had rated themselves the lowest on the Reading Self-Concept Scale (Chapman & Tunmer, 1995). Initially, the students were invited as a whole group to participate in a focus group to talk about reading, how they perceived themselves as readers, and what might help or inhibit their reading. While the students participated in this focus group, the researcher asked them to create a collage using cut-up pictures from a range of magazines. The purpose of this collage was to support the students in feeling comfortable talking about themselves with the researcher. The researcher utilised a method known as re-collage (Barton, 2020), whereby participants create an artwork that expresses deeper messages about a topic, in this case reading. Arts-based research methods have been known to support participants’ abilities to express their feelings about focused topics. For students of this age, the researchers were interested not only in asking for their opinions orally, but also in using a stimulus artwork created by the participants as the basis for discussion.

Allowing the students to reflect on themselves as readers through the art had the potential to uncover aspects of themselves as readers not previously known. Leitch (2006), for example, explained that “through the creation of images in relation to self, new meanings, previously unaware, unvoiced, unexpressed, half-understood came to be significant and capable of being incorporated into the participants’ social and/or emotional understanding of themselves, to the point that new actions or directions could be taken in their lives” (p. 566). In a similar vein, McNiff (2008) showed how arts-based and creative approaches to reflecting on a topic often resulted in more “meaningful insights [that] often come by surprise, unexpectedly and even against the will of the creator” (p. 40). Utilising modes other than language created the opportunity for the students to feel safe and express different ways of knowing or being (Liamputtong & Rumbold, 2008).

Analytical approaches All interviews and focus groups were audio recorded and then transcribed. Participants were provided with the transcripts so that member checking could occur. The team then analysed the transcripts by first identifying any codes (interesting factors) during the initial reading of the documents. Subsequently, the team re-read each transcript to ensure no code was left unidentified. Finally, the codes were clustered into identified themes (Braun & Clarke, 2006). The themes identified from the educator and student perspectives are described in detail in Chaps. 4 and 7.

3 Student Reading Self-Concept

As outlined in Chap. 1, student self-perception/motivation for reading is an important aspect to consider when creating reading profiles and when planning for intervention (Cartwright, Marshall, & Wray, 2016; Katzir, Lesaux, & Kim, 2009). For this reason, students in the reading to learn cohort completed the Reading Self-Concept Scale (Chapman & Tunmer, 1995) as part of this project. This scale comprises 30 items that determine students’ self-perceptions across three different subscales: (1) perception of their competence in reading, (2) perception of reading difficulty, and (3) their attitudes towards reading. For each item, the statement is read aloud to the child by the examiner. This oral administration ensures that the child’s reading skills do not affect completion of the task. If the student does not understand any of the questions, the examiner provides further explanation related to the statement. The student is then required to respond to each statement on a five-point scale ranging from ‘Yes, always’ to ‘No, never’. Student responses are scored from 1 (indicating a low reading self-concept) to 5 (indicating a high reading self-concept) and then summed to provide an overall score based on the 30 responses. Comparisons between the child’s responses across the three different subscales can also be conducted. Student performance on this scale is described in Chap. 4.
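For readers who wish to see how this scoring works in practice, the short Python sketch below tallies an overall score and subscale means from 30 item responses. The grouping of items into subscales shown here is hypothetical; the actual item-to-subscale mapping is defined by the published scale (Chapman & Tunmer, 1995).

```python
# A sketch of how Reading Self-Concept Scale responses could be tallied.
# The item-to-subscale grouping below is hypothetical, for illustration only.
from statistics import mean

SUBSCALES = {
    "competence": range(0, 10),   # hypothetical: items 1-10
    "difficulty": range(10, 20),  # hypothetical: items 11-20
    "attitude": range(20, 30),    # hypothetical: items 21-30
}

def score_rscs(responses):
    """Return the overall score and a mean per subscale for 30 responses (1-5)."""
    assert len(responses) == 30 and all(1 <= r <= 5 for r in responses)
    overall = sum(responses)  # ranges from 30 (low self-concept) to 150 (high)
    subscale_means = {name: mean(responses[i] for i in items)
                      for name, items in SUBSCALES.items()}
    return overall, subscale_means

overall, per_subscale = score_rscs([3] * 10 + [4] * 10 + [3] * 10)
print(overall, per_subscale)  # 100 {'competence': 3, 'difficulty': 4, 'attitude': 3}
```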

4 Types of Assessments and Interpretation of Results

4.1 Assessment Type

In selecting assessments to complete the speech-to-print profile and describe a student’s strengths and weaknesses in the skills needed for successful reading, it is important to understand the underlying features and components of different tests and how these features may influence the information that can be obtained. Broadly speaking, assessments can be formal or informal in design (see Table 2.1). Formal assessments are standardised, which means that they are well constructed and have clear administration and scoring processes to ensure that administration of the assessment is ‘standard’ no matter who administers the test or scores it. Most formal assessments are norm-referenced, which means that a student’s performance can be compared to his/her same-age or same-grade peers. Informal assessments, on the other hand, are often less structured and more contextualised (more authentic) and can include observational assessments, screening assessments, interviews, and questionnaires. Informal assessment tasks are not standardised and do not have norms. This means that although a student’s performance can be observed and described, it is not possible to compare the student’s performance to his/her peers. Both formal and informal assessments may be criterion-referenced. This type of assessment is different from norm-referenced tests, as the focus is on determining whether the child can achieve a certain level of performance (rather than comparing a child with the performance of another). Criterion-referenced assessment has advantages in that it examines behaviours in depth and can be a useful method for establishing baseline performance and monitoring treatment progress.

Table 2.1 An overview of different types of assessments

4.2 Interpretation of Norm-Referenced Test Results

The features of standardisation for norm-referenced assessments allow student performance to be compared with that of other students (i.e. the norming sample) who are the same age or in the same year of schooling, and their strengths and difficulties can be interpreted accordingly. More specifically, student performance on standardised assessments can be interpreted using a bell curve (see Fig. 2.1). A bell curve is a representation of the normal distribution and shows that the further a person’s score moves away from the mean (in either direction), the fewer people in the population will receive that score. Most standardised tests report mean scores for each age group and also a standard deviation. The standard deviation represents the average difference of scores from the mean score, accounting for the variability that exists in test scores amongst different test takers. It is the combination of information from the mean and standard deviation of a test that allows decisions to be made about when a child’s score falls far enough from the mean to warrant the decision that it is significantly different from the norm. Raw scores (i.e. the total number of items the student got correct on a standardised test) need to be converted to allow for meaningful comparisons with the norms provided for each test. These comparisons result in the creation of standard scores (such as z-scores or scaled scores) or percentile ranks, which can be used to provide an indication of how far from the mean the child performs on a test in comparison with the normative data for that assessment. The interpretation of these three types of scores (z-scores, scaled scores, and percentile ranks) is provided below.

Fig. 2.1 Bell curve (Source Wikimedia Commons, retrieved from https://commons.wikimedia.org/wiki/Category:Normal_distribution#/media/File:Normal_distribution_and_scales.gif)

Z-scores A z-score of 0 indicates average performance (see Fig. 2.1), with a negative score indicating performance below the mean and a positive score falling above the mean. A z-score below −1 indicates that the child’s performance is more than 1 standard deviation below the mean and is subsequently interpreted as ‘below expectations’.

Scaled Scores The interpretation of scaled scores, or standard scores, is usually based on the mean score that is assigned for each test. For example, it is common for tests to assign a mean score of 100 (i.e. z-score of 0), and scores less than 85 indicate a child’s performance is more than 1 standard deviation below the mean, and thus ‘below expectations’.

Percentile Rank A percentile rank refers to the proportion of the normative population that scored lower than the subject taking the test. A percentile rank of less than 16 indicates performance more than 1 standard deviation below the mean, suggesting skills ‘below expectations’.

The bell curve in Fig. 2.1 provides a visual representation of the association of all these types of scores. In Fig. 2.1, you can see that 68.2% of the population scores between z-scores −1 and +1 and hence is considered to be scoring within typical limits. Importantly, the interpretation of all norm-referenced scores is usually dependent on the purpose and the level of severity that is required to be identified. Thus, the decision of what cut-point to use to indicate ‘below expectations’ and the subsequent need for intervention is arbitrary. It should be based on the purpose of the assessment and the local context and should be a collaborative team decision based on all available data.
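To illustrate how these score types relate, the short Python sketch below converts standard and scaled scores to z-scores and approximate percentile ranks using the normal distribution. It assumes the conventional metrics implied above (standard scores: mean 100, SD 15; subtest scaled scores: mean 10, SD 3) and the 1-standard-deviation cut-point discussed in this section; it is an illustration only, not part of any test’s scoring software.

```python
# Illustrative conversions between the score types described above.
# Assumed metrics: standard scores mean 100, SD 15; scaled scores mean 10, SD 3.
from math import erf, sqrt

def standard_to_z(ss, mean=100.0, sd=15.0):
    return (ss - mean) / sd

def scaled_to_z(scaled, mean=10.0, sd=3.0):
    return (scaled - mean) / sd

def z_to_percentile(z):
    """Approximate percentage of the normative population scoring below z."""
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

def below_expectations(z):
    """The 1-standard-deviation cut-point discussed in the text."""
    return z < -1

for label, z in [("standard score 82", standard_to_z(82)),
                 ("scaled score 6", scaled_to_z(6))]:
    flag = "below expectations" if below_expectations(z) else "within typical limits"
    print(f"{label}: z = {z:.2f}, ~{z_to_percentile(z):.0f}th percentile, {flag}")
```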

5 Choosing a Reading Comprehension Test

As stated previously, the aim of learning to read is to read accurately and fluently with comprehension. To determine the level of reading ability, it is therefore important to use a formal reading assessment that adequately captures a student’s reading skills across reading accuracy, reading fluency, and reading comprehension. As summarised by Westerveld (2009), when choosing a reading comprehension test, the following needs to be taken into consideration: (1) test format (i.e. cloze vs question–answer format; oral or silent); (2) passage dependency (can the students answer the questions without reading the passage); and (3) test-taking strategies (i.e. student initiative, involvement, and item completion rate; Bornholt, 2002; Keenan & Betjemann, 2006; Keenan, Betjemann, & Olson, 2008).

Unfortunately, not all reading tests adequately measure reading comprehension. In fact, some assessments may fail to identify reading comprehension deficits (Bowyer-Crane & Snowling, 2005). For example, student performance on tests such as the Neale Analysis of Reading Ability (NARA; Neale, 1999) and the York Assessment of Reading for Comprehension (YARC; Snowling et al., 2012) depends on both word recognition and language comprehension skills, as students are asked to answer questions after reading a passage. In contrast, performance on cloze reading tasks (i.e. the student reads a sentence or passage and fills in the blank) has been shown to be mostly dependent on adequate word recognition ability. Based on these results, Westerveld (2009) advised using a reading test that requires the student to read passages and answer open-ended questions after reading, or administering an individual test of language comprehension in addition to a reading test (Nation & Snowling, 1997).

Another consideration in the Australian context is to choose a test that has been normed on the Australian population. Given the importance of the schooling system, including the effect of the age at which students start their formal reading tuition on reading performance (Cunningham & Carroll, 2011), standardised tests that have been normed overseas may not adequately identify Australian students with reading difficulties. Both the NARA and the YARC have been normed on Australian students. A recent study by Colenbrander, Nickels, and Kohnen (2016) compared student performance on the NARA versus the YARC. Students attended Australian primary schools and were in grades 3–6. Interestingly, more students were classified with a ‘specific comprehension difficulty’ profile on the NARA than on the YARC, but no differences were found between the tests for classification of ‘specific word reading difficulties’. Closer inspection of these results, however, showed that only 9% of the variance in YARC scores could be explained by a child’s ability to decode words; this was 21% for the NARA. Taken together, the authors concluded that the YARC may be a better test for indicating reading comprehension difficulties in students who have word recognition difficulties. Furthermore, the authors recommended using a separate decoding task (word or non-word reading measure) and advised that diagnosis of a reading comprehension difficulty should not be based on the results of a single assessment. For the current study, based on previous research, we administered the YARC to determine students’ reading performance across reading accuracy, reading fluency, and reading comprehension.

6 The Reading Assessment Process Used in the Reading Success Project

As part of the Reading Success project, comprehensive assessments of students’ reading and spoken language skills were undertaken to create speech-to-print profiles for those students who struggled with their reading comprehension. As explained in Chap. 1, this assessment process was guided by the Simple View of Reading (SVR) as a framework and included several steps to determine individual reading profiles, which in turn were used to inform intervention practices. First, students’ reading comprehension performance was assessed using a standardised, norm-referenced reading assessment (Step 1). Next, for students with identified reading comprehension difficulties, their skills in reading accuracy and language comprehension were assessed (Step 2). For those students with identified difficulties in reading accuracy, additional testing was conducted to determine proficiency in word recognition skills, including single word reading, orthographic knowledge, and phonological awareness (Step 3). Then, for each student who scored below expectations on reading comprehension in Step 1, an individual speech-to-print profile was created to visually represent the assessment data and show: (1) strengths and weaknesses in the underlying spoken language skills needed for successful reading performance and (2) strengths and weaknesses in the code-related skills needed for successful word recognition (Step 4). Finally, the information obtained from Steps 1–4 was used to guide intervention practices as part of Step 5. This assessment to intervention process is shown in Fig. 2.2.

Fig. 2.2 Five-step assessment to intervention process based on the Simple View of Reading. Note RC = reading comprehension; RA = reading accuracy; RR = reading rate; LC = language comprehension. #Step 3: Further assessment may also include students whose RC was within expectations but whose RA was below expectations (see Step 3 below)

The following section details these five assessment steps, including descriptions of the specific assessments administered as part of the Reading Success project. We acknowledge that there are many other assessments available to assess the reading and spoken language skills of school-aged children and recommend collaboration with all professionals involved in assessment and intervention for students at risk of or experiencing difficulties in reading, including speech pathologists. The tests that were used in this project are described in detail below; however, a list of other suitable tests is provided as further examples in Appendix 2.

6.1 Step 1: Assessment of Reading Skills Using a Validated Assessment

For this project, the Australian edition of the York Assessment of Reading for Comprehension, Primary (YARC; Snowling et al., 2012), was used as a formal (standardised), norm-referenced assessment of reading ability. The YARC yields measures of reading accuracy and reading comprehension as well as reading rate. This diagnostic reading assessment was administered individually, either by a speech pathologist or by a teacher, with all students in the project. As per the manual, the student was asked to read aloud two passages (at a level suitable to the student’s reading ability), and the time taken to read each passage was recorded. If the student produced a reading error, the correct production was immediately provided by the examiner. At the conclusion of each passage, the student was asked a series of literal and inferential comprehension questions, and the student was able to refer to the text to answer the questions if needed. Following the completion of the two passages, several scores were calculated: (a) the number of reading errors was totalled to provide an overall reading accuracy (RA) score; (b) the time taken to read each passage was summed to provide an overall reading rate (RR); and (c) the total number of questions answered correctly was calculated to provide a reading comprehension (RC) score. These raw scores were then converted to standard scores to allow for comparisons with Australian normative data. As per the test guidelines, for students in the Year 1 cohort, RR was not calculated if the student exceeded 16 reading errors on the level 1 passage. These test scores (RA, RR, and RC) can be converted using a free online score conversion tool.

In the Reading Success project, we used a standard score (SS) cut-off of 85 as an indicator of ‘below expectations’, which corresponds to students scoring in the bottom 16% of the same-age population (see Fig. 2.1). Using this cut-off, the cohort was divided into those students who needed further assessment and those who performed within expectations for their age/stage of schooling.
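A minimal sketch of this Step 1 triage is shown below; the student records and field names are hypothetical and are included only to illustrate how the SS < 85 cut-off splits a cohort into those requiring further assessment and those performing within expectations.

```python
# Hypothetical sketch of the Step 1 triage: split a cohort on the YARC reading
# comprehension standard score (SS) using the project's cut-off of SS < 85.
# The student records below are illustrative only, not project data.
CUTOFF = 85

cohort = [
    {"student": "A", "rc_ss": 78},
    {"student": "B", "rc_ss": 96},
    {"student": "C", "rc_ss": 84},
]

needs_further_assessment = [s for s in cohort if s["rc_ss"] < CUTOFF]
within_expectations = [s for s in cohort if s["rc_ss"] >= CUTOFF]

print([s["student"] for s in needs_further_assessment])  # ['A', 'C']
print([s["student"] for s in within_expectations])       # ['B']
```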

6.2 Step 2: Further Assessment of Students Who Scored Below Expectations on Reading Comprehension

For students who scored below expectations on the RC subtest of the YARC, consistent with the Simple View of Reading, we investigated their reading accuracy skills and considered their language comprehension skills.

(i) Check Reading Accuracy First, student performance on the YARC RA subtest was checked to determine whether challenges in RA were contributing to RC difficulties. Students with an RA SS < 85 were considered to demonstrate challenges in RA.

(ii) Check Language Comprehension The next step was to screen the language comprehension skills of students who scored below expectations in RC. In this project, the Understanding Spoken Paragraphs subtest from the standardised, norm-referenced assessment, the Clinical Evaluation of Language Fundamentals, fourth or fifth edition (CELF-4/CELF-5; Semel, Wiig, & Secord, 2006; Wiig, Semel, & Secord, 2017), was administered. As per the test manual, students were asked to listen to spoken passages and then answer questions about the content of the passages. The subtest was administered according to the assessment manual guidelines, and a total raw score was calculated and then converted to a scaled score. In the current project, a scaled score below 7 (i.e. 1 standard deviation below the mean) was used to indicate a student performing ‘below expectations’ in language comprehension (i.e. equivalent to SS 85).

The CELF is a restricted test (i.e. it can only be administered by a speech pathologist) and is relatively expensive if the only aim of using it is to determine whether the student’s RC difficulties stem from underlying problems in LC. An alternative way of assessing LC is to orally administer two passages of the YARC. The YARC Passage Reading Test—Primary comprises two parallel tests of graded passages (A and B), each of which is accompanied by a set of eight comprehension questions. Therefore, if form A has been used to test the student’s RC, form B can be used to test the student’s LC. In this instance, the examiner (e.g. teacher or speech pathologist) reads two passages (at a level equivalent to the student’s age or year of schooling) out loud and asks the student to answer the comprehension questions afterwards. The student’s performance can be evaluated by scoring the total number of questions answered correctly across the two passages and converting this raw score to a standard score. Although the test was not normed for this purpose, a standard score ≥ 85 would potentially indicate satisfactory language comprehension performance.

The results from Steps 1 and 2 were then used to determine whether students’ poor performance on the RC subtest of the YARC could be explained by poor RA, poor LC, or both.
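The decision logic of Steps 1 and 2 can be summarised in a short sketch such as the one below, which applies the cut-offs described above (YARC RA standard score < 85; CELF Understanding Spoken Paragraphs scaled score < 7) to suggest whether a student’s RC difficulty appears to stem from RA, LC, or both. The function and its labels are illustrative only and are not a diagnostic tool.

```python
# A minimal sketch of how the Step 1-2 results were combined under the Simple
# View of Reading for a student whose RC fell below expectations. Cut-offs follow
# the text: YARC RA standard score < 85; CELF Understanding Spoken Paragraphs
# scaled score < 7. Labels are descriptive only, not diagnostic categories.

def explain_rc_difficulty(ra_standard_score, lc_scaled_score):
    poor_ra = ra_standard_score < 85
    poor_lc = lc_scaled_score < 7
    if poor_ra and poor_lc:
        return "poor reading accuracy and poor language comprehension"
    if poor_ra:
        return "poor reading accuracy (word recognition assessed further in Step 3)"
    if poor_lc:
        return "poor language comprehension"
    return "neither measure below cut-off; further assessment may be warranted"

print(explain_rc_difficulty(ra_standard_score=80, lc_scaled_score=9))
# poor reading accuracy (word recognition assessed further in Step 3)
```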

6.3 Step 3: Further Assessment of Word Recognition Skills

Next, for students who performed below expectations in RA on the YARC (i.e. showed difficulties accurately reading passages), further assessment was conducted to determine possible underlying reasons for their RA difficulties. It is important to note that assessment at this stage should include students who demonstrate poor RA and RC (both SS < 85), as well as those students whose RC SS ≥ 85 but who show difficulties in RA (i.e. SS < 85). This is an important consideration because these students, who demonstrate decoding difficulties but age-appropriate RC, may have oral language skills that mask their difficulties in RC, particularly in the early years of schooling. Therefore, further assessment of these students’ word recognition skills and consideration of targeted intervention around decoding may be required.

Assessments of the following print-related skills were conducted: (a) single word reading, (b) orthographic knowledge, and (c) phonological awareness.

  • Single Word Reading

The Castles and Coltheart Word Reading Test—Second Edition (CC-2; Castles et al., 2009) was administered as a formal, standardised, norm-referenced measure of single word reading ability. This test was selected as it has Australian normative data, is freely available, and is an efficient measure of word reading across regular, irregular, and nonsense words. The test is untimed and assesses a student’s ability to read aloud a series of single words, comprising 40 regular words, 40 irregular words, and 40 nonsense words. The test was administered online according to the test guidelines, with the words presented one at a time, in a pseudo-randomised order, and with gradually increasing difficulty. Once a student makes more than 5 errors on any of the single word subtypes, the administration of that subtype is discontinued. The student continues with the remaining word types until the ceiling is reached for all subtypes or until all words have been read. A total raw accuracy score for each subtest type (regular, irregular, and nonsense words) was calculated and then converted to a z-score for the child’s age. A z-score below −1 indicated a student performing below expectations in single word reading ability.
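The following sketch illustrates the administration and scoring rules just described, namely the discontinue rule of more than 5 errors per word subtype and the z-score below −1 criterion. The norm values used here are placeholders; the real age-based norms are supplied by the online test (www.motif.org.au).

```python
# Illustrative sketch of the CC-2 rules described above: a word subtype is
# discontinued after more than 5 errors, and a z-score below -1 flags
# below-expectation performance. The norms here are placeholders only.

def administer_subtype(item_results, max_errors=5):
    """item_results: ordered list of booleans (True = word read correctly).
    Returns the raw accuracy score, discontinuing after max_errors errors."""
    errors = raw = 0
    for correct in item_results:
        if correct:
            raw += 1
        else:
            errors += 1
            if errors > max_errors:
                break  # discontinue this subtype
    return raw

def to_z(raw, age_mean, age_sd):
    return (raw - age_mean) / age_sd

raw = administer_subtype([True] * 20 + [False] * 8 + [True] * 12)
z = to_z(raw, age_mean=30.0, age_sd=5.0)  # placeholder norms, not CC-2 values
print(raw, round(z, 2), "below expectations" if z < -1 else "within expectations")
# 20 -2.0 below expectations
```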

  • Orthographic Knowledge

The Letter Sound Test (LeST; Larsen, Kohnen, Nickels, & McArthur, 2015) assessed orthographic knowledge. This freely available, formal, norm-referenced assessment was chosen to assess a student’s ability to sound out single letters and letter combinations (including consonant and vowel digraphs). The test has been normed on Australian students (kindergarten to grade 3). The test was administered online (www.motif.org.au), in a one-to-one setting, and took approximately 5–10 min. Students were shown 51 graphemes (single letters and letter combinations) in isolation and were asked to produce the target sound. After administration, a total raw score (/51) was calculated and converted to a z-score as per the test guidelines. In the current study, students with a z-score less than −1 were classified as performing below expectations in orthographic knowledge; this included the Year 4 students, who were older than the norming sample.

  • Phonological Awareness

The Sutherland Phonological Awareness Test—Revised (SPAT-R; Neilson, 2003) was administered to assess the students’ phonological awareness skills. This standardised test was normed on 559 Australian students, is suitable for students in their first to fourth year of schooling, and takes approximately 10–15 min to administer. For students in Year 1, the first seven subtests of the SPAT-R were administered individually according to the test manual instructions. These subtests provided an indication of phonological awareness skills in terms of syllable segmentation, rhyme detection and production, blending and segmenting phonemes, and onset–rime identification. All subtests involved the students completing four items per task. In this project, subtest scores below the 25th percentile were used to identify students whose phonological awareness skills were below expectations and potentially of concern.

For older students, in Step 3 of the assessment process, we used subtests from the Comprehensive Test of Phonological Processing—Second Edition (CTOPP-2; Wagner, Torgesen, & Rashotte, 2013), including: (a) elision, measuring a student’s ability to remove segments from spoken words to form other words (i.e. phoneme awareness); (b) non-word repetition; and (c) rapid automatic naming. The CTOPP-2 is a restricted test (i.e. only allied health and special education professionals can administer this test).

6.4 Step 4: Creating Speech-to-Print Profiles

Table 2.2 provides an overview of the assessments that were used in Steps 1–3. Speech-to-print profiles were created for students who performed below expectations on the RC subtest of the YARC. Information from all assessment measures (Steps 1 to 3) was mapped onto the speech-to-print profile for each student. It is paramount that this step of the assessment process is undertaken in collaboration with all stakeholders involved in the teaching and remediation of reading, including teachers, speech pathologists, and learning support teachers. This collaborative process allows for the identification of strengths and difficulties in the core components of language comprehension and word recognition. Furthermore, this team approach provides an avenue for identifying whether any further assessments should be completed and by whom, or whether some of this information has already been collected as part of routine school-wide assessment processes. For example, as shown in Table 2.2, Step 4 may require a comprehensive evaluation of a student’s spoken language skills, which may involve a referral to the speech pathologist. The speech pathologist may, for example, administer the CELF-5 and/or collect a spontaneous language sample in a narrative (e.g. Profile of Oral Narrative Ability [PONA]; Westerveld & Vidler, 2016) or expository context (Heilmann & Malone, 2014). For other students, more detailed information may be needed regarding writing and spelling skills, which may well be available from the classroom teacher. Completing the profile will then promote discussion about appropriate intervention planning (see Step 5).

Table 2.2 An overview of the areas assessed and the assessment methods used at each step

6.5 Step 5: Provision of Targeted Intervention

The next step was to discuss how appropriate and targeted intervention could be provided for students with identified reading difficulties, based on their speech-to-print profiles. For example, students who showed a profile of dyslexia would benefit from intervention aimed at improving their phonological processing and word recognition skills, whereas students who demonstrated specific comprehension deficits would benefit from a programme aimed at improving their language comprehension skills. In this project, the speech-to-print profiles were discussed with the school team to: (1) trial appropriate interventions for a selection of students within the school setting, and (2) allow the school team to plan and continue to implement regular classroom instruction. Detailed information about the interventions implemented and evaluated as part of the Reading Success project is provided in Chap. 5. Case samples are presented in Chap. 6.

7 Progress Monitoring

As explained in Chap. 1, progress monitoring underpins the response-to-intervention framework. Monitoring of student progress was conducted on a regular basis as part of normal school procedures, and more in-depth assessment of reading and spoken language skills was undertaken following periods of targeted (Tier 2 and Tier 3) intervention. Examples of progress monitoring practices are described next.

7.1 School-Initiated Assessments for Reading

The school utilised several routine data collection methods for monitoring reading outcomes across the school years. The following section describes these specific methods; however, it is important to note that other Australian schools may have different processes for measuring reading outcomes. At the school where the Reading Success project was undertaken, the school leadership team reported that the assessments administered change throughout primary school based on student year level and reading level. In the early foundational years of schooling, the PM Benchmark Reading Assessment Resource Kit (Smith, Nelley, & Croft, 2009) was used to measure reading skills. Once students exceeded the maximum reading level on the PM Benchmark (i.e. level 30), teachers administered the Prose Reading Observation Behaviour & Evaluation of Comprehension—Second Edition (PROBE 2; Parkin & Parkin, 2011) and the Progressive Achievement Tests in Reading (PAT-R; Australian Council for Educational Research, 2018). Further information about these assessment tools is provided next.

Learning to Read Cohort The PM Benchmark assesses students’ instructional and independent reading levels using levelled fiction and non-fiction texts. The assessment is criterion-referenced, is administered individually, and takes approximately 15–20 min per student to complete. Different reading levels are expected at each year level and are used to determine ‘average’ and ‘below average’ reading performance; the highest level obtainable is level 30. The final reading level is based on the student’s performance in decoding, retelling, and answering comprehension questions. At early levels, these comprehension questions are literal, and as students progress through the reading levels, both literal and inferential questions are included. At the school where this project was conducted, students in the early years of schooling were assessed on the PM Benchmark every term as part of the school’s process for tracking reading progress.

Reading to Learn Cohort The PROBE 2 and PAT-R are two reading comprehension assessments that are commonly used in Queensland schools. As the PROBE is organised across six-month developmental intervals, it is not sensitive enough to measure progress each term. Therefore, once students reached a PM Benchmark level of 30, the school alternated between administration of the PROBE 2 and the PAT-R.

The PAT-R is a norm-referenced test of reading comprehension and can be used with students in Prep to Year 10. It takes approximately 40 min to administer plus 15 min for scoring. However, the test can be administered and scored online, automatically generating a range of reports, including group reports and individual reports. The test yields scaled scores, percentile rank scores, and stanines.

The PROBE 2 consists of 20 sets of graded passages (each set containing one fiction and one non-fiction passage), with reading ages ranging from 5–6 years to 14.5–15.5 years. The test should be administered individually and takes about 15–20 min. The examiner estimates the starting level and asks the student to first read the passage silently and then to read it out loud. Following this, the examiner asks a series of comprehension questions. Both the student’s reading accuracy and reading comprehension are scored. A reading level (and corresponding age range) is determined if the student obtains > 95% correct in decoding and > 70% correct in comprehension.
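As an illustration of this decision rule, the brief sketch below checks whether a reading level would be assigned for a given passage; the function name and inputs are hypothetical and simply encode the > 95% decoding and > 70% comprehension criteria quoted above.

```python
# A small sketch of the PROBE 2 level criterion described above: a reading level
# (and corresponding age range) is assigned only when decoding accuracy exceeds
# 95% and comprehension exceeds 70%. The function name and inputs are hypothetical.

def probe2_level_achieved(words_correct, words_total,
                          questions_correct, questions_total):
    decoding = words_correct / words_total
    comprehension = questions_correct / questions_total
    return decoding > 0.95 and comprehension > 0.70

print(probe2_level_achieved(118, 122, 6, 8))  # True  (~96.7% decoding, 75% comprehension)
print(probe2_level_achieved(118, 122, 5, 8))  # False (comprehension only 62.5%)
```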

7.2 Reading Success Project Assessments

In addition to the school-based measures, a number of assessments were conducted as part of the Reading Success project to monitor student progress over time, including response to interventions. This included the re-administration of the key assessments described above: the YARC, the LeST, and the CC-2. Chapter 5 provides a description of the outcomes for the Year 4–5 cohort across these assessments following the relevant intervention programme.

8 Comparisons Between Project-Initiated Standardised Reading Assessments and School Reading Data

As described above, the school already implemented reading assessments as part of their common practice to monitor reading outcomes. As an additional reading assessment (i.e. the YARC) was administered as part of the Reading Success project, it was important to compare reading results on the YARC and these school-based measures. Comparing student performance across the tools allowed us to evaluate whether the school’s existing reading assessments would accurately identify students who struggled in their reading comprehension and reading accuracy on the YARC. For this project, school-based data on the PM Benchmark and PAT-R collected at a similar point in time to the YARC were available. Chapter 3 reports these results.

9 Summary

This chapter provided an overview of the methodology that was used in the Reading Success project to (1) understand teacher perspectives around the teaching of reading, (2) identify students who show challenges in their reading accuracy and/or comprehension skills, and (3) evaluate student reading self-concept and motivation. It explained how the Simple View of Reading drives the reading assessment process and provided details of the theoretically based assessments that were used, including an overview of how to interpret different types of assessment results. Finally, the importance of ongoing monitoring of student performance was highlighted as students move from the ‘learning to read’ to the ‘reading to learn’ phase.