Introduction

In Kenya, progress on educational quality lags behind the significant gains made in expanding children’s access to schooling (Mugo et al. 2011; NASMLA 2010; Uwezo 2012a). Nationally, only 32 per cent of third-graders can read a second-grade-level passage in English or Kiswahili (Uwezo 2012a). A number of interventions are currently under way in Kenya to improve early reading outcomes. These include the United States Agency for International Development’s (USAID’s) Primary Math and Reading (PRIMR) Initiative; a PRIMR expansion programme funded by the United Kingdom’s Department for International Development (DFID), which is the data source for the analyses presented here; the Aga Khan Foundation’s Education for Marginalised Children in Kenya (EMACK) programme; the Opportunity Schools programme implemented by Women Educational Researchers of Kenya (WERK) and SIL International; the National Book Development Council of Kenya (NBDCK) programmes; new DFID Girls’ Education Challenge programmes; and the USAID- and DFID-funded national literacy programme, entitled Tusome, which will implement a literacy improvement programme at scale starting in 2015. Progress in intervention design and implementation, however, has outpaced the attention paid to the evaluation of these projects.

The current focus on early-grade reading in Kenya calls for reliable evaluation tools. One of the tools most commonly used in this region is the Early Grade Reading Assessment (EGRA) (for a description, see Gove and Wetterberg 2011). While EGRA consists of a flexible set of assessments which can vary by location, language and programme design, it generally includes oral reading assessment as a central literacy outcome. Some alternative assessments, such as the 2010 National Assessment System for Monitoring Learner Achievement (NASMLA) in Kenya, primarily use silent reading passages accompanied by reading comprehension questions. To our knowledge, to date, no formal evaluation has been done to determine whether oral assessments or silent ones are preferable in Kenya. The question is not merely academic; it is of direct and immediate relevance to dozens of literacy interventions currently being implemented in Kenya and elsewhere in sub-Saharan Africa.

As mentioned above, the data utilised in this article are drawn from the PRIMR Initiative’s baseline data set for an expansion of PRIMR funded by DFID/Kenya. The PRIMR programme is designed by the Ministry of Education, Science and Technology, funded by USAID/Kenya and DFID/Kenya, and implemented by the research firm RTI International. The programme is organised both as an intervention to improve literacy and numeracy in the two earliest primary grades (Classes 1 and 2), through enhanced pedagogy and instructional materials, and as a randomised controlled trial of a variety of interventions. The DFID/Kenya portion of PRIMR is being implemented in 834 schools in Bungoma and Machakos counties, in the former Western and Eastern provinces, respectively. The PRIMR programme has shown significant impacts on learning achievement in Kenya (Piper et al. in press; Piper and Kwayumba 2014; Piper and Mugenda 2013; Piper et al. 2014). The data set used in this article was collected as the baseline of the DFID/Kenya programme, in March 2013, before the expansion interventions began.

This data set, which is a random stratified sample of schools and children in Bungoma and Machakos counties, provides a unique opportunity to test the relationships between oral and silent reading rates in English and Kiswahili, as well as the relationships of both types of reading rates to reading comprehension.

Background and context

We approach this theoretical discussion from an applied perspective – that is, our aim is to provide researchers and programme evaluators with evidence to support advice on the best approach to assessing literacy achievement among learners in Kenya and other countries in sub-Saharan Africa. Oral and silent fluency are not the same construct. To read out loud, one must also form and speak the words, adding a series of steps to the reading task. Therefore, the words-per-minute rate a student can produce orally will likely be lower than his or her silent words-per-minute rate (McCallum et al. 2004). In a latent variable analysis, Young-Suk Kim and her colleagues (2012) demonstrated that silent and oral fluency are indeed dissociable – albeit related – constructs. There is some possibility that silent fluency would be lower than oral fluency in Kenya, because pupils are seldom given opportunities to read at all (Piper and Mugenda 2013), much less silently, and are rarely taught how to read silently and efficiently (Piper and Mugenda 2012). However, both oral and silent reading fluency have been linked to comprehension, the ultimate goal of literacy development, as discussed further below. While oral assessment of fluency has become increasingly common internationally with the use of EGRA, little research has examined whether this approach is the most reliable and valid for children in Kenya and other sub-Saharan African countries. Further evidence is needed to inform the choice of assessment method, and to understand the bias introduced by assessment choices culled from tools developed primarily in the United States.

Assessing fluency: silent and oral measures

Most fluency research focuses on oral fluency, despite the fact that silent reading skills are more relevant to success in the upper primary grades and beyond. Some researchers have warned that the extensive focus on oral assessment may result in teachers focusing on oral reading, to the detriment of silent reading. As Elfrieda Hiebert and her colleagues noted, “a diet heavy on oral reading with an emphasis on speed is unlikely to lead to the levels of meaningful, silent reading that are required for full participation in the workplace and communities of the digital–global age” (Hiebert et al. 2012, p. 111). However, given the widespread use of oral fluency measures around the world, and the dependence on oral reading as an instructional method in Kenya (Dubeck et al. 2012; Piper and Mugenda 2012), we chose to examine an oral measure in this study.

Oral reading fluency (ORF) is generally measured one on one, by having an assessor ask a student to read a passage out loud for a set period of time, typically one minute (Rasinski 2004). Measures of this type include the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Oral Reading Fluency task and the EGRA. A student’s score is calculated as the number of words read per minute (WPM) and/or the number of words read correctly per minute (WCPM). To counter the criticism that such an assessment does not validly measure comprehension, the passages are frequently accompanied by comprehension questions. Assessing comprehension alongside fluency increases the likelihood of identifying “word callers” – students who may be able to decode text and read it aloud, but who may not understand what they are reading (Hamilton and Shinn 2003; Stanovich 1986). For a silent assessment, the addition of comprehension questions ensures that students do not merely claim to have read the entire passage without actually doing so.
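As a brief illustration (with hypothetical numbers), the two rates differ only in how reading errors are treated: a pupil who attempts 40 words in the timed minute and misreads 6 of them would receive WPM = 40 but WCPM = 40 − 6 = 34.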

Silent reading fluency is measured in a variety of ways. Word chains ask students to separate an unbroken series of letters into words; cloze tasks ask pupils to fill in missing words; and maze tasks are adapted cloze tasks which ask students to fill in blanks in a passage by choosing from several options. However, these methods are not direct parallels of the oral reading passages, which makes comparing the two approaches difficult. In the United States, eye-tracking and computer-based approaches have been used to assess silent reading fluency in a manner more parallel to the assessment of oral reading fluency (Price et al. 2012). In the Kenyan context, however, these methods are impractical and expensive. Therefore, for this study, PRIMR used a silent reading passage which was similar in format to the oral reading passage and had been equated with the oral passage to ensure similar levels of complexity and difficulty. Children were asked to use a pencil or finger to mark their progress as they read, but were not required to do so. The assessor then marked the last word the child read in one minute, allowing for the calculation of a WPM rate rather than a WCPM rate, since the assessor could not determine whether a silently read word was read correctly. In combination with both the oral and silent assessments, the assessor asked associated comprehension questions. The Measures section of this paper provides further information on the assessments used in this study.

Relationships between oral and silent fluency and reading comprehension

Studies attempting to elucidate the relationship between silent and oral reading generally compare students’ performance on comprehension questions after they have read passages in the two modes, or compare the performance of two groups of students on the same passage, with half completing the task orally and half silently. The results of such comparisons are mixed. Lynn Fuchs, Douglas Fuchs and Linn Maxwell (1988) found that when the reading levels of oral and silent passages were equated, the correlation between comprehension scores was generally high, a finding echoed in more recent studies (Hale et al. 2011; McCallum et al. 2004). However, among middle-school students (i.e., grades 6–8, or ages 11–14), Carolyn Denton et al. (2011) found that ORF was more strongly related to reading comprehension than scores on a silent task were, results similar to those of a number of other studies (Ardoin et al. 2004; Hale et al. 2007; Jenkins and Jewell 1993). In contrast, among fifth-graders in Turkey, Kasim Yildirim and Seyit Ateş (2012) found that silent fluency was a better predictor of reading comprehension than oral reading fluency. The question of which fluency measure is more strongly related to comprehension therefore remains unresolved, particularly given the dearth of literature on this topic in Kenya and elsewhere in sub-Saharan Africa.

The first possible reason for the observed gaps between oral and silent reading fluency is the difference in how they are measured. The authors of one study which found an advantage for oral reading in terms of comprehension suggested that students’ self-reports of the last word read during the silent assessment might have caused the discrepancy (Fuchs et al. 2001): if students claimed to have read further than they actually had, they would perform poorly on comprehension questions relating to material they did not read. Hiebert et al. (2012) noted the possibility of students – particularly struggling readers – losing interest in a silent assessment and disengaging from the text. Andrea Hale and her colleagues (2007) pointed out yet another issue: in their oral assessment, when a student paused on a word for more than five seconds, the assessor read the word to the student. In a silent assessment, by contrast, students do not necessarily need to read every word correctly – even if they skip a word, context may enable them to respond correctly to a comprehension question – whereas in an oral assessment, the student must read the word aloud correctly for it to count towards the WCPM rate (Nagy et al. 1987). Because of these methodological issues, Denton et al. (2011) concluded that, on the basis of the available literature, we cannot determine whether any observed gaps between oral and silent reading comprehension are due to measurement issues or to actual differences in comprehension.

A second possible explanation for the varied patterns in the research, with some studies finding gaps and others equivalence, relates to the developmental stage of the readers. The gap between oral and silent reading performance may be larger for less skilled readers (Georgiou et al. 2013; Kim et al. 2011; Kim et al. 2012; Kragler 1995; Miller and Smith 1985, 1990), although Hale et al. (2007) did not find such a relationship. The question gains additional complexity for students tested in a second (or subsequent) language, whose developmental reading trajectories in that language vary widely. Several researchers have argued that oral reading fluency is less strongly related to silent-reading comprehension among multilingual learners assessed in a language other than their first than among readers tested in their first language (Jeon 2012; Lems 2006). This issue is of particular relevance in Kenya, where most primary school students learn to read concurrently in English and Kiswahili, neither of which is the first language of more than 15 per cent of the population (Uwezo 2012b).

In sum, even in the United States, where DIBELS and EGRA were developed, there is limited literature on appropriate factors for deciding between oral and silent reading assessments. The matter is not settled in Kenya or sub-Saharan Africa as, to our knowledge, no published peer-reviewed studies present data on this issue from the continent. As discussed above, the discrepancies in findings may indeed be due to differences in measures and samples across studies (McCallum et al. 2004). However, the literature clearly demonstrates that it is important to consider the age and grade level of the students being tested, as well as the relative focus on oral and silent reading in the curriculum and practice of a child’s school (Berninger et al. 2010). The educational context of Kenya, which differs dramatically from that of the U.S. in its heavy classroom emphasis on oral reading, suggests that children in Kenya may react differently from U.S. children to the two approaches to measuring reading fluency.

Oral and silent reading fluency in Kenya – why might the patterns differ?

The arguments we have reviewed lead us to conclude that even if the findings of the studies above were more convergent, they might not be applicable to the Kenyan context. For example, in Kenya, reading instruction focuses on whole-class oral repetition rather than text-based work (Dubeck et al. 2012). In the average primary classroom, with insufficient textbook supplies, students do not spend much time reading silently, as this would require a 1:1 ratio of books to students, whereas the average in Kenya is 1:3 (Piper and Mugenda 2012). Given pupils’ lack of familiarity with silent reading, some of the advantages that U.S. students show in silent reading (in terms of WPM scores) may be mitigated in the Kenyan context. On the other hand, Kenyan students are generally unaccustomed to interacting with unknown adults. They may therefore be more comfortable with a silent assessment than with a one-on-one oral assessment, as the latter requires students not only to read but also to speak English and Kiswahili accurately, adding to the stress of the assessment.

Research questions

Given the gaps and contradictions in the literature, we were interested in determining the answers to the following three key research questions within the Kenyan context:

(1) Do students perform better on reading rate and comprehension assessments in English and Kiswahili when they are tested orally or silently?

(2) What is the relationship between oral and silent reading rates in English and Kiswahili?

(3) Are oral reading rates in English and Kiswahili better predictors of comprehension than silent reading rates?

Research design

Data set

As noted, the DFID-funded portion of the PRIMR programme is implemented in Bungoma County, in the western region of Kenya, and in Machakos County, in the eastern region of Kenya, to the east of Nairobi, the capital. Both are predominantly rural counties with relatively poor literacy outcomes for children (Piper and Mugenda 2013). From the population of 35 zones in Bungoma and 39 zones in Machakos, the intervention team randomly selected 44 zones and then randomly assigned those zones to treatment and control groups in a phased approach, ensuring that the control zones would receive the intervention after the endline assessment. In the March 2013 baseline assessment, 36 zones were sampled, with an equal number of zones from Bungoma and Machakos. The PRIMR team then randomly sampled 38 per cent of the schools in each zone for the assessment; the number of schools in a zone varied from a minimum of 9 to a maximum of 40. At the school level, PRIMR used simple random sampling to select a total of 20 pupils from each school, stratified by grade (Classes 1 and 2) and gender, so that equal numbers of boys and girls and of Class 1 and Class 2 pupils were assessed. In examining the EGRA data, we found that reading ability among first-graders was so low that there was essentially no variation in oral or silent reading rates. We therefore focused our analysis on second-graders.
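As an illustration of the school- and pupil-level stages, a minimal Stata sketch follows; the data set names (schools_frame, pupils_frame) and variables (zone, school_id, grade, gender) are hypothetical, and the actual PRIMR procedure may have differed in its details.

    set seed 20130301

    * sample 38 per cent of the schools within each zone
    use schools_frame, clear
    sample 38, by(zone)

    * select 20 pupils per school: 5 from each grade-by-gender stratum
    * (2 grades x 2 genders x 5 pupils = 20)
    use pupils_frame, clear
    generate u = runiform()
    bysort school_id grade gender (u): keep if _n <= 5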

Sample

Table 1 presents the sample’s demographics, organised by county and gender. The mean age of the Class 2 pupils was 8.1 years, with slightly higher ages for pupils in Bungoma than in Machakos (p-value .02). Boys were also a few months older than girls, on average (p-value < .01). The percentages of female and male pupils were, as expected, even (50 per cent) across the sample. Socioeconomic status (SES) was relatively low, with the average pupil having only three of the nine household items comprising the SES measure. Half of the pupils read books at home, with more pupils reporting books at home in Machakos than in Bungoma (p-value .03); 38.9 per cent had access to an English textbook and 35.7 per cent to a Kiswahili textbook. More than nine in ten pupils reported that their mother could read and write. Compared with other counties involved in the larger USAID-funded PRIMR study, these two counties are poorer and their schools more poorly equipped with learning materials (Piper and Mugenda 2013).

Table 1 Demographic description of the sample by gender

Measures

The outcome variables of interest in this study are oral and silent reading rates in English and Kiswahili. Reading rates are simply the number of words read in one minute. They differ from the oral reading fluency rates typically used in EGRA studies in that they do not account for whether the pupil read the words correctly or incorrectly. Reading rates rather than fluency rates were used in this analysis because the comparison of interest is between the oral and the silent reading rate and, as explained earlier, with silent reading it is impossible to determine whether a pupil read the words correctly. For comparability, therefore, we present reading rates for both the oral and silent passages. For the silent reading story, assessors were trained to ask pupils to use a finger to show where they were reading in the story, so that when the one minute finished, the assessor could note the last word the pupil had attempted to read.

In order to compare oral and silent reading rates reliably, the PRIMR DFID baseline study utilised two reading passages which had been piloted during previous studies (Piper and Mugenda 2013) and had already been equated. The equating process used simple linear equating methods (Albano and Rodriguez 2012), which means that the rates utilised in this paper are comparable across the two passages. Figures 1a and 1b present the oral and silent passages for English, while Figs. 2a and 2b show the oral and silent passages for Kiswahili. The middle column in all four Figures indicates the cumulative word count at the end of each successive section of the reading passage. The reading rate for each passage was a continuous variable with possible scores from 0 to 210. Each tested student ended up with four reading rates: oral and silent reading rates in English, and oral and silent reading rates in Kiswahili.
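For reference, simple linear equating (stated here in the standard notation of that literature, not in symbols used elsewhere in this paper) places a raw score x from one passage onto the scale of the other by matching means and standard deviations:

    y(x) = μ_Y + (σ_Y / σ_X)(x − μ_X)

where μ_X and σ_X are the mean and standard deviation of scores on the passage being rescaled, and μ_Y and σ_Y are those of the reference passage. A pupil scoring one standard deviation above the mean on the silent passage is thus mapped to one standard deviation above the mean on the oral passage’s scale.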

Fig. 1 a English oral reading passage and associated comprehension questions. b English silent reading passage and associated comprehension questions

Fig. 2 a Kiswahili oral reading passage and associated comprehension questions. b Kiswahili silent reading passage and associated comprehension questions

In addition to reading rates, we analysed the reading comprehension scores associated with each reading passage. The reading comprehension score was the percentage correct out of five comprehension questions which the assessor asked orally after the pupil stopped reading. As can be seen in Figures 1a, 1b, 2a and 2b, the comprehension questions were keyed to several different locations in the story, so that even if a pupil was able to read only the first sentence of a passage, the assessor could still ask one question. The placement of the questions was similar in the oral and silent stories, such that pupils had to read similar portions of the stories to be asked the same number of comprehension questions. Moreover, the questions in both passages increased progressively in complexity, with the first items being basic recall questions and the final items textual-inferential and inferential questions. To confirm that these progressive difficulty levels were similar across the oral and silent stories, the PRIMR team equated the reading comprehension measures in a manner similar to that used for the reading passages. As with reading rates, each participant had four reading comprehension scores: comprehension on the silent passages in English and Kiswahili, and comprehension on the oral passages in English and Kiswahili.
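As a worked illustration (with hypothetical numbers): a pupil who stopped reading at a point in the passage which made three of the five questions eligible to be asked, and who answered two of them correctly, would receive a comprehension score of 2/5 = 40 per cent.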

Data analysis

Our analysis of these PRIMR data proceeded in five steps; a consolidated Stata sketch follows the list.

(1) The PRIMR sampling methods required that the data be weighted in order to be representative of Bungoma and Machakos counties. We therefore used the svy commands in Stata throughout, which produce weighted results and account for the nesting of pupils within schools.

(2) Next, to test whether reading rates differed when pupils read orally and silently, we used the svy commands to estimate mean reading rates for oral and silent English and oral and silent Kiswahili, and post-hoc linear hypothesis commands to determine whether any differences in these rates were statistically significant.

(3) Similarly, we used the svy commands to estimate reading comprehension rates, and post-hoc linear hypothesis tests to determine whether any differences between the comprehension rates for oral and silent reading were statistically significant.

(4) To address research question 2, we computed Pearson correlations between oral and silent reading rates and reading comprehension for both English and Kiswahili.

(5) Finally, to address research question 3, we fit ordinary least squares (OLS) regression models estimating the predictive relationship between reading rates and comprehension.
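To make the workflow concrete, the following is a minimal Stata sketch of steps (1) to (5). The variable names are hypothetical (school_id for the primary sampling unit, pupil_wt for the sampling weight, orf_eng/srf_eng and orf_kis/srf_kis for oral and silent reading rates, comp_oral_eng and comp_silent_eng for comprehension scores); it illustrates the kind of commands involved rather than the exact specification used in the study.

    * Step 1: declare the survey design (pupils nested in schools, weighted)
    svyset school_id [pweight = pupil_wt]

    * Step 2: weighted mean reading rates, with a post-hoc test of the
    * oral-silent difference (English shown; Kiswahili is analogous)
    svy: mean orf_eng srf_eng
    lincom _b[orf_eng] - _b[srf_eng]

    * Step 3: the same comparison for comprehension scores
    svy: mean comp_oral_eng comp_silent_eng
    lincom _b[comp_oral_eng] - _b[comp_silent_eng]

    * Step 4: Pearson correlations among rates and comprehension scores
    pwcorr orf_eng srf_eng orf_kis srf_kis comp_oral_eng comp_silent_eng, sig

    * Step 5: OLS models predicting comprehension from the matching rate
    svy: regress comp_oral_eng orf_eng
    svy: regress comp_silent_eng srf_eng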

Findings

We began our analysis with an examination of students’ performance on the oral and silent passages in Kenya. Table 2 presents the means, standard deviations, standard errors, and 95 per cent confidence intervals for the key indicators of interest in this article, namely oral and silent reading rates and reading comprehension scores from oral and silent stories for both English and Kiswahili. The table shows that both reading rates were approximately 30 words per minute for English and 23 to 24 for Kiswahili. As the average Kiswahili word is longer than the average English word at beginners’ reading level, it is unsurprising that we found English reading rates to be higher than Kiswahili rates (Piper 2010). Reading comprehension rates were approximately 5 per cent correct for English and between 10 per cent and 13 per cent for Kiswahili. While other research has investigated the meaning of these low achievement scores (Piper and Mugenda 2013), we note that these scores are very low for second-graders and indicative of widespread reading difficulties.

Table 2 Descriptive statistics for oral and silent reading rates and reading comprehension rates for oral and silent passages in English and Kiswahili

Research question 1: Do students perform better on reading rate and comprehension assessments in English and Kiswahili when they are tested orally or silently?

In order to answer this research question, we examined whether there were any substantive differences between the oral and silent reading rates in English and Kiswahili. As shown in Fig. 3, the difference was just one word in English and 1.2 words in Kiswahili (both favouring oral). Neither difference was statistically significant (p-values = .24 and .18, respectively). For the associated reading comprehension scores, silent reading comprehension rates were 2.0 percentage points higher for Kiswahili (p-value < .001) and 0.5 percentage points higher for English, although the latter difference was not statistically significant (p-value .28).

Fig. 3
figure 3

Reading rate (words per minute) and reading comprehension (percentage correct)

Research question 2: What is the relationship between oral and silent reading rates in English and Kiswahili?

Our research questions guided us not only to estimate whether there were differences between oral and silent reading rates but also to determine how strongly the rates for the oral and silent passages were correlated. As shown in Table 3, the English oral and silent reading rates had a moderate positive correlation of .41 (p-value < .001). The correlation for Kiswahili was slightly lower, at .33 (p-value < .001). We also found moderate correlations between oral and silent reading across languages. The English oral reading rate was more strongly correlated with the Kiswahili oral reading rate (.36, p-value < .001) than with the Kiswahili silent reading rate (.28, p-value < .001), as would be expected. Similarly, the English silent reading rate was more strongly correlated with the Kiswahili silent reading rate (.36, p-value < .001) than with the Kiswahili oral reading rate (.27, p-value < .001). This pattern suggests that oral and silent reading are distinct, mode-specific constructs built on skills which transfer across languages: a child who has difficulty decoding in English, for example, likely also has difficulty decoding in Kiswahili.

Table 3 Pearson correlations for oral and silent reading rate and comprehension for English and Kiswahili, with statistical significance

Research question 3: Are oral reading rates in English and Kiswahili better predictors of comprehension than silent reading rates?

In order to answer this third research question, we fit OLS regression models with comprehension scores as the outcome variable and reading rates as the predictor variable. The four models are presented in Table 4, for English and Kiswahili, and for oral and silent reading. Interpreting the regression coefficients, we found that for the English oral reading rate, a difference of 10 WPM was associated with a 4.4 percentage-point difference in comprehension. This is slightly larger than the silent reading rate’s difference of 3.8 percentage points. For the Kiswahili oral reading rate, a difference of 10 WPM was associated with a 7.3 percentage-point difference in comprehension, which is larger than the 4.6 percentage-point difference for the silent reading rate. As mentioned above, these passages and comprehension measures had been equated.
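Expressed as per-word coefficients (derived from the reported differences, since a 10-WPM difference corresponds to ten times the coefficient): β ≈ 0.44 percentage points per WPM for English oral versus β ≈ 0.38 for English silent, and β ≈ 0.73 for Kiswahili oral versus β ≈ 0.46 for Kiswahili silent.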

Table 4 Ordinary least squares regression coefficients for models estimating the relationship between reading rate and comprehension, comparing oral and silent reading rates for English and Kiswahili

In order to understand whether oral or silent reading rates could predict more of the variation in their respective comprehension measures, we compared the R² statistics for the two models in each language. Table 4 presents the models and the coefficients with associated standard errors and R² statistics. The table shows that the R² is higher for oral reading on comprehension in English (.266) compared with silent reading on comprehension (.162). Similarly, for Kiswahili, the R² is higher for the oral model (.197) than the silent model (.124). The coefficients and R² statistics together suggest that, in both languages, more of the variation was explained by the oral reading rate than the silent reading rate.

Discussion

The research questions which guided our analyses focused on seeking any systematic differences in reading rates between oral and silent reading passages and oral and silent reading comprehension scores, and examining whether oral or silent reading rates were more predictive of outcomes. As stated earlier, we believe these are unique analyses of the relative appropriateness of oral reading fluency measures being utilised widely in sub-Saharan Africa, and that they offer guidance for the managers and evaluators of the scores of literacy programmes currently under way in the region.

We found no statistically significant differences between oral and silent reading rates in either Kiswahili or English. There was no statistically significant difference between reading comprehension scores for oral and silent reading passages in English, and a difference of 2.0 percentage points for Kiswahili, equivalent to less than 0.10 of a standard deviation. In summary, there were very small differences in the substantive outcomes between oral and silent reading measures in Machakos and Bungoma counties, and moderate correlations between oral and silent reading rates within each language.

Despite the similarities in the oral and silent reading rates, our multiple regression results indicated that oral reading rate was more predictive of comprehension than silent reading rate, for both English and Kiswahili. In addition, oral reading rate scores explained more of the variation in reading comprehension scores. These findings suggest that oral reading rates are more useful for understanding pupils’ reading comprehension skills in Kenya. If the goal of an assessment is to measure reading comprehension, then the oral rate is likely the better measure.

Our findings have implications for testing and assessment in Kenya and in the region. Given the growth in literacy programmes as described in the introduction, and the wide range of literacy assessments available, our findings suggest that oral assessments – unlike silent ones – offer the benefit of allowing the calculation of fluency rates and the analysis of the errors made by pupils, without sacrificing precision. In fact, we found that the oral assessment was more strongly predictive of reading comprehension. It might be that pupils concentrate more deeply on meaning when they are aware that an assessor is watching them, although more research is necessary to better understand the mechanisms for these effects. It is interesting that the mean oral and silent reading rates were so similar, and further research is necessary to determine whether that similarity persists as pupils become more fluent readers in later grades, since the Western literature discussed above suggests that the relationships between oral and silent reading and comprehension vary by developmental stage. This is particularly likely to be true in contexts like Kenya, where students are learning to read in languages which are not their mother tongue.

Limitations

There were some limitations to the data set and our analysis. First, the literature shows that reading silently becomes an efficient way to maximise reading rate and comprehension in higher grades; the results might therefore differ if the study were replicated with older pupils. However, given the growth in early-literacy interventions, answering these research questions with pupils of this age fills an urgent gap in the literature. Second, the low overall reading outcomes of the pupils in the sample raise the question of what a similar analysis would show in a sample of higher-performing pupils, among whom the relationship between oral and silent fluency might differ. Third, while assessors were trained to note the last word read by the pupil during the administration of the assessment, marking the exact final word is likely much harder without oral cues. Assessor errors might bias the results, and further implementations of silent assessments should take care to ensure precision in the measurement of the last word read.

Conclusion

It is widely agreed that the focus of the Education for All movement has slowly moved from access to quality, with a focus on achievement of learning outcomes. To date, Kenya’s educational system has struggled to meet learning goals (Uwezo 2012a). However, key stakeholders, including the Ministry of Education, Science and Technology, as well as local and international non-governmental organisations, have acknowledged the central importance of ensuring that children’s time and family resources spent on schooling are rewarded with useful skills, such as basic literacy and numeracy.

Given the current focus on quality, particularly in the realm of early literacy, increased attention is needed to assessment methods and their appropriateness for the context of Kenya and elsewhere in sub-Saharan Africa. While the EGRA tools are flexible, free and based on the available literature from Western countries, they have too frequently been used without critical analysis of how well they measure children’s outcomes in diverse contexts. It is critical to ensure that the measures used to document countries’ progress towards quality improvement are valid, reliable and predictive of the key outcomes of interest. The analyses we have presented here provide evidence supporting the oral assessment of reading, as generally used in EGRA, DIBELS and other popular tools. We found oral reading rates to be more strongly related to reading comprehension than silent reading rates, and determined that oral assessments have a number of additional benefits in terms of analysing reader errors. In this data set from Kenya, we saw no evidence supporting the use of silent reading rates in place of oral reading rates.