Political debates over the “correct” and the “best” method to teach reading during the early stages of learning to read in primary schools and infant classrooms have periodically surfaced since the beginning of compulsory education in Western countries. These debates and “reading wars” have often occurred in conjunction with serious public concerns over reading standards. They reflect the importance placed on learning to read by parents, teachers, employers, and politicians. The public controversies over the teaching of reading have, in turn, fueled public and professional discussions over which specific methods and materials to use with beginning readers and children who have reading difficulties.

These debates have tended to lead to a focus on reading proficiency, “reading standards”, and standardized approaches to teaching reading, reading assessment, and engaging with literacy. The universal acceptance of the importance of learning to read has also given rise to vested interests in specific methods, reading programmes, and early literacy assessments amongst professional, business, commercial, and parental lobbying groups.

These public debates have been growing more intense since the 1950s. In recent decades, the power to make decisions over the teaching of early reading has become progressively centralized, with many countries implementing more prescriptive curricula and national literacy strategies. This centralization and increased policy-based governance of the teaching of reading has, in turn, heightened political control and infringed on the classroom teacher’s professional jurisdiction in this area. Moreover, some observers have expressed concerns that researchers’ voices, as well, are being ignored as decisions regarding how to teach reading have grown more politicized and centralized (see for example, Goldenberg 2000; Goodman 2014; Wyse and Opfer 2010).

Early developments

The current politics surrounding the teaching of reading developed from the intense debate over phonics versus whole language/real books that arose during the twentieth century. The emergence of binary oppositional models of how to teach early reading, and the political stances associated with them, can be linked to the influence of psychology on reading research and pedagogy from 1900 to 1935—ideas that are still influential (Pearson 2000).

Prior to the 1850s, educators used the drill-and-practice approach of the alphabetic method to teach reading. This method had children identify and name the letters of the alphabet, in both uppercase and lowercase and in alphabetical order, from available texts such as a Bible or reading primer. In the mid-nineteenth century, those advocating a phonics-based approach began to challenge this method; at that time, phonics rapidly gained popularity in America and England.

Unlike the alphabetic method, this early phonics method involved learning through recognizing the letters in the composition of the word rather than through focusing on the named letters. This “new phonics”–based approach included the key aspects of more recent phonics-based approaches in that it was highly systematic and used a range of prereading perceptual activities such as introducing the letters and fusing separate sounds into words. Later commentators, writing in the 1960s, would describe this initial phonic system as an “elaborate synthetic system”, and they noted that it had become firmly established by the 1890s (Cove 2006).

The emergence of the “look and say” teaching technique, in turn, challenged the initial phonics-based approach. The look-and-say method focused on teaching children to recognize entire words or sentences and emphasized the use of flashcards with pictures. It was promoted in the early 1900s, and it was dominant by the 1930s (Chall 1967, p. 161). In the 1940s it became established in many countries, including the United States, United Kingdom, and Commonwealth countries (for example, Australia, New Zealand, Canada, and South Africa). During the 1950s, leading reading researchers advocated look-and-say, including William Gray in the US, who argued that teachers should be encouraged to teach children to read whole words and to avoid meaningless phonics drills. While they had been challenged and, to some extent, overtaken by the look-and-say method, phonics approaches retained enough credibility during the 1930s to be used along with look-and-say until the 1960s. The eight principles that Jeanne Chall (1967) identified as the prevailing views of reading in the 1960s included a mix of look-and-say and instruction in phonics.

From the very early stages of the development of reading methods and programmes, associated ideological viewpoints, commercial interests, and political allegiances have vied over them. For example, the establishment of the McGuffey basal readers in the United States in the 1830s showed early on the potential of commercial publishing of texts tied to specific methods of teaching reading. By the 1860s, schools widely used the McGuffey readers, which were based on both the phonetic and the alphabetic methods. From 1927 to 1973, Scott, Foresman and Company sold the Dick and Jane series, which used the look-and-say method; by the 1950s and 1960s, this series comprised the main early-reading texts associated with look-and-say and was in use (as Janet and John readers) in the United Kingdom and Commonwealth countries.

Luke (1991) noted that earlier textbooks and reading practices were firmly linked to religious ideology and also served nation-building purposes. These basal readers embodied “technocratic approaches” to the curricula that supported the studying and teaching of reading as a neutral psychosocial phenomenon, divorced from surrounding cultural practices. Luke argued that university-based psychologists designed the Dick and Jane series as a sequenced instructional programme that controlled the teacher’s behaviour as well as the students’ learning. It did this through a focus on children’s behavioural-skill acquisition and an emphasis on detailed teacher guidebooks that standardized teaching and communication.

Pearson (2000) has noted how political and professional agendas surround the construction of reading as a “performance” to be probed by scientific examination and systematic testing of individual silent reading. He argues, like Luke, that such a construction fit the demands for efficiency and scientific objectivity, and supported a psychometrically based system that, in turn, suited the emerging scientism of the period.

“Reading wars”

Current reading debates and the politics surrounding reading methods have tended to become polarized over phonics versus whole-language methods of teaching reading. This particularly divisive “great debate” had its origins in the US during the “reading wars” of the 1950s. Flesch’s 1955 book, Why Johnny Can’t Read, initiated the debate. Flesch maintained that reading standards had declined, and that this could be attributed to using the look-and-say method to teach reading. He advocated a return to a phonics approach as the only method to teach beginning reading. Flesch (1955) argued that existing reading research, especially the studies comparing sight and phonic methods, supported his view.

The 1950s’ politics surrounding the debate over how to teach reading was, therefore, linked both to the public’s concern about falling literacy standards and to assertions that schools should implement “one best method” to teach reading. The claim that it was possible to establish the most effective and efficient method to teach literacy—and in a scientifically valid way—was reinforced by the growing influence of experimental behavioural psychology and psychometrics within education and reading research during the late 1950s and 1960s. During this period, the central debate concerned whether children learned to read better via a method that stressed meaning or via one that stressed cracking the alphabetic code. Chall (1967), who reviewed relevant experimental studies conducted during the twentieth century, supported code and phonics teaching in the early grades rather than whole-language teaching for meaning.

After the 1950s, there was a gradual shift away from the conventional wisdom implicit in the look-and-say method toward the incorporation of practices that recognized the need for broader meaning-making when learning to read (Monaghan et al. 2002). This shift, in turn, affected teaching practices and the conceptualization of reading programmes. For example, word-attack skills became linked to contextual, configurative, structural, and “dictionary” clues as well as to “phonic” clues. It also brought about changes in traditional classroom-based reading practices, as more conventional classroom practices such as round-robin oral reading sessions used these clues rather than relying solely on phonics-based instruction (Monaghan et al. 2002, p. 228).

The two principal models of the reading process that developed by the 1980s and 1990s reflected both the polarized perspectives of reading that emerged from the great debate of the 1950s and the challenges from psycholinguistics-based approaches of the 1970s. Researchers and practitioners refer to these two later models as “bottom-up” and “top-down”. In the bottom-up model, fluent readers look at the arrangements of the letters in the words before they consider the meaning of the print. In contrast, the top-down model, which encompasses programmes such as Reading Recovery, views learning to read as a concept-driven activity; it assumes that confident readers initially predict the meaning of text before examining the available syntactic, semantic, and graphic cues (Reid 2009, pp. 106–109). Both of these models have had their advocates amongst educational psychologists, primary teachers, and professional support staff in different countries at different times, from the 1960s to the end of the 1990s (see for example, Openshaw and Cullen 2001; Stannard and Huxford 2007).

During the 1970s, the emergence of cognitive science as an interdisciplinary field in the United States gave rise to the concept of “reading-process models”. The development of the top-down, whole-language–based model for teaching early reading marked a radical shift from a behavioural perspective to a metacognitive one, which changed the way in which educators observed readers. This shift began after evidence emerged—through the work of Noam Chomsky and those in psycholinguistics-related fields—about how one learns language (Monaghan et al. 2002, p. 229). In the United States, Australasia, and Canada, due to the psycholinguistic-based work of Ken Goodman (1986), the top-down model of reading became linked to the term “whole language”. Goodman advocated a comprehensive theory of the reading process derived from his studies of meaning-making, which used miscue analysis to examine readers’ unexpected responses during oral reading. The increasing dominance of the top-down approach challenged bottom-up strategies that drew upon phonics and direct instruction. Initial support for the whole-language approach came from Canadian teachers who rejected the emphasis upon tests and the fragmented nature of contemporary textbook-based reading programmes (Goodman 2014).

From its emergence in the late 1960s and 1970s, the whole-language approach attracted the attention of policymakers and politicians, as well as researchers and educational professionals. Educational historians have noted that in the United States the space race with the Soviet Union after the 1957 launch of Sputnik motivated political interest during the 1960s in finding the best method of teaching reading. It also resulted in increased investment in research-related studies into the teaching of early reading in the United States (Monaghan et al. 2002, p. 229). In the 1970s, political opposition to the whole-language movement and the Goodmans’ work came from supporters of phonics programmes and from those who equated it with the earlier look-and-say approach. Direct political opposition took the form of attempts to legislate the way reading was taught (Goodman 1986).

During the 1970s, political disagreement surrounding the question of “one best way” to teach reading further escalated. Additional challenges to accepted practice came from those who urged a psycholinguistics perspective; they saw the process of reading as involving a “range of meanings produced at the interface of person and text, and the linguistic strategies and the cultural knowledge used to ‘cue’ into the meanings embedded in the text” (Rassool 2009, p. 9). This view formed the foundation for the whole-language and real-book approaches espoused by such literacy educators as Kenneth Goodman and Frank Smith in the 1970s and 1980s (see, for example, Smith 1971). Goodman’s (1986) top-down approach to reading became known as the “psycholinguistic guessing game”. Its advocates argued that it resulted in good readers who did not need to rely on graphic clues to process every feature of the words and letters in a reading text. The implicit assumption was that children would learn to read through being read to, becoming immersed in a literacy-rich environment, and engaging in reading.

By the late 1980s and early 1990s, the whole-language movement was firmly established in educational practice internationally. Not all teachers accepted whole-language instruction, however, even in countries such as Australia, where it had been popular since the 1960s (Snyder 2008, p. 51).

Marie Clay, a New Zealander, was another key figure; her psycholinguistics-influenced top-down model resulted in the Reading Recovery programme, which was used in New Zealand primary schools during the 1980s and 1990s. Clay established running records as a simpler form of the miscue analysis developed by the Goodmans. In New Zealand, Clay’s approach became particularly influential and led to an emphasis on observing children’s reading behaviours rather than strategically teaching sounds. Clay’s views, her Reading Recovery programme, and the work of other whole-language advocates—such as the Goodmans, Smith, and fellow New Zealander Donald Holdaway—were supported at a national level via in-service courses and local reading association workshops (Openshaw and Cullen 2001). The establishment of Reading Recovery as an international programme in the late 1980s and 1990s heightened political debates on whole language versus phonics, and on the efficacy and cost of such programmes (see, for example, Soler and Openshaw 2007).

The 1980s marked an acceptance of whole-language–based programmes in many countries. Commentators have argued that during the 1980s and into the early 1990s the establishment of reading programmes took place out of the public gaze in England and New Zealand, among other countries.

They contend that, throughout this period, important discussions about literacy curricula and literacy teaching tended to take place in parliamentary committees with nominated representatives from professional organizations, or between professional organizations and government departments. For example, in New Zealand, following a large-scale study with positive outcomes for Reading Recovery, senior officials had the direct support of the director general of the Education Department to provide financial backing for that method’s expansion and for extensive training for Reading Recovery teachers (Openshaw 2002, p. 86). Colin Harrison (2004, p. 1) notes that in England during that time the literacy curriculum was determined in government committees and between such professional groups as the National Association for the Teaching of English and the United Kingdom Reading Association. In subsequent decades, politician-driven initiatives would increasingly determine early-reading and -literacy curricula in the public sphere in both of those countries (Soler and Openshaw 2007).

During the 1990s, the whole-language approach faced renewed pressures. Challenges initially emerged from the work of cognitive psychologists who, through investigating eye movement, explored the processes underpinning fluent reading. This approach enabled researchers to determine the extent to which context might help or hinder word recognition and whether children skipped letters and words when reading. By the late 1990s, their evidence supported the combined use of top-down and bottom-up approaches. For example, Stanovich (1980) posited that it is problematic to use solely one approach or the other, because readers draw on both processes when reading. He noted that these processes are linked, and a weakness in one area can be supported by the reader’s strengths in other areas (he called this ability “interactive compensatory”). Comprehension research in the early 1990s reinforced the idea of multiple strategies; such research identified five strategies that expert readers use to foster metacognitive awareness and comprehension (Dole et al. 1991).

In the United States, pressure for a bottom-up model of reading reemerged in the 1990s: publishers wanted to reinvigorate the market for young readers through making explicit, systematic, and sequential phonics part of nearly every reading programme (Moore 2002, p. 47). Mesmer and Griffith (2005, p. 368) note that the terms “systematic phonics” and “explicit, systematic phonics” emerged in the early 1990s, when Adams (1990) described her recommendations for phonics instruction as “explicit, systematic”. Phonics instruction highlighted the following common features:

(a) curriculum with a specified, sequential set of phonics elements; (b) instruction that is direct, precise, and unambiguous; and (c) practice using phonics to read words. (Mesmer and Griffith 2005, p. 369)

In England, pressure for a bottom-up model came in the late 1990s from the introduction of the National Literacy Strategy (NLS). The NLS, implemented at a national level, included teaching phonological awareness to five-year-olds (and up) during the “literacy hour”. As part of the English NLS, teachers received intensive training and training materials that focused on teachers’ knowledge of phonics (Lewis and Ellis 2006, p. 2).

Work in progress

The debates over whole language versus phonics and the adherence to either top-down or bottom-up methods—or the incorporation of both into interactive compensatory or “mixed methods” approaches—have played out in different ways in different countries. In countries such as Scotland, where the more centrally controlled NLSs were not introduced, the political debate over these issues has not been so prevalent (Lewis and Ellis 2006). However, today, the bottom-up model has been gaining ascendancy over all other models in the United States, England, and Australia. Conservative politicians in these countries have endorsed bottom-up plans—in the form of systematic synthetic phonics—arguing that the evidence is “overwhelming” that such an approach is the most effective.

Academic researchers and professional groups such as the United Kingdom Literacy Association (UKLA) say that this claim is not supported by empirical evidence (see, for example, UKLA 2000; Wyse and Goswami 2008). It has also been difficult to endorse either whole language or phonics based on empirical evidence. For example, while researchers have gathered a significant amount of data on Reading Recovery over the past 20 years, experimental researchers often see this evidence as relatively weak and ambiguous. From this viewpoint, there are a limited number of true experimental studies and a “lack of independence of those gathering or analyzing data” (Wheldall, Center, and Freeman 1992, cited in Reynolds and Wheldall 2007, p. 207).

Reynolds and Wheldall (2007) have also argued that because most research on Reading Recovery is not experimental, the “highest proof” of efficacy does not exist for the programme, and those studies do not have the most effective design for showing causality and preventing problems with internal validity. In evaluating the evidence, the US Department of Education review noted that only 4 out of 105 studies on Reading Recovery were randomized controlled experiments that met its “evidence standards and eligibility screens” (WWC 2008). However, this could equally apply to synthetic phonics programmes, which have undergone considerably fewer “highest proof studies” than Reading Recovery. Further, the literature indicates that there are problems with the design and internal validity of the Clackmannanshire study, which investigated eight primary schools in Clackmannanshire, Scotland, to compare the effectiveness of a synthetic-phonics reading programme with that of an analytic-phonics programme (see, for example, Ellis 2007).

The perceived failure of Reading Recovery evaluations to meet the standards demanded by “true experimental studies” highlights the problems with contemporary discourses seeking to “scientifically” evaluate early-reading programmes to find “one best method” to implement nationwide. The current discussions draw upon “scientific” positivistic and psychological discourses that emphasize the identification and measurement of ostensibly culturally neutral cognitive abilities related to reading. Such discourses, in turn, impact one’s ability to identify and assess evidence from early-reading programmes because they do not acknowledge cultural and other social processes associated with reading. In short, there is no “one best method” for teaching reading; and the debates and assessments seeking such a method do not recognize the teaching of early reading as a cultural practice.

From a cultural history perspective, these debates represent an ongoing struggle over the social and cultural practices and interpretations of literacy and literacy practices. Moreover, one could argue that the evolution of early reading as a concept and field of knowledge—and bottom-up synthetic models, in particular—is inextricably linked to the autonomous model of literacy (Street 1993), which emphasizes the text-decoding skills that develop in individual minds.

Problems and difficulties

The ongoing “wars” over the teaching of reading have embedded belief in a binary opposition between top-down programmes (such as Reading Recovery) and bottom-up programmes (such as synthetic phonics)—and the evaluations of their effectiveness—even more deeply within a cognitive- and science-based paradigm. Further, these debates have sustained the belief that it is possible to scientifically validate a particular “essential” programme or “right” method of teaching reading. The result has been a renewed emphasis on national implementation of the ideal programme to standardize the teaching of early reading. As this sense of an overriding need for a “one best” programme and pedagogical approach develops, the gulf widens between such standardized approaches and our beliefs regarding the role education should play in shaping the child’s identity and participation as a literate individual within society.

Why, and how, synthetic phonics has come to dominate the teaching of early literacy in these countries, and the extent to which it is currently funded and supported, are crucial questions. The implementation of policies forged in the debate over teaching strategies shapes how professionals, parents, and students think about and engage with early literacy.

The struggles for power between different discourses and associated lobbying groups—with their conflicting educational and social visions—also have implications for how early-reading programmes have been legislated, funded, socially recognized, and carried out. For example, in England, the political and commercial rhetoric associated with increased commercial synthetic-phonics resources is linked to decreased government spending on—and an increasing privatization of—literacy resources over the last decade. Strong neoconservative views on education have driven this rhetoric and its results. Some observers have directly linked the rhetoric over the past three decades concerning “efficacy”, “performativity”, and “market-driven” economies to the influence of neoliberalism on literacy-related educational policies (see for example, Comber et al. 1998).

We also, however, need to look at how increasingly dominant perceptions of these two teaching approaches as antagonistic (and the ensuing dominance of synthetic phonics) can be linked to how a neoliberal ethos that emphasizes technique and functional literacy has taken precedence over social and communicative views of literacy. The complex interaction of different agendas concerning reading has given rise to “commonsense” assumptions about the links between improving reading, on the one hand, and literacy that serves the needs of the economy, on the other. The politics encompassing the teaching of reading have, therefore, endorsed the rhetoric of efficacy, performativity, and a market-driven need to improve literacy.

From this perspective, the relationships in early-literacy education that have formed and reformed over the past two decades have also challenged previous understandings and ethics in the field—for example, the hitherto-accepted assumption that professional judgment should be prioritized over programmed instruction and commercial interests. Drawing on Nikolas Rose’s explanation of Foucault’s notion of governmentality (Rose 1999, pp. 20–28), we can view these new relationships—of ethics, power, and the redefinition of professional culture in early-literacy education—as governing reading teachers through a “code of conduct” related to particular reading programmes, techniques, and strategies for particular neoconservative and neoliberal objectives.

Future directions

To move away from the antagonistic and entrenched dualism embedded in the debates over reading, we may need to ask different questions—such as, what underlies and drives the phonics-versus-whole-language dualism? What underpins the associated funding battles over the teaching of early reading?

And, to answer such questions, we must examine the ascendancy of neoliberalism during the 1980s and 1990s, whose emphasis on “new public management” introduced a “new mode of regulation and form of governmentality” because it “replaced fundamentally different premises at the level of political and economic theory, as well as at the level of philosophical assumption” (Olssen and Peters 2005, p. 314). Drawing upon a cultural-history perspective and a Foucauldian analysis of neoliberalism, we may see neoliberalism’s rise during this period as linked to its emphasis on the use of markets as a new technology to control and enhance performance in the public sector (Olssen and Peters 2005).

Neoliberalism deprioritizes locally derived professional knowledge gained at a particular point in time within specific individual interactions and educational settings. This is because, under neoliberalism, governmentality is achieved through managerialism and top-down management chains, which undercut collegiality, professional knowledge, and autonomy (Olssen and Peters 2005, p. 234). Literacy policy initiatives under such regimes will, therefore, tend to establish centralized, structured reading programmes. They will also favour certain programmes over others depending on their underlying notions of professional autonomy. Given this, neoliberal policies will naturally advantage phonics and, in particular, synthetic-phonics–based programmes over teacher-led whole-language programmes.

In such a political environment, phonics programmes are able to dominate over whole-language approaches because they are perceived to have a “traditional” emphasis on sequentially presented, logically and rationally organized skills rather than a “progressive” child-centred, experiential, and interactive engagement with text. Thus, synthetic-phonics approaches to early literacy fit a neoconservative traditional view of knowledge and the curriculum. They also fulfill a neoliberal agenda for rationality within a centrally controlled model of top-down delivery. Whole-language programmes such as Reading Recovery will be disadvantaged, despite their apparently centrally structured nature and format, because they are founded on an epistemology and a view of reading development that prioritizes educator control and autonomy to make decisions based on individual circumstances, literacy problems, and understandings that evolve during individualized, one-on-one instruction.

The current emphasis on depoliticization, individualism, and financialization is central to the way the reading wars have been enacted and situated in public debates and professional discourses. We can see the impact of these foundational ideals—embodied within neoliberalism—in the increasing deprofessionalization and commodification of early-literacy teaching and programmes. Further, the international dominance of neoliberal literacy policies and curricula could accentuate what Patti Lather (2012) calls the “quantitative reductionism” that follows the “metric mania” that neoliberalism promotes:

Neoliberalism loves quantitative reductionism. In the realm of public policy a kind of “metric mania” disallows what cannot easily be counted … in a way that profoundly shapes what counts as science. We have only to look at how federal efforts toward “scientific research in education” … have produced an era crushed by demands for more “evidence based” research under some “gold standard” where “evidence” is defined very narrowly indeed. (p. 1023)

A continuing prevalence of quantitative reductionism, in conjunction with strong neoliberal policies, would provide even greater support for early-reading programmes with a reductive, measurable “scientific approach” to the world. Such discourse would move the reading debate further toward an ever more extreme view of the bottom-up model of reading. That is, it would lead to an even stronger emphasis on literacy approaches that focus on skills-based and technique-orientated methods; it would also increase the focus on individual children’s internal cognitive functions. This view stands in stark contrast to one that sees literacy as a social practice rooted in cultural, socioeconomic differences, as reconceptualized with the emergence of New Literacy Studies, critical literacy, and socio-culturally related views of literacy and reading practices in recent decades.