1 Introduction

Particularly since the British vote to leave the European Union (otherwise known as Brexit) and the election of Donald Trump in the USA, there has been much talk of our living in a ‘post-truth’ society, where ‘alternative truths’ compete with each other and where ‘experts’ are often derided and ‘common sense’ celebrated even where it seems to be contradicted by ‘evidence’ (d’Ancona 2017). Calcutt (2016) has suggested that the origins of ‘post-truth’ lay with academics espousing ‘post-modernism’ and other ‘left-leaning, self-confessed liberals’ who sought freedom from state-sponsored truth and started to discredit ‘truth’ as one of the ‘grand narratives’ that needed to be replaced with ‘truths’—‘always plural, frequently personalised, inevitably relativised’. Although both the political and academic versions of ‘post-truth’ may be criticised for undermining any sense of certainty about how we should proceed in educational policy and practice, we suggest in this article that exaggerated claims about the possibility of establishing consensual answers on the basis of research evidence are equally suspect and to be resisted.

The much-hyped ‘evidence-informed’ approach to educational research conjures up a brave new world in which robust research can give us answers to enduring social and educational problems—in other words, clear guidance on ‘what works’. It is often implied that this ‘new empiricism’ will take us beyond the ideological use of research that has hampered collaboration between researchers and policy makers in the past. Thus, it is argued, we can solve educational problems if only we can get the evidence right, and it is the role of education researchers to come up with that evidence. This has been reflected in rhetoric about ‘evidence-based’ and ‘evidence-informed’ policy and practice, and about the importance of research having ‘impact’ or being ‘impactful’.

What counts as evidence and how it is used has always been contested. Indeed, history suggests agreement between policy makers and researchers (neither of which category is homogeneous, of course) has rarely been easy to achieve. As Gene Glass commented at the time, the Reagan administration’s use of evidence in a review of research entitled What Works (US Department of Education 1986) was hardly unusual in its attempt to legitimate an ideological position through an appeal to educational science:

The selection of research to legitimize political views is an activity engaged in by governments at every point on the political compass…What works does not synthesize research, it invokes it in a modern ritual seeking legitimation of the Reagan administration’s policies…and, lest one forget, previous administrations have done the same (Glass 1987: 9).

Subsequent US administrations, while also embracing the rhetoric of ‘what works’, have been equally cavalier in their use of evidence. For example, after reviewing a landmark set of Blueprint reports from the Obama administration (Obama 2010), Mathis and Welner concluded that:

The overall quality of the [research] summaries is far below what is required for a national policy discussion of critical issues. Each of the summaries was found to give overly simplified, biased, and too-brief explanations of complex issues (Mathis and Welner 2010: 3).

In criticising the ideological use of evidence to support favoured policies, researchers are sometimes in danger of seeming to embrace a ‘hyper-rationalist-technicist’ approach to educational research in which researchers provide evidence to identify policies that, if implemented, will bring about a significant and enduring improvement in teaching and learning (Gewirtz 2003). Some researchers, of course, do embrace this position, as reflected in the current enthusiasm for developing a medical model of educational research in which experimental methods, and particularly randomised controlled trials (RCTs) (Goldacre 2013), together with systematic reviews of evidence (Gough et al. 2012), are used to establish and disseminate evidence about ‘what works’.

Yet cautions about such a position are long-standing. Back in 1974, the then President of the British Educational Research Association (BERA), John Nisbet, claimed that we needed to move away from ‘the naïve idea that problems are solved by educational research’. Rather, he characterised the relationship between research and policy as ‘indirect’ and more about ‘sensitising’ policy makers to problems than solving them. It may be that his warning that educational research may not provide ‘final answers to questions, or objective evidence to settle controversies’, and his support for a ‘spectrum’ of types of research, need to be heeded afresh.

Following Nisbet, we would contend that research advocates of what we will term ‘the medical model’ for shorthand are setting themselves up for disappointment, because politics in a democracy is of necessity driven by all sorts of considerations amongst which the findings of research are often rather low down the list. Other, often more significant, influences include the vagaries of the moment, the demands of the electoral cycle, and the values and preferences of policy makers and their advisors and constituents. Equally, we would also highlight that researchers are by no means of one mind on the nature of evidence and how it should be viewed and treated. Some want their research to go beyond ‘what works’ and explore why ‘what works’ sometimes does not work, as well as asking ‘what works where with whom’ and why. Furthermore, research can also have an important role in deconstructing the assumptions underlying all such questions or in helping people to think about whether what policy makers are trying to do is worthwhile and what constitutes socially just schooling. The ‘what works’ agenda has tended to filter out these more structural and critical perspectives on educational policy and practice and broader understandings of how it develops.

Thus, achieving consensus on what counts as worthwhile educational research and on the right relationship to policy is unlikely to be an attainable goal even if a technicist utopia was desirable. This is not to suggest that there is no role for a ‘what works’ approach to education research, but the notion, implied by some of its advocates, that this is the only type of research that should be encouraged or funded certainly needs to be resisted. It is not necessary to adopt the sort of relativism that is often associated with ‘post-structuralism’ and ‘post-modernism’ to favour a more pluralistic approach to education research, although we would regard such approaches as themselves part of the spectrum that should be supported.

To illustrate our concerns, we present a brief account of ‘evidence-based’ policy in the UK over the past 20 years, where the rhetoric of ‘what works’ was taken up enthusiastically by the incoming New Labour government of Tony Blair in 1997 and has been adopted in various guises by governments ever since. As well as showing the limitations of such an approach to the relationship between research and policy, we also explore whether recent enthusiasm for evidence-informed practice in education is any more viable. Finally, we consider how the ‘new empiricism’ that informs such work will fare in the so-called ‘post-truth’ society.

2 The Limitations of ‘Evidence-Informed’ Policy in English Education

Early in Tony Blair’s government, David Blunkett, Secretary of State for Education and Employment from 1997 to 2001, championed the cause of evidence-based policy making and looked critically at the research–policy relationship in a lecture entitled ‘Influence or irrelevance?’ (Blunkett 2000). While he acknowledged that there were faults on both ‘sides’, he nevertheless threw down the gauntlet to the social science community as a whole to contribute more directly and ‘productively’ to policy making. Some academics read his lecture as a demand that their research should support government policy (e.g. Hodgkinson 2000).

A consultation paper produced by the Blair government’s National Educational Research Forum (NERF 2000) certainly seemed to advocate a particularly limited and instrumental view of research. The view of one education researcher who saw the draft was that it treated research as ‘about providing accounts of what works for unselfconscious classroom drones to implement’ and that it portended ‘an absolute standardisation of research purposes, procedures, reporting and dissemination’ (Ball 2001: 266–267). Similar criticisms were levelled at the emphasis on systematic reviewing (e.g. MacLure 2005). The NERF consultation exercise actually led to the acknowledgement of the need for a pluralist view of research, but it also continued to argue for a means of prioritising resources based on research making a ‘worthwhile contribution’ to education and ‘maximising impact’ (NERF 2001).

David Blunkett himself recognised the need for government to give more serious consideration to ‘difficult’ findings. But how realistic is this in practice? Even if research were of the highest quality and provided robust evidence on a given issue, would governments consistently seek it out and make good use of it so that it was genuinely informing—above all else—their decisions on policy? Various examples from the New Labour administration suggest not, and few would claim otherwise unequivocally. In the process, they illustrate how in politics other factors will often take precedence over what the research evidence says, even when it seems clear (Wilkes 2014).

One example is the use that New Labour made of evidence on class size during the 1997 general election. Evidence on the effects of class size is notoriously contentious and difficult to interpret, and the controversies continue to this day (e.g. Blatchford et al. 2004; Blatchford 2015). Even so, New Labour’s commitment to reduce class sizes traded quite consciously on research findings accepted by most researchers and most teachers—evidence that if smaller classes have an unambiguously positive impact anywhere it is most marked in the very early years of schooling and in the most socially disadvantaged areas. So, the manifesto commitment to cut class sizes in the early years of schooling to below 30 using monies that had formerly been used to send academically able children to private schools looked like a socially progressive policy based on robust research findings. As a policy, however, it was probably driven as much by the findings of election opinion polling as those of educational research: most classes over 30 were in marginal suburban constituencies, not in inner-city areas where class sizes were already below that level. Some even more robust findings on the beneficial effects of cutting class sizes to 15 in disadvantaged areas did not influence the policy at all, presumably because this would have been extremely expensive, but possibly also because additional votes in these inner-city constituencies would not swing the election (Whitty 2002).

The battle to gain office is one thing, and perhaps research evidence being used in this way under those circumstances is a case apart. Once in power, though, New Labour continued to make quite selective use of research evidence, and it was not always especially concerned about the quality of a research study if it served its policy purposes. One example was the way in which research was used in the English White Paper of 2001, Schools achieving success (DfES 2001). A central plank of the White Paper was to encourage secondary schools to specialise in certain areas of the curriculum to boost achievement. In making its case on ‘specialist schools’, the White Paper made much of research carried out for the then Technology Colleges Trust, which claimed to show that these schools added more value to their pupils’ achievements than other schools. The problem was that the research had not been submitted to peer review and indeed was subsequently subject to public criticism by education statisticians. As one of those statisticians commented:

It is not clear whether the authors of the White Paper sought views on the adequacy of the research before using it, but…there are those within the DfES itself who would have cautioned against taking the results of the study at face value. Given that the research supported what was already Government policy, it would seem that this is what drove the decision to use it as ‘evidence’ (Goldstein 2001).

Another example was provided by the academies programme, where a political commitment to autonomous schools as the solution to academic underachievement in disadvantaged areas meant that the New Labour government again strayed from its avowed commitment to evidence-based policy. After largely disregarding a critical report it had itself commissioned from PricewaterhouseCoopers (DfES 2005), it went on to ignore critical questions raised by academics about the way in which it had used performance data to claim that these schools were, in general, performing better for equivalent pupils than the schools they had replaced—thereby justifying continuing with the policy. Gorard (2005) commented that ‘to expand the [academies] programme on the basis of what has happened so far is so removed from the evidence-based policy making that is a mantra of government today that it is scarcely worth pointing out’ (p. 376).

The House of Commons Education and Skills Select Committee (2005), whose role it was to hold the government to account on education policy and spending, similarly used both the specialist school and academies programmes to argue that, despite the government’s proclaimed attachment to evidence-based policy, expensive schemes were being rolled out before having been adequately tested and evaluated compared to other less expensive alternatives (p. 17).

In a 2005 presidential address to BERA (Whitty 2006), we expressed some scepticism about the Blair government’s policy agenda and highlighted the dangers of letting it drive the future direction of educational research. Nevertheless, government enthusiasm for the rhetoric of evidence-informed policy in education continued throughout the New Labour era and on into the next government, the Conservative-Liberal Democrat Coalition that was in power 2010–2015, and the Conservative government elected in 2015.

In 2016, we published Research and Policy in Education: Evidence, ideology and impact (Whitty et al. 2016), which opened with a chapter entitled ‘Education(al) research and education policy in an imperfect world’ that examined the situation some 10 years after our BERA presidential address. We concluded that, during that decade, the rhetoric had, if anything, grown stronger, as advocates of evidence-informed policy encouraged educational researchers to adopt the medical model of RCTs and systematic reviews.

The over-claiming we had identified in terms of the potential for a closer relationship between policy and evidence—and the push for particular kinds of research to that end—remained both unrealistic and undesirable in our view. We argued that many of the impediments to a close and unmediated relationship between education research evidence and policy debates in education, let alone policy decisions, remained, and that there was therefore a need to guard against a narrowing of the scope of educational research in accordance with this model.

This seems to be even more important in the post-Brexit context where Theresa May, who replaced David Cameron as Prime Minister after the vote to leave the European Union, announced the creation of some new academically selective grammar schools. In this case, even the highly selective, if not downright misleading, use of research evidence seemed much less important in policy making than the personal experiences and preferences of the Prime Minister and the need to satisfy some of her backbenchers. As the BBC Education Editor put it at the time, ‘the symbolic status of grammars as a chance to better yourself has trumped the expert consensus’ about the weight of evidence, so that the debate about what (the extensive and robust) research told us about grammar schools had become ‘almost irrelevant’ (Jeffreys 2016). In the end, the policy itself, whatever the evidence for or against it, became irrelevant: after losing her majority at the 2017 general election, Theresa May concluded she could not get the necessary legislation through parliament.

So, while we are supportive in general terms of the principle that evidence of various sorts should be a part of policy making, our concern here has been to draw attention to the risk that unrealistic expectations of what this could or should look like in practice would skew research funding and commissioning in unhelpful ways. In particular, we see a risk that the relatively narrow range of methodologies associated with the ‘evidence-informed’ and ‘what works’ bandwagons—RCTs and systematic reviews—could come to be favoured disproportionately, and that this would leave funding for other types of research in education as ‘the remainder of a growing series of subtractions’, to use Dijkgraaf’s turn of phrase in the 2017 pamphlet The Usefulness of Useless Knowledge (Flexner and Dijkgraaf 2017). This, we suggest, would be problematic in and of itself, narrowing the kinds of research being conducted. It would also, in turn, provide a less rich resource with which the policy community itself might engage. Take the example of the sociology of education (although the point might equally apply to the philosophy of education), often now regarded as relevant only to the critique of policy rather than to the business of policy making: we would do well to remember the warning of Sir Fred Clarke in the 1940s that ‘educational theory and educational policy that take no account of [sociological insights] will be not only blind but positively harmful’ (quoted in Whitty 1997: 4).

3 The Shift to ‘Evidence-Informed Practice’

Since Research and Policy in Education was published in 2016, there has been a growing shift away from that emphasis on influencing policy towards influencing the professions instead and bringing an evidence-informed approach to professional practice. This was exemplified recently in the UK by the ‘Evidence Declaration for Professional Bodies’ initiative in November 2017 (AfUE 2017).

Advocates of an evidence-informed approach have themselves conceded that this shift is being driven at least in part by the difficulties in joining together policy and research communities. For some, this reflects the difficulty in finding positive examples of evidence-informed policy and the many examples of poor use of evidence by policy makers (Halpern 2016); others have concluded that the grand claims of evidence-informed policy need to be replaced by more modest ambitions, at least for now, partly because researchers are often ‘more interested in indulging their academic interests than providing useful and practical results’ (Turner 2015). More significantly, Jonathan Breckon, Head of the Alliance for Useful Evidence, has recognised that ‘while politicians shouldn’t be ignorant of the evidence, they have the right to ignore it’, that ‘technocracy should not trump democracy’, that ‘it is right and proper that politicians “use their gut”’ and even that ‘other ways to make decisions are all really valuable’ (Breckon 2016). Some in political circles are being more vocal about the limits to evidence-informed policy—one noting that being known as an ‘evidence-based politician’ is regarded as an insult, suggesting as it does a lack of interest in the politics of governing (HEPI conference, April 2017).

Nevertheless, in part, the shift of focus to evidence-informed practice is a project to embed a more evidence-informed approach to policy by ‘getting the professions on board’ and building a wider coalition. In his 2016 lecture at the Institute of Education in London, David Halpern, chief executive of the Behavioural Insights Team and What Works National Advisor, stated that his goal was for a ‘golden age of empiricism’, so that the next generation asks ‘why on earth wouldn’t you test that out before setting policy?’ (Halpern 2016). Admittedly, Halpern’s focus is often ‘policy with a small p’—the practicalities of implementing a policy programme that has already been decided, very possibly on largely ideological grounds. The problem is that the evidence-informed/what works rhetoric rarely distinguishes between the two.

The emphasis on practice is also about evidence-informed practice per se—side-stepping the politicians altogether, even if its growth in education has been facilitated by an early decision of the 2010–2015 Coalition government to provide seed funding for an Education Endowment Foundation (EEF), a grant-making charity ‘dedicated to challenging educational disadvantage in English primary and secondary schools’ by sharing evidence on effective practice. One of the ways in which the EEF has sought to achieve this is through its Teaching and Learning Toolkit. The toolkit synthesises the findings from systematic reviews and trials into an online facility allowing school leaders to compare the estimated impact and cost of different types of educational intervention. It already encompasses over 10,000 pieces of research, and remains a ‘live’ resource that is regularly updated (EEF 2012). The EEF also commissions research, where it is largely committed to funding and evaluating RCT-type studies.

However, any suggestion that the concept of evidence-informed practice has greater traction than that of evidence-informed policy (at least in the terms envisaged by its most enthusiastic advocates) remains to be tested—or evidenced. The EEF is itself aware of the issues around knowledge mobilisation and evidencing the impact of evidence-informed practice (Collins 2016). One of its own studies has highlighted the challenges in demonstrating a causal link between evidence use and improved pupil outcomes (Speight et al. 2016). Similarly, after reviewing the available evidence for government, Coldwell et al. (2017) concluded that we still ‘know relatively little about the effects of evidence-based approaches on schools, teachers and pupils, and how to increase the likelihood of better outcomes for learners in particular’ (p. 22).

Kevan Collins, CEO of the EEF from 2011 to 2020, has noted that even when research evidence is clear it does not necessarily influence decision-making in schools. He has also noted how school leaders (just like politicians) can often use research selectively to justify decisions already made (Collins 2016). As with policy, research evidence is likely to have greater traction where it chimes with assumptions and beliefs already held within contexts of practice.

There is a strand in the literature on evidence-informed practice that reflects on the possible reasons why it is not more firmly and widely embedded within the schools system. It identifies some now well-rehearsed barriers/enablers to evidence-informed practice. The main factors seem to be:

  • access to the research literature (now arguably much less of a barrier than it has been in the past),

  • relevance, credibility, usability of the research literature,

  • willingness of practitioners to engage,

  • practitioners having the time, skills and confidence to engage,

  • organisational support for practitioners to engage.

To the above list, Brown and Zhang (2016) add the findings from psychological research about how individuals make decisions—namely, the tendency to make do with ‘good enough’ solutions and rely on intuition or perceptions rather than analyse the data, as well as the power of emotion, feeling, snap decision-making and unconscious motivation.

Yet there remains a view that such issues are practical obstacles to be overcome rather than grounds for questioning the rational-linear model itself. Collins (2015) repeated the call for more evidence of the kind that some would see as aiming to offer a prescription for teachers: ‘For too long, too many teachers have been as guilty as politicians of acting on what they believe to work, rather than what has been shown to work’. There still seems to be, then, an underlying assumption that, given time, evidence-informed practice on a rational-linear model will ‘come of age’.

While some continue to press for efforts to bring the schools system as close as possible to a model where practice leads off from trial-based evidence, others are calling for recognition and acceptance of a broader view—emphasising that the findings from local, small-scale action research are closer to teachers’ experience and more engaging and useful to them. A BERA-RSA (2014) report, for example, focuses more on teacher-led inquiry than on teachers working with evidence created elsewhere or by others. Saunders (2017) also regards such ‘inquiry-led teaching’, based on knowledge created in the teacher’s own context, with the teacher co-creating new knowledge from professional experience and expertise, as no less valid than teachers working with external evidence. She cites the value of teacher engagement with and in research as making the implicit explicit, such that teachers can articulate the precise reasons—ethical, emotional, intellectual—for the decisions they have made during any given lesson. Whatever the methodology, this requires teachers to be part of the research in question, not simply its subject. Nutley et al. (2008) concur that engaging in a research project as an investigator, not just as a research subject, can lead to change in ways of thinking and behaving.

Nevertheless, advocacy and pursuit of something supposedly more robust—centred around teacher engagement with and in external trials and the translation of those findings into practice—continue. The EEF is currently focused on issues of knowledge mobilisation to this end. Early signs, though, are not especially encouraging. While the more immediate, practical hurdles to this approach have been addressed (e.g. access to research summaries), teacher skills and confidence to engage are still not securely embedded (Sharples 2017).

It is not, though, just a matter of overcoming barriers to the implementation of an impoverished model of evidence-informed practice. As we argued in the 2005 BERA presidential address referred to earlier, the professional literacy of teachers surely involves more than purely instrumental knowledge. Others have pointed to the dangers of eschewing the moral purpose of education and overstating the promise of a particular form of ‘evidence’ in determining the direction of educational practice (e.g. Biesta 2006; Hammersley 2005). Chiming with this perspective, Winch et al. (2013) emphasise three interconnected and complementary prongs to a richer notion of teacher professionalism: practical wisdom, technical knowledge and critical reflection.

In the face of official support and funding for a narrowly instrumental approach to the role of research in educational practice, it seems that educational researchers themselves will need to make the argument for maintaining a broad church of education research—and make greater effort to show external audiences, not least education practitioners, how their professionalism can grow by engaging with a breadth of material. Just as some are keen for teachers to be better able to engage with and judge the findings of quantitative research, so there remains a place for qualitative approaches and critical perspectives in their repertoire. This should not necessarily be seen as a problem, and the constant slippage back to a rational-linear model and related over-claiming needs to give way to a more inclusive approach to evidence-informed practice.

4 Concluding Comments: Some Lessons from Post-truth

Ironically, then, there is little evidence to date that a rational-linear model of evidence-informed practice is proving any more feasible or desirable than a close link between research evidence and policy. In our closing comments we argue against subordinating a broader view of evidence to the immediate demands of establishing ‘what works’ and we also reflect on the ‘post-truth’ phenomenon as it relates to the issue of research evidence and the cause of evidence-informed policy and practice.

In a free society it is surely important that we have a dialogue about what constitutes appropriate ends as well as the means, the why as well as the how. This applies as much to the teaching profession as it does to civil society in the round. We need a teaching profession that is engaged in such questions, seeing new issues and how they might be addressed. In that process, scholarly perspectives are important. As Biesta (2007) sets out, the ways in which practitioners or policy makers present problems—and hence articulate an alleged ‘research need’—may not necessarily be the best way in which the problem should be understood. Researchers need to challenge what questions are being asked and why, to bring a broader viewpoint. In this, their view is as valid as that of policy makers and practitioners. As part of this, Biesta argues, researchers need to keep a critical distance between themselves and their ‘audience’: they have different kinds of expertise, and different responsibilities.

Equally, society—and policy makers in particular—need to understand the added purpose and value of looking at the world through the lens of (high quality) scholarship. Like the arts, unfettered scholarship ‘uplifts the spirits, heightens our perspective above the everyday, and shows us a new way to look at the familiar’ (Flexner and Dijkgraaf 2017). Missing from the ‘what works’ and evidence-informed policy and practice agenda has been a celebration of that added purpose.

This is a case that needs to be made at an interesting juncture in terms of the evidence-informed policy/practice agenda. When CEO of the EEF, Kevan Collins himself questioned how secure political support is for the rhetoric/practice of taking an evidence-informed approach: would this be merely another policy phase or fad, he asked (Collins 2016). Meanwhile, the context, at least in the UK and the USA, seems to have grown a little less hospitable. On the one hand, we have seen a move away from the ‘end of history’, centre-ground and focus-group led politics of the Clinton and Blair years to something much more ideological and class-based. That has combined with a growing disregard for evidence within political debate and political rhetoric, and possibly also in policy too, something that has been particularly pronounced in the USA. Perhaps evidence-informed policy/practice will turn out, in its current form at least, to be a turn of the century aberration. Another possibility is that the claims of the evidence-informed movement, as well as the voice of its critics, fare better amidst a more obviously ideological battle of ideas.

In The Death of Expertise, Tom Nichols expresses the concern that ‘the average American’ is not simply ‘uninformed’ but moving towards being ‘aggressively wrong’. As well as showing ignorance, Nichols asserts, they are actively resisting new information that might threaten their beliefs. He talks about the conflation of information, knowledge and experience, and how this has been reinforced by the ubiquity of Google. He also talks about the triumph of emotion over expertise, and links this to a culture that cannot accept the inequality implicit in someone being more knowledgeable than someone else (Nichols 2017). This publication is just one of many to reflect on what has become known as ‘post-truth’, a term that came to the fore following the vote for Brexit in the UK and the election of Donald Trump in the USA (e.g. d’Ancona 2017; Davies 2017).

This literature suggests that post-truth differs from political spin in the acceptance of untruths, a phenomenon termed ‘cognitive resignation’. This results in politicians and the public paying little regard to whether what they are saying is true, only to whether others are persuaded. It contrasts truth with impact; facts with story and connecting with people emotionally; the honestly complex with the deceptively simple; the rational with the visceral; veracity with solidarity and identity. Perception is all, and the battle becomes one of defining reality. This is accompanied by the discrediting of traditional sources of authoritative knowledge. The mainstream media is usually what is being referred to here, but it might also encompass academia—so-called ‘experts’. One could even argue that, in Michael Young’s terms (Young 2013), the ‘powerful knowledge’ generated by communities of scholars is being challenged by a new ‘knowledge of the powerful’, where the powerful are not the ruling elites of the past but various ‘populist’ movements (Muller 2017). In this context, the ‘truth test’ is ultimately popularity rather than the agreed conventions of academic disciplines.

Once again the internet, and particularly social media, is implicated for exacerbating people’s tendency to retreat into echo chambers and filter bubbles, a tendency that algorithms are now compounding. Also implicated are Freud and the paradigm of therapy; behavioural economics, with its emphasis on psychological impulses in decision-making; and the attention given to emotional intelligence and the role played by emotional competencies in social relations. As intimated earlier, post-modernism and social constructivism, leading to cynicism, relativism and hyperreality, are sometimes said to have had their own corrosive effect in terms of ‘putting the ideologically driven layman at the advantage of the scholar’. Calcutt (2016) argues that ‘those responsible include (postmodernist) academics, journalists, ‘creatives’ and financial traders; even the centre-left politicians who have now been hit hard by the rise of the anti-factual’. What all this adds up to, d’Ancona argues, is emotional necessity trumping the need for adherence to the truth.

However, Nichols, d’Ancona and others arguably put too positive a gloss on science and academia, ignoring academia’s own tendency towards echo chambers and filter bubbles, as well as the limitations of scientific research itself. Thus, although one response to post-truth might be a retreat to facts and technocracy, seemingly justifying the ambitions of the evidence-informed movement, this would be to adopt an unrealistic and unattainable—even undesirable—prospectus. In practice, the response will need to be much more nuanced. As d’Ancona sets out, the ‘backfire effect’ (ill-informed opinion becoming more entrenched in the face of evidence to the contrary) illustrates how post-truth will not crumble under the weight of freshly verified information repeated relentlessly and ubiquitously. Data should not be confused with truth; data cannot capture the complexity of public policy issues, nor can they capture values or emotion. Purveyors of evidence will need to be emotionally intelligent as much as rigorously rational: scientifically credible, charismatic leaders, able to communicate around biases and heuristics, and to speak to experience, memory and hope.

In Research and Policy in Education we argued for more public intellectuals in education and social science, given that politics often follows public opinion rather than expert advice. Academics need, therefore, to be part of the wider dialogue that goes beyond policy makers and professionals. This recognises that the task of academics seeking to impact upon policy and practice is much more complex and uncertain than advocates of evidence-informed policy and practice so often imply. In its present form, the evidence-informed movement exaggerates the possibility of ‘expert’ answers to enduring educational issues and plays into the hands of those who are prone to be suspicious of all research. Other traditions of research better reflect some of the uncertainties that are implied by the post-truth phenomenon. Recognising this does not mean that ‘anything goes’, but that a range of different research traditions, with different truth tests and quality criteria, need to be taken seriously. When conducted to a high standard, various types of research can offer important insights to policy makers and practitioners as well as the wider polity. But none of them are ever going to be the only—or even the main—determinant of education policy and practice.