The use of evidence in the policy process has received renewed attention in recent years, with explicit calls for ‘evidence-based policy’ (EBP). This idea is not new: it goes back at least to Lasswell’s call for a ‘policy science’ that would explicitly apply interdisciplinary research to policy problems (Lasswell 1951, 1970). The problématique of evidence use raises a number of questions about how knowledge is recognised and mobilised in the policy process. These questions were addressed in a panel on EBP at the Interpretive Policy Analysis conference in Tilburg, the Netherlands, in July 2012. The panel was grounded in the study of policy making as practice (Colebatch 2005, 2006; Colebatch et al. 2011). Some of the papers from this panel have already been published (Pearce et al. 2014); four more are presented here. Three of the papers in this special issue are case studies of contemporary policy practices in the UK, Sweden and Australia; the fourth investigates how the demand for EBP is mobilised by a movement that advocates randomised controlled trials as the preferred form of knowledge in policy practice. We briefly introduce these four articles, discuss what we learn from them and suggest what questions remain to be investigated. We conclude by arguing that while the EBP discourse is inherently naive about the realities of public policy, its dominance continues to exert real influence on policy practices, academic research and democratic participation.
The focus in this issue
EBP has become, as Schön (1971) would put it, a concept ‘in good standing’ for both policy practice and policy analysis. Pearce and Raman’s article (2014) suggests that this has not been a simple evolutionary process, but the result of quite specific mobilisation of support among particular stakeholders in the policy process. They describe how a coalition of government departments, think tanks and high-profile individuals within the UK has sought to promote the increased use of randomised controlled trials (RCTs) as the gold standard for evidence in public policy. They argue that naively presenting RCTs as neutral evidence of what policy interventions work is misleading, because producing evidence for policymaking is a hybrid activity that necessarily spans both science and politics. In addition, with its emphasis on quantification and rationality, the quest for EBP privileges quantitative research. Concerns about RCTs as the ‘gold standard’ within a hierarchy of evidence include fears that qualitative research will be marginalised.
The other pieces in this issue address how the rhetorical acceptance of EBP impacts on policy making practices. A central point that comes out of all the contributions is that expert knowledge is only one part of the policy process. The rhetoric of EBP implies that the nature and dimensions of the problem being addressed are known, measurable and unambiguous, and that appropriate monitoring will show the success of policy measures. However, these case studies suggest that this is rarely the case. Wesselink and Gouldson (‘Pathways to impact in local government: the mini-Stern review as evidence in policy making in the Leeds City Region’) describe how academic research on the economics of low carbon cities, the ‘mini-Stern review’, helped create an evidence base for the Leeds City Region and its constituent local authorities. However, the review was only a small contribution to wider-ranging processes of informing, convincing and pressurising within the different local authorities and the broader communities. This brought other contextual factors into play, such as the composition, agendas and activities of local civil society and the local business community. A piece of evidence like the mini-Stern review was found to be just one of a variety of resources drawn upon by policy workers. While EBP is taken for granted by all policy actors, it is enacted in locally specific ways. Wesselink and Gouldson also found that the re-framing of ‘climate change’ as ‘economic investment opportunity’ was crucial, since this aligns with current hegemonic policy discourses where economy is all-important, and gives the issue a much higher priority on the political agenda.
Similarly, in their study of the management of health technology in a Swedish county council (‘Puzzling about problems: the ambiguous search for an evidence-based strategy for handling influx of health technology’), Nedlund and Garpenby show that EBP is taken to be desirable. While it is promoted through the establishment of a Health Technology Advisory Committee (HTAC) to control the introduction of health technology, the problem being addressed was highly complex and could not be resolved by the tabling of one form of evidence. Actors framed the problem in different ways and drew on diverse forms of knowledge, so the search for evidence was contested, the balance of power among key actors proved more significant than the scientific method adopted by the HTAC, and the outcome was continuing ambiguity rather than a definitive resolution of the conflict.
Boswell (‘“Hoisted with our own petard”: evidence and democratic deliberation on obesity’) also shows that affirming the importance of evidence does not resolve conflicts over policy choices. He describes how key actors engaged in debate on obesity in Australia and the UK subscribe to radically different narratives about the nature, extent and even existence of this public health problem. Yet, these clashing narratives are all presented as ‘evidence-based’. While disagreeing on the evidence itself, the actors show a high degree of consensus on what evidence means and entails in the abstract: they all subscribe to EBP as an ideal, but how to give effect to this ideal is contested. Boswell argues that the overall effect of this consensus is to enhance the deliberative potential of policy processes, since participants have to contend with alternative framings, but to undermine their democratic elements, since ‘outsiders’ find it harder to gain recognition of their framing and knowledge.
These summaries suggest that EBP as prescription has generally permeated the accounts of policy as (needing to be) based on evidence. However, the cases presented here show that EBP is less adequate as an analysis of policy practice. Overt deference to EBP does not remove the need for political reasoning; rather, politics is introduced ‘through the back door’ through debates on what is valid evidence rather than on what values should prevail. In their case study of the health policy arena in Sweden, Nedlund and Garpenby (2014) show how competing rationalities are embedded in their advocates’ framings, each supported by its own evidence. The implicit desire in the EBP rhetoric for uniform rationality actually leads to a meta-level debate on what is acceptable as evidence (rather than how to interpret the evidence), which reintroduces the need for negotiation. The ‘evidence’ in itself is ambiguous, and the different preferences policy actors hold for particular sorts of evidence, or evidence from particular disciplines, help to underpin the clashes among (and sometimes within) the competing narratives on the issue (Lin 2003; Lennon 2014; Boswell 2014).
EBP and democratic policy making
There is a well-developed argument about the impact of EBP on democratic involvement in the policy process. Torgerson (1985, 1986) argues that Lasswell’s 1951 call for a ‘policy science’ sought to strengthen democratic control of the process of governing: by providing public facts for democratic monitoring of government activity, it would improve the possibilities for contestation of policy decisions. In this view, EBP seems to offer a corrective to ‘decisionist’ models of rule, in which politicians dominate, as well as ‘technocratic’ models, in which experts dominate (Hoppe 2005; Weingart 1999). The assumptions here seem to be that the inherent quality of evidence is the only relevant factor, that the evidence ‘speaks for itself’ and that ‘good evidence’ will be a conclusive guide to action. However, evidence cannot speak: it has to be introduced in some way by the participants as part of their framing of the policy problem. Much of the discussion about the use of evidence in policy making revolves around the interplay of quality, context and discourse.
Here, both Nedlund and Garpenby (2014) and Boswell (2014) find a middle ground between these two extremes of EBP as facilitating or preventing inclusive deliberation about policy issues. Nedlund and Garpenby find that there is a common ground on EBP when all actor groups emphasise evidence as important. This offers possibilities for deliberation on ‘evidence’ as a term that needs to be interpreted. Boswell (2014) also finds ‘some deliberative advantages on a prosaic level’ in his case study on obesity debates in Australia and the UK, although his overall conclusion on inclusiveness is rather bleak. While he finds little agreement among policy actors on the meaning of particular evidence, what counts as valid evidence, or when the evidence is sufficient to merit action, there is a shared commitment in the abstract to use evidence, and all base their advocacy efforts on such evidence. The discourse on EBP itself thereby provides a valuable common ground on which debate can proceed along deliberative principles: the actors agree to play by the same ‘rules of the game’. However, this shared commitment does not end up facilitating the inclusion of lay perspectives. Boswell argues that the problem is not one of technocrats excluding lay knowledge, as is frequently claimed in the policy studies literature (e.g. Fischer 2003). Rather, he finds that the sorts of evidence that citizens might bring to the debates tend to rank low in the hierarchy of evidence. On balance, scholarship seems to suggest that the primacy of EBP disempowers lay citizens and reaffirms the importance of experts.
Conclusion: policy practice as the context for evidence
As we have seen, the rhetoric of EBP seems only common sense: the policy-maker confronted with a problem should draw on the best scientific evidence to devise the optimal solution. But the policy process is unlikely to be this simple. Rather than a single problem and a single policy-maker concerned with solving it, it is more likely that a number of participants will be involved and that they will have distinct, overlapping and perhaps conflicting views on both the nature of the problem and the sort of knowledge most appropriately mobilised in determining a response. ‘Evidence’ is unlikely to be neutral and unproblematic: its definition is part of the policy process and depends heavily on context.
Here, the epistemological simplicity of the EBP rhetoric adds to its appeal, but detracts from its utility. Few would argue that policy should not follow the evidence, but what counts as policy-relevant evidence is determined by context. The EBP rhetoric’s quest for ‘neutral’, context-free and universally applicable ‘evidence’ fits badly with this reality. Studies of the use of evidence in policy making show ‘context’ to be a central factor in explaining process. This is well known to practitioners, who know that they have to be able to ‘read’ the context, and mobilise the appropriate discourse and evidence at the right time (Tenbensel 2006; Wesselink and Gouldson 2014). Practitioners recognise multiple sources of evidence, valuing practical wisdom or judgement rather than privileging studies which claim to offer law-like explanations and prediction, designed to enhance technical control of the world (Flyvbjerg 2001). We concur with Freeman et al. (2011) that ‘the study of policy practice might therefore finally begin to move us away from New Public Management accounts of practitioners as game players or as instrumental agents seeking to defect from external regulation, towards an appreciation of policy practice as an active ingredient in the heady compound that is policy making’ (Freeman et al. 2011: p. 134).
We end up with a puzzle: there are many voices in support of EBP, but the empirical evidence points to the naiveté of expecting a linear relationship between evidence (however defined) and policy. There are multiple forms of knowledge at play, the process is multi-voiced and continuing, and the mobilisation of evidence is part of an interactive process. As Boswell (2014) shows, participants all claim their case to be ‘evidence-based’, but have different perceptions of what ‘the evidence’ says; ‘evidence-based’ indicates a style of discourse rather than determining a policy outcome. However, the assertion of instrumental rationality continues to dominate the ‘sacred’ discourse about governing (Colebatch 2005, 2006). In many ways, therefore, both the advocacy of EBP and the questioning of its actual impact simply repeat the argument of 30 years ago about policy analysis (e.g. Shulock 1999). Research should therefore focus on EBP as a rhetoric of practice, rather than as a system with defined features. The question is how such rhetoric is used. It can be seen as an attempt to re-frame governing as the pursuit of measurable targets. In other cases, it can be seen as a ritual of control in which quantitative targets are defined and reports on the degree of achievement are filed, but without giving rise to substantial changes in practices or allocations. Both possibilities are credible; what these studies show is that EBP is what the participants make of it.
The perception of policy as the expression of legitimate authority and instrumental rationality reflects the persisting ideology of the enlightenment with its emphasis on facts rather than responsibility and its detachment of humans from their context (Reason 1998). What the studies discussed in this text suggest is that EBP should be seen primarily as a rhetorical format rather than a guide to practice. However, EBP still has real significance, favouring some practices over others, shaping the way that policy participants construct their practices, the direction of academic research and the possibilities for democratic participation. That the advocates of EBP have unreal visions of its potential, and fail to locate it within the universe of policy discourse and practice, should not blind us to the significance of the impacts EBP has.
Boswell, J. (2014). ‘Hoisted with our own petard’: Evidence and democratic deliberation on obesity. Policy Sciences. doi:10.1007/s11077-014-9195-4.
Colebatch, H. K. (2005). Policy analysis, policy practice and political science. Australian Journal of Public Administration, 64(3), 14–23.
Colebatch, H. K. (2006). What work makes policy? Policy Sciences, 39(4), 309–321.
Colebatch, H. K., Hoppe, R., & Noordegraaf, M. (2011). Working for Policy. Amsterdam: Amsterdam University Press.
Fischer, F. (2003). Reframing public policy: Discursive politics and deliberative practices. New York: Oxford University Press.
Flyvbjerg, B. (2001). Making social science matter: Why social inquiry fails and how it can succeed again. Cambridge: Cambridge University Press.
Freeman, R., Griggs, S., & Boaz, A. (2011). The practice of policy making. Evidence & Policy, 7(2), 127–136.
Hoppe, R. (2005). Rethinking the puzzles of the science-policy nexus: From knowledge utilization and science technology studies to types of boundary arrangements. Poiesis & Praxis, 3(3), 191–215.
Lasswell, H. D. (1951). The policy orientation. In D. Lerner & H. D. Lasswell (Eds.), The policy sciences. Stanford, CA: Stanford University Press.
Lasswell, H. D. (1970). The emerging conception of the policy sciences. Policy Sciences, 1(1), 3–14.
Lennon, M. (2014). Presentation and persuasion: The meaning of evidence in Irish green infrastructure policy. Evidence & Policy, 10(2), 167–186.
Lin, V. (2003). Competing rationalities: Evidence-based health policy. In V. Lin & B. Gibson (Eds.), Evidence-based health policy: Problems and possibilities (pp. 3–17). Melbourne: Oxford University Press.
Nedlund, A.-C., & Garpenby, P. (2014). Puzzling about problems: The ambiguous search for an evidence-based strategy for handling influx of health technology. Policy Sciences. doi:10.1007/s11077-014-9198-1.
Pearce, W., & Raman, S. (2014). The new Randomised Controlled Trials (RCT) movement in public policy: Challenges of epistemic governance. Policy Sciences. doi:10.1007/s11077-014-9208-3.
Pearce, W., Wesselink, A., & Colebatch, H. K. (2014). Evidence and meaning in policy making. Evidence & Policy, 10(2), 161–165.
Reason, P. (1998). A participatory worldview. Resurgence, 168, 42–44.
Schön, D. A. (1971). Beyond the stable state. New York: Random House.
Shulock, N. (1999). The paradox of policy analysis: If it is not used, why do we produce so much of it? Journal of Policy Analysis and Management, 18(2), 226–244.
Tenbensel, T. (2006). Policy knowledge for policy work. In H. K. Colebatch (Ed.), The work of policy: An international survey (pp. 199–216). Latham, MD: Lexington Books.
Torgerson, D. (1985). Contextual orientation in policy analysis: The contribution of Harold D. Lasswell. Policy Sciences, 18(3), 241–261.
Torgerson, D. (1986). Interpretive policy inquiry: A response to its limitations. Policy Sciences, 19(4), 397–405.
Weingart, P. (1999). Scientific expertise and political accountability: Paradoxes of science in politics. Science and Public Policy, 26(3), 151–161.
Wesselink, A., & Gouldson, A. (2014). Pathways to impact in local government: The mini-Stern review as evidence in policy making in the Leeds City Region. Policy Sciences. doi:10.1007/s11077-014-9196-3.
Wesselink, A., Colebatch, H. & Pearce, W. Evidence and policy: discourses, meanings and practices. Policy Sci 47, 339–344 (2014). https://doi.org/10.1007/s11077-014-9209-2