Abstract
Evidence-based approaches to policy-making are growing in popularity. A widely embraced view is that, with the appropriate evidence at hand, decision and policy making will be optimal, legitimate and publicly accountable. In practice, however, evidence-based policy making is constrained by a variety of problems of evidence. Some of these problems are explored in this article, in the context of the debates on evidence from which they originate. It is argued that the source of much disagreement might be a failure to address crucial philosophical assumptions that inform, often silently, these debates. Three controversial questions are raised which appear central to some of the challenges faced by evidence-based policy making: first, how do certain types of facts come to qualify as evidence; second, how do we decide what evidence we have, and how much of it; and third, can we combine evidence? In addressing these questions it is shown how a philosophically informed debate might prove instrumental in clarifying and settling practical difficulties.
Notes
A clear sign of this commitment can be found in the 1999 White Paper Modernising Government, which called for the “better use of evidence and research in policy making and better focus on policies that will deliver long term goals” and stipulated evidence as a key principle of policy making. See Cabinet Office (1999), p. 16.
For example, proposals to expand the Sure Start programme led to a £16 million research project intended to establish whether the programme was actually achieving results. See Hunter (2003).
A randomized controlled trial (RCT) is an experiment in which investigators randomly assign eligible subjects (or other units of study, e.g. classrooms, clinics, playgrounds) to groups. Each group either receives or does not receive one or more interventions (e.g. a particular treatment). The results are then compared, and if the difference in observed outcomes is statistically significant, it is concluded that the difference was indeed caused by the experimenters’ manipulation, i.e. there is a high probability that the intervention actually works. Blinding procedures (single, double, triple, even quadruple) are used to control bias.
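The logic described in this note — random assignment, comparison of group outcomes, and a test of statistical significance — can be illustrated with a small simulation. This is a hypothetical sketch, not part of the article: the effect size, sample size and outcome model are invented for illustration, and significance is assessed with a simple permutation test.

```python
import random
import statistics

random.seed(42)

def run_rct(n=200, effect=0.5):
    """Randomly assign n subjects to treatment/control and compare outcomes."""
    subjects = list(range(n))
    random.shuffle(subjects)  # the randomization step central to an RCT
    treatment, control = subjects[: n // 2], subjects[n // 2 :]

    # Hypothetical outcomes: baseline noise, plus an effect for the treated.
    t_out = [random.gauss(0, 1) + effect for _ in treatment]
    c_out = [random.gauss(0, 1) for _ in control]
    observed = statistics.mean(t_out) - statistics.mean(c_out)

    # Permutation test: how often does a random relabelling of subjects
    # produce a group difference at least as large as the observed one?
    pooled = t_out + c_out
    extreme, trials = 0, 2000
    for _ in range(trials):
        random.shuffle(pooled)
        diff = statistics.mean(pooled[: n // 2]) - statistics.mean(pooled[n // 2 :])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / trials

diff, p_value = run_rct()
print(f"observed difference: {diff:.2f}, p-value: {p_value:.3f}")
```

A small p-value licenses the inference that the difference is unlikely to be due to the random assignment alone — which is precisely the step from statistical significance to causal attribution that the note describes.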
Ib., p. 86. Dehue claims that the Dutch experiment was indeed designed in accordance with the highest standards.
Gigerenzer (2002). Gigerenzer’s examples are discussed in the context of dealing with risk and the uncertainties of daily life. Nonetheless, the way they are set out makes them instructive vis-à-vis some of the features concerning evidence discussed here.
There is also a fourth way to present the benefit: “increase in life expectancy” (women between 50 and 69 who participate in screening increase their life expectancy by an average of 12 days). See Gigerenzer (2002, p. 59).
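The contrast between these ways of presenting the same benefit can be made concrete with a short calculation. The figures below are hypothetical placeholders in the spirit of Gigerenzer’s screening discussion, not quoted from the book: suppose that out of 1000 women screened, 3 die of breast cancer, against 4 out of 1000 not screened.

```python
# Hypothetical illustrative figures (not Gigerenzer's own numbers):
deaths_screened, deaths_unscreened, n = 3, 4, 1000

# The same benefit, framed three different ways:
relative_risk_reduction = (deaths_unscreened - deaths_screened) / deaths_unscreened
absolute_risk_reduction = (deaths_unscreened - deaths_screened) / n
number_needed_to_screen = 1 / absolute_risk_reduction

print(f"relative risk reduction: {relative_risk_reduction:.0%}")  # → 25%
print(f"absolute risk reduction: {absolute_risk_reduction:.1%}")  # → 0.1%
print(f"number needed to screen: {number_needed_to_screen:.0f}")  # → 1000
```

The same underlying facts yield a headline-friendly “25% reduction” and a far more sobering “1 in 1000” — which is exactly why the choice of presentation format matters for how evidence is received.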
By means of Bayesian calculus, Philip Dawid shows that what we get at the very end is five chances of guilt out of a total of 14, which in terms of guilt probability means 5/14 ≈ 36%.
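The general shape of such a calculation can be sketched with a toy Bayesian model (the numbers below are hypothetical and are not Dawid’s own figures): given a uniform prior over a pool of potential suspects, a DNA match updates the probability of guilt by Bayes’ theorem.

```python
def prob_guilt_given_match(n_suspects, match_prob):
    """Posterior probability of guilt given a DNA match, assuming a uniform
    prior over n_suspects and that the true culprit matches with certainty."""
    prior = 1 / n_suspects
    # Bayes: P(G|M) = P(M|G)P(G) / [P(M|G)P(G) + P(M|not-G)P(not-G)]
    return prior / (prior + (1 - prior) * match_prob)

# With, say, a million potential suspects and a one-in-a-million match
# probability (hypothetical numbers), the seemingly overwhelming match
# statistic translates into an unimpressive posterior:
print(f"{prob_guilt_given_match(1_000_000, 1e-6):.1%}")  # → 50.0%
```

The point mirrors Dawid’s: the tiny match probability quoted in court is not the probability of innocence, and once the prior pool of suspects is taken into account the posterior probability of guilt can be far from conclusive.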
It is interesting to note that the jury, despite struggling with the complex statistical argument presented to them (and accepted without objection at trial), reached a guilty verdict. Clearly, the immense odds attached to the DNA evidence had an overwhelming effect on the jurors’ assessment of the evidence.
In what follows I make reference to the three features of objectivity as discussed in Martin (2006).
On how to describe a model of objectivity with these characteristics see my (2003).
Haack (2003) uses the image of a crossword puzzle.
These are listed in Martin (2006).
See, for example, the case of the precautionary principle.
References
Cabinet Office (1999) Modernising government, white paper Cm 4310, HMSO
Campbell Collaboration. http://www.campbellcollaboration.org
Cartwright N (1999) The vanity of rigour in economics. Discussion paper series, CPNSS. Expanded version in P. Fontaine and R. Leonard (eds) (2005) The experiment in the history of economics. Routledge, London-New York, pp 135–153
Cartwright N (2007a) Are RCTs the gold standard? BioSocieties 2(2):11–20
Cartwright N (2007b) Evidence based policy and its ranking schemes: so, where’s ethnography? (mimeo)
Cartwright N et al (2007) Evidence-based policy: where is our theory of evidence? CPNSS/Contingency and Dissent DP, London. Also published in Beckermann A, Tetens H, Walter S (eds) (2008) Philosophy: foundations and applications. Main lectures and colloquia talks of the German analytic philosophy conference GAP. 6. Mentis-Verlag, Paderborn
Cochrane Collaboration. http://www.cochrane.org
Commission of the European Communities (2001) European governance: a white paper. Commission of the European Communities, Brussels. COM
Daston L, Galison P (1992) The image of objectivity. Representations 40:135–156
Daston L, Galison P (2007) Objectivity. Zone Books, New York
Dawid AP (2008) Statistics and the law. In: Bell A, Swenson J, Tybjerg W-K (eds) Evidence. Cambridge University Press, Cambridge, pp 119–148
Dehue T (2002) A Dutch treat. Randomized controlled experimentation and the case of heroin-maintenance in the Netherlands. Hist Human Sci 15:2
Gigerenzer G (2002) Reckoning with risk. Penguin Press, London
Gigerenzer G et al (1989) Empire of chance: how probability changed science and everyday life. Cambridge University Press, Cambridge
Haack S (2003) Clues to the puzzle of scientific evidence: a more-so story. In: Defending science—within reason. Prometheus Books, New York
Hacking I (1975) The emergence of probability. Cambridge University Press, Cambridge
Hunter DJ (2003) Evidence-based policy and practice: riding for a fall? J R Soc Med 96(4):194–196
Jefferson T (2003) Unintended events following immunization with MMR: a systematic review. Vaccine 21:3954–3960
Lynch M, McNally R (2003) Science, “common sense”, and DNA evidence: a legal controversy about the public understanding of science. Public Underst Sci 12:83–103
Martin E (2006) Evidence, objectivity and public policy: methodological perspectives on the vaccine controversy. APA Proc Address 81(3) (mimeo)
Mayo D (1988) Towards a more objective understanding of carcinogenic risk. PSA Proc 2:489–503
Montuschi E (2003) The objects of social science. Continuum Press, London/New York
SIGN (Scottish Intercollegiate Guideline Network) (2004). http://www.sign.ac.uk/guidelines/fulltext/50/compevidence.html
Oxford Centre for Evidence-based Medicine Levels of Evidence (2007). http://www.cebm.jr2.ox.ac.uk/docs/level.html
Porter T (1995) Trust in numbers: the pursuit of objectivity in science and public life. Princeton University Press, Princeton
Scientific advice, risk and evidence: how government handles them (2006) Evidence Report 15 Feb 2006. http://www.parliament.uk/parliamentary_committees/science_and_technology_committee/sag.cfm
Seckinelgin H (2007) Evidence based policy for HIV/AIDS interventions: questions of external validity, or relevance for use. Dev Change 38(6):1219–1234
Suter G (1993) Ecological risk assessment. Lewis Publ, Chelsea
Wakefield A et al (1998) Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet 351:637–641
Acknowledgments
This paper presents some of the issues and questions pursued in the research project “Evidence for Use”, hosted by the Centre for Philosophy of Natural and Social Science at the London School of Economics. I am grateful to Nancy Cartwright and the other members of the research group for enlightening discussions on the topic.
Montuschi, E. Questions of Evidence in Evidence-Based Policy. Axiomathes 19, 425–439 (2009). https://doi.org/10.1007/s10516-009-9085-0