Questions of Evidence in Evidence-Based Policy

Abstract

Evidence-based approaches to policy-making are growing in popularity. A generally embraced view is that, with the appropriate evidence at hand, decision and policy making will be optimal, legitimate and publicly accountable. In practice, however, evidence-based policy making is constrained by a variety of problems of evidence. Some of these problems will be explored in this article, in the context of the debates on evidence from which they originate. It is argued that the source of much disagreement might be a failure to address crucial philosophical assumptions that inform, often silently, these debates. Three controversial questions will be raised which appear central to some of the challenges faced by evidence-based policy making: firstly, how do certain types of facts put themselves forward as candidates for evidence; secondly, how do we decide what evidence we have, and how much of it; and thirdly, can we combine evidence? In addressing these questions it will be shown how a philosophically informed debate might prove instrumental in clarifying and settling practical difficulties.

Notes

  1. A clear sign of this commitment can be found in the 1999 White Paper Modernising Government, which called for the “better use of evidence and research in policy making and better focus on policies that will deliver long term goals” and stipulated evidence as a key principle of policy making. See Cabinet Office (1999), p. 16.

  2. For example, proposals to expand the Sure Start programme led to a £16 million research project intended to establish whether the programme was actually achieving results. See Hunter (2003).

  3. Examples of these “ranking schemes” can be found in SIGN (2004); or the Oxford Centre for Evidence-based Medicine Levels of Evidence (2007).

  4. A randomized controlled trial (RCT) is an experiment in which investigators randomly assign eligible subjects (or other units of study, e.g. classrooms, clinics, playgrounds) to groups, each of which does or does not receive one or more interventions (e.g. a particular treatment). The results are then compared and, if the difference in outcomes between groups is statistically significant, it is attributed to the experimenters’ manipulation, i.e. it is concluded that there is a high probability that the intervention actually works. Blinding (single, double, triple or even quadruple) is used to control for bias. (A schematic sketch of this inference is given after these notes.)

  5. Ibid., p. 86. Dehue claims that the Dutch experiment was indeed designed in accordance with the highest standards.

  6. On this more general issue see also Cartwright (2007a, b), Seckinelgin (2007).

  7. Gigerenzer (2002). Gigerenzer’s examples are discussed in the context of dealing with risk and the uncertainties of daily life. Nonetheless, the way they are set out makes them instructive vis-à-vis some of the features of evidence we are discussing here.

  8. There is also a fourth way to present the benefit: “increase in life expectancy” (women between 50 and 69 who participate in screening increase their life expectancy by an average of 12 days). See Gigerenzer (2002, p. 59). (A second sketch after these notes illustrates how the same benefit can be reframed in different formats.)

  9. For a discussion of this case I refer to Dawid (2008) and Lynch and McNally (2003).

  10. By means of Bayesian calculus, Phil Dawid shows that what we get at the very end is five chances of guilt out of a total of 14, i.e. a guilt probability of 5/14, roughly 36%. (A third sketch after these notes illustrates the Bayesian step involved.)

  11. It is interesting to note that the jury, despite struggling with the complex statistical argument presented to them (and accepted without objection at trial), reached a guilty verdict. Clearly, the immense odds attached to the DNA evidence had an overwhelming effect on the jurors’ assessment of the evidence.

  12. See Lynch and McNally (2003, p. 96). On the use and role of numbers in modern society see Hacking (1975), Porter (1995) and Gigerenzer et al. (1989).

  13. In what follows I make reference to the three features of objectivity as discussed in Martin (2006).

  14. Daston and Galison (1992, 2007). For some, the way to achieve this task consists of a proper use of statistical analysis. See e.g. Mayo (1988).

  15. On how to describe a model of objectivity with these characteristics see my (2003).

  16. Haack (2003) uses the image of a crossword puzzle.

  17. These are listed in Martin (2006).

  18. The expression is Thomas Jefferson’s from Jefferson (2003); quoted by Martin (2008, p. 13).

  19. See, for example, the case of the precautionary principle.
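
The logic described in note 4 (random allocation, followed by a check on whether the observed difference between groups could plausibly have arisen by chance) can be illustrated with a short simulation. The following Python sketch uses invented data and an assumed effect size; it is not drawn from any study cited in the article, and the permutation test is only one of several ways the significance step might be carried out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study: 200 eligible subjects, half randomly assigned to the
# intervention and half to control (the random-allocation step of an RCT).
n = 200
baseline = rng.normal(loc=50.0, scale=10.0, size=n)        # outcome without intervention
assumed_effect = 3.0                                        # invented effect size
assignment = rng.permutation(np.repeat([0, 1], n // 2))     # 0 = control, 1 = intervention
observed = baseline + assumed_effect * assignment

# Observed difference in mean outcome between the two groups.
diff = observed[assignment == 1].mean() - observed[assignment == 0].mean()

# Permutation test: how often would a difference at least this large occur
# if the group labels carried no information (the null hypothesis)?
n_permutations = 10_000
exceed = 0
for _ in range(n_permutations):
    shuffled = rng.permutation(assignment)
    perm_diff = observed[shuffled == 1].mean() - observed[shuffled == 0].mean()
    if abs(perm_diff) >= abs(diff):
        exceed += 1
p_value = exceed / n_permutations

print(f"difference in means: {diff:.2f}, p-value: {p_value:.4f}")
```

A small p-value licenses attributing the observed difference to the intervention rather than to chance, which is the inferential move note 4 summarizes; it does not by itself say how large, or how relevant, the effect is.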
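
Note 8 turns on Gigerenzer’s point that one and the same benefit of screening can be communicated in several formats. The sketch below uses invented baseline figures (4 deaths per 1,000 women without screening versus 3 per 1,000 with screening, chosen purely for illustration and not taken from the article) to show how the same numbers yield a relative risk reduction of 25%, an absolute risk reduction of 1 in 1,000, and a “number needed to screen” of 1,000; the life-expectancy format mentioned in the note would require additional mortality data.

```python
# Invented illustration of how one and the same benefit can be framed in different formats.
deaths_per_1000_without_screening = 4
deaths_per_1000_with_screening = 3

p_without = deaths_per_1000_without_screening / 1000
p_with = deaths_per_1000_with_screening / 1000

absolute_risk_reduction = p_without - p_with                    # 0.001 (1 in 1,000)
relative_risk_reduction = absolute_risk_reduction / p_without   # 0.25  (25%)
number_needed_to_screen = 1 / absolute_risk_reduction           # 1,000 women per death averted

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%} (1 in 1,000)")
print(f"Number needed to screen: {number_needed_to_screen:.0f}")
```

The contrast is rhetorical as much as arithmetical: “25%” and “1 in 1,000” describe the same situation, yet invite very different assessments of how strong the evidence for screening is.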
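
Note 10 reports only the end point of Dawid’s analysis (a guilt probability of 5/14), not the intermediate steps, which also weigh the non-DNA evidence in the case. The sketch below does not reproduce that calculation. It illustrates, with invented numbers, the general Bayesian step involved: a very small random-match probability is combined with a prior spread over a pool of possible perpetrators, and the resulting probability of guilt can be far less overwhelming than the match odds suggest.

```python
def posterior_guilt(pool_size: int, match_probability: float) -> float:
    """Posterior probability that the matching suspect is the true source.

    Assumes one true source among `pool_size` equally likely candidates, and
    that each innocent candidate matches the profile with `match_probability`.
    Both inputs are illustrative assumptions, not figures from the case.
    """
    prior = 1.0 / pool_size
    # Bayes' theorem: P(guilty | match) =
    #   P(match | guilty) * P(guilty) / [same numerator + P(match | innocent) * P(innocent)]
    numerator = 1.0 * prior
    denominator = numerator + match_probability * (1.0 - prior)
    return numerator / denominator

# A 1-in-1,000,000 match probability sounds like "immense odds", yet with
# 200,000 people who could in principle have been the perpetrator the
# posterior probability of guilt is only about 0.83, not 0.999999.
# Treating the match probability itself as the probability of innocence
# is the so-called prosecutor's fallacy.
print(posterior_guilt(pool_size=200_000, match_probability=1e-6))
```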

References

  • Cabinet Office (1999) Modernising government, white paper Cm 4310, HMSO

  • Campbell Collaboration. http://www.campbellcollaboration.org

  • Cartwright N (1999) The vanity of rigour in economics. Discussion paper series, CPNSS. Expanded version in P. Fontaine and R. Leonard (eds) (2005) The experiment in the history of economics. Routledge, London-New York, pp 135–153

  • Cartwright N (2007a) Are RCTs the gold standard? BioSocieties 2(2):11–20

  • Cartwright N (2007b) Evidence based policy and its ranking schemes: so, where’s ethnography? (mimeo)

  • Cartwright N et al (2007) Evidence-based policy: where is our theory of evidence? CPNSS/Contingency and Dissent DP, London. Also published in Beckermann A, Tetens H, Walter S (eds) (2008) Philosophy: foundations and applications. Main lectures and colloquia talks of the German analytic philosophy conference GAP. 6. Mentis-Verlag, Paderborn

  • Cochrane Collaboration. http://www.cochrane.org

  • Commission of the European Communities (2001) European governance: a white paper. Commission of the European communities: Brussels. COM

  • Daston L, Galison P (1992) The image of objectivity. Representations 40:135–156

  • Daston L, Galison P (2007) Objectivity. Zone Books, New York

  • Dawid AP (2008) Statistics and the law. In: Bell A, Swenson-Wright J, Tybjerg K (eds) Evidence. Cambridge University Press, Cambridge, pp 119–148

  • Dehue T (2002) A Dutch treat. Randomized controlled experimentation and the case of heroin-maintenance in the Netherlands. Hist Human Sci 15(2)

  • Gigerenzer G (2002) Reckoning with risk. Penguin Press, London

  • Gigerenzer G et al (1989) The empire of chance: how probability changed science and everyday life. Cambridge University Press, Cambridge

  • Haack S (2003) Clues to the puzzle of scientific evidence: a more-so story. In: Defending science—within reason. Prometheus Books, New York

  • Hacking I (1975) The emergence of probability. Cambridge University Press, Cambridge

  • Hunter DJ (2003) Evidence-based policy and practice: riding for a fall? J R Soc Med 96(4):194–196

  • Jefferson T (2003) Unintended events following immunization with MMR: a systematic review. Vaccine 21:3954–3960

  • Lynch M, McNally R (2003) Science, “common sense”, and DNA evidence: a legal controversy about the public understanding of science. Public Underst Sci 12:83–103

  • Martin E (2006) Evidence, objectivity and public policy: methodological perspectives on the vaccine controversy. APA Proc Address 81(3) (mimeo)

  • Mayo D (1988) Towards a more objective understanding of carcinogenic risk. PSA Proc 2:489–503

  • Montuschi E (2003) The objects of social science. Continuum Press, London/New York

  • SIGN (Scottish Intercollegiate Guideline Network) (2004). http://www.sign.ac.uk/guidelines/fulltext/50/compevidence.html

  • Oxford Centre for Evidence-based Medicine Levels of Evidence (2007). http://www.cebm.jr2.ox.ac.uk/docs/level.html

  • Porter T (1995) Trust in numbers: the pursuit of objectivity in science and public life. Princeton University Press, Princeton

  • Scientific advice, risk and evidence: how government handles them (2006) Evidence Report 15 Feb 2006. http://www.parliament.uk/parliamentary_committees/science_and_technology_committee/sag.cfm

  • Seckinelgin H (2007) Evidence based policy for HIV/AIDS interventions: questions of external validity, or relevance for use. Dev Change 38(6):1219–1234

  • Suter G (1993) Ecological risk assessment. Lewis Publ, Chelsea

  • Wakefield A et al (1998) Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet 351:637–641

Acknowledgments

This paper presents some of the issues and questions pursued in the research project “Evidence for Use”, hosted by the Centre for Philosophy of Natural and Social Science at the London School of Economics. I am grateful to Nancy Cartwright and the other members of the research group for enlightening discussions on the topic.

Author information

Correspondence to Eleonora Montuschi.

Cite this article

Montuschi, E. Questions of Evidence in Evidence-Based Policy. Axiomathes 19, 425–439 (2009). https://doi.org/10.1007/s10516-009-9085-0
