Volume 33, Issue 2, pp 313–322

Down with the Hierarchies

  • Jacob Stegenga


Evidence hierarchies are widely used to assess evidence in systematic reviews of medical studies. I give several arguments against their use. The problems with evidence hierarchies are numerous, and include methodological shortcomings, philosophical problems, and formal constraints. I argue that medical science should not employ evidence hierarchies, even the latest and most sophisticated of them.


Keywords: Evidence · Causality · Evidence hierarchies · Medicine · Randomized trials · Mechanisms · Amalgamating evidence · Quality assessment tools · RCTs · Meta-analysis



I am grateful to Phyllis Illari, Federica Russo, and two anonymous reviewers for detailed commentary on earlier drafts. Financial support was provided by the Banting Postdoctoral Fellowships program administered by the Social Sciences and Humanities Research Council of Canada.



Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Institute for the History and Philosophy of Science and Technology, University of Toronto, Toronto, Canada
  2. Department of Philosophy, University of Utah, Salt Lake City, USA
