Philosophical discussions on causal inference in medicine are stuck in dyadic camps, each defending one kind of evidence or method over another as the best support for causal hypotheses. Whereas Evidence-Based Medicine advocates the use of Randomised Controlled Trials (RCTs) and systematic reviews of RCTs as the gold standard, philosophers of science emphasise the importance of mechanisms and their distinctive informational contribution to causal inference and assessment. Some have suggested adopting a pluralistic approach to causal inference, and an inductive rather than hypothetico-deductive inferential paradigm. However, these proposals deliver no clear guidelines as to how such a plurality of evidence sources should jointly justify hypotheses of causal associations. We develop such guidelines here by first giving a philosophical analysis of the underpinnings of Hill’s (1965) viewpoints on causality. We then put forward an evidence-amalgamation framework that adopts a Bayesian net approach to model causal inference in pharmacology for the assessment of harms. Our framework accommodates a number of intuitions already expressed in the literature concerning the EBM vs. pluralist debate on causal inference, evidence hierarchies, causal holism, relevance (external validity), and reliability.
1 In reality, the decision problem is not as black and white as presented here. There are further actions available to a drug licensing agency, such as restricting access to the drug to a subset of patients or adding further information to the package insert (e.g., black-box warnings). For ease of exposition, we shall here disregard these further possible actions and consider only the binary decision problem of whether or not to approve a drug, or to leave it on the market after its safety profile has changed.
2 We deliberately use the circled Ⓒ as our relational predicate symbol here so as not to invoke any specific technical, theory-informed reading of this claim yet. We do think, though, that precisely such an explication is in order – even more so since it is often neglected and rarely undertaken in the methodological literature.
4 Ioannidis (2016) examines the empirical evidence in support of the Bradford Hill guidelines and de-emphasises their importance. Indeed, the influence of biases of various kinds may distort the genuine informative value of such indicators. In our framework, this aspect is explicitly taken into account by providing separate lines of support for the confirmatory value of a given piece of evidence and for its reliability.
5 Drawing causal inferences from functional or statistical relations alone is a hard task and in many cases not feasible. If a functional description (like a structural equation) or a statistical connection (like a high measure of covariance) is available though (and has proven stable), it can be used for intervention and prediction – two hallmarks of causal knowledge. Although David Freedman criticises the Spirtes-Glymour-Scheines approach (Spirtes et al. 2000) towards automatically inferring causal claims from raw data, he points precisely to the practical use of formal dose-response relations when he writes that “[t]hree possible uses for regression equations are (i) to summarise data, or (ii) to predict values of the dependent variable, or (iii) to predict the results of interventions” (Freedman 1997, p. 62).
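Freedman’s three uses of a regression equation can be made concrete with a small numerical sketch (all numbers below are invented for illustration and do not come from any study):

```python
import numpy as np

# Hypothetical dose-response data: administered dose (mg) and an
# observed response measure (illustrative values only).
dose = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
response = np.array([0.1, 0.9, 2.2, 3.8, 8.1])

# (i) Summarise the data by a linear regression equation
#     response = a * dose + b, fitted by least squares.
a, b = np.polyfit(dose, response, deg=1)

# (ii) Predict the value of the dependent variable at a new dose level.
predicted_at_30 = a * 30.0 + b

# (iii) Predict the result of an intervention, e.g., doubling the
#     highest dose -- legitimate only insofar as the fitted relation
#     has proven stable under intervention.
predicted_after_doubling = a * (2 * 20.0) + b

print(a, b, predicted_at_30, predicted_after_doubling)
```

As the footnote stresses, step (iii) is the causally loaded one: the same equation licenses it only if the relation is stable, not merely a good summary of past observations.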
6 This has implications both for causal inference and for intervention and prediction: the more complex the functional relationship, the more difficult it is to detect and to represent accurately, and therefore the higher the risk of false prediction and inadequate intervention (Steel 2008). However, the framework presented here focuses on detecting causes rather than on using causal knowledge for prediction and intervention.
7 Woodward (2010) uses specificity to distinguish between different kinds of causes, thereby leaving room for ontological pluralism, and also allows for specificity (as well as stability) to come in degrees.
8 Since specificity-as-bijection (referring to a property of the investigated nexus between cause and effect) supports the causal hypothesis DⒸH by excluding alternative explanations of the observed effect (and ideally also alternative effects of the tested drug), we propose to model these alternatives on the same categorical level as our main hypothesis, with the same methodological arsenal for testing and confirming (or rejecting) them. It is the confirmation of D′ⒸH′ together with the rejection of D′ⒸH and DⒸH′ that makes our actual hypothesis a specific relation, thereby lending additional confirmatory support to it.
9 Precedence in time is considered so essential to causes that Russell bases his denial of their existence on the temporal symmetry of the laws of physics (Russell 1912).
10 From a more general perspective this precisely touches upon the difficulty of defining or describing an event, as discussed, e.g., by David Lewis in “Counterfactual Dependence and Time’s Arrow” (1979) and “Events” (1986).
11 The role of evidence about mechanisms of chemical substances in risk assessment has been recently analysed by Luján et al. (2016). Two issues are particularly relevant for the present purposes: 1) the questioned applicability of animal data to humans; 2) the lack of guarantee that similarity of modes of action warrants extrapolation of phenotypic effects from one chemical to another. Both issues relate to the problem of extrapolation: the former regards whether a given chemical will produce the same effect in the study and in the target population; the latter refers to whether similar chemicals produce similar effects (on a given population). In our framework the problem of extrapolation is addressed with the following in mind: I. As explained in Section 2.3, in the case of risk assessment the main concern relates to false negatives; hence any signal should be accounted for as a possible sign unveiling latent risks – “If it happened there, it can also happen here”; II. Warrant for extrapolation is also taken to come in degrees and therefore is incorporated in a probabilistic approach. This lets the degree of confidence in such warrant guide the decision at hand in combination with other relevant dimensions (as illustrated in Section 2.2).
12 Randomisation putatively has two main roles: 1) in the long run, it should allow the investigator to approach the true mean difference between treatment and control group; however, it is unclear what this true underlying population probability denotes when we are dealing not with populations of molecules, for instance, but with populations of patients undergoing medical interventions, where heterogeneity among individuals can at most allow for an aggregate average measure. Furthermore, it is obviously unethical and unfeasible to re-sample the same subjects of an experiment again and again, and even if this were possible, the subjects who were administered the drug in the first round would undergo physiological change; consequently, the successive trial population would no longer be the “same” (Worrall 2007a); 2) randomisation (together with intervention and blinding) should guarantee the internal validity of the study by severing any common cause, or common effect, between the investigated treatment and its putative effects (i.e., avoidance of confounders and (self-) selection bias). This kind of objective is supposed to justify the primary role assigned to randomised evidence by so-called evidence hierarchies, see Section 6 below.
13 This is also known as the “potential outcome approach” to causal inference.
14 David Lewis bases his formal account of causation implicitly on the concept of comparative similarity and concedes the following: “We do make judgments of comparative overall similarity – of people, for instance – by balancing off many respects of similarity and difference. Often our mutual expectations about the weighting factors are definite and accurate enough to permit communication. […] But the vagueness of over-all similarity will not be entirely resolved. Nor should it be. The vagueness of similarity does infect causation, and no correct analysis can deny it.” (cf. Lewis 1973, pp. 559–560)
15 One particular inference by analogy is that from animal studies/models to a human target population. In LaFollette and Shanks (1995), it has been argued that animal studies are only good for hypothesis discovery. We side with Baetu (2016) in thinking that animal studies are one important piece of the puzzle in predicting drug reactions.
16 The reader is referred to the very recent (Crupi and Tentori 2014), which discusses two leading Bayesian confirmation measures in detail.
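For orientation, two classic Bayesian confirmation measures – the difference measure and the log-likelihood ratio measure – can be computed from toy probabilities as follows (the numbers are our own illustrative choices, and we do not claim these are the exact measures Crupi and Tentori single out):

```python
import math

# Toy probabilities (hypothetical, for illustration only).
p_h = 0.2              # prior P(H): drug D causes harm H
p_e_given_h = 0.7      # P(E|H): probability of the evidence if H is true
p_e_given_not_h = 0.1  # P(E|not-H)

# Total probability of the evidence, and the posterior by Bayes' theorem.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

# Two classic Bayesian confirmation measures:
d = p_h_given_e - p_h                        # difference measure
l = math.log(p_e_given_h / p_e_given_not_h)  # log-likelihood ratio measure

print(round(p_h_given_e, 3), round(d, 3), round(l, 3))
```

Both measures are positive exactly when E raises the probability of H, which is the minimal adequacy condition any Bayesian confirmation measure satisfies.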
17 ¬Rep_i means that “not-consequence i is reported” rather than “consequence i is not reported”.
18 Where Rel_i and Con_i denote the parent variables of Rep_i.
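As a minimal illustration (with toy numbers of our own, not the paper’s actual parameterisation), the conditional probability table of a reporting node Rep_i given its parents Rel_i (reliability of the source) and Con_i (occurrence of consequence i) might be sketched like this:

```python
from itertools import product

# Hypothetical priors over the parent variables.
p_rel = 0.9  # P(Rel_i = true): the reporting source is reliable
p_con = 0.3  # P(Con_i = true): consequence i actually occurred

# Toy CPT for P(Rep_i = true | Rel_i, Con_i): a reliable source reports
# the consequence iff it occurred; an unreliable source reports at random.
def p_rep(rel: bool, con: bool) -> float:
    return (1.0 if con else 0.0) if rel else 0.5

# Marginal P(Rep_i = true), obtained by summing out both parents.
p_rep_marginal = sum(
    p_rep(rel, con)
    * (p_rel if rel else 1 - p_rel)
    * (p_con if con else 1 - p_con)
    for rel, con in product([True, False], repeat=2)
)
print(round(p_rep_marginal, 3))
```

The point of the factorisation is visible here: reliability and the actual occurrence of the consequence contribute separately to whether a report is observed, so evidence about either parent can be updated independently.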
19 Superscripts are suppressed in the notation, whenever no confusion arises.
20 Case reports may contribute in two main ways to harm assessment: 1) the first one(s) contribute to hypothesis generation: they function as alarm signals by identifying a previously unknown side effect; 2) following these hypothesis-generating events, the subsequent case reports contribute to “strengthen” the signals, i.e., they have a confirmatory role, analogously to comparative studies and other statistical evidence as illustrated in this paper. Other kinds of studies may of course also function as generators of hypotheses, but this role is mainly covered by case reports.
21 By ‘observational/static’ we refer to inference from observation alone, whereas by ‘interventional/dynamic’ we refer to inference from data collected in interaction with the investigated system or population. For example, this contrast becomes evident in the difference between standard probabilistic conditioning (which amounts to shifting the focus in a probabilistic model) and conditioning with Pearl’s do-operator (which amounts to transforming the probabilistic model).
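The contrast can be made concrete in a toy confounded model (all probabilities below are our own illustrative choices): observational conditioning on D = 1 shifts the distribution of a confounder Z, whereas intervening with do(D = 1) cuts the edge from Z to D and leaves Z at its prior, as in Pearl’s back-door adjustment formula.

```python
# Toy model: confounder Z influences both drug intake D and harm H,
# and D also influences H. Illustrative numbers only.
p_z = 0.5                       # P(Z = 1)
p_d_given_z = {0: 0.2, 1: 0.8}  # P(D = 1 | Z = z): Z raises intake of D
p_h_given_dz = {                # P(H = 1 | D = d, Z = z)
    (0, 0): 0.1, (0, 1): 0.3,
    (1, 0): 0.2, (1, 1): 0.4,
}

def pz(z):  # prior of the confounder
    return p_z if z else 1 - p_z

# Observational/static: P(H=1 | D=1) re-weights Z towards Z = 1,
# since taking the drug is evidence about the confounder.
num = sum(p_h_given_dz[(1, z)] * p_d_given_z[z] * pz(z) for z in (0, 1))
den = sum(p_d_given_z[z] * pz(z) for z in (0, 1))
p_h_obs = num / den

# Interventional/dynamic: P(H=1 | do(D=1)) sums over Z at its prior,
# because the intervention severs the edge Z -> D.
p_h_do = sum(p_h_given_dz[(1, z)] * pz(z) for z in (0, 1))

print(round(p_h_obs, 3), round(p_h_do, 3))
```

The two numbers differ precisely because conditioning merely shifts focus within the model, while the do-operator transforms it.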
22 The latter case makes the conceptual divide even more obvious: If one knows the hypothesis to be true, learning that there is no difference-making would not change one’s belief in a positive dose-response. In this case the causal relation under investigation would then be explained as holistic causation. We are thankful to an anonymous reviewer for pointing this case out to us.
23 We are thankful to an anonymous reviewer for hinting at sources of potential disagreement about the role of mechanistic knowledge and thus helping us elucidate our point here. For a discussion of different network structures and their use for hypothesis confirmation see, e.g., Wheeler and Scheines (2013).
26 The rationale for this ranking is provided by methodological-foundational considerations mainly developed within standard statistics, and it follows a kind of hypothetico-deductive approach to scientific inference (see also our comments on Experiment above, Page 19).
27 A somewhat unwanted consequence of this “take the best” approach is that it has become commonplace to assume an uncommitted attitude towards observed associations unless they are “proved” by gold-standard evidence (see the still ongoing debate on the possible causal association between paracetamol and asthma: Shaheen et al. 2000; Eneli et al. 2005; Shaheen et al. 2008; Henderson and Shaheen 2013; Allmers et al. 2009; McBride 2011; Heintze and Petersen 2013; Martinez-Gimeno and García-Marcos 2013).
28 This also complies with the precautionary principle in risk assessment and with how decisions should be made in health settings (see Section 2.2).
30 Both the Kentians and Cartwright construe the term “mechanism” in a slightly different fashion than we did here. For them, a mechanism need not be described at the (sub-)molecular level. This detail is not relevant for our current discussion.
31 Osimani and Landes (Forthcoming) investigate the various concepts of reliability involved in such considerations.
32 Moreover, our approach addresses explicitly the issue of external validity by formally incorporating reasoning by analogy (see Section 3.2.6).
Abernethy, D., & Bai, G. (2013). Systems pharmacology to predict drug toxicity: integration across levels of biological organization. Annual Review of Pharmacology and Toxicology, 53, 451–73. doi:10.1146/annurev-pharmtox-011112-140248.
Allmers, H., Skudlik, C., & John, S. M. (2009). Acetaminophen use: a risk for asthma? Current Allergy and Asthma Reports, 9(2), 164–7. doi:10.1007/s11882-009-0024-3.
Anjum, R. L., & Mumford, S. (2012). Causal dispositionalism. In Bird, A., Ellis, B., & Sankey, H. (Eds.), Properties, Powers and Structure, chap. 7 (pp. 101–118). Routledge.
Baetu, T. M. (2016). The ‘Big picture’: the problem of extrapolation in basic research. British Journal for the Philosophy of Science, 67(4), 941–964. doi:10.1093/bjps/axv018.
Bartha, P. (2013). Analogy and analogical reasoning. In Zalta, E.N. (Ed.), The Stanford encyclopedia of philosophy, fall 2013 edn.
Bartha, P. F. A. (2010). By parallel reasoning: the construction and evaluation of analogical arguments. Oxford University Press.
Bes-Rastrollo, M., Schulze, M. B., Ruiz-Canela, M., & Martinez-Gonzalez, M. A. (2013). Financial conflicts of interest and reporting bias regarding the association between sugar-sweetened beverages and weight gain: a systematic review of systematic reviews. PLOS Medicine, 10(12), 1–9. doi:10.1371/journal.pmed.1001578.
BonJour, L. (2010). Epistemology. Classic problems and contemporary responses. Rowman & Littlefield Publishers.
Bovens, L., & Hartmann, S. (2003). Bayesian epistemology. Oxford University Press.
Britton, O. J., Bueno-Orovio, A., Van Ammel, K., Lu, H. R., Towart, R., Gallacher, D. J., & Rodriguez, B. (2013). Experimentally calibrated population of models predicts and explains intersubject variability in cardiac cellular electrophysiology. Proceedings of the National Academy of Sciences, 110(23), E2098–E2105. doi:10.1073/pnas.1304382110.
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafo, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376. doi:10.1038/nrn3475.
Carnap, R. (1947). On the application of inductive logic. Philosophy and Phenomenological Research, 8(1), 133–148. http://www.jstor.org/stable/2102920.
Carné, X., & Cruz, N. (2005). Ten lessons to be learned from the withdrawal of Vioxx. European Journal of Epidemiology, 20 (2), 127–129. doi:10.1007/s10654-004-6856-1.
Cartwright, N. (2007a). Are RCTs the Gold Standard? Biosocieties, 2, 11–20. doi:10.1017/S1745855207005029.
Cartwright, N. (2007b). Causal powers: what are they? Why do we need them? What can be done with them and what cannot? Tech. Rep 04/07. http://www.lse.ac.uk/CPNSS/research/concludedResearchProjects/ContingencyDissentInScience/DP/CausalPowersMonographCartwrightPrint.
Cartwright, N. (2008). Evidence-based policy: what’s to be done about relevance? Philosophical Studies, 143(1), 127–136. doi:10.1007/s11098-008-9311-4.
Cartwright, N., & Stegenga, J. (2011). A theory of evidence for Evidence-Based policy. In Dawid, P., Twinning, W., & Vasilaki, M. (Eds.), Evidence, Inference and Enquiry, chap. 11 (pp. 291–322). OUP.
Chan, A. W., & Altman, D. G. (2005). Epidemiology and reporting of randomised trials published in PubMed journals. The Lancet, 365(9465), 1159–1162. doi:10.1016/S0140-6736(05)71879-1.
Clarke, B., Leuridan, B., & Williamson, J. (2014). Modelling mechanisms with causal cycles. Synthese, 191(8), 1651–1681. doi:10.1007/s11229-013-0360-7.
Cohen, M.P. (2016). On three measures of explanatory power with axiomatic representations. British Journal for the Philosophy of Science, 67(4), 1077–1089. doi:10.1093/bjps/axv017. Early view.
Craver, C. (2007). Explaining the brain: mechanisms and the mosaic unity of neuroscience. Oxford: Clarendon Press.
Crupi, V., Chater, N., & Tentori, K. (2013). New axioms for probability and likelihood ratio measures. British Journal for the Philosophy of Science, 64(1), 189–204. doi:10.1093/bjps/axs018.
Crupi, V. C., & Tentori, K. (2014). State of the field: measuring information and confirmation. Studies in History and Philosophy of Science Part A, 47, 81–90. doi:10.1016/j.shpsa.2014.05.002.
Dardashti, R., Thébault, K., & Winsberg, E. (2016). Confirmation via analogue simulation: what dumb holes can tell us about gravity. British Journal for the Philosophy of Science. doi:10.1093/bjps/axv010. Forthcoming.
Darden, L. (2006). Reasoning in biological discoveries: essays on mechanisms, interfield relations, and anomaly resolution. New York: Cambridge University Press.
Darwiche, A. (2009). Modeling and reasoning with Bayesian networks. Cambridge University Press.
Dawid, R., Hartmann, S., & Sprenger, J. (2015). The no alternatives argument. British Journal for the Philosophy of Science, 66 (1), 213–234. doi:10.1093/bjps/axt045.
Dietrich, F., & Moretti, L. (2005). On coherent sets and the transmission of confirmation. Philosophy of Science, 72(3), 403–424. doi:10.1086/498471.
Doll, R., & Peto, R. (1980). Randomised controlled trials and retrospective controls. British Medical Journal, 280, 44. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1600504/.
Dunn, A. G., Arachi, D., Hudgins, J., Tsafnat, G., Coiera, E., & Bourgeois, F. T. (2014). Financial conflicts of interest and conclusions about neuraminidase inhibitors for influenza. Annals of Internal Medicine, 161(7), 513–518. doi:10.7326/M14-0933.
Edwards, W., Lindman, H., & Savage, L. J. (1963). Bayesian statistical inference for psychological research. Psychological Review, 70(3), 193–242. doi:10.1037/h0044139.
Eneli, I., Katayoun, S., Camargo, C., & Barr, G.R. (2005). Acetaminophen and the risk of asthma. The epidemiologic and the pathophysiologic evidence. CHEST, 127(2), 604–612. doi:10.1006/aama.1996.0501.
Fitelson, B. (2003). A probabilistic theory of coherence. Analysis, 63(279), 194–199. doi:10.1111/1467-8284.00420.
Food Drug Administration (2009). Drug induced liver injury: premarketing clinical evaluation - guidance for industry. http://www.fda.gov/downloads/Drugs/Guidance/UCM174090.pdf.
Freedman, D. (1997). From association to causation via regression. Advances in Applied Mathematics, 18(1), 59–110. doi:10.1006/aama.1996.0501.
Glasziou, P., Chalmers, I., Rawlins, M., & McCulloch, P. (2007). When are randomised trials unnecessary? Picking signal from noise. British Medical Journal, 7589, 349–351. doi:10.1136/bmj.39070.527986.68.
Guyatt, G., et al. (1992). Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA, 268(17), 2420–2425. doi:10.1001/jama.1992.03490170092032.
Hampson, L. V., Whitehead, J., Eleftheriou, D., & Brogan, P. (2014). Bayesian methods for the design and interpretation of clinical trials in very rare diseases. Statistics in Medicine, 33(24), 4186–4201. doi:10.1002/sim.6225.
Heintze, K., & Petersen, K. (2013). The case of drug causation of childhood asthma: antibiotics and paracetamol. European Journal of Clinical Pharmacology, 69 (6), 1197–1209. doi:10.1007/s00228-012-1463-7.
Hempel, C. G. (1968). Maximal specificity and lawlikeness in probabilistic explanation. Philosophy of Science, 35(2), 116–133. http://www.journals.uchicago.edu/doi/abs/10.1086/288197.
Henderson, A. J., & Shaheen, S. O. (2013). Acetaminophen and asthma. Paediatric Respiratory Review, 14(1), 9–15. doi:10.1016/j.prrv.2012.04.004.
Herxheimer, A. (2012). Pharmacovigilance on the turn? Adverse reactions methods in 2012. British Journal of General Practice, 62 (601), 400–401. doi:10.3399/bjgp12X653453.
Hesse, M. B. (1952). Operational definition and analogy in physical theories. British Journal for the Philosophy of Science, 2(8), 281–294. http://www.jstor.org/stable/686017.
Hesse, M. B. (1959). On defining analogy. Proceedings of the Aristotelian Society, 60, 79–100. http://www.jstor.org/stable/4544623.
Hesse, M. B. (1964). Analogy and confirmation theory. Philosophy of Science, 31(4), 319–327. http://www.jstor.org/stable/186262.
Hill, A. B. (1965). The environment and disease: association or causation? Proceedings of the Royal Society of Medicine, 58(5), 295–300.
Holland, P. W. (1986). Statistics and causal inference. Journal of the American statistical Association, 81(396), 945–960. doi:10.1080/01621459.1986.10478354.
Holman, B., & Bruner, J.P. (2015). The problem of intransigently biased agents. Philosophy of Science, 82(5), 956–968. doi:10.1086/683344.
Horton, R. (2004). Vioxx, the implosion of Merck, and aftershocks at the FDA. The Lancet, 364(9450), 1995–1996. doi:10.1016/S0140-6736(04)17523-5.
Howick, J. (2011). Exposing the vanities - and a qualified defense - of mechanistic reasoning in health care decision making. Philosophy of Science, 78(5), 926–940. doi:10.1086/662561.
Howson, C., & Urbach, P. (2006). Scientific Reasoning, 3 edn. Open Court.
Hume, D. (1748). An enquiry concerning human understanding. The University of Adelaide Library 2004 (derived from the Harvard Classics Volume 37, 1910 P.F Collier & Son.) http://ebooks.adelaide.edu.au/h/hume/david/h92e/.
Ioannidis, J. P. A. (2016). Exposure-wide epidemiology: revisiting Bradford Hill. Statistics in Medicine, 35(11), 1749–1762. doi:10.1002/sim.6825.
Johnson, S. R., Tomlinson, G. A., Hawker, G. A., Granton, J. T., & Feldman, B. M. (2010). Methods to elicit beliefs for Bayesian priors: a systematic review. Journal of Clinical Epidemiology, 63(4), 355–369. doi:10.1016/j.jclinepi.2009.06.003.
Jüni, P., Nartey, L., Reichenbach, S., Sterchi, R., Dieppe, P. A., & Egger, M. (2004). Risk of cardiovascular events and rofecoxib: cumulative meta-analysis. The Lancet, 364(9450), 2021–2029. doi:10.1016/S0140-6736(04)17514-4.
Kerry, R., Eriksen, T. E., Lie, S. A. N., Mumford, S. D., & Anjum, R. L. (2012). Causation and evidence-based practice: an ontological review. Journal of Evaluation in Clinical Practice, 18(5), 1006–1012. doi:10.1111/j.1365-2753.2012.01908.x.
Kment, B. (2010). Causation: determination and difference-making. Noûs, 44 (1), 80–111. doi:10.1111/j.1468-0068.2009.00732.x. Wiley Online Library.
Krumholz, H. M., Ross, J. S., Presler, A. H., & Egilman, D. S. (2007). What have we learnt from Vioxx? British Medical Journal, 334(7585), 120–123. doi:10.1136/bmj.39070.527986.68.
Kuorikoski, J., Lehtinen, A., & Marchionni, C. (2010). Economic modelling as robustness analysis. The British Journal for the Philosophy of Science, 61(3), 541–567. http://www.jstor.org/stable/40981302.
La Caze, A. (2009). Evidence-based medicine must be. Journal of Medicine and Philosophy, 34(5), 509–527. doi:10.1093/jmp/jhp034.
La Caze, A., Djulbegovic, B., & Senn, S. (2012). What does randomisation achieve? Evidence-Based Medicine, 17(1), 1–2. doi:10.1136/ebm.2011.100061.
LaFollette, H., & Shanks, N. (1995). Two models of models in biomedical research. Philosophical Quarterly, 45(179), 141–160. http://www.jstor.org/stable/2220412.
Lamal, P. (1990). On the importance of replication. Journal of Social Behavior and Personality, 5(4), 31–35.
Lewis, D. (1973). Causation. Journal of Philosophy, 70(17), 556–567. http://www.jstor.org/stable/2025310.
Lewis, D. (1986). Causal explanation. In Philosophical Papers, Vol. II, chap. 3 (pp. 214–240). OUP.
Lewis, D. (2000). Causation as influence. Journal of Philosophy, 97(4), 182–197. http://www.jstor.org/stable/2678389.
Lipton, P. (2003). Inference to the best explanation. Routledge.
Luján, J. L., Todt, O., & Bengoetxea, J. B. (2016). Mechanistic information as evidence in decision-oriented science. Journal for General Philosophy of Science, 47 (2), 293–306. doi:10.1007/s10838-015-9306-8.
Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67(1), 1–25. http://www.jstor.org/stable/188611.
Martinez-Gimeno, A., & García-Marcos, L. (2013). The association between acetaminophen and asthma: should its pediatric use be banned? Expert Review of Respiratory Medicine, 7(2), 113–122. doi:10.1586/ers.13.8.
McBride, J. T. (2011). The association of acetaminophen and asthma prevalence and severity. Pediatrics, 128(6), 1–5. doi:10.1186/1745-6215-11-37.
McGrew, T. (2003). Confirmation, heuristics, and explanatory reasoning. British Journal for the Philosophy of Science, 54(4), 553–567. doi:10.1093/bjps/54.4.553.
Meehl, P.E. (1990). Appraising and amending theories: the strategy of lakatosian defense and two principles that warrant it. Psychological Inquiry, 1(2), 108–141. doi:10.1207/s15327965pli0102_1.
Mill, J. S. (1884). A system of logic, ratiocinative and inductive: being a connected view of the principles of evidence and the methods of scientific investigation. Longmans, Green and Company.
Moretti, L. (2007). Ways in which coherence is confirmation conducive. Synthese, 157(3), 309–319. doi:10.1007/s11229-006-9057-5.
Mumford S., & Anjum, R. L. (2011). Getting causes from powers. Oxford: Oxford University Press.
Neapolitan, R. E. (2003). Learning Bayesian networks. Pearson.
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. doi:10.1126/science.aac4716.
Osimani, B. (2007). Probabilistic information and decision making in the health context: the package leaflet as basis for informed consent. Doctoral Thesis, 1 edn Università della Svizzera Italiana.
Osimani, B. (2013). The precautionary principle in the pharmaceutical domain: a philosophical enquiry into probabilistic reasoning and risk aversion. Health, Risk & Society, 15 (2), 123–143. doi:10.1080/13698575.2013.771736.
Osimani, B. (2014a). Causing something to be one way rather than another. Genetic information, causal specificity and the relevance of linear order. Kybernetes, 43(6), 865–881. doi:10.1108/K-07-2013-0149.
Osimani, B. (2014b). Hunting side effects and explaining them: should we reverse evidence hierarchies upside down? Topoi, 33 (2), 295–312. doi:10.1007/s11245-013-9194-7.
Osimani, B., & Landes, J. (Forthcoming). Exact replication or varied evidence? The variety of evidence thesis and its methodological implications in medical research.
Osimani, B., & Mignini, F. (2015). Causal assessment of pharmaceutical treatments: why standards of evidence should not be the same for benefits and harms? Drug Safety, 38(1), 1–11. doi:10.1007/s40264-014-0249-5.
Osimani, B., Russo, F., & Williamson, J. (2011). Scientific evidence and the law: an objective Bayesian formalisation of the precautionary principle in pharmaceutical regulation. Journal of Philosophy, Science and Law, 11. http://jpsl.org/files/9913/6816/1730/Bayesian-Formalization.pdf.
Papineau, D. (1993). The virtues of randomization. British Journal for the Philosophy of Science, 45(2), 437–450. doi:10.1093/bjps/45.2.437.
Pearl, J. (2000). Causality: models, reasoning, and inference, 1st edn. Cambridge University Press.
Platt, J. R. (1964). Strong inference. Science, 146(3642), 347–353. http://science.sciencemag.org/content/146/3642/347.
Poellinger, R. (2017). Analogy-based inference patterns in pharmacological research, Forthcoming.
Poellinger, R., & Beebe, C. (2017). Bayesian confirmation from analog models, Forthcoming.
Price, K. L., Amy Xia, H., Lakshminarayanan, M., Madigan, D., Manner, D., Scott, J., Stamey, J. D., & Thompson, L. (2014). Bayesian methods for design and analysis of safety trials. Pharmaceutical Statistics, 13 (1), 13–24. doi:10.1002/pst.1586.
Revicki, D. A., & Frank, L. (1999). Pharmacoeconomic evaluation in the real world. PharmacoEconomics, 15(5), 423–434. doi:10.2165/00019053-199915050-00001.
Roush, S. (2005). Tracking truth: knowledge, evidence, and science. Oxford University Press.
Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5), 688–701. doi:10.1037/h0037350.
Rubin, D. B. (2011). Causal inference using potential outcomes. Journal of the American Statistical Association, 81(396), 945–960. doi:10.1198/016214504000001880.
Russell, B. (1912). On the notion of cause. Proceedings of the Aristotelian Society, 13, 1–26. http://www.jstor.org/stable/4543833.
Russo, F., & Williamson, J. (2007). Interpreting causality in the health sciences. International Studies in the Philosophy of Science, 21(2), 157–170. doi:10.1080/02698590701498084.
Sackett, D. L., Rosenberg, W. M., Gray, J. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: what it is and what it isn’t. BMJ, 312(7023), 71–72. doi:10.1136/bmj.312.7023.71.
Salmon, W. (1984). Scientific explanation and the causal structure of the world. Princeton: Princeton University Press.
Salmon, W. (1997). Causality and explanation: a reply to two critiques. Philosophy of Science, 64(3), 461–477. http://www.jstor.org/stable/188320.
Schum, D. (2011). Classifying forms and combinations of evidence: Necessary in a science of evidence. In Dawid, P., Twinning, W., & Vasilaki, M. (Eds.), Evidence, inference and enquiry, chap. 2. OUP (pp. 11–36).
Senn, S. (2007). Statistical Issues in Drug Development. Wiley.
Shaheen, S., Potts, J., Gnatiuc, L., Makowska, J., Kowalski, M. L., Joos, G., van Zele, T., van Durme, Y., De Rudder, I., Wöhrl, S., Godnic-Cvar, J., Skadhauge, L., Thomsen, G., Zuberbier, T., Bergmann, K. C., Heinzerling, L., Gjomarkaj, M., Bruno, A., Pace, E., Bonini, S., Fokkens, W., Weersink, E. J. M., Loureiro, C., Todo-Bom, A., Villanueva, C. M., Sanjuas, C., Zock, J. P., Janson, C., & Burney, P. (2008). The relation between paracetamol use and asthma: a GA2LEN European case-control study. European Respiratory Journal, 32(5), 1231–1236. doi:10.1183/09031936.00039208.
Shaheen, S., Sterne, J., Songhurst, C., & Burney, P. (2000). Frequent paracetamol use and asthma in adults. Thorax, 55(4), 266–270. doi:10.1136/thorax.55.4.266.
Spirtes, P., Glymour, C., & Scheines, R. (2000). Causation, prediction, and search. Adaptive computation and machine learning. MIT Press.
Steel, D. (2008). Across the boundaries. Extrapolation in biology and social sciences. Oxford University Press.
Stegenga, J. (2011). Is meta-analysis the platinum standard of evidence? Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 42(4), 497–507. doi:10.1016/j.shpsc.2011.07.003.
Stegenga, J. (2014). Down with the hierarchies. Topoi, 33(2), 313–322. doi:10.1007/s11245-013-9189-4.
Stegenga, J. (2015). Measuring effectiveness. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 54, 62–71. doi:10.1016/j.shpsc.2015.06.003.
Straus, S. E., & McAlister, F. A. (2000). Evidence-based medicine: a commentary on common criticisms. Canadian Medical Association Journal, 163(7), 837–841.
Suppes, P. (1970). A probabilistic theory of causality. North-Holland Pub. Co.
Teira, D. (2011). Frequentist versus Bayesian clinical trials. In Gifford, F. (Ed.) Handbook of Philosophy of Medicine (pp. 255–298). Wiley.
Teira, D., & Reiss, J. (2013). Causality, impartiality and evidence-based policy. In Mechanism and Causality in Biology and Economics (pp. 207–224). Springer.
Tillman, R. E., & Eberhardt, F. (2014). Learning causal structure from multiple datasets with similar variable sets. Behaviormetrika, 41(1), 41–64. doi:10.2333/bhmk.41.41.
Unruh, W. G. (2008). Dumb holes: analogues for black holes. Philosophical Transactions of The Royal Society A, 366, 2905–2913. doi:10.1098/rsta.2008.0062.
Upshur, R. (2005). Looking for rules in a world of exceptions: reflections on evidence-based practice. Perspectives in Biology and Medicine, 48(4), 477–489. doi:10.1353/pbm.2005.0098.
Vandenbroucke, J. P., Broadbent, A., & Pearce, N. (2016). Causality and causal inference in epidemiology: the need for a pluralistic approach. International Journal of Epidemiology. doi:10.1093/ije/dyv341.
Waters, K. C. (2007). Causes that make a difference. Journal of Philosophy, 104(11), 551–579. http://www.jstor.org/stable/20620058.
Weatherall, S., Ioannides, S., Braithwaite, I., & Beasley, R. (2014). The association between paracetamol use and asthma: causation or coincidence? Clinical & Experimental Allergy, 45, 108–113. doi:10.1111/cea.12410.
Weber, M. (2006). The central dogma as a thesis of causal specificity. History and Philosophy of the Life Sciences, 28(4), 595–609. http://www.jstor.org/stable/23334188.
Weed, D. L. (2005). Weight of evidence: a review of concept and methods. Risk Analysis, 25(6), 1545–1557. doi:10.1111/j.1539-6924.2005.00699.x.
Weisberg, J. (2015). You’ve come a long way, Bayesians. Journal of Philosophical Logic, 44(6), 817–834. doi:10.1007/s10992-015-9363-9.
Wheeler, G., & Scheines, R. (2013). Coherence and confirmation through causation. Mind, 122(485), 135–170. doi:10.1093/mind/fzt019.
Wimsatt, W. C. (1981). Robustness, reliability and overdetermination. In Brewer, M., & Collins, B. (Eds.), Scientific inquiry and the social sciences: festschrift for Donald Campbell (pp. 125–163). Jossey-Bass Publishers.
Wimsatt, W. C. (2012). Robustness, reliability, and overdetermination (1981). In Soler, L., Trizio, E., Nickles, T., & Wimsatt, W. (Eds.), Characterizing the robustness of science (Boston Studies in the Philosophy of Science, Vol. 292, pp. 61–87). Springer. doi:10.1007/978-94-007-2759-5_2.
Woodward, J. (2003). Making things happen: a theory of causal explanation (Oxford Studies in the Philosophy of Science). Oxford University Press.
Woodward, J. (2006). Some varieties of robustness. Journal of Economic Methodology, 13(2), 219–240. doi:10.1080/13501780600733376.
Woodward, J. (2010). Causation in biology: stability, specificity and the choice of levels of explanation. Biology and Philosophy, 25(3), 287–318. doi:10.1007/s10539-010-9200-z.
Worrall, J. (2007a). Evidence in medicine and evidence-based medicine. Philosophy Compass, 2(6), 981–1022. doi:10.1111/j.1747-9991.2007.00106.x.
Worrall, J. (2007b). Why there’s no cause to randomize. British Journal for the Philosophy of Science, 58(3), 451–488. doi:10.1093/bjps/axm024.
Worrall, J. (2010). Do we need some large, simple randomized trials in medicine? In Suárez, M., Dorato, M., & Rédei, M. (Eds.), EPSA Philosophical issues in science: Launch of the European Philosophy of Science Association (pp. 289–301). Springer. doi:10.1007/978-90-481-3252-2_27.
Xie, L., Li, J., Xie, L., & Bourne, P. (2009). Drug discovery using chemical systems biology: identification of the protein-ligand binding network to explain the side effects of CETP inhibitors. PLoS Computational Biology, 5(5), 1–12. doi:10.1371/journal.pcbi.1000387.
This paper was presented at various workshops and conferences in Munich, New Brunswick, Sheffield, Helsinki, Durham, Amsterdam, and Ferrara. We greatly profited from the comments and suggestions made by the audiences; in particular we wish to thank Rani Lill Anjum, Timo Bolt, Giovanni Boniolo, Branden Fitelson, Bennett Holman, Phyllis Illari, Mike Kelly, Ulrich Mansmann, Carlo Martini, Julian Reiss, Stephen Senn, Beth Shaw, Jacob Stegenga, and Veronica Vieland. We also thank the focus group members of the ERC project “Philosophy of Pharmacology: Safety, Statistical Standards, and Evidence Amalgamation”, to whom we owe a considerable improvement of the paper’s argumentation: Jeffrey Aronson, Lorenzo Casini, Brendan Clarke, Vincenzo Crupi, Sebastian Lutz, Federica Russo, Glenn Shafer, Jan Sprenger, David Teira, and Jon Williamson. We are extremely grateful to our colleagues at the Munich Center for Mathematical Philosophy, who helped us clarify the objectives and scope of our research and suggested possible paths of development; in particular we wish to thank Seamus Bradley, Richard Dawid, Samuel C. Fletcher, Stephan Hartmann, Alexander Reutlinger, and Gregory Wheeler. Finally, we thank two anonymous reviewers for their comments, which significantly helped us refine some important assumptions in our theoretical framework. Of course, any inaccuracies or errors in the text remain our own responsibility.
This work is supported by the European Research Council (grant 639276) and the Munich Center for Mathematical Philosophy (MCMP).
Landes, J., Osimani, B. & Poellinger, R. Epistemology of causal inference in pharmacology. Euro Jnl Phil Sci 8, 3–49 (2018). https://doi.org/10.1007/s13194-017-0169-1
Keywords
- Bayesian epistemology
- Scientific inference
- Safety assessment in pharmacology
- Bradford Hill criteria