
Epistemic Gains and Epistemic Games: Reliability and Higher Order Evidence in Medicine and Pharmacology

In: Uncertainty in Pharmacology

Part of the book series: Boston Studies in the Philosophy and History of Science (BSPS, volume 338)

Abstract

In this paper I analyse the dissent around evidence standards in medicine and pharmacology as the result of two distinct ways of addressing epistemic losses in our game with nature and with the scientific ecosystem: an “elitist” and a “pluralist” approach. The former focuses on reliability as the minimisation of random and systematic error and is grounded in a categorical approach to causal assessment, whereas the latter is more concerned with the high context-sensitivity of causation in medicine and in the soft sciences in general, and favours probabilistic approaches to scientific inference as better equipped for the defeasibility of causal inference in such domains. I then present a system for probabilistic causal assessment from heterogeneous evidence that does justice to concerns from both positions, while also incorporating “higher order evidence” (evidence/information about the evidence itself) into hypothesis confirmation.
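
A minimal illustrative sketch of this last idea, not the framework presented in the chapter (for which see De Pretis et al. 2019): a single positive evidence report confirms a causal hypothesis H to a degree that depends on higher order evidence about the report's reliability. The two-state reliability model and all parameter values below are assumptions made purely for illustration.

```python
# Toy Bayesian model: confirmation of H by a positive report E, modulated
# by higher order evidence about the report's reliability. Illustrative
# sketch only; not the E-Synthesis framework of De Pretis et al. (2019).

def posterior_given_positive_report(prior_h, reliability, blind_rate=0.5):
    """P(H | E) when the reporting process is reliable with probability
    `reliability` (in which case it tracks the truth about H) and is
    otherwise 'blind', reporting positive with probability `blind_rate`
    irrespective of whether H holds."""
    p_e_given_h = reliability + (1 - reliability) * blind_rate
    p_e_given_not_h = (1 - reliability) * blind_rate
    joint_h = prior_h * p_e_given_h
    return joint_h / (joint_h + (1 - prior_h) * p_e_given_not_h)

if __name__ == "__main__":
    for rel in (0.9, 0.5, 0.1):
        post = posterior_given_positive_report(prior_h=0.3, reliability=rel)
        print(f"reliability = {rel:.1f} -> P(H | positive report) = {post:.3f}")
    # The same first-order datum confirms H strongly, weakly, or hardly at
    # all, depending on the higher order evidence about the report itself.
```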

Funding by the European Research Council through grant 639276 (PhilPharm) is gratefully acknowledged.


Notes

  1.

    This is possible also because the topology of our framework reveals that these conflicts relate to different levels or dimensions of causal inference and therefore can be deflated.

  2.

    These two approaches intersect with other epistemic stances, such as empiricism vs. methodological pluralism, as well as with various programs for causal inference from statistical data; however, the above-mentioned dichotomy can be analyzed relatively independently of these perspectives.

  3.

    Also, a parallel issue concerns the strong preference accorded to the so-called Potential Outcome Approach (POA) vs. more pluralistic views of causal inference and of causality itself (see Vandenbroucke et al. 2016).

  4.

    Not to mention the fact that, traditionally, epistemology has considered varied evidence as more confirmatory than repetitive data (see Osimani and Landes (2020) for a discussion of the “Variety of Evidence Thesis”).

  5.

    They cite the example of the drug torcetrapib as a case where even perfect knowledge of its functioning mechanisms did not enable its producers to avoid excess death rates in the treatment arm of the phase III trials, and hence the denial of approval. However, this failure was in fact due to a lack of knowledge about (the mechanisms leading to) the side effect, for which knowledge of the mechanisms of the intended effect is not necessarily helpful.

  6.

    In the statistical counterparts of these games (frequentist vs. Bayesian statistics), probability as a measure of uncertainty is attached to the (long-run) probability of error in one game, whereas in the other it is attached to the hypothesis itself (a minimal numerical sketch follows these notes).

  7.

    See also Podolsky and Powers (2015) for a critique of a recent shift in FDA evidential standards, from what I call an “elitist” to a “pluralist” view. The considerations underpinning such critiques are cast exactly in terms of the costs and benefits of each regulatory approach. Analogously, Osimani (2014) and Osimani and Mignini (2015) insist that evidential standards should not be the same for assessing efficacy and harm, on exactly these epistemic and strategic grounds.

  8.

    The whole enterprise of the EBM paradigm can indeed be seen as the tireless effort to systematize a set of techniques to track and possibly minimize random and systematic error.

  9.

    However, the reason for ranking data below the phenotypic level lowest is a matter of external rather than internal validity (see Howick 2011).

  10.

    In the causal search literature (Pearl 2000; Spirtes et al. 2000), causal sufficiency is a fundamental assumption which grounds the algorithmic search, and which undermines it if it fails to hold.

  11.

    These criticisms have kindled a series of defenses of RCTs on various grounds: see Senn (2003), Papineau (1994), LaCaze et al. (2012), Teira (2011), Teira and Reiss (2013), and Osimani (2014) for an overview of the debate.

  12.

    This stance can be made more general by drawing on the “Variety of Evidence Thesis” (VET), which, stated in its most general form, claims that, ceteris paribus, the more varied the evidence, the higher the confirmatory support provided to the hypothesis which explains it. Taken at face value, this claim seems to favour the methodologically pluralist approach over the “evidence elitist” view adopted in the EBM paradigm. See also Osimani and Landes (2020).

  13.

    Since “causal strength” is generally measured by the “effect size”, that is, the proportion of subjects in the sample who show the effect E in the treatment vs. the control group (for instance through relative risk or odds ratio measures), causal strength can result either from a relative context-insensitivity of the treatment under investigation (or better, from the fact that it contributes to the effect in many causal sets), or from its intrinsic force (as measured, for instance, by a steep dose-response curve). This intrinsic ambiguity prompted the substitution of the Bradford Hill indicator “strength of the association” with three different indicators: probabilistic dependence, dose-response, and rate of growth (De Pretis et al. 2019, Section 3). See Sect. 15.4 below, and the worked example following these notes.

  14.

    The importance of predicting the variability of the effect as a function of the joint interaction of possible co-factors hence casts a new light on sources of evidence such as case reports and case series, which standard hierarchies rank low because they are not “controlled” and hence cannot provide a strong warrant of internal validity, but whose specific epistemic import is nonetheless very high, in that they can provide us with valuable information about the various scenarios in which a given treatment may (or may not) induce its effect in different degrees. So, very detailed case reports facilitate inference about co-factors (prognostic factors and mediators) possibly influencing whether, and to what extent, the effect occurs in a specific population. “In silico” evidence comprises a huge class of methodologies which can be broadly subdivided into systems biology approaches to computational modelling and simulation, and machine learning techniques for knowledge extraction or pattern recognition.

  15.

    Teira analyses, for instance, the role played by concerns about the impartiality of research in the historical establishment of randomised controlled trials as the gold standard for drug approval.

  16.

    This is also in analogy with Bogen and Woodward’s distinction between data and phenomena (Bogen and Woodward 1988).

  17.

    These are derived from the epidemiological/causal literature (in particular from the Bradford Hill guidelines); see De Pretis et al. (2019).

  18.

    The framework also allows modeling and simulation studies to play a relevant confirmatory role, especially with regard to the dynamics underpinning the phenotypic causal effect. See also Osimani and Poellinger (2020).

  19.

    Please refer to De Pretis et al. (2019) for basics and details of the framework.

  20.

    This directly derives from the potential outcome approach underpinning RCT methodology. See Holland et al. (1985), Rubin (2005), and Vandenbroucke et al. (2016) for a critical appraisal of this approach.

  21.

    A somewhat unwanted consequence of this “take the best” approach is that it has become commonplace to assume an uncommitted attitude towards observed associations unless they are “proved” by gold standard evidence (see the still ongoing debate on the possible causal association between paracetamol and asthma: Osimani 2014).

  22.

    This also complies with the precautionary principle in risk assessment and with how decisions should be made in health settings: Osimani (2007, 2013) and Osimani et al. (2011).

  23.

    Along the same lines, modular conceptualizations of causes, such as those implied in the causal graph methodology developed by Pearl (2000) and by Glymour, Spirtes, and colleagues (Spirtes et al. 2000; see also Woodward 2003), have come under attack for failing to recognize that causes may be holistic and may therefore not be adequately captured by a difference-making account.

  24.

    This responds to concerns expressed, among others, by Cartwright (2007c), Mumford and Anjum (2011), Anjum and Mumford (2012), and Kerry et al. (2012).

  25.

    Osimani and Landes (2020) investigate the various concepts of reliability involved in such considerations.
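
A minimal numerical sketch of the contrast drawn in note 6, with purely hypothetical numbers: for one and the same binomial datum, the frequentist quantity attaches probability to a long-run error rate under the null hypothesis, whereas the Bayesian quantity attaches probability to the hypothesis itself (here against a single, arbitrarily chosen alternative).

```python
from math import comb

# Hypothetical datum: 8 successes in 10 trials; chance hypothesis H0: p = 0.5.
n, k = 10, 8

# Frequentist game: probability, in the long run under H0, of data at least
# this extreme (one-sided p-value, P(X >= 8 | p = 0.5)).
p_value = sum(comb(n, x) * 0.5**n for x in range(k, n + 1))

# Bayesian game: probability attached to the hypothesis itself, comparing
# H0: p = 0.5 with an illustrative alternative H1: p = 0.8 at equal priors.
def binom_likelihood(p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

post_h0 = binom_likelihood(0.5) / (binom_likelihood(0.5) + binom_likelihood(0.8))

print(f"one-sided p-value under H0:  {p_value:.3f}")   # ~0.055 (long-run error rate)
print(f"posterior probability of H0: {post_h0:.3f}")   # ~0.127 (probability of H0)
```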

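A worked example of the effect-size measures mentioned in note 13, using hypothetical 2×2 counts (not data from any actual study): both the relative risk and the odds ratio summarise the difference the treatment makes in the sample, but neither, by itself, distinguishes relative context-insensitivity from intrinsic force.

```python
# Hypothetical counts: 30/100 subjects show effect E under treatment,
# 10/100 under control.
treated_with_e, treated_n = 30, 100
control_with_e, control_n = 10, 100

risk_treated = treated_with_e / treated_n        # 0.30
risk_control = control_with_e / control_n        # 0.10

relative_risk = risk_treated / risk_control                      # 3.00
odds_ratio = (risk_treated / (1 - risk_treated)) / (
    risk_control / (1 - risk_control))                           # ~3.86

print(f"relative risk: {relative_risk:.2f}")
print(f"odds ratio:    {odds_ratio:.2f}")
```
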
References

  • Anjum, R. L., & Mumford, S. (2012). Causal dispositionalism, Chap. 7. In Bird, A., Ellis, B., Sankey, H. (Eds.), Properties, powers and structure (pp. 101–118). New York: Routledge.

  • Audi, R. (1993). The structure of justification. Cambridge: Cambridge University Press.

  • Beauchamp, T. L. (2011). Informed consent: Its history, meaning, and present challenges. Cambridge Quarterly of Healthcare Ethics, 20(04), 515–523.

  • Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483(7391), 531–533.

  • Bogen, J., & Woodward, J. (1988). Saving the phenomena. The Philosophical Review, 97(3), 303–352.

  • BonJour, L. (2009). Epistemology: Classic problems and contemporary responses. Lanham: Rowman & Littlefield Publishers.

  • Bovens, L., & Hartmann, S. (2003). Bayesian epistemology. Oxford: Oxford University Press.

  • Carnap, R. (1956). The methodological character of theoretical concepts. Indianapolis: Bobbs-Merrill.

  • Cartwright, N. (2007a). Are RCTs the gold standard? Biosocieties, 2, 11–20. https://doi.org/10.1017/S1745855207005029.

  • Cartwright, N. (2007b). Are RCTs the gold standard? BioSocieties, 2(1), 11–20. https://doi.org/10.1017/S1745855207005029.

  • Cartwright, N. (2007c). Causal powers: What are they? Why do we need them? What can be done with them and what cannot? Technical report, contingency and dissent in science technical report 04/07. http://www.lse.ac.uk/CPNSS/research/concludedResearchProjects/ContingencyDissentInScience/DP/CausalPowersMonographCartwrightPrint%20Numbers%20Corrected.pdf.

  • Cartwright, N. (2007d). Hunting causes and using them: Approaches in philosophy and economics. Cambridge/New York: Cambridge University Press.

  • Cartwright, N. (2012). Presidential address: Will this policy work for you? Predicting effectiveness better: How philosophy helps. Philosophy of Science, 79(5), 973–989. https://doi.org/10.1086/668041.

  • Cartwright, N., & Stegenga, J. (2011). A theory of evidence for evidence-based policy. In Dawid, P., Twining, W., Vasilaki, M. (Eds.), Evidence inference and enquiry (Chapter 11, pp. 291–322). Oxford: Oxford University Press.

  • Clarke, B., Gillies, D., Illari, P., Russo, F., & Williamson, J. (2013). The evidence that evidence-based medicine omits. Preventive Medicine, 57(6), 745–747. https://doi.org/10.1016/j.ypmed.2012.10.020.

  • Clarke, B., Gillies, D., Illari, P., Russo, F., & Williamson, J. (2014). Mechanisms and the evidence hierarchy. Topoi, 33, 339–360. https://doi.org/10.1007/s11245-013-9220-9.

  • Cohen, M. P. (2016). On three measures of explanatory power with axiomatic representations. British Journal for the Philosophy of Science, 67(4), 1077–1089. https://doi.org/10.1093/bjps/axv017.

  • Crupi, V., Chater, N., & Tentori, K. (2013). New axioms for probability and likelihood ratio measures. British Journal for the Philosophy of Science, 64(1), 189–204. https://doi.org/10.1093/bjps/axs018.

  • Dawid, R., Hartmann, S., & Sprenger, J. (2015). The no alternatives argument. British Journal for the Philosophy of Science, 66(1), 213–234. https://doi.org/10.1093/bjps/axt045.

  • De Pretis, F., Landes, J., & Osimani, B. (2019). E-Synthesis: A Bayesian framework for causal assessment in pharmacosurveillance. Frontiers in Pharmacology (accepted).

  • Dietrich, F., & Moretti, L. (2005). On coherent sets and the transmission of confirmation. Philosophy of Science, 72(3), 403–424. https://doi.org/10.1086/498471.

  • Etz, A., & Vandekerckhove, J. (2016). A Bayesian perspective on the reproducibility project: Psychology. PLoS One, 11(2), e0149794.

  • Faden, R. R., & Beauchamp, T. L. (1986). A history and theory of informed consent. New York: Oxford University Press.

  • Fisher, R. (1955). Statistical methods and scientific induction. Journal of the Royal Statistical Society Series B (Methodological), 17, 69–78.

  • Fitelson, B. (2003). A probabilistic theory of coherence. Analysis, 63(279), 194–199. https://doi.org/10.1111/1467-8284.00420.

  • Gelman, A. (2015). Working through some issues. Significance, 12(3), 33–35. https://doi.org/10.1111/j.1740-9713.2015.00828.x.

  • Goldman, A. I. (1999). Knowledge in a social world (Vol. 281). Oxford/New York: Clarendon Press Oxford.

  • Haack, S. (2011). Defending science-within reason: Between scientism and cynicism. New York: Prometheus Books.

  • Hacking, I. (2006). The emergence of probability: A philosophical study of early ideas about probability induction and statistical inference. Cambridge: Cambridge University Press.

  • Hanin, L. (2017). Why statistical inference from clinical trials is likely to generate false and irreproducible results. BMC Medical Research Methodology, 17(1), 127. https://doi.org/10.1186/s12874-017-0399-0.

  • Hempel, C. G. (1968). Maximal specificity and lawlikeness in probabilistic explanation. Philosophy of Science, 35(2), 116–133. http://www.jstor.org/stable/186482.

  • Hill, A. B. (1965). The environment and disease: Association or causation? Proceedings of the Royal Society of Medicine, 58(5), 295–300.

  • Holland, P. W., Glymour, C., & Granger, C. (1985). Statistics and causal inference. ETS Research Report Series, 1985(2), i–72.

  • Holman, B. (2015). The fundamental antagonism: Science and commerce in medical epistemology. PhD Dissertation. Irvine: University of California.

  • Howick, J. (2011). Exposing the Vanities – and a qualified defense – of mechanistic reasoning in health care decision making. Philosophy of Science, 78(5), 926–940. https://doi.org/10.1086/662561.

  • Howick, J., Glasziou, P., & Aronson, J. K. (2013). Problems with using mechanisms to solve the problem of extrapolation. Theoretical Medicine and Bioethics, 34(4), 275–291.

  • Hoyningen-Huene, P. (2013). Systematicity: The nature of science. New York: Oxford University Press.

  • Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.

  • Joffe, M. (2011). Causality and evidence discovery in epidemiology. In D. Dieks, W. J. Gonzalez, S. Hartmann, T. Uebel, & M. Weber (Eds.), Explanation, prediction, and confirmation (pp. 153–166). Dordrecht: Springer Netherlands. ISBN: 978-94-007-1180-8. https://doi.org/10.1007/978-94-007-1180-8_11.

  • Kerry, R., Eriksen, T. E., Lie, S. A. N., Mumford, S. D., & Anjum, R. L. (2012). Causation and evidence-based practice: An ontological review. Journal of Evaluation in Clinical Practice, 18(5), 1006–1012. https://doi.org/10.1111/j.1365-2753.2012.01908.x.

  • Krauth, D., Woodruff, T. J., & Bero, L. (2013). Instruments for assessing risk of bias and other methodological criteria of published animal studies: A systematic review. Environmental Health Perspectives, 121(9), 985.

  • LaCaze, A., Djulbegovic, B., & Senn, S. (2012). What does randomisation achieve? Evidence-Based Medicine, 17(1), 1–2. https://doi.org/10.1136/ebm.2011.100061.

  • Lamal, P. A. (1990). On the importance of replication. Journal of Social Behavior and Personality, 5(4), 31–35.

  • Landes, J., Osimani, B., & Poellinger, R. (2018). Epistemology of causal inference in pharmacology. European Journal for Philosophy of Science, 8(1), 3–49.

  • Lenhard, J. (2006). Models and statistical inference: The controversy between Fisher and Neyman–Pearson. The British Journal for the Philosophy of Science, 57(1), 69–91.

  • Lipton, P. (2003). Inference to the best explanation. London: Routledge.

  • Longino, H. E. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton: Princeton University Press.

  • Lundh, A., & Bero, L. (2017). The ties that bind. British Medical Journal, 356. https://doi.org/10.1136/bmj.j176.

  • Lundh, A., Lexchin, J., Mintzes, B., Schroll, J. B., & Bero, L. (2017). Industry sponsorship and research outcome. Cochrane Library, 2, Art. No.: MR000033. https://doi.org/10.1002/14651858.MR000033.pub3.

  • Marsman, M., Schönbrodt, F. D., Morey, R. D., Yao, Y., Gelman, A., & Wagenmakers, E.-J. (2017). A Bayesian bird’s eye view of ‘Replications of important results in social psychology’. Royal Society Open Science, 4(1), 160426.

  • Mayo, D. G., & Spanos, A. (2006). Severe testing as a basic concept in a Neyman-Pearson philosophy of induction. British Journal for the Philosophy of Science, 57(2), 323–357. https://doi.org/10.1093/bjps/axl003.

  • Mayo-Wilson, C., Zollman, K. J. S., & Danks, D. (2011). The independence thesis: When individual and social epistemology diverge. Philosophy of Science, 78(4), 653–677.

  • McGrew, T. (2003). Confirmation, heuristics, and explanatory reasoning. British Journal for the Philosophy of Science, 54(4), 553–567. http://www.jstor.org/stable/3541678.

  • Meehl, P. E. (1990). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant it. Psychological Inquiry, 1(2), 108–141. https://doi.org/10.1207/s15327965pli0102_1.

  • Moretti, L. (2007). Ways in which coherence is confirmation conducive. Synthese, 157(3), 309–319. https://doi.org/10.1007/s11229-006-9057-5.

  • Mumford, S., & Anjum, R. L. (2011). Getting causes from powers. Oxford: Oxford University Press.

  • Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716.

  • Osimani, B. (2007). Probabilistic information and decision making in the health context. The package leaflet as a basis for informed consent (1st edn). Lugano: USI, Università della Svizzera italiana. http://doc.rero.ch/record/28759?ln=fr.

  • Osimani, B. (2012). Risk information processing and rational ignoring in the health context. The Journal of Socio-Economics, 41(2), 169–179.

  • Osimani, B. (2013). The precautionary principle in the pharmaceutical domain: A philosophical enquiry into probabilistic reasoning and risk aversion. Health, Risk & Society, 15(2), 123–143.

  • Osimani, B. (2014). Hunting side effects and explaining them: Should we reverse evidence hierarchies upside down? Topoi, 33(2), 295–312. https://doi.org/10.1007/s11245-013-9194-7.

  • Osimani, B., & Mignini, F. (2015). Causal assessment of pharmaceutical treatments: Why standards of evidence should not be the same for benefits and harms? Drug Safety, 38(1), 1–11. ISSN: 1179–1942. https://doi.org/10.1007/s40264-014-0249-5.

  • Osimani, B. & Landes, J. (2020). Varieties of Error and Varieties of Evidence in Scientific Inference. (Accepted).

  • Osimani, B., & Poellinger, R. (2020). A protocol for model validation and causal inference from computer simulation. In M. Bertolaso & F. Sterpetti (Eds.), A critical reflection on automated science. Will science remain human. Heidelberg: Springer Nature. (forthcoming).

  • Osimani, B., Russo, F., & Williamson, J. (2011). Scientific evidence and the law: An objective Bayesian formalization of the precautionary principle in pharmaceutical regulation. The Journal of Philosophy, Science & Law, 11(2), 1–24.

  • Papa, A. (2014). L’identità esposta. La cura come questione filosofica. Milano: Vita e Pensiero.

  • Papineau, D. (1994). The virtues of randomization. The British Journal for the Philosophy of Science, 45(2), 437–450.

  • Pearl, J. (2000). Causality: Models, reasoning and inference (1st ed.). Cambridge: Cambridge University Press.

  • Pessina, A. (2009). Biopolitica e Persona. Medicina e Morale, 2, 239–253. http://hdl.handle.net/10807/4748.

  • Platt, J. R. (1964). Strong inference. Science, 146(3642), 347–353.

  • Podolsky, S. H., & Powers, J. H. (2015). Regulating antibiotics in an era of resistance: The historical basis and continued need for adequate and well-controlled investigations regulating antibiotics in an era of resistance. Annals of Internal Medicine, 163(5), 386–388.

  • Poellinger, R. (2018). On the ramifications of theory choice in causal assessment. Indicators of causation and their conceptual relationships. Submitted.

  • Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: How much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery, 10(9), 712.

  • Rising, K., Bacchetti, P., & Bero, L. (2008). Reporting bias in drug trials submitted to the Food and Drug Administration: Review of publication and presentation. PLoS Medicine, 5(11), e217.

  • Rubin, D. B. (2005). Causal inference using potential outcomes. Journal of the American Statistical Association, 100, 322–331.

  • Russo, F., & Williamson, J. (2007). Interpreting causality in the health sciences. International Studies in the Philosophy of Science, 21(2), 157–170. https://doi.org/10.1080/02698590701498084.

  • Scheu, G. (2003). In dubio pro securitate: Contergan, Hepatitis-/AIDS-Blutprodukte, Spongiformer Humaner Wahn und kein Ende? (Vol. 42). Baden-Baden: Nomos.

  • Senn, S. (2002). A comment on replication, p-values and evidence, S. N. Goodman, Statistics in Medicine 1992; 11:875–879. Statistics in Medicine, 21(16), 2437–2444.

  • Senn, S. (2003). Dicing with death: Chance risk and health. Cambridge: Cambridge University Press.

  • Sgreccia, E. (2007). Manuale di Bioetica (3rd ed.). Milano: Vita e Pensiero.

  • Solomon, M. (2015). Making medical knowledge. Oxford: Oxford University Press.

  • Spirtes, P., Glymour, C. N., & Scheines, R. (2000). Causation, prediction, and search. Cambridge: MIT press.

  • Sprenger, J. (2016). Bayesianism vs. frequentism in statistical inference (Chap. 18). In A. Hájek & C. Hitchcock (Eds.), The Oxford handbook of probability and philosophy. Oxford: Oxford University Press.

  • Stegenga, J. (2011). Is meta-analysis the platinum standard of evidence? Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 42(4), 497–507. https://doi.org/10.1016/j.shpsc.2011.07.003.

  • Stegenga, J. (2014). Down with the hierarchies. Topoi, 33(2), 313–322. https://doi.org/10.1007/s11245-013-9189-4.

  • Swinburne, R. (2001). Epistemic justification. Oxford: Oxford University Press.

  • Teira, D. (2011). Frequentist versus Bayesian clinical trials. In Gifford, F. (Ed.), Handbook of philosophy of medicine (pp. 255–298). Amsterdam: Elsevier.

  • Teira, D., & Reiss, J. (2013). Causality, impartiality and evidence-based policy. In H.-K. Chao, S.-T. Chen, & R. L. Millstein (Eds.), Mechanism and causality in biology and economics (pp. 207–224). Dordrecht: Springer. https://doi.org/10.1007/978-94-007-2454-9_11.

  • Vandenbroucke, J. P., Broadbent, A., & Pearce, N. (2016). Causality and causal inference in epidemiology: The need for a pluralistic approach. International Journal of Epidemiology, 45, 1776–1786. https://doi.org/10.1093/ije/dyv341.

  • Wheeler, G., & Scheines, R. (2013). Coherence and confirmation through causation. Mind, 122(485), 135–170. https://doi.org/10.1093/mind/fzt019.

  • Wood, L., Egger, M., Gluud, L. L., Schulz, K. F., Jüni, P., Altman, D. G., Gluud, C., Martin, R. M., Wood, A. J. G., & Sterne, J. A. C. (2008). Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: Meta-epidemiological study. BMJ, 336(7644), 601–605.

  • Woodward, J. (2003). Making things happen: A theory of causal explanation (Oxford studies in the philosophy of science). New York: Oxford University Press. ISBN: 9780195189537.

  • Worrall, J. (2007a). Do we need some large, simple randomized trials in medicine? EPSA Philosophical Issues in the Sciences, 289–301. https://doi.org/10.1007/978-90-481-3252-2_27.

  • Worrall, J. (2007b). Evidence in medicine and evidence-based medicine. Philosophy Compass, 2(6), 981–1022. https://doi.org/10.1111/j.1747-9991.2007.00106.x.

  • Worrall, J. (2007c). Why there’s no cause to randomize. British Journal for the Philosophy of Science, 58(3), 451–488. https://doi.org/10.1093/bjps/axm024.

  • Worrall, J. (2008). Evidence and ethics in medicine. Perspectives in Biology and Medicine, 51(3), 418–431. https://doi.org/10.1353/pbm.0.0040.


Acknowledgements

This research was supported by the European Research Council, ERC Starting Grant GA 639276: “Philosophy of Pharmacology: Safety, Statistical Standards, and Evidence Amalgamation”. I thank my colleagues from the Munich Center for Mathematical Philosophy and the various audiences to which different parts of this work were presented.

I also thank previous colleagues and students at the MCMP who provided valuable comments and feedback on earlier versions of the paper. I am also grateful for discussions on the topics addressed in the paper with Rani Anjum, Jeffrey Aronson, Bengt Autzen, Seamus Bradley, Lorenzo Casini, Vincenzo Crupi, Ralph Edwards, Branden Fitelson, Stephan Hartmann, Erik Nyberg, Elena Rocca, Jan Sprenger, Daniel Steel, and Momme von Sydow. Finally, I thank Adam LaCaze, Scott Podolsky, and David Teira for reading and commenting on earlier drafts of the paper, with the usual disclaimer that any errors are my own responsibility.

Author information


Corresponding author

Correspondence to Barbara Osimani.



Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Osimani, B. (2020). Epistemic Gains and Epistemic Games: Reliability and Higher Order Evidence in Medicine and Pharmacology. In: LaCaze, A., Osimani, B. (eds) Uncertainty in Pharmacology. Boston Studies in the Philosophy and History of Science, vol 338. Springer, Cham. https://doi.org/10.1007/978-3-030-29179-2_15
