Stopping Rules and Data Monitoring in Clinical Trials

  • Roger Stanev
Conference paper
Part of The European Philosophy of Science Association Proceedings book series (EPSP, volume 1)

Abstract

Stopping rules—rules dictating when to stop accumulating data and start analyzing it for the purpose of drawing inferences from the experiment—divide Bayesian, likelihoodist, and classical statistical approaches to inference. Although the relationship between Bayesian philosophy of science and stopping rules can be complex (cf. Steel 2003), in general Bayesians regard stopping rules as irrelevant to what inference should be drawn from the data. This position clashes with classical statistical accounts: for orthodox statistics, stopping rules do matter to what inference should be drawn from the data. “The dispute over stopping rule is far from being a marginal quibble, but is instead a striking illustration of the divergence of fundamental aims and standards separating Bayesians and advocates of orthodox statistical methods” (Steel 2004, 195).
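The frequentist side of this dispute can be made concrete with a small simulation (a hypothetical sketch for illustration, not drawn from the paper): under a true null hypothesis, testing the accumulating data at the nominal 5% level after every interim look, and stopping at the first “significant” result, inflates the overall false-positive rate well beyond 5%, whereas a single fixed-sample test does not. The likelihood function itself, which is all a Bayesian conditions on, is unaffected by which of the two designs produced the data.

```python
import math
import random

def trial_rejects(rng, n_looks, n_per_look, z_crit=1.96):
    """Simulate one trial under the null (N(0,1) data, true mean 0).
    After each interim look, compute the z statistic for H0: mu = 0
    and 'stop for significance' as soon as |z| exceeds the nominal
    two-sided 5% cutoff."""
    total, n = 0.0, 0
    for _ in range(n_looks):
        for _ in range(n_per_look):
            total += rng.gauss(0.0, 1.0)
            n += 1
        z = (total / n) * math.sqrt(n)  # known sigma = 1
        if abs(z) > z_crit:
            return True  # stopped early, declared 'significant'
    return False

rng = random.Random(2009)
n_sims = 4000
# Sequential design: peek after every batch of 10 observations, up to 10 looks.
seq_rate = sum(trial_rejects(rng, 10, 10) for _ in range(n_sims)) / n_sims
# Fixed design: same maximum sample size (100), but only one final analysis.
fixed_rate = sum(trial_rejects(rng, 1, 100) for _ in range(n_sims)) / n_sims

print(f"sequential false-positive rate: {seq_rate:.3f}")   # well above 0.05
print(f"fixed-sample false-positive rate: {fixed_rate:.3f}")  # near 0.05
```

With ten uncorrected looks, the sequential design's type I error rate is roughly 0.19 rather than the nominal 0.05, which is precisely why orthodox statistics treats the stopping rule as relevant to the inference, and why data monitoring committees use adjusted boundaries for interim analyses.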

Keywords

Interim Analysis, Data Monitoring Committee, Conditional Power, Likelihood Principle, Toxoplasmic Encephalitis

Acknowledgements

I am grateful to Paul Bartha for his supervision, helpful discussion, and feedback. I am also grateful to two anonymous reviewers for their comments and criticisms, and to the audience at EPSA 2009 in Amsterdam. An earlier version of this work was presented at the PSX in the Center for Philosophy of Science at the University of Pittsburgh.

References

  1. Ellenberg, S. 2003. Are all monitoring boundaries equally ethical? Controlled Clinical Trials 24: 585–588.
  2. Ellenberg, S. et al. 2003. Data monitoring committees in clinical trials. New York: Wiley.
  3. Jacobson, M.A. et al. 1994. Primary prophylaxis with pyrimethamine for toxoplasmic encephalitis in patients with advanced human immunodeficiency virus disease: Results of a randomized trial. Journal of Infectious Diseases 169: 384–394.
  4. Mayo, D. 1996. Error and the growth of experimental knowledge. Chicago: University of Chicago Press.
  5. Mayo, D., and M. Kruse. 2001. Principles of inference and their consequences. In Foundations of Bayesianism, eds. D. Corfield and J. Williamson, 381–403. Dordrecht: Kluwer Academic.
  6. Montori, V.M. et al. 2005. Randomized trials stopped early for benefit: A systematic review. JAMA 294: 2203–2209.
  7. Neaton, J. et al. 2006. Data monitoring experience in the AIDS toxoplasmic encephalitis study. In Data monitoring in clinical trials, eds. D. DeMets, C. Furberg, and L. Friedman, 320–329. New York: Springer.
  8. Pocock, S.J. 1993. Statistical and ethical issues in monitoring clinical trials. Statistics in Medicine 12: 1459–1469.
  9. Proschan, M. et al. 2006. Statistical monitoring of clinical trials. New York: Springer.
  10. Steel, D. 2003. A Bayesian way to make stopping rules matter. Erkenntnis 58: 213–227.
  11. Steel, D. 2004. The facts of the matter: A discussion of Norton’s material theory of induction. Philosophy of Science 72: 188–197.

Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  1. Department of Philosophy, University of British Columbia, Vancouver, Canada