Drug Safety, Volume 36, Supplement 1, pp 95–106

Empirical Performance of a Self-Controlled Cohort Method: Lessons for Developing a Risk Identification and Analysis System

  • Patrick B. Ryan
  • Martijn J. Schuemie
  • David Madigan
Original Research Article

Abstract

Background

Observational healthcare data offer the potential to enable identification of risks of medical products, but appropriate methodology has not yet been defined. The self-controlled cohort method, which compares the post-exposure outcome rate with the pre-exposure rate among an exposed cohort, has been proposed as a potential approach for risk identification but its performance has not been fully assessed.
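For readers unfamiliar with the design, the core estimate is an incidence rate ratio comparing the post-exposure outcome rate with the pre-exposure rate within the same exposed patients. The sketch below illustrates that calculation only; the function name, the continuity correction, and the example numbers are assumptions for illustration, not code or data from the study.

```python
def self_controlled_cohort_irr(post_events, post_person_time,
                               pre_events, pre_person_time):
    """Incidence rate ratio (post/pre) for a self-controlled cohort.

    Inputs are aggregated over the exposed cohort: outcome counts and
    person-time in the post-exposure and pre-exposure windows.
    A 0.5 continuity correction guards against zero event counts.
    """
    post_rate = (post_events + 0.5) / post_person_time
    pre_rate = (pre_events + 0.5) / pre_person_time
    return post_rate / pre_rate


# Example: 12 events in 900 person-years after exposure vs.
# 5 events in 1,100 person-years before exposure.
print(self_controlled_cohort_irr(12, 900, 5, 1100))  # ~2.78
```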

Objectives

To evaluate the performance of the self-controlled cohort method as a tool for risk identification in observational healthcare data.

Research Design

The method was applied to 399 drug-outcome scenarios (165 positive controls and 234 negative controls across 4 health outcomes of interest) in 5 real observational databases (4 administrative claims and 1 electronic health record) and in 6 simulated datasets, one with no effect and five with injected relative risks of 1.25, 1.5, 2, 4, and 10.

Measures

Method performance was evaluated using the area under the ROC curve (AUC), bias, and coverage probability.
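As a sketch of how these three metrics can be computed from a set of estimates and ground-truth control labels, the snippet below uses scikit-learn's roc_auc_score for discrimination and simple aggregates for bias and coverage. The function and variable names, and the choice to measure bias on the log relative-risk scale, are illustrative assumptions rather than the study's actual evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def evaluate(is_positive, rr_hat, rr_true, ci_low, ci_high):
    """AUC, bias, and coverage for a set of control drug-outcome pairs.

    is_positive : 1 for positive controls, 0 for negative controls
    rr_hat      : estimated relative risks
    rr_true     : true (injected or assumed) relative risks
    ci_low/high : 95% confidence interval bounds for each estimate
    """
    is_positive = np.asarray(is_positive)
    rr_hat, rr_true = np.asarray(rr_hat, float), np.asarray(rr_true, float)
    ci_low, ci_high = np.asarray(ci_low, float), np.asarray(ci_high, float)

    # Discrimination: do estimates rank positive controls above negatives?
    auc = roc_auc_score(is_positive, np.log(rr_hat))
    # Bias: mean error on the log relative-risk scale.
    bias = np.mean(np.log(rr_hat) - np.log(rr_true))
    # Coverage probability: share of 95% CIs containing the true RR.
    coverage = np.mean((ci_low <= rr_true) & (rr_true <= ci_high))
    return auc, bias, coverage
```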

Results

The self-controlled cohort design achieved strong predictive accuracy across the outcomes and databases under study, with the top-performing settings achieving an AUC greater than 0.76 in all scenarios. However, the estimates were substantially biased, with low coverage probability.

Conclusions

If the objective of a risk identification system is discrimination, the self-controlled cohort method shows promise as a potential tool for risk identification. However, if a system is intended to generate effect estimates that quantify the magnitude of potential risks, the self-controlled cohort method may not be suitable: its estimates would require substantial calibration before they could be interpreted with their nominal statistical properties.


Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  • Patrick B. Ryan (1, 4)
  • Martijn J. Schuemie (2, 4)
  • David Madigan (3, 4)

  1. Janssen Research and Development LLC, Titusville, USA
  2. Department of Medical Informatics, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
  3. Department of Statistics, Columbia University, New York, USA
  4. Observational Medical Outcomes Partnership, Foundation for the National Institutes of Health, Bethesda, USA