Drug Safety, Volume 36, Supplement 1, pp 107–121

Empirical Performance of the Calibrated Self-Controlled Cohort Analysis Within Temporal Pattern Discovery: Lessons for Developing a Risk Identification and Analysis System

  • G. Niklas Norén
  • Tomas Bergvall
  • Patrick B. Ryan
  • Kristina Juhlin
  • Martijn J. Schuemie
  • David Madigan
Original Research Article

DOI: 10.1007/s40264-013-0095-x

Cite this article as:
Norén, G.N., Bergvall, T., Ryan, P.B. et al. Drug Saf (2013) 36(Suppl 1): 107. doi:10.1007/s40264-013-0095-x

Abstract

Background

Observational healthcare data offer the potential to identify adverse drug reactions that may be missed by spontaneous reporting. The self-controlled cohort analysis within the Temporal Pattern Discovery framework compares the observed-to-expected ratio of medical outcomes during post-exposure surveillance periods with the corresponding ratios during a set of distinct pre-exposure control periods in the same patients. It utilizes an external control group to account for systematic differences between the time periods, thus combining within- and between-patient confounder adjustment in a single measure.
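
For concreteness, the calibrated measure can be sketched as a contrast of shrinkage-regularised observed-to-expected ratios, in the spirit of the IC_Delta statistic used elsewhere in Temporal Pattern Discovery; the notation and the shrinkage constant 1/2 below are our assumptions rather than the paper's exact estimator:

    \mathrm{IC}_{\Delta} \;=\; \log_2 \frac{O_1 + \tfrac{1}{2}}{E_1 + \tfrac{1}{2}} \;-\; \log_2 \frac{O_0 + \tfrac{1}{2}}{E_0 + \tfrac{1}{2}}

where O_1 and E_1 are the observed and expected outcome counts in the post-exposure surveillance period, O_0 and E_0 are the corresponding counts in a pre-exposure control period, and the expected counts are derived from the external control group.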

Objectives

To evaluate the performance of the calibrated self-controlled cohort analysis within Temporal Pattern Discovery as a tool for risk identification in observational healthcare data.

Research Design

Different implementations of the calibrated self-controlled cohort analysis were applied to 399 drug-outcome pairs (165 positive and 234 negative test cases across four health outcomes of interest) in five real observational databases (four with administrative claims and one with electronic health records).

Measures

Performance was evaluated on real data in terms of sensitivity and specificity, the area under the receiver operating characteristic curve (AUC), and bias.
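
As a minimal sketch of this evaluation, assuming one effect estimate per drug-outcome pair and ground-truth labels from the reference set (all names below are illustrative, not from the paper):

    # labels: 1 for positive test cases, 0 for negative ones.
    # scores: the corresponding effect estimates.

    def auc(labels, scores):
        """Area under the ROC curve via the Mann-Whitney identity: the
        probability that a random positive test case outscores a random
        negative one, counting ties as one half."""
        pos = [s for l, s in zip(labels, scores) if l == 1]
        neg = [s for l, s in zip(labels, scores) if l == 0]
        wins = sum(1.0 * (p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    def sensitivity_specificity(labels, scores, threshold):
        """Sensitivity and specificity when pairs scoring above the
        threshold are flagged as potential risks."""
        tp = sum(l == 1 and s > threshold for l, s in zip(labels, scores))
        tn = sum(l == 0 and s <= threshold for l, s in zip(labels, scores))
        return tp / labels.count(1), tn / labels.count(0)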

Results

The calibrated self-controlled cohort analysis achieved good predictive accuracy across the outcomes and databases under study. The optimal design based on this reference set uses a 360-day surveillance period and a single control period 180 days prior to new prescriptions. It achieved an average AUC of 0.75 and an AUC >0.70 in all but one scenario. A design with three separate control periods performed better for the electronic health records database and for acute renal failure across all data sets. The estimates for negative test cases were generally unbiased, but for acute liver injury and upper gastrointestinal bleeding a minor negative bias of up to 0.2 on the relative risk (RR) scale was observed with the configurations using multiple control periods.
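
The winning single-control design lends itself to a compact scoring rule. The sketch below assumes the observed and expected counts for each period have already been extracted from a database, and the shrinkage constant 0.5 is our assumption:

    from math import log2

    def calibrated_contrast(o_surv, e_surv, o_ctrl, e_ctrl, shrinkage=0.5):
        """log2 observed-to-expected ratio in the 360-day surveillance
        period minus the same ratio in the control period 180 days before
        the new prescription. Expected counts come from the external
        control group, so one number combines within- and between-patient
        adjustment."""
        return (log2((o_surv + shrinkage) / (e_surv + shrinkage))
                - log2((o_ctrl + shrinkage) / (e_ctrl + shrinkage)))

    # Illustrative call; a positive value flags a disproportionately high
    # post-exposure outcome rate.
    print(calibrated_contrast(o_surv=42, e_surv=30.1, o_ctrl=12, e_ctrl=11.8))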

Conclusions

The calibrated self-controlled cohort analysis within Temporal Pattern Discovery shows promise as a tool for risk identification; it performs well at discriminating positive from negative test cases. The optimal parameter configuration may vary with the data set and medical outcome of interest.

Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  • G. Niklas Norén (1, 2)
  • Tomas Bergvall (1)
  • Patrick B. Ryan (3, 6)
  • Kristina Juhlin (1)
  • Martijn J. Schuemie (4, 6)
  • David Madigan (5, 6)

  1. Uppsala Monitoring Centre, WHO Collaborating Centre for International Drug Monitoring, Uppsala, Sweden
  2. Department of Mathematics, Stockholm University, Stockholm, Sweden
  3. Janssen Research and Development LLC, Titusville, USA
  4. Department of Medical Informatics, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
  5. Department of Statistics, Columbia University, New York, USA
  6. Observational Medical Outcomes Partnership, Foundation for the National Institutes of Health, Bethesda, USA
