Drug Safety, Volume 38, Issue 1, pp 105–107

Authors’ Reply to Hennessy and Leonard’s Comment on “Desideratum for Evidence-Based Epidemiology”

  • J. Marc Overhage
  • Patrick B. Ryan
  • Martijn J. Schuemie
  • Paul E. Stang
Letter to the Editor

We appreciate Hennessy and Leonard’s [1] comments on our paper and their strong support for the need to carefully characterize the performance of epidemiologic methods and analysis choices (which we collectively refer to as analyses). The work performed as part of the Observational Medical Outcomes Partnership (OMOP) is but a first step in this journey. We particularly appreciate the authors making the point that “Problematically implemented studies do not invalidate the underlying research designs, just those implementations”. We fully agree with this assertion and with the importance of beginning to systematically answer questions about what makes analyses problematic. We also share their belief that the empirical assessment of the performance of analyses applied to observational datasets is an essential prerequisite for understanding the reliability of any evidence developed from observational studies.

Every measurement approach is limited in precision and accuracy, and the OMOP...


Notes

Conflicts of interest

The OMOP was funded by the Foundation for the National Institutes of Health (FNIH) through generous contributions from the following: Abbott, Amgen Inc., AstraZeneca, Bayer Healthcare Pharmaceuticals, Inc., Biogen Idec, Bristol-Myers Squibb, Eli Lilly & Company, GlaxoSmithKline, Janssen Research and Development, Lundbeck, Inc., Merck & Co., Inc., Novartis Pharmaceuticals Corporation, Pfizer Inc., Pharmaceutical Research Manufacturers of America (PhRMA), Roche, Sanofi-aventis, Schering-Plough Corporation, and Takeda.

Patrick Ryan, Paul Stang and Martijn J. Schuemie are employed by and hold stock/stock options in Janssen Research & Development but have no conflicts of interest related to the content of this letter. J. Marc Overhage is employed by and holds stock/stock options in Siemens Health Services but has no conflicts of interest related to the content of this letter.

References

  1. Hennessy S, Leonard CE. Comment on: “Desideratum for evidence-based epidemiology”. Drug Saf. doi:10.1007/s40264-014-0252-x.
  2. Madigan D, et al. A systematic statistical approach to evaluating evidence from observational studies. Annu Rev Stat Appl. 2014;1:11–39.
  3. Ryan PB, et al. Defining a reference set to support methodological research in drug safety. Drug Saf. 2013;36(Suppl 1):S33–47.
  4. Norén GN, Caster O, Juhlin K, Lindquist M. Zoo or savannah? Choice of training ground for evidence-based pharmacovigilance. Drug Saf. 2014;37(9):655–9.
  5. Murray RE, Ryan PB, Reisinger SJ. Design and validation of a data simulation model for longitudinal healthcare data. AMIA Annu Symp Proc. 2011;2011:1176–85.
  6. Ryan PB, Schuemie MJ. Evaluating performance of risk identification methods through a large-scale simulation of observational data. Drug Saf. 2013;36(Suppl 1):171–80.
  7. Schneeweiss S, Avorn J. A review of uses of health care utilization databases for epidemiologic research on therapeutics. J Clin Epidemiol. 2005;58(4):323–37.
  8. Corser W, et al. Concordance between comorbidity data from patient self-report interviews and medical record documentation. BMC Health Serv Res. 2008;8(1):85.
  9. Tisnado DM, et al. What is the concordance between the medical record and patient self-report as data sources for ambulatory care? Med Care. 2006;44(2):132–40.
  10. Weissman JS, et al. Comparing patient-reported hospital adverse events with medical record review: do patients know something that hospitals do not? Ann Intern Med. 2008;149(2):100–8.
  11. Luck J, et al. How well does chart abstraction measure quality? A prospective comparison of standardized patients with the medical record. Am J Med. 2000;108(8):642–9.
  12. Fox MP, Lash TL, Greenland S. A method to automate probabilistic sensitivity analyses of misclassified binary variables. Int J Epidemiol. 2005;34(6):1370–6.
  13. Lash TL, Fox MP, Fink AK. Applying quantitative bias analysis to epidemiologic data. New York: Springer; 2009. p. 94–9.
  14. Greenland S. Bias analysis. In: International encyclopedia of statistical science. Berlin: Springer; 2011. p. 145–8.
  15. Stang PE, et al. Health outcomes of interest in observational data: issues in identifying definitions in the literature. Health Outcomes Res Med. 2012;3(1):e37–44.
  16. Reich CG, Ryan PB, Schuemie MJ. Alternative outcome definitions and their effect on the performance of methods for observational outcome studies. Drug Saf. 2013;36(Suppl 1):S181–93.
  17. Stang PE, et al. Variation in choice of study design: findings from the Epidemiology Design Decision Inventory and Evaluation (EDDIE) survey. Drug Saf. 2013;36(Suppl 1):S15–25.

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • J. Marc Overhage (1)
  • Patrick B. Ryan (2)
  • Martijn J. Schuemie (2)
  • Paul E. Stang (2)

  1. Siemens Medical Solutions, Malvern, USA
  2. Janssen Research and Development, Titusville, USA
