Understanding Treatment Effect Estimates When Treatment Effects Are Heterogeneous for More Than One Outcome

  • John M. Brooks
  • Cole G. Chapman
  • Mary C. Schroeder
Original Research Article

Abstract

Background

Patient-centred care requires evidence of treatment effects across many outcomes. Outcomes can be beneficial (e.g. increased survival or cure rates) or detrimental (e.g. adverse events, pain associated with treatment, treatment costs, time required for treatment). Treatment effects may also be heterogeneous across outcomes and across patients. Randomized controlled trials are usually insufficient to supply evidence across outcomes. Observational data analysis is an alternative, with the caveat that the treatments observed are choices. Real-world treatment choice often involves complex assessment of expected effects across the array of outcomes. Failure to account for this complexity when interpreting treatment effect estimates could lead to clinical and policy mistakes.

Objective

Our objective was to assess the properties of treatment effect estimates based on choice when treatments have heterogeneous effects on both beneficial and detrimental outcomes across patients.

Methods

Simulation methods were used to highlight the sensitivity of treatment effect estimates to the distribution of treatment effects across patients and outcomes. Scenarios with alternative correlations between the benefit and detriment treatment effects across patients were generated, and regression and instrumental variable estimators were applied to the simulated data for both outcomes.
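A minimal sketch of the kind of simulation the Methods describe is shown below (the study's actual data are SAS datasets; this is not the authors' code). Patient-level treatment effects on a beneficial and a detrimental outcome are drawn with a specified correlation, treatment is chosen on noisy perceived net benefit plus an instrument, and naive regression and instrumental variable (Wald) estimates are compared for each outcome. All numeric values and variable names are illustrative assumptions.

```python
# Hedged sketch: heterogeneous effects on two outcomes, choice-based treatment,
# OLS vs. IV estimates. Values and names are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

for rho in (-0.5, 0.0, 0.5):  # correlation of the two effects across patients
    # Patient-specific effects: benefit (e.g. survival gain) and detriment
    # (e.g. adverse-event burden), drawn with correlation rho.
    effects = rng.multivariate_normal([2.0, 1.0],
                                      [[1.0, rho], [rho, 1.0]], size=n)
    te_benefit, te_detriment = effects[:, 0], effects[:, 1]

    # Binary instrument (e.g. local practice style): shifts treatment choice
    # but has no direct effect on outcomes.
    z = rng.binomial(1, 0.5, n)

    # Treatment is chosen when perceived net benefit is positive; perception
    # is noisy, so choice tracks the true effects only imperfectly.
    t = (te_benefit - te_detriment + 0.8 * z
         + rng.normal(0.0, 1.5, n) > 0).astype(float)

    for label, te in (("benefit", te_benefit), ("detriment", te_detriment)):
        y = 1.0 + te * t + rng.normal(0.0, 1.0, n)  # observed outcome

        # Naive regression estimate = treated/untreated mean difference:
        # the effect among the treated plus selection bias.
        ols = y[t == 1].mean() - y[t == 0].mean()

        # IV (Wald) estimate: effect for patients whose treatment status is
        # moved by the instrument (the "marginal" patients).
        iv = ((y[z == 1].mean() - y[z == 0].mean())
              / (t[z == 1].mean() - t[z == 0].mean()))

        print(f"rho={rho:+.1f}  {label:9s}  true ATE={te.mean():.2f}  "
              f"OLS={ols:.2f}  IV={iv:.2f}")
```

Under these assumptions, the gap between the true average effect, the regression estimate, and the IV estimate shifts for each outcome as the correlation between the benefit and detriment effects changes, which is the sensitivity the scenarios are designed to expose.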

Results

True treatment effect parameters are sensitive to the relationship between treatment effects across outcomes in each study population. In each simulation scenario, the interpretation of the treatment effect estimate for each outcome aligns with results shown previously for single-outcome models, but the estimates vary across simulated populations as the correlation of treatment effects across patients and outcomes changes.

Conclusions

If estimator assumptions are valid, estimates across outcomes can be used to assess the optimality of treatment rates in a study population. However, because true treatment effect parameters are sensitive to correlations of treatment effects across outcomes, decision makers should be cautious about generalizing estimates to other populations.

Notes

Author Contribution

JB devised the initial concept and the simulation models and drafted the main article. CC and MS assisted with the conceptual framework, manuscript edits, and presentation of results.

Compliance with Ethical Standards

Funding

This project was funded by the Patient-Centered Outcomes Research Institute (PCORI) under project number ME-1303-6011.

Conflict of interest

John Brooks, Cole Chapman, and Mary Schroeder have no conflicts of interest that are directly relevant to the content of this study.

Data Availability Statement

The data from the five simulation scenarios presented are available as compressed SAS datasets in the electronic supplementary material for this paper.

Supplementary material

40258_2018_380_MOESM1_ESM.7z — Supplementary material 1 (7z archive, 331.5 MB)


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. University of South Carolina and the Center for Effectiveness Research in Orthopaedics, Columbia, USA
  2. University of South Carolina and the Center for Effectiveness Research in Orthopaedics, Columbia, USA
  3. University of Iowa College of Pharmacy, Iowa City, USA