Difference-in-differences and matching on outcomes: a tale of two unobservables

  • Stephan Lindner
  • K. John McConnell
Abstract

Difference-in-differences combined with matching on pre-treatment outcomes is a popular method for addressing non-parallel trends between a treatment and a control group. However, previous simulations suggest that this approach does not always eliminate or reduce bias, and it is not clear when and why it fails. Using Medicaid claims data from Oregon, we systematically vary the distribution of two key unobservables (fixed effects and the random error term) to examine how they affect the bias of difference-in-differences combined with matching on pre-treatment outcome levels or trends. We find that in most scenarios, bias increases with the standard deviation of the error term: a higher standard deviation makes short-term fluctuations in outcomes more likely, and matching cannot easily distinguish these short-term fluctuations from more structural outcome trends. The fixed-effect distribution may also create bias, but only when matching on pre-treatment outcome levels. A parallel-trends test on the matched sample does not reliably distinguish between successful and unsuccessful matching. Researchers using matching on pre-treatment outcomes to adjust for non-parallel trends should report estimates from both unadjusted and propensity-score-matching-adjusted difference-in-differences, compare results from matching on outcome levels with results from matching on outcome trends, and examine outcome changes around the start of the intervention to assess remaining bias.
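The mechanism described in the abstract, that matching cannot separate short-term fluctuations from structural differences, can be illustrated with a minimal simulation sketch. This is not the authors' Medicaid-based design; the unit counts, the fixed-effect distributions, and the zero true treatment effect are illustrative assumptions. Treated units are selected on a low unobserved fixed effect, controls are matched one-to-one on the noisy pre-treatment outcome level, and a difference-in-differences estimate is computed. Because the true effect is zero, any nonzero estimate is bias; it grows with the error term's standard deviation because matched controls' low pre-period outcomes partly reflect transitory noise that mean-reverts after matching.

```python
import numpy as np

def did_after_matching(sigma, n_treat=500, n_ctrl=5000, seed=0):
    """DiD estimate after 1-NN matching on pre-treatment outcome levels.

    The true treatment effect is zero, so the return value is pure bias.
    """
    rng = np.random.default_rng(seed)
    # Selection on unobservables: treated units have lower fixed effects.
    alpha_t = rng.normal(-1.0, 1.0, n_treat)
    alpha_c = rng.normal(0.0, 1.0, n_ctrl)
    # Outcomes = fixed effect + i.i.d. error term; no treatment effect added.
    y_pre_t = alpha_t + rng.normal(0.0, sigma, n_treat)
    y_post_t = alpha_t + rng.normal(0.0, sigma, n_treat)
    y_pre_c = alpha_c + rng.normal(0.0, sigma, n_ctrl)
    y_post_c = alpha_c + rng.normal(0.0, sigma, n_ctrl)
    # Nearest-neighbour matching (with replacement) on the pre-period level.
    order = np.argsort(y_pre_c)
    sorted_pre = y_pre_c[order]
    hi = np.clip(np.searchsorted(sorted_pre, y_pre_t), 0, n_ctrl - 1)
    lo = np.clip(hi - 1, 0, n_ctrl - 1)
    take_lo = np.abs(sorted_pre[lo] - y_pre_t) < np.abs(sorted_pre[hi] - y_pre_t)
    match = order[np.where(take_lo, lo, hi)]
    # Difference-in-differences on the matched sample.
    return (y_post_t - y_pre_t).mean() - (y_post_c[match] - y_pre_c[match]).mean()
```

With a small error standard deviation the estimate stays near zero, while a large one produces a sizeable negative bias: matched controls' low pre-period outcomes are partly transitory noise, so their post-period outcomes rebound toward their fixed effects while treated outcomes do not.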

Keywords

Difference-in-differences · Matching · Simulation

Notes

Acknowledgements

We would like to thank participants of the CHSE brown bag for helpful comments and suggestions.

Funding

Authors Lindner and McConnell received no outside funding for this study.

Compliance with ethical standards

Conflict of interest

Authors Lindner and McConnell declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with animals performed by any of the authors. All procedures performed in this study that involved human subjects were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent for this study was not required.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Emergency Medicine, Center for Health System Effectiveness, Oregon Health & Science University, Portland, USA