Difference-in-differences and matching on outcomes: a tale of two unobservables
Difference-in-differences combined with matching on pre-treatment outcomes is a popular method for addressing non-parallel trends between a treatment and control group. However, previous simulations suggest that this approach does not always eliminate or reduce bias, and it is not clear when or why. Using Medicaid claims data from Oregon, we systematically vary the distribution of two key unobservables—fixed effects and the random error term—to examine how they affect the bias of matching on pre-treatment outcome levels or trends combined with difference-in-differences. We find that in most scenarios, bias increases with the standard deviation of the error term, because a higher standard deviation makes short-term fluctuations in outcomes more likely, and matching cannot easily distinguish between these short-term fluctuations and more structural outcome trends. The fixed-effect distribution may also create bias, but only when matching on pre-treatment outcome levels. A parallel-trend test on the matched sample does not reliably distinguish between successful and unsuccessful matching. Researchers using matching on pre-treatment outcomes to adjust for non-parallel trends should report estimates from both unadjusted and propensity-score-matching-adjusted difference-in-differences, compare results for matching on outcome levels and trends, and examine outcome changes around the start of the intervention to assess remaining bias.
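The bias mechanism described above—matching on noisy pre-treatment levels selects controls whose outcomes then regress to the mean—can be illustrated with a minimal simulation sketch. This is not the paper's simulation design; it is a stylized example with assumed parameters (one pre- and one post-period, normally distributed fixed effects with different means across groups, i.i.d. errors, nearest-neighbor matching with replacement on the pre-period level):

```python
import numpy as np

def simulate_did_bias(sigma_e, n=500, pool=2000, true_effect=0.0, seed=0):
    """Bias of matched difference-in-differences for a given error SD (illustrative)."""
    rng = np.random.default_rng(seed)
    # Fixed effects: treated units drawn from a distribution shifted upward.
    alpha_t = rng.normal(1.0, 1.0, n)
    alpha_c = rng.normal(0.0, 1.0, pool)
    # Outcomes = fixed effect + i.i.d. error; treatment effect set to zero here.
    pre_t = alpha_t + rng.normal(0, sigma_e, n)
    post_t = alpha_t + true_effect + rng.normal(0, sigma_e, n)
    pre_c = alpha_c + rng.normal(0, sigma_e, pool)
    post_c = alpha_c + rng.normal(0, sigma_e, pool)
    # Nearest-neighbor matching (with replacement) on pre-treatment levels.
    idx = np.abs(pre_c[None, :] - pre_t[:, None]).argmin(axis=1)
    # Difference-in-differences on the matched sample.
    did = (post_t - pre_t).mean() - (post_c[idx] - pre_c[idx]).mean()
    return did - true_effect  # bias

for sigma in (0.1, 0.5, 1.0):
    print(f"sigma_e = {sigma}: bias = {simulate_did_bias(sigma):+.3f}")
```

With a small error SD, the pre-period level is nearly the fixed effect, so matching removes the group difference and the bias is close to zero; with a large error SD, matched controls are selected partly on transient positive shocks, their outcomes fall back toward their own means in the post-period, and the estimated effect is biased upward.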
Keywords: Difference-in-differences · Matching · Simulation
We would like to thank participants of the CHSE brownbag for helpful comments and suggestions.
Authors Lindner and McConnell received no outside funding for this study.
Compliance with ethical standards
Conflict of interest
Authors Lindner and McConnell declare that they have no conflict of interest.
This article does not contain any studies with animals performed by any of the authors. All procedures performed in this study that involved human subjects were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Informed consent for this study was not required.