The New Palgrave Dictionary of Economics

2018 Edition | Editors: Macmillan Publishers Ltd

Matching Estimators

  • Petra E. Todd
Reference work entry
DOI: https://doi.org/10.1057/978-1-349-95189-5_2104

Abstract

Matching methods are a popular approach for evaluating the effects of programmes or other treatment interventions. This article reviews recent developments in the econometric literature on matching estimators, including the assumptions required to justify their application, different ways of implementing the estimators, and some recent empirical applications.
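As a concrete illustration of the simplest variant referred to in the keywords below (nearest-neighbour matching on the propensity score), the following sketch estimates the average treatment effect on the treated by matching each treated unit to the comparison unit with the closest estimated propensity score. The simulated data, variable names and the use of scikit-learn are assumptions made for illustration only; they are not drawn from the article itself.

```python
# Minimal sketch of propensity-score nearest-neighbour matching for the
# average treatment effect on the treated (ATT). Illustrative only; the
# data and library choices are assumptions, not part of the original entry.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_nearest_neighbour(y, d, X):
    """ATT via single nearest-neighbour matching on the propensity score."""
    # Step 1: estimate the propensity score P(D = 1 | X).
    pscore = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]

    treated = d == 1
    p_t = pscore[treated].reshape(-1, 1)
    p_c = pscore[~treated].reshape(-1, 1)

    # Step 2: match each treated unit to the comparison unit with the
    # closest estimated propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(p_c)
    _, idx = nn.kneighbors(p_t)

    # Step 3: the mean difference between treated outcomes and their
    # matched comparison outcomes estimates the ATT.
    y_c_matched = y[~treated][idx.ravel()]
    return np.mean(y[treated] - y_c_matched)

# Usage with purely hypothetical simulated data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
d = (X[:, 0] + rng.normal(size=500) > 0).astype(int)
y = 2.0 * d + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=500)
print(att_nearest_neighbour(y, d, X))
```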

Keywords

Bootstrap; Curse of dimensionality; Kernel estimation; Local linear estimation; Matching; Matching estimators; Nearest-neighbour matching; Programme effect; Propensity score; Semiparametric estimation; Treatment effect


Copyright information

© Macmillan Publishers Ltd. 2018

Authors and Affiliations

  • Petra E. Todd