The Use of Matching Methods in Higher Education Research: Answering Whether Attendance at a 2-Year Institution Results in Differences in Educational Attainment

  • C. Lockwood Reynolds
  • Stephen L. DesJardins
Part of the Higher Education: Handbook of Theory and Research book series (HATR, volume 24)

Abstract

This chapter provides readers with the conceptual and statistical underpinnings of matching methods. These methods have gained popularity in recent years, given the push to make stronger inferential statements about the impact of educational interventions and policies. Given the likelihood of nonrandom assignment into “treatments” in higher education, matching methods are particularly well suited to many educational research contexts. We demonstrate the use of these methods by examining whether educational outcomes differ depending on whether students begin their postsecondary careers at a 2-year or a 4-year institution. Our results indicate that estimates of the educational outcomes examined are sensitive to the choice of analytic method. These results provide evidence that remedying the nonrandom assignment problems often encountered in higher education research is important if we hope to provide accurate information to our colleagues and to educational policymakers.
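The chapter itself develops these estimators formally; as a rough illustration of the kind of propensity score matching the abstract refers to, the following is a minimal sketch in Python. The data frame and the column names (started_2yr as the treatment indicator, attained_ba as the outcome, and a few pretreatment covariates) are hypothetical placeholders, not variables from the chapter's analysis.

```python
# Illustrative sketch: one-to-one nearest-neighbor propensity score
# matching for the average treatment effect on the treated (ATT).
# All variable names here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_psm(df: pd.DataFrame, treat: str, outcome: str, covars: list) -> float:
    """Estimate the ATT by matching each treated unit to the control
    unit with the closest estimated propensity score."""
    X = df[covars].to_numpy()
    d = df[treat].to_numpy().astype(bool)
    # Step 1: model the probability of treatment given covariates
    # (the propensity score).
    pscore = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    treated = pscore[d].reshape(-1, 1)
    controls = pscore[~d].reshape(-1, 1)
    # Step 2: enforce common support by dropping treated units whose
    # score lies outside the range of control scores.
    on_support = (treated[:, 0] >= controls.min()) & (treated[:, 0] <= controls.max())
    # Step 3: match each treated unit to its nearest control, with replacement.
    nn = NearestNeighbors(n_neighbors=1).fit(controls)
    _, idx = nn.kneighbors(treated[on_support])
    y_treated = df.loc[d, outcome].to_numpy()[on_support]
    y_matched = df.loc[~d, outcome].to_numpy()[idx[:, 0]]
    # Step 4: the ATT is the mean treated-minus-matched-control difference.
    return float(np.mean(y_treated - y_matched))

# Example usage with hypothetical column names:
# att = att_psm(students, treat="started_2yr", outcome="attained_ba",
#               covars=["hs_gpa", "test_score", "family_income"])
```

In applied work of the kind the chapter discusses, this basic estimator would typically be accompanied by covariate balance checks after matching and by standard errors that account for the estimated propensity score and the matching step.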



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • C. Lockwood Reynolds (1)
  • Stephen L. DesJardins (2)
  1. Department of Economics, College of Business Administration, Kent State University, USA
  2. Center for the Study of Higher and Postsecondary Education, School of Education, University of Michigan, USA