Empirical Economics, Volume 32, Issue 2–3, pp 491–528

Evaluating multi-treatment programs: theory and evidence from the U.S. Job Training Partnership Act experiment

Original Paper

Abstract

This paper considers the evaluation of programs that offer multiple treatments to their participants. Our theoretical discussion outlines the tradeoffs associated with evaluating the program as a whole versus separately evaluating the individual treatments. Our empirical analysis examines the value of disaggregating multi-treatment programs using data from the U.S. National Job Training Partnership Act Study. This study includes both experimental data, which serve as a benchmark, and non-experimental data. The JTPA experiment divides the program into three treatment “streams” centered on different services. Unlike previous work that analyzes the program as a whole, we analyze the streams separately. Despite our relatively small sample sizes, our findings illustrate the potential for valuable insights into program operation and impact to be lost when treatments are aggregated. In addition, we show that many of the lessons drawn from analyzing JTPA as a single treatment carry over to the individual treatment streams.
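
A simple identity (our illustration here, not drawn from the paper itself) makes the aggregation point concrete. If the program's treatment streams are mutually exclusive, the impact of the pooled program on its participants is the share-weighted average of the stream-specific impacts:

$$\Delta_{\text{pooled}} \;=\; \sum_{s} w_s\, \Delta_s, \qquad w_s = \frac{n_s}{\sum_{s'} n_{s'}},$$

where $\Delta_s$ is the mean impact for participants assigned to stream $s$ and $n_s$ is the number of such participants. With hypothetical impacts of, say, $+900$, $+300$, and $-600$ dollars across three equally sized streams, the pooled estimate of $+200$ would conceal that one stream is actively harmful.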

Keywords

Program evaluation · Matching · Multi-treatment program · JTPA

Copyright information

© Springer-Verlag 2007

Authors and Affiliations

  1. Department of Economics, University of Guelph, Guelph, Canada
  2. Department of Economics, University of Michigan, Ann Arbor, USA
