Instrumental variables methods in experimental criminological research: what, why and how

Abstract

Quantitative criminology focuses on straightforward causal questions that are ideally addressed with randomized experiments. In practice, however, traditional randomized trials are difficult to implement in the untidy world of criminal justice. Even when randomized trials are implemented, not everyone is treated as intended and some control subjects may obtain experimental services. Treatments may also be more complicated than a simple yes/no coding can capture. This paper argues that the instrumental variables methods (IV) used by economists to solve omitted variables bias problems in observational studies also solve the major statistical problems that arise in imperfect criminological experiments. In general, IV methods estimate causal effects on subjects who comply with a randomly assigned treatment. The use of IV in criminology is illustrated through a re-analysis of the Minneapolis domestic violence experiment. The results point to substantial selection bias in estimates using treatment delivered as the causal variable, and IV estimation generates deterrent effects of arrest that are about one-third larger than the corresponding intention-to-treat effects.
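The complier-centered logic the abstract describes can be sketched with simulated data. This is a hypothetical illustration, not the Minneapolis data: random assignment `z` serves as an instrument for treatment delivered `d`, a latent "severity" variable confounds the as-treated comparison, and the Wald/IV estimator rescales the intention-to-treat contrast by the compliance rate (the first stage).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: z = random assignment to "arrest",
# d = treatment actually delivered, y = a recidivism-style outcome.
z = rng.integers(0, 2, n)

# Confounded compliance: latent "severity" raises both the chance that
# officers arrest regardless of assignment and the outcome itself.
severity = rng.normal(size=n)
complier = rng.random(n) < 0.7                # 70% follow their assignment
always_taker = (~complier) & (severity > 0.5)  # treated no matter what
d = np.where(complier, z, always_taker.astype(int))

true_effect = -0.10  # effect of treatment delivered, for compliers
y = 0.3 + true_effect * d + 0.05 * severity + rng.normal(0, 0.1, n)

# Naive "as-treated" contrast: biased by selection into treatment.
naive = y[d == 1].mean() - y[d == 0].mean()

# Intention-to-treat effect: compare by assignment, not delivery.
itt = y[z == 1].mean() - y[z == 0].mean()

# Wald/IV estimator: ITT divided by the first-stage compliance rate.
first_stage = d[z == 1].mean() - d[z == 0].mean()
iv = itt / first_stage
```

With these assumed parameters the ITT is attenuated relative to the complier effect (it averages in untreated non-compliers), the as-treated contrast is pulled toward zero by selection, and the IV estimate recovers the effect for compliers — the same qualitative pattern the abstract reports for the Minneapolis re-analysis.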

References

  • Abadie, A. (2003). Semiparametric instrumental variable estimation of treatment response models. Journal of Econometrics 113(2), 231–263.

  • Angrist, J. D. (1990). Lifetime earnings and the Vietnam era draft lottery: Evidence from social security administrative records. American Economic Review 80(3), 313–335.

  • Angrist, J. D. (2001). Estimation of limited dependent variable models with dummy endogenous regressors: Simple strategies for empirical practice. Journal of Business and Economic Statistics 19(1), 2–16.

  • Angrist, J. D. & Imbens, G. W. (1995). Two-stage least squares estimates of average causal effects in models with variable treatment intensity. Journal of the American Statistical Association 90(430), 431–442.

  • Angrist, J. D. & Krueger, A. B. (1999). Empirical strategies in labor economics. In O. Ashenfelter & D. Card (Eds.), Handbook of labor economics, Volume IIIA (pp. 1277–1366). Amsterdam: North-Holland.

  • Angrist, J. D. & Krueger, A. B. (2001). Instrumental variables and the search for identification. Journal of Economic Perspectives 15(4), 69–86.

  • Angrist, J. D. & Lavy, V. C. (1999). Using Maimonides' rule to estimate the effect of class size on student achievement. Quarterly Journal of Economics 114(2), 533–575.

  • Angrist, J. D. & Lavy, V. C. (2002). The effect of high school matriculation awards: Evidence from randomized trials. NBER Working Paper 9389, December.

  • Angrist, J. D., Imbens, G. W. & Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434), 444–455.

  • Berk, R. A. & Sherman, L. W. (1988). Police response to family violence incidents: An analysis of an experimental design with incomplete randomization. Journal of the American Statistical Association 83(401), 70–76.

  • Berk, R. A. & Sherman, L. W. (1993). Specific deterrent effects of arrest for domestic assault: Minneapolis, 1981–1982 [Computer file]. Conducted by the Police Foundation. 2nd ICPSR ed. Ann Arbor, Michigan: Inter-university Consortium for Political and Social Research [producer and distributor].

  • Berk, R. A., Smyth, G. K. & Sherman, L. W. (1988). When random assignment fails: Some lessons from the Minneapolis spouse abuse experiment. Journal of Quantitative Criminology 4(3), 209–223.

  • Bloom, H. S. (1984). Accounting for no-shows in experimental evaluation designs. Evaluation Review 8(2), 225–246.

  • Boruch, R., De Moya, D. & Snyder, B. (2002). The importance of randomized field trials in education and related areas. In F. Mosteller & R. Boruch (Eds.), Evidence matters: randomized trials in education research. Washington, DC: Brookings Institution.

  • Campbell, D. T. (1969). Reforms as experiments. American Psychologist 24, 409–429.

  • Cook, T. D. (2001). Sciencephobia: Why education researchers reject randomized experiments. Education Next (http://www.educationnext.org), Fall, 63–68.

  • Efron, B. & Feldman, D. (1991). Compliance as an explanatory variable in clinical trials. Journal of the American Statistical Association 86(413), 9–17.

  • Farrington, D. P. (1983). Randomized experiments on crime and justice. In M. H. Tonry & N. Morris (Eds.), Crime and justice. Chicago: University of Chicago Press.

  • Farrington, D. P. & Welsh, B. C. (2005). Randomized experiments in criminology: What have we learned in the last two decades? Journal of Experimental Criminology 1, 9–38.

  • Gartin, P. R. (1995). Dealing with design failures in randomized field experiments: Analytic issues regarding the evaluation of treatment effects. Journal of Research in Crime and Delinquency 32(4), 425–445.

  • Goldberger, A. S. (1991). A course in econometrics. Cambridge, MA: Harvard University Press.

  • Gottfredson, D. C. (2005). Long-term effects of participation in the Baltimore City Drug Treatment Court: Results from an experimental study. University of Maryland, Department of Criminology and Criminal Justice, mimeo, October 2005.

  • Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association 81(396), 945–970.

  • Imbens, G. W. & Angrist, J. D. (1994). Identification and estimation of local average treatment effects. Econometrica 62(2), 467–475.

  • Krueger, A. B. (1999). Experimental estimates of education production functions. Quarterly Journal of Economics 114(2), 497–532.

  • Levitt, S. D. (1997). Using electoral cycles in police hiring to estimate the effects of police on crime. American Economic Review 87(3), 270–290.

  • McCrary, J. (2002). Using electoral cycles in police hiring to estimate the effects of police on crime: comment. American Economic Review 92(4), 1236–1243.

  • Permutt, T. & Hebel, J. R. (1989). Simultaneous-equation estimation in a clinical trial of the effect of smoking on birth weight. Biometrics 45(2), 619–622.

  • Powers, D. E. & Swinton, S. S. (1984). Effects of self-study for coachable test item types. Journal of Educational Psychology 76(2), 266–278.

  • Rezmovic, E. L., Cook, T. J. & Dobson, L. D. (1981). Beyond random assignment: Factors affecting evaluation integrity. Evaluation Review 5(1), 51–67.

  • Rossi, P. H., Berk, R. A. & Lenihan, K. J. (1980). Money, work, and crime: Experimental evidence. New York: Academic.

  • Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and non-randomized studies. Journal of Educational Psychology 66, 688–701.

  • Rubin, D. B. (1977). Assignment to a treatment group on the basis of a covariate. Journal of Educational Statistics 2, 1–26.

  • Sherman, L. W. & Berk, R. A. (1984). The specific deterrent effects of arrest for domestic assault. American Sociological Review 49(2), 261–272.

  • Snow Jones, A. & Gondolf, E. (2002). Assessing the effect of batterer program completion on reassault: An instrumental variables analysis. Journal of Quantitative Criminology 18, 71–98.

  • Theil, H. (1953). Repeated least squares applied to complete equation systems. The Hague: Central Planning Bureau.

  • Wald, A. (1940). The fitting of straight lines if both variables are subject to error. Annals of Mathematical Statistics 11, 284–300.

  • Weisburd, D. L. (2003). Ethical practice and evaluation of interventions in crime and justice: The moral imperative for randomized trials. Evaluation Review 27(3), 336–354.

  • Weisburd, D. L., Lum, C. & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice? Annals of the American Academy of Political and Social Science 578(6), 50–70.

  • White, M. J. (2005). Acupuncture in drug treatment: Exploring its role and impact on participant behavior in the drug court setting. John Jay College of Criminal Justice, City University of New York, mimeo.

  • Woodbury, S. A. & Spiegelman, R. G. (1987). Bonuses to workers and employers to reduce unemployment: Randomized trials in Illinois. American Economic Review 77(4), 513–530.

  • Wooldridge, J. (2003). Introductory econometrics: A modern approach. Cincinnati, OH: Thomson South-Western.

Author information


Correspondence to Joshua D. Angrist.

Cite this article

Angrist, J.D. Instrumental variables methods in experimental criminological research: what, why and how. J Exp Criminol 2, 23–44 (2006). https://doi.org/10.1007/s11292-005-5126-x


Key words

  • causal effects
  • domestic violence
  • local average treatment effects
  • non-compliance
  • two-stage least squares