
Justifying the use of non-experimental methods and disqualifying the use of randomized controlled trials: challenging folklore in evaluation research in crime and justice

Journal of Experimental Criminology

Abstract

The key limitation of non-experimental evaluation methods is that they require the assumption that all confounding factors related to treatment have been identified in the statistical models developed. The key advantage of randomized experiments is that this assumption can be relaxed. In this paper, I describe and explain why this assumption is so critical for non-experiments and why it can be ignored in randomized controlled trials (RCTs). I also challenge what I describe as “folklores” that are used to justify the use of non-randomized studies despite this statistical limitation, and to justify the failure of evaluation researchers in crime and justice to use randomized experiments despite their unique ability to overcome it. I conclude by reinforcing what Joan McCord argued after a lifetime of reviewing evaluations: “(W)henever possible” evaluation studies “should employ random assignment.”
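The statistical point at issue can be sketched with a small simulation (illustrative only: the variable names, effect sizes, and selection rule below are assumptions, not taken from the paper). When units self-select into treatment on an unmeasured confounder, a naive comparison of treated and untreated outcomes is biased; random assignment balances the confounder by design.

```python
import random

random.seed(0)

def simulate(randomize, n=50_000, true_effect=0.0):
    """Difference in mean outcomes between treated and control units.

    The outcome depends on a confounder c. Under self-selection,
    high-c units are more likely to end up treated, so the naive
    treated-vs-control comparison absorbs the effect of c.
    """
    treated, control = [], []
    for _ in range(n):
        c = random.gauss(0, 1)                  # unmeasured confounder
        if randomize:
            t = random.random() < 0.5           # coin-flip assignment
        else:
            t = (c + random.gauss(0, 1)) > 0    # selection on c
        y = true_effect * t + 2.0 * c + random.gauss(0, 1)
        (treated if t else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)

naive = simulate(randomize=False)  # biased: picks up the effect of c
rct = simulate(randomize=True)     # unbiased: c is balanced by design

print(f"true effect: 0.0 | observational: {naive:.2f} | RCT: {rct:.2f}")
```

Here the treatment truly does nothing, yet the observational comparison attributes the confounder's influence to it; the randomized comparison does not. A non-experimental analysis can recover the truth only by measuring and modeling c, which is exactly the assumption the paper examines.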


[Figures 1–5 appear in the full article.]


Notes

  1. Standardized coefficients are used here and later in the paper to simplify the mathematical procedures and the interpretation of the effects observed.

References

  • Asscher, J. J., Deković, M., van der Laan, P. H., Prins, P. J. M., & van Arum, S. (2007). Implementing randomized experiments in criminal justice settings: an evaluation of multi-systemic therapy in the Netherlands. Journal of Experimental Criminology, 3, 113–129.


  • Baldus, D. C., Woodworth, G. G., & Pulaski, C. A. (1990). Equal justice and the death penalty: a legal and empirical analysis. Boston: Northeastern University Press.


  • Baunach, P. J. (1980). Random assignment in criminal justice research—some ethical and legal issues. Criminology, 17, 435–444.


  • Berk, R. A. (2005). Randomized experiments as the bronze standard. Journal of Experimental Criminology, 1(4), 417–433.


  • Berk, R. A., Smyth, G. K., & Sherman, L. W. (1988). When random assignment fails: some lessons from the Minneapolis Domestic Violence Experiment. Journal of Quantitative Criminology, 4, 209–223.


  • Berk, R. A., Campbell, A., Klap, R., & Western, B. (1992). A Bayesian analysis of the Colorado spouse abuse experiment. Journal of Criminal Law and Criminology, 83, 170–200.


  • Boruch, R. (1975). On common contentions about randomized field experiments. In R. Boruch & H. W. Riecken (Eds.), Experimental testing of public policy: the Proceedings of the 1974 Social Science Research Council Conference on Social Experimentation (pp. 107–142). Boulder: Westview Press.


  • Boruch, R. (1997). Randomized experiments for planning and evaluation: a practical guide. Thousand Oaks: Sage Publications.


  • Boruch, R., Snyder, B., & DeMoya, D. (2000). The importance of randomized field trials. Crime & Delinquency, 46, 156–180.


  • Boruch, R., Victor, T., & Cecil, J. (2000). Resolving ethical and legal problems in randomized experiments. Crime & Delinquency, 46, 300–353.


  • Botvin, G. J., Baker, E., Dusenbury, L., Botvin, E. M., & Diaz, T. (1995). Long-term follow-up results of a randomized drug abuse prevention trial in a white middle-class population. Journal of the American Medical Association, 273, 1106–1112.


  • Braga, A. A., Weisburd, D., Waring, E. J., Mazerolle, L. G., Spelman, W., & Gajewski, F. (1999). Problem-oriented policing in violent crime places: a randomized controlled experiment. Criminology, 37, 541–580.


  • Campbell, D., & Boruch, R. F. (1975). Making the case for randomized assignment to treatments by considering the alternatives: six ways in which quasi-experimental evaluations in compensatory education tend to underestimate effects. In C. A. Bennett & A. A. Lumsdaine (Eds.), Evaluation and experiment: some critical issues in assessing social programs (pp. 195–296). New York: Academic Press.


  • Campbell, D., & Russo, J. (Eds.). (1999). Social experimentation. Thousand Oaks: Sage Publications.

  • Clarke, R. V., & Cornish, D. B. (1972). The controlled trial in institutional research: paradigm or pitfall for penal evaluators? Home Office Research Studies. London: Her Majesty’s Stationery Office.


  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale: Lawrence Erlbaum.


  • Cook, T. D., & Campbell, D. (1979). Quasi-experimentation: design and analysis issues for field settings. Chicago: Rand McNally.


  • Cook, T. D., Shadish, W. R., & Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: new findings from within-study comparisons. Journal of Policy Analysis and Management, 27(4), 724–750.


  • DeLeon, G., Melnick, G., Kressel, D., & Wexler, H. K. (2000). Motivation for treatment in a prison-based therapeutic community. American Journal of Drug and Alcohol Abuse, 26, 33–46.


  • Dennis, M. L. (1988). Implementing randomized field experiments: an analysis of criminal and civil justice research. Dissertation, Northwestern University.

  • Dunford, F. W. (2000). The San Diego Navy Experiment: an assessment of interventions for men who assault their wives. Journal of Consulting and Clinical Psychology, 68, 468–476.


  • Dunford, F. W., Huizinga, D., & Elliott, D. S. (1990). The role of arrest in domestic assault: the Omaha Police experiment. Criminology, 28, 183–206.


  • Eck, J. (2002). Learning from experience in problem-oriented policing and crime prevention: the positive functions of weak evaluations and the negative functions of strong ones. In N. Tilley (Ed.), Evaluation for crime prevention. Crime prevention studies, vol. 14 (pp. 93–117). Monsey: Criminal Justice Press.


  • Ellickson, P. L., Bell, R. M., & McGuigan, K. (1993). Preventing adolescent drug use: long-term results of a junior high program. American Journal of Public Health, 83, 856–861.


  • Erez, E. (1986). Randomized experiments in correctional context: legal, ethical, and practical concerns. Journal of Criminal Justice, 14, 389–400.


  • Esbensen, F. (1991). Ethical considerations in criminal justice research. American Journal of Police, 10, 87–104.


  • Farrington, D. P. (1983). Randomized experiments on crime and justice. In M. Tonry & N. Morris (Eds.), Crime and justice: a review of research, vol. 4 (pp. 257–308). Chicago: University of Chicago Press.


  • Farrington, D. P. (2003). A short history of randomized experiments in criminology: a meager feast. Evaluation Review, 27, 218–227.


  • Farrington, D. P., & Welsh, B. C. (2005). Randomized experiments in criminal justice: what have we learned in the past two decades? Journal of Experimental Criminology, 1, 9–38.


  • Farrington, D. P., Gottfredson, D. C., Sherman, L. W., & Welsh, B. C. (2002). The Maryland scientific methods score. In L. W. Sherman, D. P. Farrington, B. C. Welsh, & D. L. MacKenzie (Eds.), Evidence-based crime prevention (pp. 13–21). New York: Routledge.


  • Feder, L., & Dugan, L. (2002). A test of the efficacy of court-mandated counseling for domestic violence offenders: the Broward experiment. Justice Quarterly, 19, 343–375.


  • Feder, L., Jolin, A., & Feyerherm, W. (2000). Lessons from two randomized experiments in criminal justice settings. Crime & Delinquency, 46(3), 380–400.


  • Flay, B. R., & Best, J. (1982). Overcoming design problems in evaluating health behavior programs. Evaluation and the Health Professions, 5(1), 43–69.


  • Geis, G. (1967). Ethical and legal issues in experimentation with offender populations. In Research in correctional rehabilitation. Washington, DC: Joint Commission on Correctional Manpower and Training.

  • Graebsch, C. (2000). Legal issues of randomized experiments on sanctioning. Crime & Delinquency, 46, 271–282.


  • Graham, J. W., Johnson, C. A., Hansen, W. B., Flay, B. R., & Gee, M. (1990). Drug use prevention programs, gender, and ethnicity: evaluation of three seventh-grade Project SMART cohorts. Preventive Medicine, 19, 305–313.


  • Heckman, J., & Smith, J. A. (1995). Assessing the case for social experimentation. Journal of Economic Perspectives, 9, 85–110.


  • Inciardi, J. A., Martin, S. S., Butzin, C. A., Hopper, R. M., & Harrison, L. D. (1997). An effective model of prison-based treatment for drug-involved offenders. Journal of Drug Issues, 27, 261–278.


  • Lipsey, M., Petrie, C., Weisburd, D., & Gottfredson, D. (2006). Improving evaluation of anti-crime programs: summary of a National Research Council report. Journal of Experimental Criminology, 2, 271–307.


  • Lum, C., & Yang, S.-M. (2005). Why do evaluation researchers in crime and justice choose non-experimental methods? Journal of Experimental Criminology, 1, 191–213.


  • Mackenzie, D. L. (2006). What works in corrections: reducing the criminal activities of offenders and delinquents. New York: Cambridge University Press.


  • McCord, J. (2003). Cures that harm: unanticipated outcomes of crime prevention programs. The Annals of the American Academy of Political and Social Science, 587, 16–30.


  • Oxford Dictionaries. (2002). Oxford pocket American dictionary of English language. New York: Oxford University Press.


  • Palmer, T., & Petrosino, A. (2003). The “experimenting agency”: The California Youth Authority Research Division. Evaluation Review, 27, 228–266.


  • Paternoster, R. (1984). Prosecutorial discretion in requesting the death penalty: a case of victim-based racial discrimination. Law & Society Review, 18, 437–478.


  • Paternoster, R., & Kazyaka, A. M. (1988). Administration of the death penalty in South Carolina: experiences over the first few years. South Carolina Law Review, 39, 245–414.


  • Pawson, R., & Tilley, N. (1997). Realistic evaluation. Beverly Hills: Sage Publications.


  • Petersilia, J. (1989). Implementing randomized experiments: lessons from BJA’s Intensive Supervision Project. Evaluation Review, 13, 435–458.


  • Petersilia, J., & Turner, S. (1993). Evaluating intensive supervision probation/parole: results of a nationwide experiment. Washington, DC: National Institute of Justice, US Department of Justice.


  • Petrosino, A., Boruch, R. F., Soydan, H., Duggan, L., & Sanchez-Meca, J. (2001). Meeting the challenges of evidence-based crime policy: the Campbell Collaboration. The Annals of the American Academy of Political and Social Sciences, 578, 14–34.


  • Petrosino, A. J., Boruch, R. F., Farrington, D. P., Sherman, L. W., & Weisburd, D. (2003). Toward evidence-based criminology and criminal justice: systematic reviews, the Campbell Collaboration, and the Crime and Justice Group. International Journal of Comparative Criminology, 3, 42–61.


  • Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70, 41–55.


  • Rosenbaum, P. R., & Rubin, D. B. (1984). Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association, 79, 516–524.


  • Rosenthal, R. (1965). The volunteer subject. Human Relations, 18, 389–406.


  • Schneider, A. L. (1986). Restitution and recidivism rates of juvenile offenders: results from four experimental studies. Criminology, 24, 533–552.


  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton-Mifflin.


  • Shadish, W. R., Clark, M. H., & Steiner, P. M. (2008). Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random and nonrandom assignments. Journal of the American Statistical Association, 103, 1334–1343.


  • Shepherd, J. P. (2003). Explaining feast or famine in randomized field trials: medical science and criminology compared. Evaluation Review, 27, 290–315.


  • Sherman, L. W., & Berk, R. A. (1984). The specific deterrent effects of arrest for domestic assault. American Sociological Review, 49, 261–272.


  • Sherman, L. W., & Weisburd, D. (1995). General deterrent effects of police patrol in crime “hot spots”: a randomized, controlled trial. Justice Quarterly, 12, 625–648.


  • Sherman, L. W., & Strang, H. (2004). Verdicts or inventions? Interpreting results from randomized controlled experiments in criminology. American Behavioral Scientist, 47, 575–607.


  • Sherman, L. W., Gottfredson, D., MacKenzie, D. L., Eck, J., Reuter, P., & Bushway, S. (1997). Preventing crime: what works, what doesn't, what's promising. Washington, DC: National Institute of Justice, US Department of Justice.

  • Solomon, P., Cavanaugh, M. M., & Draine, J. (2009). Randomized controlled trials: design and implementation for community-based psychosocial interventions. New York: Oxford University Press.


  • Taxman, F. S. (1998). Reducing recidivism through a seamless system of care: components of effective treatment, supervision, and transition services in the community. Washington, DC: Office of National Drug Control Policy.


  • Taxman, F. S. (2008). No illusions: offender and organizational change in Maryland’s proactive community supervision efforts. Criminology and Public Policy, 7, 275–302.


  • Weisburd, D. (2000). Randomized experiments in criminal justice policy: prospects and problems. Crime & Delinquency, 46, 181–193.


  • Weisburd, D. (2003). Ethical practice and evaluation of interventions in crime and justice: the moral imperative for randomized trials. Evaluation Review, 27, 336–354.


  • Weisburd, D. (2005). Hot spots experiments and criminal justice research: lessons from the field. The Annals of the American Academy of Political and Social Science, 599, 220–245.


  • Weisburd, D., & Green, L. (1995). Policing drug hot spots: the Jersey City Drug Market Analysis experiment. Justice Quarterly, 12, 711–736.


  • Weisburd, D., & Naus, J. (2001). Report to Special Master David Baime: assessment of the index of outcomes approach for use in proportionality review. Trenton: New Jersey Administrative Office of the Courts.


  • Weisburd, D., & Piquero, A. R. (2008). How well do criminologists explain crime? Statistical modeling in published studies. In M. Tonry (Ed.), Crime and justice: a review of research, vol. 37 (pp. 453–502). Chicago: University of Chicago Press.


  • Weisburd, D., Lum, C., & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice? The Annals of the American Academy of Political and Social Science, 578, 50–70.


  • Wexler, H. K., Melnick, G., Lowe, L., & Peters, J. (1999). Three-year reincarceration outcomes for Amity in-prison therapeutic community and aftercare in California. Prison Journal, 79, 321–336.


  • Wilkinson, L., & Task Force on Statistical Inference, APA Board of Scientific Affairs. (1999). Statistical methods in psychology journals: guidelines and explanations. American Psychologist, 54, 594–604.



Author information

Correspondence to David Weisburd.

Additional information

An earlier version of this paper was delivered as the Joan McCord Lecture at the American Society of Criminology Meeting in St. Louis in November of 2008. I would like to thank Breanne Cave, Lorraine Green Mazerolle, Dave McClure, Shomron Moyal, Anthony Petrosino, Cody Telep, Tal Yonaton, Gali Weissmann, Julie Willis, and David Wilson for their helpful comments on earlier drafts of this work.



Cite this article

Weisburd, D. Justifying the use of non-experimental methods and disqualifying the use of randomized controlled trials: challenging folklore in evaluation research in crime and justice. J Exp Criminol 6, 209–227 (2010). https://doi.org/10.1007/s11292-010-9096-2
