Why do evaluation researchers in crime and justice choose non-experimental methods?

  • Original Article
  • Published in: Journal of Experimental Criminology

Abstract

Despite the general theoretical support for the value and use of randomized controlled experiments in determining ‘what works’ in criminal justice interventions, they are infrequently used in practice. Reasons often given for their rare use include that experiments present practical difficulties and ethical challenges or tend to over-simplify complex social processes. However, there may be other reasons why experiments are not chosen when studying criminal justice-related programs. This study reports the findings of a survey of criminal justice evaluation researchers regarding the methodological choices they made in research studies they were involved in. The results suggest that traditional objections to experiments may not be as salient as initially believed and that funding agency pressure, as well as academic mentorship, may have important influences on the use of randomized controlled designs.



Author information


Corresponding author

Correspondence to Cynthia Lum.

Additional information

In August 2005, Dr. Lum’s affiliation will change to George Mason University.


About this article

Cite this article

Lum, C., & Yang, S.-M. (2005). Why do evaluation researchers in crime and justice choose non-experimental methods? Journal of Experimental Criminology, 1, 191–213. https://doi.org/10.1007/s11292-005-1619-x

