Improving evaluation of anti-crime programs: Summary of a National Research Council report

Journal of Experimental Criminology

Abstract

This article summarizes a report of the National Research Council, Improving Evaluation of Anti-Crime Programs. It is based on a workshop, held in September 2003, in which participants presented and discussed examples of evaluation-related studies that represent the methods and challenges associated with research at three levels: interventions directed toward individuals; interventions in neighborhoods, schools, prisons, or communities; and interventions at a broad policy level. The article, and the report on which it is based, is organized around five questions that require thoughtful analysis in the development of any evaluation plan: What questions should the evaluation address? When is it appropriate to conduct an impact evaluation? How should an impact evaluation be designed? How should the evaluation be implemented? What organizational infrastructure and procedures support high-quality evaluation? The authors highlight major considerations in developing and implementing evaluation plans for criminal justice programs and make recommendations for improving government-funded evaluation studies.
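One design consideration behind these questions lends itself to a concrete illustration: an impact evaluation must enroll enough units to detect effects of the size an anti-crime program can plausibly produce. The sketch below is a minimal statistical-power calculation for a two-arm randomized trial, using the standard normal-approximation formula; the Python/SciPy implementation and the function name n_per_group are illustrative assumptions, not material from the report.

```python
# Minimal sketch: approximate per-group sample size for a two-arm randomized
# trial to detect a standardized mean difference d (Cohen's d) with a
# two-sided test at significance level alpha and the stated power.
# Formula (normal approximation): n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
# NOTE: illustrative code, not drawn from the NRC report.
import math
from scipy.stats import norm

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_power = norm.ppf(power)          # quantile matching the desired power
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Small effects, typical of criminal-justice interventions, demand large samples:
for d in (0.2, 0.5, 0.8):  # conventional small / medium / large benchmarks
    print(f"d = {d}: about {n_per_group(d)} subjects per group")
```

At 80% power, detecting a small effect (d = 0.2) requires roughly 400 subjects per group, one reason statistical power recurs as a challenge in the design of criminal justice evaluations.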



Author information


Corresponding author

Correspondence to Carol Petrie.

Additional information

This summary article is based on the National Research Council Report of the Committee on Improving Evaluation of Anti-Crime Programs. The authors of that report are: Mark Lipsey, Ph.D., the committee chair, and committee members John Adams, Ph.D., Denise Gottfredson, Ph.D., John Pepper, Ph.D., and David Weisburd, Ph.D. The authors of this summary have remained as faithful as possible to the longer, original report. However, any differences that appear in this article are attributable to the authors alone and not to the National Research Council.


Cite this article

Lipsey, M., Petrie, C., Weisburd, D. et al. Improving evaluation of anti-crime programs: Summary of a National Research Council report. J Exp Criminol 2, 271–307 (2006). https://doi.org/10.1007/s11292-006-9009-6
