
Journal of Experimental Criminology, Volume 10, Issue 4, pp. 573–597

Must we settle for less rigorous evaluations in large area-based crime prevention programs? Lessons from a Campbell review of focused deterrence

  • Anthony A. Braga
  • David L. Weisburd

Abstract

Objectives

Evaluations from a recent Campbell systematic review of focused deterrence programs are critically reviewed to determine whether more rigorous evaluations are possible given methodological challenges such as developing appropriate units of analysis, generalizing findings beyond study sites, and controlling for the contamination of available comparison groups.

Methods

We synthesize the available evaluation literature on focused deterrence programs completed before and after the publication of the Campbell review to assess opportunities to conduct randomized controlled trials and stronger quasi-experimental evaluations.

Results

We find that focused deterrence strategies are amenable to more rigorous evaluation methodologies such as block randomized place-based trials, multisite cluster randomized trials, and quasi-experimental evaluations that employ advanced statistical matching techniques.
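To make the first of these designs concrete, the sketch below is a minimal illustration in Python, using entirely hypothetical hot-spot names and baseline crime counts invented for exposition, of the core logic of a block randomized place-based trial: candidate places are grouped into blocks of statistically similar units on baseline crime, and treatment is then randomized within each block.

```python
import random

# Hypothetical hot spots with baseline annual crime counts (invented data).
hot_spots = [
    ("A", 120), ("B", 115), ("C", 80), ("D", 78),
    ("E", 45), ("F", 44), ("G", 20), ("H", 19),
]

# Form blocks of two places with similar baseline crime levels.
hot_spots.sort(key=lambda spot: spot[1], reverse=True)
blocks = [hot_spots[i:i + 2] for i in range(0, len(hot_spots), 2)]

# Randomize treatment within each block, so every treated place is
# compared against a statistically similar control place.
random.seed(1)  # fixed seed only so the illustration is reproducible
assignment = {}
for block in blocks:
    treated_place, _ = random.choice(block)
    for place, _ in block:
        assignment[place] = "treatment" if place == treated_place else "control"

print(assignment)  # e.g., {'A': 'control', 'B': 'treatment', ...}
```

Pairing before randomizing in this way guards against the small number of treated areas typical of focused deterrence programs differing systematically from the comparison areas, the small-N limitation that block randomized trials at places are designed to address (see Weisburd and Gill 2013).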

Conclusions

Focused deterrence programs can, and should, be subjected to more rigorous tests that generate more robust evidence on program impacts and provide further insight into the crime control mechanisms at work in these programs. More generally, our review supports the idea that program evaluators do not have to “settle for less” methodological rigor when testing large area-based crime prevention programs.

Keywords

Deterrence · Randomized experiments · Quasi-experiments · Program evaluation

References

  1. Abadie, A., & Gardeazabal, J. (2003). The economic costs of conflict: a case study of the Basque Country. American Economic Review, 93, 113–132.
  2. Abadie, A., Diamond, A., & Hainmueller, J. (2010). Synthetic control methods for comparative case studies: estimating the effect of California’s tobacco control program. Journal of the American Statistical Association, 105, 493–505.
  3. Albright, J., & Marinova, D. (2010). Estimating multilevel models using SPSS, Stata, SAS, and R. http://www.indiana.edu/~statmath/stat/all/hlm/hlm.pdf.
  4. Berk, R. (2005a). Knowing when to fold ‘em: an essay on evaluating the impact of Ceasefire, Compstat, and Exile. Criminology & Public Policy, 4, 451–466.
  5. Berk, R. A. (2005b). Randomized experiments as the bronze standard. Journal of Experimental Criminology, 1, 417–433.
  6. Boruch, R. F. (1975). On common contentions about randomized field experiments. In R. F. Boruch & H. W. Riecken (Eds.), Experimental testing of public policy: The proceedings of the 1974 Social Science Research Council conference on social experimentation (pp. 107–142). Boulder, CO: Westview Press.
  7. Boruch, R. F. (1997). Randomized experiments for planning and evaluation. Newbury Park, CA: Sage.
  8. Boyle, D. J., Lanterman, J., Pascarella, J., & Cheng, C. C. (2010). The impact of Newark’s Operation Ceasefire on trauma center gunshot wound admissions. Newark, NJ: University of Medicine and Dentistry of New Jersey, Violence Institute of New Jersey.
  9. Boyum, D. A., Caulkins, J. P., & Kleiman, M. (2011). Drugs, crime, and public policy. In J. Q. Wilson & J. Petersilia (Eds.), Crime and public policy (pp. 368–410). New York: Oxford University Press.
  10. Braga, A. A. (2008). Pulling levers focused deterrence strategies and the prevention of gun homicide. Journal of Criminal Justice, 36, 332–343.
  11. Braga, A. A. (2010). Setting a higher standard for the evaluation of problem-oriented policing initiatives. Criminology & Public Policy, 9, 173–182.
  12. Braga, A. A. (2012). Getting deterrence right? Evaluation evidence and complementary crime control mechanisms. Criminology & Public Policy, 11, 201–210.
  13. Braga, A. A. (2013). Quasi-experimentation when random assignment is not possible: observations from practical experiences in the field. In B. C. Welsh, A. A. Braga, & G. Bruinsma (Eds.), Experimental criminology: prospects for improving science and public policy (pp. 223–252). New York: Cambridge University Press.
  14. Braga, A. A., & Bond, B. J. (2008). Policing crime and disorder hot spots: a randomized controlled trial. Criminology, 46, 577–607.
  15. Braga, A. A., & Weisburd, D. (2012). The effects of “pulling levers” focused deterrence strategies on crime. Campbell Systematic Reviews. doi: 10.4073/csr.2012.6.
  16. Braga, A. A., Weisburd, D. L., Waring, E. J., Green-Mazerolle, L., Spelman, W., & Gajewski, F. (1999). Problem-oriented policing in violent crime places: a randomized controlled experiment. Criminology, 37, 541–580.
  17. Braga, A. A., Kennedy, D. M., Waring, E. J., & Piehl, A. M. (2001). Problem-oriented policing, deterrence, and youth violence: an evaluation of Boston’s Operation Ceasefire. Journal of Research in Crime and Delinquency, 38, 195–225.
  18. Braga, A. A., Hureau, D. M., & Winship, C. (2008a). Losing faith? Police, black churches, and the resurgence of youth violence in Boston. Ohio State Journal of Criminal Law, 6, 141–172.
  19. Braga, A. A., Pierce, G., McDevitt, J., Bond, B., & Cronin, S. (2008b). The strategic prevention of gun violence among gang-involved offenders. Justice Quarterly, 25, 132–162.
  20. Braga, A. A., Hureau, D. M., & Papachristos, A. V. (2011). An ex-post-facto evaluation framework for place-based police interventions. Evaluation Review, 35, 592–626.
  21. Braga, A. A., Apel, R., & Welsh, B. (2013). The spillover effects of focused deterrence on gang violence. Evaluation Review, 37, 314–342.
  22. Braga, A. A., Hureau, D. M., & Papachristos, A. V. (2014). Deterring gang-involved gun violence: measuring the impact of Boston’s Operation Ceasefire on street gang behavior. Journal of Quantitative Criminology, 30, 113–139.
  23. Campbell, D. T., & Boruch, R. F. (1975). Making the case for randomized assignment to treatment by considering the alternatives. In C. Bennett & A. Lumsdaine (Eds.), Evaluation and experiments: some critical issues in assessing social programs (pp. 195–296). New York: Academic.
  24. Clarke, R. V. (Ed.). (1997). Situational crime prevention: successful case studies. New York: Harrow and Heston.
  25. Clarke, R. V., & Cornish, D. (1972). The controlled trial in institutional research. London: H.M. Stationery Office.
  26. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
  27. Cook, P. J. (2012). Editorial introduction: the impact of drug market pulling levers policing on neighborhood violence. Criminology & Public Policy, 11, 161–164.
  28. Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: design and analysis issues for field settings. Chicago: Rand McNally.
  29. Cook, P. J., & Ludwig, J. (2006). Aiming for evidence-based gun policy. Journal of Policy Analysis and Management, 25, 691–735.
  30. Corsaro, N., & McGarrell, E. (2009). An evaluation of the Nashville drug market initiative (DMI) pulling levers strategy. East Lansing, MI: Michigan State University.
  31. Corsaro, N., Brunson, R., & McGarrell, E. (2010). Problem-oriented policing and open-air drug markets: examining the Rockford pulling levers strategy. Crime & Delinquency. doi: 10.1177/0011128709345955.
  32. Corsaro, N., Hunt, E., Hipple, N. K., & McGarrell, E. (2012). The impact of drug market pulling levers policing on neighborhood violence: an evaluation of the High Point drug market intervention. Criminology & Public Policy, 11, 167–200.
  33. Durlauf, S., & Nagin, D. (2011). Imprisonment and crime: can both be reduced? Criminology & Public Policy, 10, 13–54.
  34. Eck, J. (2002). Learning from experience in problem-oriented policing and situational prevention: the positive functions of weak evaluations and the negative functions of strong ones. In N. Tilley (Ed.), Evaluation for crime prevention, Crime Prevention Studies (Vol. 14, pp. 93–117). Monsey, NY: Criminal Justice Press.
  35. Engel, R. S., Corsaro, N., & Skubak Tillyer, M. (2010). Evaluation of the Cincinnati initiative to reduce violence (CIRV). Cincinnati, OH: University of Cincinnati Policing Institute.
  36. Fagan, J. (2002). Policing guns and youth violence. The Future of Children, 12, 133–151.
  37. Farrington, D. P., Gottfredson, D. C., Sherman, L. W., & Welsh, B. C. (2006). The Maryland scientific methods scale. In L. W. Sherman, D. P. Farrington, B. C. Welsh, & D. L. MacKenzie (Eds.), Evidence-based crime prevention (rev. ed., pp. 13–21). New York: Routledge.
  38. Fisher, R. A. (1926). The arrangement of field experiments. Journal of the Ministry of Agriculture of Great Britain, 33, 503–513.
  39. Fisher, R. A. (1935). The design of experiments. Edinburgh: Oliver and Boyd.
  40. Goldstein, H. (1990). Problem-oriented policing. Philadelphia, PA: Temple University Press.
  41. Guerette, R. T. (2009). The pull, push, and expansion of situational crime prevention evaluation: an appraisal of thirty-seven years of research. In J. Knutsson & N. Tilley (Eds.), Evaluating crime reduction initiatives, Crime Prevention Studies (Vol. 24, pp. 29–58). Monsey, NY: Criminal Justice Press.
  42. Harless, W. (2013, October 14). Cities use sticks, carrots to rein in gangs. Wall Street Journal.
  43. Hawken, A., & Kleiman, M. (2009). Managing drug involved probationers with swift and certain sanctions. Final report submitted to the National Institute of Justice. Unpublished report.
  44. Heckman, J., & Smith, J. (1995). Assessing the case for social experiments. Journal of Economic Perspectives, 9, 85–110.
  45. Kennedy, D. (1997). Pulling levers: chronic offenders, high-crime settings, and a theory of prevention. Valparaiso University Law Review, 31, 449–484.
  46. Kennedy, D. (2008). Deterrence and crime prevention. New York: Routledge.
  47. Kennedy, D., & Wong, S.-L. (2009). The High Point drug market intervention strategy. Washington, DC: Community Oriented Policing Services, U.S. Department of Justice.
  48. Kennedy, D., Piehl, A., & Braga, A. A. (1996). Youth violence in Boston: gun markets, serious youth offenders, and a use-reduction strategy. Law & Contemporary Problems, 59, 147–196.
  49. Kennedy, D. M., Braga, A. A., & Piehl, A. M. (1997). The (un)known universe: mapping gangs and gang violence in Boston. In D. L. Weisburd & J. T. McEwen (Eds.), Crime mapping and crime prevention (pp. 219–262). Monsey, NY: Criminal Justice Press.
  50. Knutsson, J. (2009). Standards of evaluations in problem-oriented policing projects: Good enough? In J. Knutsson & N. Tilley (Eds.), Evaluating crime reduction initiatives, Crime Prevention Studies (Vol. 24, pp. 7–28). Monsey, NY: Criminal Justice Press.
  51. LaVigne, N., & Lowry, S. (2011). Evaluation of camera use to prevent crime in commuter parking facilities: a randomized controlled trial. Washington, DC: Urban Institute.
  52. Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
  53. Ludwig, J. (2005). Better gun enforcement, less crime. Criminology & Public Policy, 4, 677–716.
  54. MacKenzie, D. L., Umamaheswar, J., & Lin, L.-C. (2013). Multisite randomized trials in criminology. In B. C. Welsh, A. A. Braga, & G. Bruinsma (Eds.), Experimental criminology: prospects for improving science and public policy (pp. 163–193). New York: Cambridge University Press.
  55. Manski, C. F. (2013). Public policy in an uncertain world: analysis and decisions. Cambridge, MA: Harvard University Press.
  56. McCord, J. (2003). Cures that harm: unanticipated outcomes of crime prevention programs. Annals of the American Academy of Political and Social Science, 587, 16–30.
  57. McGarrell, E., Chermak, S., Wilson, J., & Corsaro, N. (2006). Reducing homicide through a ‘lever-pulling’ strategy. Justice Quarterly, 23, 214–229.
  58. Miles, T., & Ludwig, J. (2007). The silence of the lambdas: deterring incapacitation research. Journal of Quantitative Criminology, 23, 287–301.
  59. Morgan, S. L., & Winship, C. (2007). Counterfactuals and causal inference: methods and principles for social research. New York: Cambridge University Press.
  60. Mosteller, F., & Boruch, R. F. (2002). Evidence matters: randomized trials in education research. Washington, DC: Brookings Institution.
  61. Murray, D. M. (1998). Design and analysis of group-randomized trials. New York: Oxford University Press.
  62. Papachristos, A. V., Meares, T., & Fagan, J. (2007). Attention felons: evaluating Project Safe Neighborhoods in Chicago. Journal of Empirical Legal Studies, 4, 223–272.
  63. Papachristos, A. V., Wallace, D., Meares, T., & Fagan, J. (2013). Desistance and legitimacy: The impact of offender notification meetings on recidivism among high risk offenders. Unpublished manuscript.
  64. Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.
  65. Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: applications and data analysis methods (2nd ed.). Newbury Park, CA: Sage.
  66. Rosenbaum, P., & Rubin, D. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70, 41–55.
  67. Rosenbaum, P., & Rubin, D. (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. American Statistician, 39, 33–38.
  68. Rosenfeld, R., Fornango, R., & Baumer, E. (2005). Did Ceasefire, Compstat, and Exile reduce homicide? Criminology & Public Policy, 4, 419–450.
  69. Rubin, D. B. (1990). Formal mode of statistical inference for causal effects. Journal of Statistical Planning and Inference, 25, 279–292.
  70. Sampson, R. J. (2010). Gold standard myths: observations on the experimental turn in quantitative criminology. Journal of Quantitative Criminology, 26, 489–500.
  71. Sampson, R. J., Winship, C., & Knight, C. (2013). Translating causal claims: principles and strategies for policy-relevant criminology. Criminology & Public Policy, 12, 587–616.
  72. Saunders, J., Lundberg, R., Braga, A. A., Ridgeway, G., & Miles, J. (2014). A synthetic control approach to evaluating multiple geographically-focused crime interventions in the same city: DMI in High Point. Santa Monica, CA: RAND Corporation.
  73. Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
  74. Sherman, L. W., Gottfredson, D. C., MacKenzie, D. L., Eck, J. E., Reuter, P., & Bushway, S. D. (1997). Preventing crime: what works, what doesn’t, what’s promising. Washington, DC: U.S. Department of Justice, National Institute of Justice.
  75. Skogan, W., & Frydl, K. (Eds.). (2004). Fairness and effectiveness in policing: The evidence. Committee to Review Research on Police Policy and Practices. Washington, DC: The National Academies Press.
  76. Tilley, N. (2009). What’s the “what” in “what works?” Health, policing, and crime prevention. In J. Knutsson & N. Tilley (Eds.), Evaluating crime reduction initiatives, Crime Prevention Studies (Vol. 24, pp. 121–146). Monsey, NY: Criminal Justice Press.
  77. Tita, G., Riley, K. J., Ridgeway, G., Grammich, C., Abrahamse, A., & Greenwood, P. W. (2004). Reducing gun violence: results from an intervention in East Los Angeles. Santa Monica, CA: RAND Corporation.
  78. Weisburd, D. (1993). Design sensitivity in criminal justice experiments. In M. Tonry (Ed.), Crime and justice: a review of research (Vol. 17, pp. 337–379). Chicago: University of Chicago Press.
  79. Weisburd, D. (2003). Ethical practice and evaluation of interventions in crime and justice: the moral imperative for randomized trials. Evaluation Review, 27, 336–354.
  80. Weisburd, D. (2010). Justifying the use of non-experimental methods and disqualifying the use of randomized controlled trials: challenging the folklore in evaluation research in crime and justice. Journal of Experimental Criminology, 6, 209–227.
  81. Weisburd, D., & Eck, J. (2004). What can police do to reduce crime, disorder and fear? Annals of the American Academy of Political and Social Science, 593, 42–65.
  82. Weisburd, D., & Gill, C. (2013). Block randomized trials at places: rethinking the limitations of small N experiments. Journal of Quantitative Criminology. doi: 10.1007/s10940-013-9196-z.
  83. Weisburd, D., & Green, L. (1995). Policing drug hot spots: The Jersey City drug market analysis experiment. Justice Quarterly, 12, 711–735.
  84. Weisburd, D., & Taxman, F. (2000). Developing a multi-center randomized trial in criminology: the case of HIDTA. Journal of Quantitative Criminology, 16, 315–339.
  85. Weisburd, D., Lum, C. M., & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice? Annals of the American Academy of Political and Social Science, 578, 50–70.
  86. Weisburd, D., Wyckoff, L., Ready, J., Eck, J. E., Hinkle, J. C., & Gajewski, F. (2006). Does crime just move around the corner? A controlled study of spatial displacement and diffusion of crime control benefits. Criminology, 44, 549–592.
  87. Weisburd, D., Telep, C., Hinkle, J., & Eck, J. (2008). The effects of problem-oriented policing on crime and disorder. Campbell Systematic Reviews. doi: 10.4073/csr.2008.14.
  88. Wellford, C. F., Pepper, J. V., & Petrie, C. V. (Eds.). (2005). Firearms and violence: A critical review. Committee to Improve Research Information and Data on Firearms. Washington, DC: The National Academies Press.
  89. Welsh, B. C., & Farrington, D. P. (2009). Making public places safer: surveillance and crime prevention. New York: Oxford University Press.
  90. Welsh, B. C., Peel, M. E., Farrington, D. P., Elffers, H., & Braga, A. A. (2011). Research design influence on study outcomes in crime and justice: a partial replication with public area surveillance. Journal of Experimental Criminology, 7, 183–198.
  91. Wilkinson, L., & Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: guidelines and explanations. American Psychologist, 54, 594–604.

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. Rutgers University, Newark, USA
  2. Harvard University, Cambridge, USA
  3. Hebrew University, Jerusalem, Israel
  4. George Mason University, Fairfax, USA
