Can “disciplined passion” overcome the cynical view? An empirical inquiry of evaluator influence on police crime prevention program outcomes

Abstract

Objectives

Investigate the degree and nature of influence that researchers have in police crime prevention programs and whether a high degree of influence is associated with biased reporting of results.

Methods

Meta-analytic inquiry of experimental and quasi-experimental studies (n = 42), drawn from four Campbell Collaboration systematic reviews of leading police crime prevention strategies: problem-oriented policing, “hot spots” policing, “pulling levers” policing, and street-level drug enforcement.

Results

Larger program effects are not associated with a higher degree of evaluator involvement (e.g., assisting in strategy design, monitoring implementation, overcoming implementation problems).

Conclusions

This study does not find support for the cynical view, which holds that researchers have a personal stake in the program or are pressured to report positive results. Importantly, the evaluator’s involvement in the implementation of the program may be a necessary condition of successfully executed police experiments in complex field settings.

Notes

  1.

    The Campbell Collaboration reviews used to identify police crime prevention evaluations include strong and weak quasi-experimental designs. Based on the Maryland Scientific Methods Scale (Farrington et al. 2006), eligible quasi-experimental evaluations of police crime prevention programs would be considered “Level 3” and “Level 4” research designs. A Level 3 quasi-experimental design is regarded as the minimum adequate for drawing conclusions about program effectiveness; it rules out many threats to internal validity, such as history, maturation/trends, instrumentation, testing, and mortality. The main problems of Level 3 evaluations center on selection effects and regression to the mean due to the non-equivalence of treatment and control conditions. Level 4 evaluations measure outcomes before and after the program in multiple treatment and control units. These designs offer better statistical control of extraneous influences on the outcome and, relative to lower-level evaluations, deal with selection and regression threats more adequately.

  2.

    Nine of these 46 studies appeared in multiple systematic reviews. This was not surprising, given that the four systematic reviews were related: all examined evaluations of innovative police crime prevention programs focused on specific crime problems. The Mazerolle et al. (2000) and Weisburd and Green (1995) evaluations were included in the problem-oriented policing, drug enforcement, and hot spots policing systematic reviews. The Sherman and Rogan (1995b) and Sviridoff et al. (1992) evaluations were included in the drug enforcement and hot spots policing systematic reviews. The Braga et al. (1999) and Sherman et al. (1989) studies were included in the problem-oriented policing and hot spots policing systematic reviews. The Green (1996) and Clarke and Bichler-Robertson (1998) studies appeared in the problem-oriented policing and drug enforcement systematic reviews. The Braga et al. (2001) evaluation was included in the problem-oriented policing and pulling levers systematic reviews.

  3.

    The Caeti (1999) evaluation did not examine the difference-in-differences between the treatment and control areas and, as such, did not directly measure whether the observed changes in the treatment beats differed significantly from the observed changes in the control beats.
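    For reference, the standard difference-in-differences contrast (a textbook formulation, not drawn from the Caeti report) compares the pre–post change in the treatment beats with the pre–post change in the control beats:

    $$\widehat{\mathrm{DiD}} \;=\; \left(\bar{Y}^{T}_{\text{post}} - \bar{Y}^{T}_{\text{pre}}\right) \;-\; \left(\bar{Y}^{C}_{\text{post}} - \bar{Y}^{C}_{\text{pre}}\right)$$

    where $\bar{Y}^{T}$ and $\bar{Y}^{C}$ denote mean crime outcomes in the treatment and control beats. A test of this single quantity, rather than separate before–after comparisons within each condition, is what the evaluation did not report.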

  4.

    Using the guidelines on effect size interpretation suggested by Cohen (1988), a standardized mean effect size of .171 would be considered small. Other scholars, however, suggest a more nuanced interpretation of the magnitude of effect sizes in different social science fields. For instance, Lipsey (2000) suggests that a small standardized mean effect size should be defined as .10 rather than .20. Using Lipsey’s guidelines, the standardized mean effect size for the police crime prevention programs in this review would be considered moderate.
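    To place the two conventions side by side (no benchmarks beyond those cited in this note are introduced), the observed mean effect falls just below Cohen's .20 cutoff for “small” but clears Lipsey's .10 cutoff:

    $$.10\ (\text{Lipsey, ``small''}) \;<\; .171 \;<\; .20\ (\text{Cohen, ``small''})$$

    hence the characterization of the overall effect as small by Cohen's guidelines but moderate by Lipsey's.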

  5.

    The slight overlap of the 95% confidence intervals raised the possibility that the two parameters may in fact be different. As suggested by Rothstein (2008), we used a variety of tests to examine the possibility that our study suffered from some degree of publication bias. Our overall conclusion from these analyses was that publication bias was not a problem for our study. For instance, the classic failsafe N test yielded a z value of 18.70 and a corresponding p value of less than 0.001 for the combined test of significance. The test reported that there would need to be 3,782 missing studies with zero effect to yield a combined two-tailed p value exceeding 0.05. This far exceeds the 220 studies suggested by Rosenthal's (5k + 10) guideline on the number of studies needed to be confident that the results would not be nullified. We also applied the Begg and Mazumdar rank correlation test to our pool of studies; Kendall's tau-b for the 42 studies was 0.111 with a two-tailed p = 0.298 (based on the continuity-corrected normal approximation), suggesting that publication bias was not operating in the analyses. As a result of these supplementary analyses, we are confident that publication bias is not a significant problem for our study of evaluator influence on program outcomes.
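    The arithmetic behind the Rosenthal benchmark cited above is simply, for the k = 42 included studies,

    $$5k + 10 = 5(42) + 10 = 220 \quad\ll\quad N_{\text{fail-safe}} = 3{,}782,$$

    where $N_{\text{fail-safe}}$ is shorthand for the computed failsafe N reported in this note; the computed value exceeds the tolerance level by more than an order of magnitude.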

References

  1. Allison, D. B. (2009). The antidote to bias in research. Science, 326, 522–523.
  2. Bekelman, J. E., Li, Y., & Gross, C. P. (2003). Scope and impact of financial conflicts of interest in biomedical research. Journal of the American Medical Association, 289, 454–465.
  3. Braga, A. A. (2007). The effects of hot spots policing on crime. Campbell Collaboration. doi:10.4073/csr.2007.1.
  4. Braga, A. A. (2010). Setting a higher standard for the evaluation of problem-oriented policing initiatives. Criminology and Public Policy, 9, 173–182.
  5. Braga, A. A., & Weisburd, D. (2010). Policing problem places: Crime hot spots and effective prevention. New York: Oxford University Press.
  6. Braga, A. A., & Weisburd, D. (2011). The effects of focused deterrence strategies on crime: A systematic review and meta-analysis of the empirical evidence. Journal of Research in Crime and Delinquency, 48, in press.
  7. Braga, A. A., Weisburd, D., Waring, E., Mazerolle, L. G., Spelman, W., & Gajewski, F. (1999). Problem-oriented policing in violent crime places: A randomized controlled experiment. Criminology, 37, 541–580.
  8. Braga, A. A., Kennedy, D. M., Waring, E. J., & Piehl, A. M. (2001). Problem-oriented policing, deterrence, and youth violence: An evaluation of Boston’s Operation Ceasefire. Journal of Research in Crime and Delinquency, 38, 195–225.
  9. Caeti, T. J. (1999). Houston’s targeted beat program. Unpublished Ph.D. dissertation. Huntsville, TX: Sam Houston State University.
  10. Clarke, R. V., & Bichler-Robertson, G. (1998). Place managers, slumlords and crime in low rent apartment buildings. Security Journal, 11, 11–19.
  11. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale: Erlbaum.
  12. Criminal Justice Commission. (1998). Beenleigh calls for service project. Brisbane: Criminal Justice Commission.
  13. Duval, S., & Tweedie, R. (2000). A nonparametric ‘trim and fill’ method of accounting for publication bias in meta-analysis. Journal of the American Statistical Association, 95, 89–98.
  14. Eisner, M. (2009a). No effects in independent prevention trials: Can we reject the cynical view? Journal of Experimental Criminology, 5, 163–184.
  15. Eisner, M. (2009b). Reply to the comments by David Olds and Lawrence Sherman. Journal of Experimental Criminology, 5, 215–218.
  16. Eisner, M., & Humphreys, D. (2011). Measuring conflict of interest in prevention and intervention research: A feasibility study. In T. Bliesener, A. Beelmann, & M. Stemmler (Eds.), Antisocial behavior and crime: Contributions of developmental and evaluation research to prevention and intervention (pp. 165–180). Cambridge: Hogrefe Publishing.
  17. Farrington, D. P., Gottfredson, D. C., Sherman, L. W., & Welsh, B. C. (2006). The Maryland scientific methods scale. In L. W. Sherman, D. P. Farrington, B. C. Welsh, & D. L. MacKenzie (Eds.), Evidence-based crime prevention (rev. ed., pp. 13–21). New York: Routledge.
  18. Geis, G., Mobley, A., & Shichor, D. (1999). Private prisons, criminological research, and conflict of interest: A case study. Crime & Delinquency, 45, 372–388.
  19. Geis, G., Mobley, A., & Shichor, D. (2000). Letter to the editor. Crime & Delinquency, 46, 443–445.
  20. Goldstein, H. (1990). Problem-oriented policing. Philadelphia: Temple University Press.
  21. Gorman, D. M., & Conde, E. (2007). Conflict of interest in the evaluation and dissemination of ‘model’ school-based drug and violence prevention programs. Evaluation and Program Planning, 30, 422–429.
  22. Green, L. (1996). Policing places with drug problems. Thousand Oaks: Sage.
  23. Hawken, A., & Kleiman, M. (2009). Managing drug involved probationers with swift and certain sanctions. Final report submitted to the National Institute of Justice. Unpublished report.
  24. Hope, T. (1994). Problem-oriented policing and drug market locations: Three case studies. Crime Prevention Studies, 2, 5–32.
  25. Kennedy, D. (2008). Deterrence and crime prevention. New York: Routledge.
  26. Lanza-Kaduce, L., Parker, K. F., & Thomas, C. W. (2000). The devil is in the details: The case against the case study of private prisons, criminological research, and conflict of interest. Crime & Delinquency, 46, 92–136.
  27. Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2, 34–46.
  28. Lipsey, M. W. (1995). What do we learn from 400 research studies on the effectiveness of treatment with juvenile delinquents? In J. McGuire (Ed.), What works: Reducing reoffending (pp. 63–78). New York: Wiley.
  29. Lipsey, M. W. (2000). Statistical conclusion validity for intervention research: A significant (p < .05) problem. In L. Bickman (Ed.), Validity and social experimentation: Donald Campbell’s legacy (pp. 101–120). Thousand Oaks: Sage.
  30. Lipsey, M. W. (2003). Those confounded moderators in meta-analysis: Good, bad, and ugly. The Annals of the American Academy of Political and Social Science, 587, 69–81.
  31. Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks: Sage.
  32. Mazerolle, L., Price, J., & Roehl, J. (2000). Civil remedies and drug control: A randomized field trial in Oakland, California. Evaluation Review, 24, 212–241.
  33. Mazerolle, L., Soole, D. W., & Rombouts, S. (2007). Street-level drug law enforcement: A meta-analytic review. Campbell Collaboration. doi:10.4073/csr.2007.2.
  34. Moore, M. H., Prothrow-Stith, D., Guyer, B., & Spivak, H. (1994). Violence and intentional injuries: Criminal justice and public health perspectives on an urgent national problem. In A. J. Reiss & J. Roth (Eds.), Consequences and control (Vol. 4, pp. 167–216). Washington, DC: National Academy Press.
  35. National Research Council. (2008). Parole, desistance from crime, and community integration. Washington, DC: National Academies Press.
  36. Olds, D. L. (2009). In support of disciplined passion. Journal of Experimental Criminology, 5, 201–214.
  37. Petersilia, J. (2008). Influencing public policy: An embedded criminologist reflects on California prison reform. Journal of Experimental Criminology, 4, 335–356.
  38. Petrosino, A., & Soydan, H. (2005). The impact of program developers as evaluators on criminal recidivism: Results from meta-analyses of experimental and quasi-experimental research. Journal of Experimental Criminology, 1, 435–450.
  39. Reason, P., & Bradbury, H. (2008). The handbook of action research (2nd ed.). London: Sage.
  40. Rothstein, H. R. (2008). Publication bias as a threat to the validity of meta-analytic results. Journal of Experimental Criminology, 4, 61–81.
  41. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
  42. Sherman, L. W., & Rogan, D. (1995b). Deterrent effects of police raids on crack houses: A randomized controlled experiment. Justice Quarterly, 12, 755–782.
  43. Sherman, L. W., & Strang, H. (2009). Testing for analysts’ bias in crime prevention experiments: Can we accept Eisner’s one-tailed test? Journal of Experimental Criminology, 5, 185–200.
  44. Sherman, L. W., Buerger, M., & Gartin, P. (1989). Beyond dial-a-cop: A randomized test of Repeat Call Policing (RECAP). Washington, DC: Crime Control Institute.
  45. Skogan, W. G., & Frydl, K. (Eds.). (2004). Fairness and effectiveness in policing: The evidence. Washington, DC: The National Academies Press.
  46. Spector, P. E. (1992). Summated rating scale construction. Newbury Park: Sage.
  47. Sviridoff, M., Sadd, S., Curtis, R., & Grinc, R. (1992). The neighborhood effects of street-level drug enforcement: Tactical Narcotics Teams in New York. New York: Vera Institute of Justice.
  48. Weisburd, D., & Green, L. (1995). Policing drug hot spots: The Jersey City DMA experiment. Justice Quarterly, 12, 711–736.
  49. Weisburd, D., Lum, C. M., & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice? The Annals of the American Academy of Political and Social Science, 578, 50–70.
  50. Weisburd, D., Telep, C. W., Hinkle, J. C., & Eck, J. E. (2008). The effects of problem-oriented policing on crime and disorder. Campbell Collaboration. doi:10.4073/csr.2008.14.
  51. Welsh, B. C., Peel, M. E., Farrington, D. P., Elffers, H., & Braga, A. A. (2011). Research design influence on study outcomes in crime and justice: A partial replication with public area surveillance. Journal of Experimental Criminology, 7, 183–198.
  52. Wilson, D. B. (2009). Missing a critical piece of the pie: Simple document search strategies inadequate for systematic reviews. Journal of Experimental Criminology, 5, 429–440.
  53. Witt, M. D., & Gostin, L. O. (1994). Conflict of interest dilemmas in biomedical research. Journal of the American Medical Association, 271, 547–551.

Acknowledgments

We are grateful to the editor, David Wilson, and the anonymous reviewers for helpful comments.

Author information

Correspondence to Brandon C. Welsh.

About this article

Cite this article

Welsh, B.C., Braga, A.A. & Hollis-Peel, M.E. Can “disciplined passion” overcome the cynical view? An empirical inquiry of evaluator influence on police crime prevention program outcomes. J Exp Criminol 8, 415–431 (2012). https://doi.org/10.1007/s11292-012-9153-0

Keywords

  • Academic–practitioner partnerships
  • Conflict of interest
  • Crime prevention
  • Evaluator influence
  • Policing