
Journal of Experimental Criminology, Volume 8, Issue 4, pp 415–431

Can “disciplined passion” overcome the cynical view? An empirical inquiry of evaluator influence on police crime prevention program outcomes

  • Brandon C. Welsh
  • Anthony A. Braga
  • Meghan E. Hollis-Peel

Abstract

Objectives

To investigate the degree and nature of influence that researchers exert in police crime prevention programs, and whether a high degree of influence is associated with biased reporting of results.

Methods

Meta-analytic inquiry of experimental and quasi-experimental studies (n = 42), drawn from four Campbell Collaboration systematic reviews of leading police crime prevention strategies: problem-oriented policing, “hot spots” policing, “pulling levers” policing, and street-level drug enforcement.
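The meta-analytic approach described above pools effect sizes across studies and compares them by moderator (here, level of evaluator involvement). As an illustrative sketch only, with entirely made-up effect sizes and variances (the paper's actual data and moderator coding are not reproduced here), the standard inverse-variance pooling could look like this:

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance weighted mean effect size and its variance.

    Each study is weighted by the inverse of its sampling variance, so more
    precise studies count for more in the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    var = 1.0 / sum(weights)
    return mean, var

# Hypothetical standardized effect sizes for two moderator groups
# (these numbers are invented for illustration, not taken from the study)
mean_hi, var_hi = pooled_effect([0.30, 0.42, 0.25], [0.02, 0.03, 0.025])
mean_lo, var_lo = pooled_effect([0.28, 0.35], [0.02, 0.04])
print(f"high involvement: d = {mean_hi:.3f} (var {var_hi:.4f})")
print(f"low involvement:  d = {mean_lo:.3f} (var {var_lo:.4f})")
```

Comparing the two pooled estimates (e.g., with a Q-test for moderator differences) is the kind of analysis used to ask whether higher-involvement studies report systematically larger effects.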

Results

Larger program effects are not associated with higher involvement on the part of the evaluator (e.g., assisting in strategy design, monitoring implementation, overcoming implementation problems).

Conclusions

This study does not find support for the cynical view, which holds that researchers have a personal stake in the program or are pressured to report positive results. Importantly, the evaluator’s involvement in the implementation of the program may be a necessary condition of successfully executed police experiments in complex field settings.

Keywords

Academic–practitioner partnerships · Conflict of interest · Crime prevention · Evaluator influence · Policing

Acknowledgments

We are grateful to the editor, David Wilson, and the anonymous reviewers for helpful comments.

References

  1. Allison, D. B. (2009). The antidote to bias in research. Science, 326, 522–523.
  2. Bekelman, J. E., Li, Y., & Gross, C. P. (2003). Scope and impact of financial conflicts of interest in biomedical research. Journal of the American Medical Association, 289, 454–465.
  3. Braga, A. A. (2007). The effects of hot spots policing on crime. Campbell Collaboration. doi: 10.4073/csr.2007.1.
  4. Braga, A. A. (2010). Setting a higher standard for the evaluation of problem-oriented policing initiatives. Criminology and Public Policy, 9, 173–182.
  5. Braga, A. A., & Weisburd, D. (2010). Policing problem places: Crime hot spots and effective prevention. New York: Oxford University Press.
  6. Braga, A. A., & Weisburd, D. (2011). The effects of focused deterrence strategies on crime: A systematic review and meta-analysis of the empirical evidence. Journal of Research in Crime and Delinquency, 48, in press.
  7. Braga, A. A., Weisburd, D., Waring, E., Mazerolle, L. G., Spelman, W., & Gajewski, F. (1999). Problem-oriented policing in violent crime places: a randomized controlled experiment. Criminology, 37, 541–580.
  8. Braga, A. A., Kennedy, D. M., Waring, E. J., & Piehl, A. M. (2001). Problem-oriented policing, deterrence, and youth violence: an evaluation of Boston’s Operation Ceasefire. Journal of Research in Crime and Delinquency, 38, 195–225.
  9. Caeti, T. J. (1999). Houston’s targeted beat program. Unpublished Ph.D. dissertation. Huntsville, TX: Sam Houston State University.
  10. Clarke, R. V., & Bichler-Robertson, G. (1998). Place managers, slumlords and crime in low rent apartment buildings. Security Journal, 11, 11–19.
  11. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale: Erlbaum.
  12. Criminal Justice Commission. (1998). Beenleigh calls for service project. Brisbane: Criminal Justice Commission.
  13. Duval, S., & Tweedie, R. (2000). A nonparametric ‘trim and fill’ method of accounting for publication bias in meta-analysis. Journal of the American Statistical Association, 95, 89–98.
  14. Eisner, M. (2009a). No effects in independent prevention trials: can we reject the cynical view? Journal of Experimental Criminology, 5, 163–184.
  15. Eisner, M. (2009b). Reply to the comments by David Olds and Lawrence Sherman. Journal of Experimental Criminology, 5, 215–218.
  16. Eisner, M., & Humphreys, D. (2011). Measuring conflict of interest in prevention and intervention research: A feasibility study. In T. Bliesener, A. Beelmann, & M. Stemmler (Eds.), Antisocial behavior and crime: Contributions of developmental and evaluation research to prevention and intervention (pp. 165–180). Cambridge: Hogrefe Publishing.
  17. Farrington, D. P., Gottfredson, D. C., Sherman, L. W., & Welsh, B. C. (2006). The Maryland scientific methods scale. In L. W. Sherman, D. P. Farrington, B. C. Welsh, & D. L. MacKenzie (Eds.), Evidence-based crime prevention (rev. ed., pp. 13–21). New York: Routledge.
  18. Geis, G., Mobley, A., & Shichor, D. (1999). Private prisons, criminological research, and conflict of interest: a case study. Crime & Delinquency, 45, 372–388.
  19. Geis, G., Mobley, A., & Shichor, D. (2000). Letter to the editor. Crime & Delinquency, 46, 443–445.
  20. Goldstein, H. (1990). Problem-oriented policing. Philadelphia: Temple University Press.
  21. Gorman, D. M., & Conde, E. (2007). Conflict of interest in the evaluation and dissemination of ‘model’ school-based drug and violence prevention programs. Evaluation and Program Planning, 30, 422–429.
  22. Green, L. (1996). Policing places with drug problems. Thousand Oaks: Sage.
  23. Hawken, A., & Kleiman, M. (2009). Managing drug involved probationers with swift and certain sanctions. Final report submitted to the National Institute of Justice. Unpublished report.
  24. Hope, T. (1994). Problem-oriented policing and drug market locations: three case studies. Crime Prevention Studies, 2, 5–32.
  25. Kennedy, D. (2008). Deterrence and crime prevention. New York: Routledge.
  26. Lanza-Kaduce, L., Parker, K. F., & Thomas, C. W. (2000). The devil is in the details: the case against the case study of private prisons, criminological research, and conflict of interest. Crime & Delinquency, 46, 92–136.
  27. Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2, 34–46.
  28. Lipsey, M. W. (1995). What do we learn from 400 research studies on the effectiveness of treatment with juvenile delinquents? In J. McGuire (Ed.), What works: Reducing reoffending (pp. 63–78). New York: Wiley.
  29. Lipsey, M. W. (2000). Statistical conclusion validity for intervention research: A significant (p<.05) problem. In L. Bickman (Ed.), Validity and social experimentation: Donald Campbell’s legacy (pp. 101–120). Thousand Oaks: Sage.
  30. Lipsey, M. W. (2003). Those confounded moderators in meta-analysis: good, bad, and ugly. The Annals of the American Academy of Political and Social Science, 587, 69–81.
  31. Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks: Sage.
  32. Mazerolle, L., Price, J., & Roehl, J. (2000). Civil remedies and drug control: a randomized field trial in Oakland, California. Evaluation Review, 24, 212–241.
  33. Mazerolle, L., Soole, D. W., & Rombouts, S. (2007). Street-level drug law enforcement: a meta-analytic review. Campbell Collaboration. doi: 10.4073/csr.2007.2.
  34. Moore, M. H., Prothrow-Stith, D., Guyer, B., & Spivak, H. (1994). Violence and intentional injuries: Criminal justice and public health perspectives on an urgent national problem. In A. J. Reiss & J. Roth (Eds.), Consequences and control (Vol. 4, pp. 167–216). Washington, DC: National Academy Press.
  35. National Research Council. (2008). Parole, desistence from crime, and community integration. Washington, DC: National Academies Press.
  36. Olds, D. L. (2009). In support of disciplined passion. Journal of Experimental Criminology, 5, 201–214.
  37. Petersilia, J. (2008). Influencing public policy: an embedded criminologist reflects on California prison reform. Journal of Experimental Criminology, 4, 335–356.
  38. Petrosino, A., & Soydan, H. (2005). The impact of program developers as evaluators on criminal recidivism: results from meta-analyses of experimental and quasi-experimental research. Journal of Experimental Criminology, 1, 435–450.
  39. Reason, P., & Bradbury, H. (2008). The handbook of action research (2nd ed.). London: Sage.
  40. Rothstein, H. R. (2008). Publication bias as a threat to the validity of meta-analytic results. Journal of Experimental Criminology, 4, 61–81.
  41. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
  42. Sherman, L. W., & Rogan, D. (1995b). Deterrent effects of police raids on crack houses: a randomized controlled experiment. Justice Quarterly, 12, 755–782.
  43. Sherman, L. W., & Strang, H. (2009). Testing for analysts’ bias in crime prevention experiments: can we accept Eisner’s one-tailed test? Journal of Experimental Criminology, 5, 185–200.
  44. Sherman, L. W., Buerger, M., & Gartin, P. (1989). Beyond dial-a-cop: A randomized test of Repeat Call Policing (RECAP). Washington, DC: Crime Control Institute.
  45. Skogan, W. G., & Frydl, K. (Eds.). (2004). Fairness and effectiveness in policing: The evidence. Washington, DC: The National Academies Press.
  46. Spector, P. E. (1992). Summated rating scale construction. Newbury Park: Sage.
  47. Sviridoff, M., Sadd, S., Curtis, R., & Grinc, R. (1992). The neighborhood effects of street-level drug enforcement: Tactical Narcotics Teams in New York. New York: Vera Institute of Justice.
  48. Weisburd, D., & Green, L. (1995). Policing drug hot spots: the Jersey City DMA experiment. Justice Quarterly, 12, 711–736.
  49. Weisburd, D., Lum, C. M., & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice? The Annals of the American Academy of Political and Social Science, 578, 50–70.
  50. Weisburd, D., Telep, C. W., Hinkle, J. C., & Eck, J. E. (2008). The effects of problem-oriented policing on crime and disorder. Campbell Collaboration. doi: 10.4073/csr.2008.14.
  51. Welsh, B. C., Peel, M. E., Farrington, D. P., Elffers, H., & Braga, A. A. (2011). Research design influence on study outcomes in crime and justice: a partial replication with public area surveillance. Journal of Experimental Criminology, 7, 183–198.
  52. Wilson, D. B. (2009). Missing a critical piece of the pie: simple document search strategies inadequate for systematic reviews. Journal of Experimental Criminology, 5, 429–440.
  53. Witt, M. D., & Gostin, L. O. (1994). Conflict of interest dilemmas in biomedical research. Journal of the American Medical Association, 271, 547–551.

Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  • Brandon C. Welsh (1, 2, 5)
  • Anthony A. Braga (3, 4)
  • Meghan E. Hollis-Peel (1, 2)

  1. Northeastern University, Boston, USA
  2. Netherlands Institute for the Study of Crime and Law Enforcement, Amsterdam, The Netherlands
  3. Rutgers University, Newark, USA
  4. Harvard University, Cambridge, USA
  5. School of Criminology and Criminal Justice, Northeastern University, Boston, USA