Research design influence on study outcomes in crime and justice: a partial replication with public area surveillance

Abstract

Does the quality of research design have an influence on study outcomes in crime and justice? This was the subject of an important study by Weisburd et al. (2001). They found a moderate and significant inverse relationship between research design and study outcomes: weaker designs, as indicated by internal validity, produced stronger effect sizes. Using a database of evaluations (n = 136) from systematic reviews that investigated the effects of public area surveillance on crime, this paper carried out a partial replication of Weisburd et al.’s study. We view it as a partial replication because it included only area- or place-based studies (i.e., there were no individual-level studies) and these studies used designs at the lower end of the evaluation hierarchy (i.e., not one of the studies used a randomized experimental design). In the present study, we report findings that are highly concordant with the earlier study. The overall correlation between research design and study outcomes is moderate but negative and significant (Tau-b = –.175, p = .029). This suggests that stronger research designs are less likely to report desirable effects or, conversely, weaker research designs may be biased upward. We explore possible explanations for this finding. Implications for policy and research are discussed.
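The abstract's headline result is a Kendall's tau-b correlation between research design quality and study outcome (Tau-b = –.175). As a minimal sketch of how tau-b handles the tied ranks that arise when many studies share the same design rating, the following computes the statistic from scratch; the data are hypothetical for illustration, not the study's 136 evaluations.

```python
import math

def kendall_tau_b(x, y):
    """Kendall's tau-b rank correlation with the standard tie correction:
    (C - D) / sqrt((C + D + Ty) * (C + D + Tx)), where Tx/Ty count pairs
    tied only on x / only on y, and pairs tied on both are excluded."""
    c = d = tx = ty = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue        # tied on both variables: excluded entirely
            elif dx == 0:
                tx += 1         # tied only on the design rating
            elif dy == 0:
                ty += 1         # tied only on the outcome
            elif dx * dy > 0:
                c += 1          # concordant pair
            else:
                d += 1          # discordant pair
    return (c - d) / math.sqrt((c + d + ty) * (c + d + tx))

# Hypothetical data: SMS-style design levels paired with a binary
# "desirable effect" indicator (1 = crime reduction reported).
sms = [2, 2, 3, 3, 4, 4]
effect = [1, 1, 1, 0, 0, 0]
print(round(kendall_tau_b(sms, effect), 3))  # → -0.77
```

A negative value, as in this toy example, is the pattern the paper reports: higher (stronger) design ratings pair with less favorable outcomes.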

Notes

  1.

    Briefly, place managers (Eck 1995) are persons such as bus drivers, parking lot attendants, train conductors, and others who perform a surveillance function by virtue of their position of employment. Unlike security personnel, however, the task of surveillance for these employees is secondary to their other job duties. Defensible space (Newman 1972) involves design changes to the built environment to maximize the natural surveillance of open spaces (e.g., streets and parks) provided by people going about their day-to-day activities. Examples of design changes include the construction of street barricades or closures, re-design of walkways, and installation of windows. They can also include more mundane techniques such as the removal of objects from shelves or windows of convenience stores that obscure lines of sight in the store and the removal or pruning of bushes in front of homes so that residents may have a clear view of the outside world (Cornish and Clarke 2003).

  2.

    For more details on the results, interested readers should consult the separate reviews or the larger study that includes the full body of this work (see Welsh and Farrington 2009a).

  3.

    External validity, which refers to how well the effect of an intervention on an outcome is generalizable or replicable in different conditions, is difficult to investigate within one evaluation study. External validity can be established more convincingly in systematic reviews and meta-analyses of a number of evaluation studies.

  4.

    Statistical conclusion validity is concerned with whether the presumed cause (the intervention) and the presumed effect (the outcome) are related. The main threats to this form of validity are insufficient statistical power to detect the effect (e.g., because of small sample size) and the use of inappropriate statistical techniques. Construct validity refers to the adequacy of the operational definition and measurement of the theoretical constructs that underlie the intervention and the outcome. The main threats to this form of validity rest on the extent to which the intervention succeeded in changing what it was intended to change (e.g., to what extent was there treatment fidelity or implementation failure) and on the validity and reliability of outcome measures (e.g., how adequately police-recorded crime rates reflect true crime rates).
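The power threat described above can be made concrete with a back-of-the-envelope calculation. This is an illustrative sketch using a normal-approximation two-sample test; the effect size and sample sizes are assumed for the example, not drawn from the reviewed evaluations.

```python
import math
from statistics import NormalDist  # Python 3.8+

def power_two_group(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized mean difference d with n_per_group units per condition."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)           # e.g. 1.96 for alpha = .05
    return nd.cdf(d * math.sqrt(n_per_group / 2) - z_crit)

# A modest crime-reduction effect (d = 0.3) with 50 areas per condition:
print(round(power_two_group(0.3, 50), 2))  # → 0.32, well below the conventional 0.80
```

With so few units per condition, only implausibly large effects would be detected reliably, which is one reason insufficient statistical power (e.g., because of small sample size) is listed as a main threat to statistical conclusion validity.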

  5.

    Two street lighting studies that used a comparable control design (Painter and Farrington 1997, 1999) also controlled for extraneous variables. Eck (2006) rated them as level 4 on the SMS. A level 4 study involves measures of crime before and after the program in experimental and comparable control conditions, together with statistical control of extraneous variables.

References

  1. Braga, A. A. (2005). Hot spots policing and crime prevention: A systematic review of randomized controlled trials. Journal of Experimental Criminology, 1, 317–342.

  2. Braga, A. A. (2010). Setting a higher standard for the evaluation of problem-oriented policing initiatives. Criminology and Public Policy, 9, 173–182.

  3. Braga, A. A., & Hinkle, M. (2010). The participation of academics in the criminal justice working group process. In J. M. Klofas, N. K. Hipple, & E. F. McGarrell (Eds.), The new criminal justice: American communities and the changing world of crime control (pp. 114–120). New York: Routledge.

  4. Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.

  5. Cook, T. D., Shadish, W. R., & Wong, V. C. (2008). Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. Journal of Policy Analysis and Management, 27, 724–750.

  6. Cornish, D. B., & Clarke, R. V. (2003). Opportunities, precipitators and criminal decisions: A reply to Wortley’s critique of situational crime prevention. In M. J. Smith & D. B. Cornish (Eds.), Theory for practice in situational crime prevention. Crime prevention studies, vol. 16 (pp. 41–96). Monsey: Criminal Justice Press.

  7. Eck, J. E. (1995). A general model of the geography of illicit retail marketplaces. In J. E. Eck & D. Weisburd (Eds.), Crime and place. Crime prevention studies, vol. 4 (pp. 67–94). Monsey: Criminal Justice Press.

  8. Eck, J. E. (2006). Preventing crime at places. In L. W. Sherman, D. P. Farrington, B. C. Welsh, & D. L. MacKenzie (Eds.), Evidence-based crime prevention, rev. ed (pp. 241–294). New York: Routledge.

  9. Ekblom, P., & Pease, K. (1995). Evaluating crime prevention. In M. Tonry & D. P. Farrington (Eds.), Building a safer society: Strategic approaches to crime prevention. Crime and justice: A review of research, vol. 19 (pp. 585–662). Chicago: University of Chicago Press.

  10. Farrington, D. P. (2003). Methodological quality standards for evaluation research. The Annals of the American Academy of Political and Social Science, 587, 49–68.

  11. Farrington, D. P., & Welsh, B. C. (Eds.) (2001). What works in preventing crime? Systematic reviews of experimental and quasi-experimental research. Annals of the American Academy of Political and Social Science, 578 [full issue].

  12. Farrington, D. P., & Welsh, B. C. (2005). Randomized experiments in criminology: What have we learned in the last two decades? Journal of Experimental Criminology, 1, 9–38.

  13. Farrington, D. P., & Welsh, B. C. (2006). A half-century of randomized experiments on crime and justice. In M. Tonry (Ed.), Crime and justice: A review of research, vol. 34 (pp. 55–132). Chicago: University of Chicago Press.

  14. Farrington, D. P., & Welsh, B. C. (2007). Improved street lighting and crime prevention: A systematic review. Stockholm: National Council for Crime Prevention.

  15. Farrington, D. P., Gottfredson, D. C., Sherman, L. W., & Welsh, B. C. (2006). The Maryland scientific methods scale. In L. W. Sherman, D. P. Farrington, B. C. Welsh, & D. L. MacKenzie (Eds.), Evidence-based crime prevention, rev. ed (pp. 13–21). New York: Routledge.

  16. Henry, G. T. (2009). Estimating and extrapolating causal effects for crime prevention policy and program evaluation. In J. Knutsson & N. Tilley (Eds.), Evaluating crime reduction. Crime prevention studies, vol. 24 (pp. 147–173). Monsey: Criminal Justice Press.

  17. Klofas, J. M., Hipple, N. K., & McGarrell, E. F. (2010). The new criminal justice. In J. M. Klofas, N. K. Hipple, & E. F. McGarrell (Eds.), The new criminal justice: American communities and the changing world of crime control (pp. 3–17). New York: Routledge.

  18. Lipsey, M. W. (2003). Those confounded moderators in meta-analysis: Good, bad, and ugly. The Annals of the American Academy of Political and Social Science, 587, 69–81.

  19. Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks: Sage.

  20. Mears, D. P. (2007). Towards rational and evidence-based crime policy. Journal of Criminal Justice, 35, 667–682.

  21. Mears, D. P. (2010). American criminal justice policy: An evaluation approach to increasing accountability and effectiveness. New York: Cambridge University Press.

  22. National Research Council. (2008). Parole, desistance from crime, and community integration. Washington, DC: National Academies Press.

  23. Newman, O. (1972). Defensible space: Crime prevention through urban design. New York: Macmillan.

  24. Olds, D. L. (2009). In support of disciplined passion. Journal of Experimental Criminology, 5, 201–214.

  25. Painter, K. A., & Farrington, D. P. (1997). The crime reducing effect of improved street lighting: The Dudley project. In R. V. Clarke (Ed.), Situational crime prevention: Successful case studies (2nd ed., pp. 209–226). Guilderland: Harrow and Heston.

  26. Painter, K. A., & Farrington, D. P. (1999). Street lighting and crime: Diffusion of benefits in the Stoke-on-Trent project. In K. A. Painter & N. Tilley (Eds.), Surveillance of public space: CCTV, street lighting and crime prevention. Crime prevention studies, vol. 10 (pp. 77–122). Monsey: Criminal Justice Press.

  27. Petersilia, J. (2008). Influencing public policy: An embedded criminologist reflects on California prison reform. Journal of Experimental Criminology, 4, 335–356.

  28. Petrosino, A., & Soydan, H. (2005). The impact of program developers as evaluators on criminal recidivism: Results from meta-analyses of experimental and quasi-experimental research. Journal of Experimental Criminology, 1, 435–450.

  29. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

  30. Sherman, L. W. (Ed.) (2003). Misleading evidence and evidence-led policy: Making social science more experimental. Annals of the American Academy of Political and Social Science, 589 [full issue].

  31. Sherman, L. W., Gottfredson, D. C., MacKenzie, D. L., Eck, J. E., Reuter, P., & Bushway, S. D. (1997). Preventing crime: What works, what doesn’t, what’s promising. Washington, DC: National Institute of Justice, U.S. Department of Justice.

  32. Sherman, L. W., Strang, H., Angel, C., Woods, D., Barnes, G. C., Bennett, S., et al. (2005). Effects of face-to-face restorative justice on victims of crime in four randomized, controlled trials. Journal of Experimental Criminology, 1, 367–395.

  33. Weisburd, D. (1994). Evaluating community policing: Role tensions between practitioners and evaluators. In D. P. Rosenbaum (Ed.), The challenge of community policing: Testing the promises (pp. 274–277). Thousand Oaks: Sage.

  34. Weisburd, D. (2005). Hot spots policing experiments and criminal justice research: Lessons from the field. The Annals of the American Academy of Political and Social Science, 599, 220–245.

  35. Weisburd, D., Lum, C. M., & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice? The Annals of the American Academy of Political and Social Science, 578, 50–70.

  36. Weisburd, D., Lum, C. M., & Petrosino, A. (Eds.) (2003). Assessing systematic evidence in crime and justice: Methodological concerns and empirical outcomes. Annals of the American Academy of Political and Social Science, 587 [full issue].

  37. Welsh, B. C., & Farrington, D. P. (2009a). Making public places safer: Surveillance and crime prevention. New York: Oxford University Press.

  38. Welsh, B. C., & Farrington, D. P. (2009b). Public area CCTV and crime prevention: An updated systematic review and meta-analysis. Justice Quarterly, 26, 716–745.

  39. Welsh, B. C., Mudge, M. E., & Farrington, D. P. (2010). Reconceptualizing public area surveillance and crime prevention: Security guards, place managers, and defensible space. Security Journal, 23, 299–319.

Acknowledgments

We are grateful to Chet Britt, Dan Mears, Chris Sullivan, the journal editor, and the anonymous reviewers for helpful comments.

Author information

Corresponding author

Correspondence to Brandon C. Welsh.

Cite this article

Welsh, B.C., Peel, M.E., Farrington, D.P. et al. Research design influence on study outcomes in crime and justice: a partial replication with public area surveillance. J Exp Criminol 7, 183–198 (2011). https://doi.org/10.1007/s11292-010-9117-1

Keywords

  • Evaluation design
  • Evidence-based crime policy
  • Public area surveillance
  • Systematic review