Analyzing block randomized studies: the example of the Jersey City drug market analysis experiment

  • David Weisburd
  • David B. Wilson
  • Lorraine Mazerolle

Abstract

Objectives

While block randomized designs have become more common in place-based policing studies, there has been relatively little discussion of the assumptions they employ and of their implications for statistical analysis. Our paper seeks to illustrate these assumptions, and the controversy surrounding alternative statistical approaches, in the context of one of the first block randomized studies in criminal justice: the Jersey City Drug Market Analysis Project (DMAP).

Methods

Using DMAP data, we show that multiple approaches can be used to analyze block randomized designs, and that these approaches yield differing estimates of statistical significance. We estimate outcomes using models both with and without a treatment-by-block interaction, and using both Type I and Type III sums-of-squares approaches. We also examine the impact of randomization inference, an approach for estimating p values that does not rely on approximations from normal distribution theory, as an adjustment for possible small-N biases in the estimation of standard errors.
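
To make these alternatives concrete, the sketch below (in Python, using pandas and statsmodels) simulates a hypothetical unbalanced block randomized outcome; the variable names, block structure, and data are illustrative assumptions, not the DMAP data or the authors' code. It contrasts Type I and Type III sums of squares for models with and without a treatment-by-block interaction, and then computes a randomization inference p value by permuting treatment assignment within blocks.

```python
# Illustrative sketch only: simulated data, not the DMAP data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)

# Hypothetical unbalanced block randomized design: blocks of hot spots
# with unequal numbers of treatment and control units per block.
blocks, sizes = list("ABCDE"), [4, 6, 3, 5, 4]
rows = []
for b, n in zip(blocks, sizes):
    treat = rng.permutation([1] * (n // 2) + [0] * (n - n // 2))
    for t in treat:
        y = 10 + {"A": 0, "B": 2, "C": -1, "D": 1, "E": 3}[b] - 1.5 * t + rng.normal(0, 1)
        rows.append({"block": b, "treatment": int(t), "calls": y})
df = pd.DataFrame(rows)

# Model without interaction vs. model with a treatment-by-block interaction.
m_add = smf.ols("calls ~ C(treatment) + C(block)", data=df).fit()
m_int = smf.ols("calls ~ C(treatment) * C(block)", data=df).fit()

# Type I (sequential) and Type III (partial) sums of squares.
print(anova_lm(m_add, typ=1))
print(anova_lm(m_int, typ=3))  # with unbalanced data these can disagree

# Randomization inference: permute treatment within blocks and compare the
# observed treatment coefficient with its permutation distribution.
obs = m_add.params["C(treatment)[T.1]"]
perm = []
for _ in range(1000):
    shuffled = df.groupby("block")["treatment"].transform(
        lambda g: rng.permutation(g.values))
    d = df.assign(treatment=shuffled)
    perm.append(smf.ols("calls ~ C(treatment) + C(block)", data=d)
                .fit().params["C(treatment)[T.1]"])
p_ri = np.mean(np.abs(perm) >= np.abs(obs))
print(f"randomization inference two-sided p = {p_ri:.3f}")
```

With unbalanced blocks, the Type I table depends on the order in which terms enter the model, while Type III tests for main effects depend on how the interaction is coded; the permutation p value sidesteps normal-theory approximations entirely.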

Results

The assumptions made in choosing the analytic approach produce a comparatively wide range of p values for the main DMAP program impacts on hot spots. Nonetheless, the overall conclusions drawn from our re-analysis remain consistent with the original analyses, albeit with more caution. Across the different specifications, results were also similar to the original analyses in supporting the identification of diffusion-of-benefits effects to nearby areas.

Conclusions

The major contribution of our article is to clarify statistical modeling in unbalanced block randomized studies. The introduction of blocking adds complexity to the models that are estimated, and care must be taken when including interaction effects in models, whether they are ANOVA models or regression models. Researchers need to recognize this complexity and provide transparent and alternative estimates of study outcomes.
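
One concrete reason for that care, sketched below with a tiny simulated two-block example (again an illustrative assumption, not the DMAP data): under the default dummy coding used by statsmodels formulas, once a treatment-by-block interaction is included the "treatment" coefficient is the effect in the reference block only, whereas sum (effect) contrasts on the block factor yield an unweighted average of the block-specific effects.

```python
# Illustrative sketch only: a tiny unbalanced two-block example, not DMAP data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "block":     ["A"] * 6 + ["B"] * 4,
    "treatment": [1, 1, 1, 0, 0, 0, 1, 0, 0, 0],
    "calls":     [8.1, 7.9, 8.4, 10.2, 9.8, 10.1, 11.5, 13.0, 12.6, 12.9],
})

# Dummy coding: with the interaction included, the 'treatment' coefficient
# is the treatment effect in the reference block (A) only.
m_dummy = smf.ols("calls ~ C(treatment) * C(block)", data=df).fit()

# Sum (effect) contrasts on block: the 'treatment' coefficient is now an
# unweighted average of the block-specific treatment effects.
m_sum = smf.ols("calls ~ C(treatment) * C(block, Sum)", data=df).fit()

print(m_dummy.params["C(treatment)[T.1]"], m_sum.params["C(treatment)[T.1]"])
```

Reporting results under both codings, as well as with and without the interaction, is one way to provide the transparent and alternative estimates recommended above.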

Keywords

Block randomized design; Drug markets; Policing


Acknowledgements

We would like to thank Anthony Braga, Donald Green, and Alese Wooditch for helpful comments on earlier drafts of this paper, and Tori Goldberg for help in preparing the manuscript.

Copyright information

© Springer Nature B.V. 2018

Authors and Affiliations

  • David Weisburd (1, 2)
  • David B. Wilson (1)
  • Lorraine Mazerolle (3)

  1. George Mason University, Fairfax, USA
  2. Hebrew University, Jerusalem, Israel
  3. University of Queensland, Brisbane, Australia
