Block Randomized Trials at Places: Rethinking the Limitations of Small N Experiments

  • Original Paper
  • Published in the Journal of Quantitative Criminology

Abstract

Objectives

Place-based policing experiments have led to encouraging findings regarding the ability of the police to prevent crime, but sample sizes in many of the key studies in this area are small. Farrington and colleagues argue that experiments with fewer than 50 cases per group are not likely to achieve realistic pre-test balance and have excluded such studies from their influential systematic reviews of experimental research. A related criticism of such studies is that their statistical power under traditional assumptions is also likely to be low. In this paper, we show that block randomization can overcome these design limitations.

Methods

Using data from the Jersey City Drug Market Analysis Experiment (N = 28 per group), we conduct simulations on three key outcome measures. Simulations of simple randomization with 28 and 50 cases per group are compared to simulations of block randomization with 28 cases. We illustrate the statistical modeling benefits of the block randomization approach by examining sums of squares in GLM models and by estimating minimum detectable effects in a power analysis.

Results

The block randomization simulation produces far fewer significantly unbalanced samples than the naïve randomization approaches with both 28 and 50 cases per group. Block randomization also produces similar or smaller absolute mean differences across the simulations. Illustrations using sums of squares show that error variance in the block randomization model is reduced for each of the three outcomes. Power estimates for block randomization with 28 cases per group are comparable to or higher than those for naïve randomization with 50 cases per group.

Conclusions

Block randomization provides a solution to the small N problem in place-based experiments that addresses concerns about both equivalence and statistical power. We also argue that a 50-case rule should not be applied to block randomized place-based trials when determining inclusion in key reviews.

Notes

  1. Farrington (1983, p. 263n) notes in this regard, “(t)o understand why randomization ensures closer equivalence with larger samples, imagine drawing samples of 10, 100, or 1,000 unbiased coins. With 10 coins, just over 10 % of the samples would include 2 or less, or 8 or more, heads. With 100 coins, just over 10 % of the samples would include 41 or less, or 59 or more, heads. With 1,000 coins, just over 10 % of the samples would include 474 or less, or 526 or more, heads. It can be seen that, as the sample size increases, the proportion of heads in it fluctuates in a narrower and narrower band around the mean figure of 50 %.”

  2. Stata programs were developed to run a randomization sequence (blocked or naïve) on the JCE dataset and then run a t test comparing the treatment and control group means at baseline on the three outcomes of interest. Stata’s simulation function was then used to run each program 10,000 times and create a dataset containing the group means, t values, p values, an indicator showing whether or not the two groups were significantly different at baseline for each iteration, and the absolute average mean group difference across all iterations. We are grateful to David B. Wilson for developing the programs and simulation syntax.
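The Stata programs themselves are not reproduced in the article. The following Python sketch of the same logic, using synthetic stand-in baseline scores rather than the JCE data (the real study's blocking factor is richer than the simple sorted pairing assumed here), contrasts naïve and pairwise block randomization on average baseline imbalance across repeated draws.

```python
import random
import statistics

random.seed(7)

# Hypothetical stand-in for JCE baseline data: 56 places with a baseline
# disorder score, sorted so adjacent places form 28 matched pairs.
places = sorted(random.gauss(50, 10) for _ in range(56))

def naive_assign(scores):
    """Simple (naive) randomization: any 28 of the 56 places get treatment."""
    chosen = set(random.sample(range(len(scores)), len(scores) // 2))
    treat = [s for i, s in enumerate(scores) if i in chosen]
    ctrl = [s for i, s in enumerate(scores) if i not in chosen]
    return treat, ctrl

def block_assign(scores):
    """Block randomization: within each matched pair, flip a coin."""
    treat, ctrl = [], []
    for i in range(0, len(scores), 2):
        a, b = scores[i], scores[i + 1]
        if random.random() < 0.5:
            treat.append(a); ctrl.append(b)
        else:
            treat.append(b); ctrl.append(a)
    return treat, ctrl

def mean_abs_diff(assign, iters=2000):
    """Average absolute treatment-control baseline difference over iters draws."""
    diffs = []
    for _ in range(iters):
        t, c = assign(places)
        diffs.append(abs(statistics.mean(t) - statistics.mean(c)))
    return statistics.mean(diffs)

naive = mean_abs_diff(naive_assign)
blocked = mean_abs_diff(block_assign)
print(f"mean |baseline difference|  naive: {naive:.3f}  blocked: {blocked:.3f}")
```

Because the coin flip happens within matched pairs, the treatment and control groups can only differ by within-pair gaps, so the blocked draws show far smaller average baseline imbalance, which is the mechanism behind the simulation results reported in the paper.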

  3. Of course, this is about what we would have expected given a .10 significance threshold and a fair randomization procedure. But the important point is that the block randomization approach allows us to do better.

  4. This was achieved using Stata’s ‘expand’ function, which appends the dataset to itself the specified number of times.

  5. Again, about what we would expect by chance in a fair randomization (see note 3).

  6. We calculated the correlation between the blocking factor and the three disorder outcome measures by running a GLM with only the blocking factor included. The correlation is based on taking the square root of the overall R2 of the model. We use a one-tailed test of significance following the assumption that the correlation between the blocking factor and the outcome is positive.
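The square-root-of-R² calculation described in note 6 can be illustrated with a one-way sums-of-squares computation. The block labels and outcome values below are invented for illustration; the point is only the mechanics of deriving the block-outcome correlation from a model containing just the blocking factor.

```python
import statistics
from math import sqrt

# Hypothetical outcome values for 4 blocks of 2 places each.
blocks = {
    "b1": [12.0, 14.0],
    "b2": [20.0, 23.0],
    "b3": [31.0, 29.0],
    "b4": [40.0, 44.0],
}

values = [v for grp in blocks.values() for v in grp]
grand = statistics.mean(values)

# Total and between-block (model) sums of squares for a GLM that
# contains only the blocking factor.
ss_total = sum((v - grand) ** 2 for v in values)
ss_model = sum(len(grp) * (statistics.mean(grp) - grand) ** 2
               for grp in blocks.values())

r_squared = ss_model / ss_total          # overall model R^2
correlation = sqrt(r_squared)            # note 6's block-outcome correlation
print(f"R^2 = {r_squared:.3f}, r = {correlation:.3f}")
```

With blocks this strongly separated, nearly all of the outcome variation lies between blocks, so R² (and hence the correlation) is close to 1; weaker blocking factors yield proportionally smaller correlations.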

  7. Where the interaction between treatment and block is significant, Fleiss (1986) recommends including an interaction term in the model. When the blocking factor represents a substantively important variable, the introduction of a block by treatment interaction can also add knowledge about the differential effects of treatment across values of the blocking variable (Weisburd and Taxman 2000).
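The block-by-treatment interaction note 7 refers to can be made concrete with per-block treatment effects. The numbers below are invented; the effect grows with the block's level, which is the pattern an interaction term would capture.

```python
import statistics

# Hypothetical outcomes: each block contributes one treated and one
# control place; the treatment effect grows with the block's baseline
# level, i.e. a block-by-treatment interaction.
data = {          # block: (treated outcome, control outcome)
    "low":    (9.0, 10.0),
    "medium": (14.0, 18.0),
    "high":   (20.0, 28.0),
}

effects = {b: t - c for b, (t, c) in data.items()}
overall = statistics.mean(effects.values())

print("per-block treatment effects:", effects)
print("average effect:", overall)
# Unequal per-block effects are the signature of an interaction; when
# such differences are statistically significant, Fleiss (1986)
# recommends adding a block-by-treatment term to the model.
```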

References

  • Ariel B, Farrington DP (2010) Randomized block designs. In: Piquero AR, Weisburd D (eds) Handbook of quantitative criminology. Springer, New York, pp 437–454

  • Bloom HS (1995) Minimum detectable effects: a simple way to report the statistical power of experimental designs. Eval Rev 19(5):547–556

  • Bloom HS (2005) Randomizing groups to evaluate place-based programs. In: Bloom HS (ed) Learning more from social experiments: evolving analytic approaches. Russell Sage Foundation, New York, pp 115–172

  • Bloom HS, Riccio JA (2005) Using place random assignment and comparative interrupted time-series analysis to evaluate the jobs plus employment program for public housing residents. AAAPSS 599:19–51

  • Boruch R, May H, Turner H, Lavenberg J, Petrosino A, De Moya D, Grimshaw J, Foley E (2004) Estimating the effects of interventions that are deployed in many places: place-randomized trials. Am Behav Sci 47(5):608–633

  • Boruch R, Weisburd D, Berk R (2010) Place randomized trials. In: Piquero AR, Weisburd D (eds) Handbook of quantitative criminology. Springer, New York, pp 481–502

  • Box JF (1980) R. A. Fisher and the design of experiments. Am Stat 34(1):1–7

  • Braga AA, Bond BJ (2008) Policing crime and disorder hot spots: a randomized controlled trial. Criminology 46(3):577–607

  • Braga AA, Weisburd DL, Waring EJ, Mazerolle LG, Spelman W, Gajewski F (1999) Problem-oriented policing in violent crime places: a randomized controlled experiment. Criminology 37(3):541–580

  • Britt CL, Weisburd D (2010) Statistical power. In: Piquero AR, Weisburd D (eds) Handbook of quantitative criminology. Springer, New York, pp 313–332

  • Bursik RJ Jr, Grasmick HG (1993) Neighborhoods and crime. Lexington Books, Lexington

  • Campbell DT (1969) Reforms as experiments. Am Psychol 24:409–429

  • Cohen J (1988) Statistical power analysis for the behavioral sciences, 2nd edn. Lawrence Erlbaum, Hillsdale

  • Donner A, Klar N (2000) Design and analysis of cluster randomization trials in health research. Arnold, London

  • Durlauf SN, Nagin DS (2011) Imprisonment and crime. Criminol Public Policy 10(1):13–54

  • Farrington DP (1983) Randomized experiments on crime and justice. Crime Just 4:257–308

  • Farrington DP, Ttofi MM (2009) School-based programs to reduce bullying and victimization. Campbell systematic reviews 6. http://campbellcollaboration.org/lib/download/718/

  • Farrington DP, Welsh BC (2005) Randomized experiments in criminology: what have we learned in the last two decades? J Exp Criminol 1(1):9–38

  • Farrington DP, Ohlin LE, Wilson JQ (1986) Understanding and controlling crime: toward a new research strategy. Springer, New York

  • Fisher RA (1926) The arrangement of field experiments. J Min Agric 33:503–513

  • Fisher RA (1935) The design of experiments. Oliver and Boyd, Edinburgh

  • Flay BR, Collins LM (2005) Historical review of school-based randomized trials for evaluating problem behavior prevention programs. AAAPSS 599:115–146

  • Fleiss J (1986) The design and analysis of clinical experiments. Wiley, New York

  • Grimshaw J, Eccles M, Campbell M, Elbourne D (2005) Cluster randomized trials of professional and organizational behavior change interventions in health settings. AAAPSS 599:71–93

  • Imai K, King G, Nall C (2009) The essential role of pair matching in cluster-randomized experiments, with application to the Mexican universal health insurance evaluation. Stat Sci 24(1):29–53

  • Jolliffe D, Farrington DP (2007) A rapid evidence assessment of the impact of mentoring on reoffending. Home Office Online Report 11/07, Home Office, London. http://homeoffice.gov.uk/rds/pdfs07/rdsolr1107.pdf

  • Kochel TR (2011) Constructing hot spots policing: unexamined consequences for disadvantaged populations and for police legitimacy. Crim Justice Policy Rev 22(3):350–374

  • Parker SW, Teruel GM (2005) Randomization and social program evaluation: the case of Progresa. AAAPSS 599:199–219

  • Powers E, Witmer H (1951) An experiment in the prevention of delinquency. Columbia University Press, New York

  • Raudenbush SW, Bryk AS (2002) Hierarchical linear models: applications and data analysis methods, 2nd edn. Sage, Newbury Park

  • Raudenbush SW, Liu X (2000) Statistical power and optimal design for multisite randomized trials. Psychol Methods 5(2):199–213

  • Raudenbush SW, Martinez A, Spybrook J (2007) Strategies for improving precision in group-randomized experiments. Educ Eval Policy Anal 29(1):5–29

  • Sherman LW, Weisburd D (1995) General deterrent effects of police patrol in crime ‘hot spots’: a randomized, controlled trial. Justice Q 12(4):625–648

  • Sherman LW, Gartin PR, Buerger ME (1989) Hot spots of predatory crime: routine activities and the criminology of place. Criminology 27(1):27–56

  • Sherman LW, Smith DA, Schmidt JD, Rogan DP (1992) Crime, punishment, and stake in conformity: legal and informal control of domestic violence. Am Soc Rev 57(2):680–690

  • Sikkema KJ (2005) HIV prevention among women in low income housing developments: issues and intervention outcomes in a place randomized trial. AAAPSS 599:52–70

  • Skogan W, Frydl K (eds) (2004) Fairness and effectiveness in policing: the evidence. Committee to Review Research on Police Policy and Practices, Committee on Law and Justice, Division of Behavioral and Social Sciences and Education. The National Academies Press, Washington, DC

  • Taylor RB (1997) Social order and disorder of street blocks and neighborhoods: ecology, microecology, and the systemic model of social disorganization. J Res Crime Delinq 34(1):113–155

  • Taylor B, Koper CS, Woods DJ (2011) A randomized controlled trial of different policing strategies at hot spots of violent crime. J Exp Criminol 7(2):149–181

  • Weisburd D (2005) Hot spots policing experiments and criminal justice research: lessons from the field. AAAPSS 599:220–245

  • Weisburd D, Braga AA (2006) Hot spots policing as a model for police innovation. In: Weisburd D, Braga AA (eds) Police innovation: contrasting perspectives. Cambridge University Press, New York, pp 225–244

  • Weisburd D, Eck JE (2004) What can police do to reduce crime, disorder, and fear? AAAPSS 593(1):42–65

  • Weisburd D, Green L (1995) Policing drug hot spots: the Jersey City Drug Market Analysis Experiment. Justice Q 12(4):711–735

  • Weisburd D, Lum C (2005) The diffusion of computerized crime mapping in policing: linking research and practice. Police Pract Res 6(5):419–434

  • Weisburd D, Taxman F (2000) Developing a multicenter randomized trial in criminology: the case of HIDTA. J Quant Criminol 16(3):315–340

  • Weisburd D, Mastrofski SD, McNally AM, Greenspan R, Willis JJ (2003) Reforming to preserve: compstat and strategic problem solving in American policing. Criminol Public Policy 2(3):421–456

  • Weisburd D, Morris NA, Ready J (2008) Risk-focused policing at places: an experimental evaluation. Justice Q 25(1):163–200

  • Weisburd D, Groff ER, Yang S-M (2012) The criminology of place: street segments and our understanding of the crime problem. Oxford University Press, New York

Author information

Corresponding author

Correspondence to David Weisburd.

Cite this article

Weisburd, D., Gill, C. Block Randomized Trials at Places: Rethinking the Limitations of Small N Experiments. J Quant Criminol 30, 97–112 (2014). https://doi.org/10.1007/s10940-013-9196-z

