Journal of Experimental Criminology, Volume 11, Issue 1, pp 141–163

Sample size, effect size, and statistical power: a replication study of Weisburd’s paradox

  • Matthew S. Nelson
  • Alese Wooditch
  • Lisa M. Dario

Abstract

Objectives

This study expands upon Weisburd’s (1993) work by reexamining the relationship between sample size and statistical power in criminological experiments. The finding, now known as the Weisburd paradox, is that increasing the sample size of an experiment does not always lead to an increase in statistical power. The current research also begins to explore the potential sources of the Weisburd paradox.

Methods

Effect sizes and statistical power are computed for the outcome measures (n = 402) of all experiments (n = 66) included in systematic reviews published by the Campbell Collaboration’s Crime and Justice Coordinating Group. The design sensitivity of these experiments is examined in relation to sample size, as well as to other factors that may explain the variation in effect sizes and statistical power across studies.
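For context, statistical power in a two-group experiment is a function of the standardized effect size (e.g., Cohen’s d), the group sample sizes, and the significance level. The sketch below is illustrative only and not the authors’ code; the effect size and group size are hypothetical values chosen to show the kind of calculation involved, here using Python’s statsmodels.

```python
# Illustrative only (not the authors' code): power of a two-sample comparison
# given a standardized effect size (Cohen's d), group size, and alpha.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical inputs: d = 0.3, 100 cases per arm, two-sided test at alpha = 0.05.
power = analysis.solve_power(effect_size=0.3, nobs1=100, alpha=0.05,
                             ratio=1.0, alternative='two-sided')
print(f"Power: {power:.2f}")  # approximately 0.56 for these inputs
```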

Results

Effect sizes decline as the sample size of the experiment increases, whereas statistical power is unrelated to sample size but strongly associated with effect size. Disclosure of fidelity issues and publication bias is unrelated to statistical power and treatment effects. Variability in the dependent variable and sample demographics are significantly related to statistical power, but not to effect size.

Conclusions

The study finds support for the Weisburd paradox: the ability to increase statistical power by increasing sample size is not as strong as statistical theory would suggest, and experiments with larger samples generally produce smaller effects. The likely reason no relationship was observed between sample size and statistical power is that the sensitivity gained from a larger sample is offset by a simultaneous decrease in effect size.
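As a numerical illustration of this offsetting argument (the effect sizes and sample sizes below are hypothetical, not values from the study), a small trial with a large effect can have roughly the same power as a much larger trial with a small effect, so adding cases need not raise power when effects shrink:

```python
# Hypothetical illustration of the offsetting argument (not data from this study):
# when larger trials tend to yield smaller standardized effects, the power gained
# from extra cases can be cancelled out, leaving power roughly flat.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

scenarios = [
    ("small trial, large effect",   0.60,  50),   # d = 0.60, 50 cases per arm
    ("medium trial, medium effect", 0.40, 110),   # d = 0.40, 110 cases per arm
    ("large trial, small effect",   0.20, 440),   # d = 0.20, 440 cases per arm
]

for label, d, n_per_arm in scenarios:
    power = analysis.solve_power(effect_size=d, nobs1=n_per_arm, alpha=0.05,
                                 alternative='two-sided')
    # All three scenarios come out at roughly 0.84-0.85 despite a nearly
    # ninefold range in sample size.
    print(f"{label}: d = {d}, n per arm = {n_per_arm}, power = {power:.2f}")
```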

Keywords

Experiments · Statistical power · Effect size · Sample size · Weisburd paradox

References

  1. Alexander, R. A., Barrett, G. V., Alliger, G. M., & Kenneth, P. C. (1986). Towards a general model of non-random sampling and the impact on population correlation: generalizations of Berkson’s fallacy and restriction of range. British Journal of Mathematical and Statistical Psychology, 39(1), 90–105.
  2. Altman, D. G. (1996). Better reporting of randomised controlled trials: the CONSORT statement. British Medical Journal, 313(7057), 570.
  3. Bellg, A. J., Borrelli, B., Resnick, B., Hecht, J., Minicucci, D. S., Ory, M., Ogedbe, G., Orwig, D., Ernst, D., & Czajkowski, S. (2004). Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH behavior change consortium. Health Psychology, 23(5), 443–451.
  4. Berk, R. (2005). Randomized experiments as the bronze standard. Journal of Experimental Criminology, 1, 417–433.
  5. Borrelli, B. (2001). The assessment, monitoring, and enhancement of treatment fidelity in public health clinical trials. Journal of Public Health Dentistry, 71, S52–S63.
  6. Britt, C. L., & Weisburd, D. (2011). Statistical power. In A. R. Piquero & D. Weisburd (Eds.), Handbook of quantitative criminology (pp. 313–332). New York: Springer.
  7. Bus, A. G., Van IJzendoorn, M. H., & Pellegrini, A. D. (1995). Joint book reading makes for success in learning to read: a meta-analysis on intergenerational transmission of literacy. Review of Educational Research, 65, 1–21.
  8. Chan, A.-W., & Altman, D. G. (2005). Outcome reporting bias in randomized trials on PubMed: review of publications and survey of authors. BMJ, 330, 753.
  9. Chan, A.-W., Hrobjartsson, A., Haahr, M. T., Gøtzsche, P. C., & Altman, D. G. (2004). Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. Journal of the American Medical Association, 291, 2457–2465.
  10. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale: Erlbaum.
  11. Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
  12. Dickersin, K. (2005). Publication bias: Recognizing the problem, understanding its origins and scope, and preventing harm. In H. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 11–34). Chichester: Wiley.
  13. Esbensen, F. (1991). Ethical considerations in criminal justice research. American Journal of Police, 10(2), 87–104.
  14. Farrington, D. P. (1983). Randomized experiments on crime and justice. In M. Tonry & N. Morris (Eds.), Crime and justice (pp. 257–308). Chicago: University of Chicago Press.
  15. Farrington, D. P. (2003a). A short history of randomized experiments in criminology: a meager feast. Evaluation Review, 27(3), 218–227.
  16. Farrington, D. P. (2003b). Methodological quality standards for evaluation research. The Annals of the American Academy of Political and Social Science, 587(1), 49–68.
  17. Farrington, D. P., Gottfredson, D. C., Sherman, L. W., & Welsh, B. C. (2002). The Maryland scientific methods scale. In L. W. Sherman, D. P. Farrington, B. C. Welsh, & D. L. MacKenzie (Eds.), Evidence-based crime prevention (pp. 13–21). London: Routledge.
  18. Ferrari, S., & Cribari-Neto, F. (2004). Beta regression for modelling rates and proportions. Journal of Applied Statistics, 31(7), 799–815.
  19. Fienberg, S. E., & Tanur, J. M. (1986). The design and analysis of longitudinal surveys: Controversies and issues of cost and continuity. In R. Pearson & R. Boruch (Eds.), Survey research designs: Towards a better understanding of their costs and benefits (pp. 60–93). New York: Springer.
  20. Garner, J. H., & Visher, C. A. (2003). The production of criminological experiments. Evaluation Review, 27(3), 316–335.
  21. Gill, C. E. (2011). Missing links: how descriptive validity impacts the policy relevance of randomized controlled trials in criminology. Journal of Experimental Criminology, 7(3), 201–224.
  22. Givens, G. H., Smith, D. D., & Tweedie, R. L. (1997). Publication bias in meta-analysis: a Bayesian data-augmentation approach to account for issues exemplified in the passive smoking debate. Statistical Science, 12, 221–250.
  23. Glazerman, S., Levy, D. M., & Myers, D. (2002). Nonexperimental replications of social experiments: A systematic review. Washington: Mathematica Policy Research.
  24. Goodman, J. S., & Blum, T. C. (1996). Assessing the non-random sampling effects of subject attrition in longitudinal research. Journal of Management, 22(4), 627–652.
  25. Graebsch, C. (2000). Legal issues of randomized experiments on sanctioning. Crime & Delinquency, 46(2), 271–282.
  26. Grant, S., Mayo-Wilson, E., Hopewell, S., Macdonald, G., Moher, D., & Montgomery, P. (2013). Developing a reporting guideline for social and psychological intervention trials. Journal of Experimental Criminology, 9(3), 355–367.
  27. Harbord, R. M., & Higgins, J. P. (2008). Meta-regression in Stata. The Stata Journal, 8(4), 493–519.
  28. Heckman, J. J., & Smith, J. A. (1995). Assessing the case for social experiments. Journal of Economic Perspectives, 9(2), 85–110.
  29. Lipsey, M. (1990). Design sensitivity: Statistical power for experimental research. Newbury Park: Sage.
  30. Lipsey, M. W. (2009). The primary factors that characterize effective interventions with juvenile offenders: A meta-analytic overview. Victims and Offenders, 4, 124–147.
  31. Lösel, F., & Köferl, P. (1989). Evaluation research on correctional treatment in West Germany: A meta-analysis. In Criminal behavior and the justice system (pp. 334–355). Berlin: Springer.
  32. McCord, J. (1978). A thirty-year follow-up of treatment effects. American Psychologist, 33(3), 284–289.
  33. Mullen, B. (1989). Advanced BASIC meta-analysis. Hillsdale: Erlbaum.
  34. Olver, M. E., Stockdale, K. C., & Wormith, J. S. (2011). A meta-analysis of predictors of offender treatment attrition and its relationship to recidivism. Journal of Consulting and Clinical Psychology, 79(1), 6–21.
  35. Rothstein, H. R. (2008). Publication bias as a threat to the validity of meta-analytic results. Journal of Experimental Criminology, 4(1), 61–81.
  36. Sampson, R. J. (2010). Gold standard myths: observations on the experimental turn in quantitative criminology. Journal of Quantitative Criminology, 26, 489–500.
  37. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton-Mifflin.
  38. Sharp, S. (1998). Meta-analysis regression. Stata Technical Bulletin, 42, 16–22. Reprinted in Stata Technical Bulletin Reprints, vol. 7 (pp. 148–155). College Station, TX: Stata Press.
  39. Sherman, L. W. (2007). The power few: experimental criminology and the reduction of harm (the 2006 Joan McCord prize lecture). Journal of Experimental Criminology, 3(4), 299–321.
  40. Sherman, L. W. (2010). An introduction to experimental criminology. In A. R. Piquero & D. Weisburd (Eds.), Handbook of quantitative criminology (pp. 399–436). New York: Springer.
  41. Sherman, L. W. (2013). How CONSORT could improve treatment measurement: a comment on “developing a reporting guideline for social and psychological intervention trials.” Journal of Experimental Criminology, 9(3), 369–373.
  42. Sherman, L. W., Gottfredson, D. C., MacKenzie, D. L., Eck, J., Reuter, P., & Bushway, S. D. (1998). Preventing crime: What works, what doesn’t, what’s promising. Washington: U.S. National Institute of Justice.
  43. Slavin, R. E., Lake, C., & Groff, C. (2009). Effective programs in middle and high school mathematics: a best-evidence synthesis. Review of Educational Research, 79(2), 839–911.
  44. Sterne, J., Gavaghan, D., & Egger, M. (2000). Publication and related bias in meta-analysis: power of statistical tests and prevalence in literature. Journal of Clinical Epidemiology, 53, 1119–1129.
  45. van Tulder, M. W., Suttorp, M., Morton, S., Bouter, L. M., & Shekelle, P. (2009). Empirical evidence of an association between internal validity and effect size in randomized controlled trials of low-back pain. Spine, 34(16), 1685–1692.
  46. We, S. R., et al. (2012). Placebo effect was influenced by publication year in three-armed acupuncture trials. Complementary Therapies in Medicine, 20(1), 83–92.
  47. Weisburd, D. (1993). Design sensitivity in criminal justice experiments: reassessing the relationship between sample size and statistical power. In M. Tonry & N. Morris (Eds.), Crime and justice, Vol. 17 (pp. 337–379). Chicago: University of Chicago Press.
  48. Weisburd, D. (2000). Randomized experiments in criminal justice policy: prospects and problems. Crime & Delinquency, 46(2), 181–193.
  49. Weisburd, D., & Britt, C. (2007). Statistics in criminal justice (3rd ed.). New York: Springer.
  50. Weisburd, D., Petrosino, A., & Mason, G. (1993). Design sensitivity in criminal justice experiments. Crime and Justice: A Review of Research, 17, 337–379.
  51. Weisburd, D., Lum, C. M., & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice? The ANNALS of the American Academy of Political and Social Science, 578(1), 50–70.
  52. White, K., & Pezzino, J. (1986). Ethical, practical and scientific considerations of randomized experiments in early childhood special education. Topics in Early Childhood Education, 6(3), 100–116.
  53. Wilson, D. B. (2013). Comment on “developing a reporting guideline for social and psychological intervention trials.” Journal of Experimental Criminology, 9(3), 375–377.
  54. Wilson, D. B., & Lipsey, M. W. (2001). The role of method in treatment effectiveness research: evidence from meta-analysis. Psychological Methods, 6(4), 413.

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  • Matthew S. Nelson (1)
  • Alese Wooditch (1)
  • Lisa M. Dario (2)

  1. George Mason University, Fairfax, USA
  2. Arizona State University, Phoenix, USA
