Prevention Science, Volume 16, Issue 7, pp 956–966

Designs for Testing Group-Based Interventions with Limited Numbers of Social Units: The Dynamic Wait-Listed and Regression Point Displacement Designs

  • Peter A. Wyman
  • David Henry
  • Shannon Knoblauch
  • C. Hendricks Brown


The dynamic wait-listed design (DWLD) and regression point displacement design (RPDD) address several challenges in evaluating group-based interventions when the number of groups is limited. Both designs exploit efficiencies that increase statistical power and can improve the balance between community needs and research priorities. The DWLD blocks on more time units than traditional wait-listed designs, thereby increasing the proportion of the study period during which intervention and control conditions can be compared; it can also improve the logistics of implementing an intervention across multiple sites and strengthen fidelity. We discuss DWLDs in the larger context of roll-out randomized designs and compare them with the closely related stepped-wedge design. The RPDD uses archival data on the population of settings from which the intervention unit(s) are selected to generate expected posttest scores for the units receiving intervention, against which actual posttest scores are compared. High pretest–posttest correlations give the RPDD statistical power to assess intervention impact even when only one or a few settings receive the intervention. The RPDD works best when archival data are available for a number of years before and after the intervention. If intervention units were not randomly selected, propensity scores can be used to control for non-random selection factors. Examples are provided of the DWLD and RPDD used to evaluate, respectively, suicide prevention gatekeeper training (QPR) in 32 schools and a violence prevention program (CeaseFire) in two Chicago police districts over a 10-year period. We discuss how the DWLD and RPDD address common threats to internal and external validity, as well as their limitations.
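The RPDD's core computation, as described above, is to regress posttest on pretest across the population of comparison units and then ask how far the treated unit's observed posttest is displaced from its regression-predicted value. The sketch below is an illustrative reconstruction, not the authors' analysis code: the function name `rpdd_displacement` and the toy data are hypothetical, and a published analysis would typically add propensity-score adjustment or multi-year archival series as the article describes.

```python
import numpy as np

def rpdd_displacement(pre_control, post_control, pre_treated, post_treated):
    """Regression point displacement sketch (hypothetical helper).

    Fits a simple pretest->posttest regression on the untreated population,
    predicts the treated unit's expected posttest, and returns the
    displacement (observed minus expected) along with a t-like statistic
    based on the standard error of prediction for a new observation.
    """
    slope, intercept = np.polyfit(pre_control, post_control, 1)
    expected = intercept + slope * pre_treated
    displacement = post_treated - expected

    # Prediction standard error for a new point at pre_treated
    n = len(pre_control)
    resid = post_control - (intercept + slope * pre_control)
    mse = resid @ resid / (n - 2)
    x_bar = pre_control.mean()
    se = np.sqrt(mse * (1 + 1 / n +
                        (pre_treated - x_bar) ** 2 /
                        ((pre_control - x_bar) ** 2).sum()))
    return displacement, displacement / se

# Toy example: 20 comparison districts whose pre/post rates are highly
# correlated, and one treated district whose posttest is well below the
# value the regression line predicts for its pretest score.
rng = np.random.default_rng(0)
pre = rng.uniform(10, 50, 20)
post = 2 + 0.9 * pre + rng.normal(0, 1.0, 20)
disp, t_like = rpdd_displacement(pre, post, pre_treated=30.0, post_treated=20.0)
```

A negative displacement with a large t-like statistic is the pattern the design looks for when evaluating a prevention program whose outcome (e.g., a violence rate) should fall below its archival expectation; the high pretest–posttest correlation in the comparison population is what keeps the prediction band narrow enough to detect it with a single treated unit.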


Keywords: Group-based designs · Roll-out designs · Small sample designs · Dynamic wait-listed design · Regression point displacement design



We thank the National Institute of Mental Health for support under grants R34MH071189 (P. Wyman, PI) and R01MH091452 (P. Wyman, PI) and the National Institute on Drug Abuse (NIDA) under grants P30 DA027828 (C. H. Brown, PI) and R13040610 (C. T. Fok, PI).

Conflict of Interest

The authors declare that they have no conflict of interest.



Copyright information

© Society for Prevention Research 2014

Authors and Affiliations

  • Peter A. Wyman (1)
  • David Henry (2)
  • Shannon Knoblauch (2)
  • C. Hendricks Brown (3)

  1. University of Rochester School of Medicine and Dentistry, Rochester, USA
  2. University of Illinois at Chicago, Chicago, USA
  3. Feinberg School of Medicine, Northwestern University, Chicago, USA
