Prevention Science, Volume 18, Issue 6, pp 671–680

Alternatives to Randomized Control Trial Designs for Community-Based Prevention Evaluation

  • David Henry
  • Patrick Tolan (corresponding author)
  • Deborah Gorman-Smith
  • Michael Schoeny

Abstract

Multiple factors may complicate the evaluation of preventive interventions, particularly where the randomized controlled trial (RCT) is impractical, culturally unacceptable, or ethically questionable, as can occur with community-based efforts focused on inner-city neighborhoods or rural American Indian/Alaska Native communities. This paper is based on the premise that all research designs, including RCTs, are constrained by the extent to which they can refute the counterfactual and by how well they can meet the challenge of attributing the absence of a problem to the intervention, that is, of showing what was prevented. Yet these requirements also provide benchmarks for evaluating alternatives to RCTs: designs that can estimate preventive effects and refute the counterfactual with limited bias while acting in congruence with community values about implementation. We describe several such designs, with accompanying examples, including regression discontinuity, interrupted time-series, and roll-out randomization designs, and set forth procedures and practices that can enhance their utility. When combined with these design strengths, alternative designs can provide valid evaluations of community-based interventions and thus serve as viable alternatives to the RCT.
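
To make the interrupted time-series logic concrete, the sketch below simulates a community-level outcome series and fits the standard segmented-regression model used with such designs. This is a minimal illustrative sketch, not code or data from the studies discussed in the paper: the series length, intervention month, and effect sizes are all invented, and the model is ordinary least squares without the autocorrelation corrections a real evaluation would consider.

```python
# Minimal sketch of an interrupted time-series (segmented regression)
# analysis. All numbers here are invented for illustration: 48 monthly
# observations of a community outcome, with a hypothetical intervention
# starting at month 24.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

n_months = 48
onset = 24                                         # hypothetical start month
time = np.arange(n_months)                         # secular trend
post = (time >= onset).astype(float)               # 0 before, 1 after onset
since = np.where(time >= onset, time - onset, 0)   # months since onset

# Simulated outcome: mild upward trend, then an assumed level drop of
# 4 units and a flattened slope once the intervention is in place.
y = 20 + 0.15 * time - 4.0 * post - 0.10 * since + rng.normal(0, 1.5, n_months)

# Segmented regression: intercept, pre-intervention slope, level change
# at onset, and change in slope after onset.
X = sm.add_constant(np.column_stack([time, post, since]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # [intercept, trend, level change, slope change]
```

In the fitted model, the coefficient on the post-onset indicator estimates the immediate level change and the coefficient on time since onset estimates the change in slope; an actual evaluation would also check residual autocorrelation (for example, with Newey-West standard errors or ARIMA error models).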

Keywords

Research design · Community-based research

Notes

Acknowledgments

Grateful acknowledgement is given to the investigators and staff of the Families and Communities Research Group, the Center for Alaska Native Health Research, and the conference "Advancing Science with Culturally Distinct Samples," held at the University of Alaska Fairbanks in August 2011.

Funding

The research reported in this article was funded by the Centers for Disease Control and Prevention, the National Institute of Nursing Research, and the Robert R. McCormick Foundation.

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no potential conflicts of interest.

Ethical Approval

All of the research reported here was conducted with the approval and under the supervision of the Institutional Review Boards of the University of Illinois at Chicago, Rush University, the University of Chicago, and/or the Illinois Department of Children and Family Services.

Informed Consent

All of the research reported in this article was conducted with the written informed consent of participants or, in the case of research involving state wards, consent of the Illinois Department of Children and Family Services.

Copyright information

© Society for Prevention Research 2016

Authors and Affiliations

  • David Henry (1)
  • Patrick Tolan (2), corresponding author
  • Deborah Gorman-Smith (3)
  • Michael Schoeny (3)

  1. University of Illinois at Chicago, Chicago, USA
  2. University of Virginia, Charlottesville, USA
  3. University of Chicago, Chicago, USA
