Prevention Science, 12:103

Replication in Prevention Science

  • Jeffrey C. Valentine
  • Anthony Biglan
  • Robert F. Boruch
  • Felipe González Castro
  • Linda M. Collins
  • Brian R. Flay
  • Sheppard Kellam
  • Eve K. Mościcki
  • Steven P. Schinke
Abstract

Replication research is essential for the advancement of any scientific field. In this paper, we argue that prevention science will be better positioned to help improve public health if (a) more replications are conducted; (b) those replications are systematic, thoughtful, and conducted with full knowledge of the trials that have preceded them; and (c) state-of-the-art techniques are used to summarize the body of evidence on the effects of the interventions. Under real-world demands it is often not feasible to wait for multiple replications to accumulate before making decisions about intervention adoption. To help individuals and agencies make better decisions about intervention utility, we outline strategies that can be used to help understand the likely direction, size, and range of intervention effects as suggested by the current knowledge base. We also suggest structural changes that could increase the amount and quality of replication research, such as the provision of incentives and a more vigorous pursuit of prospective research registers. Finally, we discuss methods for integrating replications into the roll-out of a program and suggest that strong partnerships with local decision makers are a key component of success in replication research. Our hope is that this paper can highlight the importance of replication and stimulate more discussion of the important elements of the replication process. We are confident that, armed with more and better replications and state-of-the-art review methods, prevention science will be in a better position to positively impact public health.

Keywords

Replication · Reproducibility · Systematic review · Meta-analysis · Effectiveness

Notes

Author Note

This paper is the result of the deliberations of the Standards of Evidence Taskforce, convened and funded by the Society for Prevention Research (Brian R. Flay, Chair). The Board of Directors of the Society for Prevention Research is pleased to have supported the preparation of this paper in hopes that it will stimulate further discussion about the importance of replication.

The views expressed in this paper are the authors’, and do not necessarily reflect the views of the authors’ institutions or the Society for Prevention Research. With the exception of the first author, order of authorship is alphabetical.

We thank Richard Catalano, Harris Cooper, Adam Haldahl, Mark Lipsey, and Patrick Tolan for their valuable feedback on earlier versions of this paper, and Kirsten Sundell for editing the final manuscript.

Copyright information

© Society for Prevention Research 2011

Authors and Affiliations

  • Jeffrey C. Valentine (1)
  • Anthony Biglan (2)
  • Robert F. Boruch (3)
  • Felipe González Castro (4)
  • Linda M. Collins (5)
  • Brian R. Flay (6)
  • Sheppard Kellam (7)
  • Eve K. Mościcki (8)
  • Steven P. Schinke (9)

  1. University of Louisville, Louisville, USA
  2. Oregon Research Institute, Eugene, USA
  3. University of Pennsylvania, Philadelphia, USA
  4. Arizona State University, Tempe, USA
  5. Pennsylvania State University, University Park, USA
  6. Oregon State University, Corvallis, USA
  7. American Institutes for Research, Washington, USA
  8. American Psychiatric Institute for Research and Education, Arlington, USA
  9. Columbia University, New York, USA