Commentary on the 2015 SPR Standards of Evidence


We comment on the 2015 Society for Prevention Research Standards of Evidence document, summarizing major changes from the previous 2005 Standards and pointing to ways in which the Standards could be further improved. We endorse important new standards, such as those on testing the causal theory and mechanisms of the intervention, improved trial reporting standards, and added attention to scale-up research and cost analyses. Despite discussion of replication in the new Standards, we are concerned about the lack of stand-alone replication standards and the deletion of an explicit requirement for replication before an intervention is considered efficacious. Finally, we are deeply concerned about the lack of attention to the unit or level of aggregation of the intervention target; this is a major conceptual oversight. The unit targeted by an intervention (whether a cell, person, organization, community, state, or nation) is a fundamental feature shaping intervention theory, research design, data collection, analyses, effect sizes, diffusion possibilities and patterns, and scale-up issues. Future Standards updates should eliminate the implicit assumption in the current text that effective preventive interventions inherently target individual persons.





Conflict of Interest

The authors declare that they have no competing interests.

Author information

Correspondence to Anthony Biglan.



Cite this article

Biglan, A., Flay, B.R. & Wagenaar, A.C. Commentary on the 2015 SPR Standards of Evidence. Prev Sci 16, 927–932 (2015).


Keywords

  • Standards of evidence
  • Policy
  • Evidence-based interventions