Prevention Science, Volume 6, Issue 3, pp 151–175

Standards of Evidence: Criteria for Efficacy, Effectiveness and Dissemination

  • Brian R. Flay
  • Anthony Biglan
  • Robert F. Boruch
  • Felipe González Castro
  • Denise Gottfredson
  • Sheppard Kellam
  • Eve K. Mościcki
  • Steven Schinke
  • Jeffrey C. Valentine
  • Peter Ji


Ever-increasing demands for accountability, together with the proliferation of lists of evidence-based prevention programs and policies, led the Society for Prevention Research to charge a committee with establishing standards for identifying effective prevention programs and policies. Recognizing that interventions that are effective and ready for dissemination are a subset of effective programs and policies, and that effective programs and policies are a subset of efficacious interventions, SPR’s Standards Committee developed overlapping sets of standards. We designed these Standards to assist practitioners, policy makers, and administrators in determining which interventions are efficacious, which are effective, and which are ready for dissemination. Under these Standards, an efficacious intervention will have been tested in at least two rigorous trials that (1) involved defined samples from defined populations; (2) used psychometrically sound measures and data collection procedures; (3) analyzed their data with rigorous statistical approaches; (4) showed consistent positive effects (without serious iatrogenic effects); and (5) reported at least one significant long-term follow-up. An effective intervention under these Standards will not only meet all standards for efficacious interventions, but also will have (1) manuals, appropriate training, and technical support available to allow third parties to adopt and implement the intervention; (2) been evaluated under real-world conditions in studies that included sound measurement of the level of implementation and engagement of the target audience (in both the intervention and control conditions); (3) indicated the practical importance of intervention outcome effects; and (4) clearly demonstrated to whom intervention findings can be generalized.
An intervention recognized as ready for broad dissemination under these Standards will not only meet all standards for efficacious and effective interventions, but will also provide (1) evidence of the ability to “go to scale”; (2) clear cost information; and (3) monitoring and evaluation tools so that adopting agencies can monitor or evaluate how well the intervention works in their settings. Finally, the Standards Committee identified possible standards desirable for current and future areas of prevention science as the field develops. If successful, these Standards will inform efforts in the field to find prevention programs and policies that are of proven efficacy, effectiveness, or readiness for adoption and will guide prevention scientists as they seek to discover, research, and bring to the field new prevention programs and policies.


Keywords: standards; efficacy; effectiveness; dissemination





All points of view are those of the Society for Prevention Research and the authors, and do not necessarily reflect those of their employers or their funders. Preparation of these Standards and this paper was sponsored by the Society for Prevention Research with support from the National Institutes of Health and the Robert Wood Johnson Foundation, coordinated through the National Science Foundation. We thank Hendricks Brown, Bob Granger, Joel Grube, Paul Gruenewald, Harold Holder, Cheryl Perry, Rick Price, John Reid, Bob Saltz, and Irwin Sandler for helpful comments.



Copyright information

© Springer Science + Business Media, Inc. 2005

Authors and Affiliations

  • Brian R. Flay (1, 10)
  • Anthony Biglan (2)
  • Robert F. Boruch (3)
  • Felipe González Castro (4)
  • Denise Gottfredson (5)
  • Sheppard Kellam (6)
  • Eve K. Mościcki (7)
  • Steven Schinke (8)
  • Jeffrey C. Valentine (9)
  • Peter Ji (1)

  1. University of Illinois at Chicago, Chicago
  2. Oregon Research Institute, Eugene
  3. University of Pennsylvania, Philadelphia
  4. Arizona State University, Tempe
  5. University of Maryland, College Park
  6. American Institutes for Research, Washington, DC
  7. National Institute of Mental Health (NIMH), Bethesda
  8. Columbia University, New York
  9. Duke University, Durham
  10. Institute for Health Research and Policy, University of Illinois at Chicago, Chicago
