
Evaluation of Substance Abuse Prevention and Treatment Programs

Chapter in Research Methods in the Study of Substance Abuse

Abstract

In this chapter, program evaluation is examined as a way to systematically utilize the scientific method to determine whether adolescent substance use prevention interventions implemented with families, within schools and the workplace, and at the community level achieve their intended goals. General purposes and approaches to program evaluation are described, as well as the processes associated with the planning and conduct of evaluations, with a focus on addressing common challenges for evaluators and stakeholders. Finally, a summary case involving an evaluation of a large, national school-based substance use prevention intervention is used to illustrate concepts and processes important to the evaluation of substance abuse prevention and treatment programs.


Notes

  1.

    For a thorough explication of the use of experimental and quasi-experimental designs for outcome analyses, please refer to Mohr (1995).

  2.

    These threats are often associated with conflicting evaluation structures and contracting arrangements that exist across subdivisions with separate administrative and organizational structures (Rodi and Paget 2007). Different values and norms may also need to be negotiated by the evaluator, especially in international designs.

  3.

    On July 12, 1974, the National Research Act (Pub. L. 93-348) was signed into law, creating the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. One of the charges to the Commission was to identify the basic ethical principles that should underlie the conduct of biomedical and behavioral research involving human subjects and to develop guidelines which should be followed to assure that such research is conducted in accordance with those principles. These principles are summarized in the Belmont Report, available at http://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/.

  4.

    Evaluation often falls under the classification of “practice.” However, if the evaluator plans to use the collected data to contribute to knowledge beyond the specific program under study, he or she should review the Code of Federal Regulations: 45 CFR 46 (Dept. of Health and Human Services 2009), particularly if the evaluation is federally funded. Furthermore, state and local regulations regarding evaluations conducted in schools or communities may vary; it is the evaluator's responsibility to be informed of and in compliance with these regulations.

  5.

    The same three principles apply to the handling and reporting of data. Evaluators should ensure that data are collected in a manner that affords privacy to the individuals from whom they are collected; that collection procedures for secondary sources or administrative files are explicitly designed and conducted so that files are not exposed to unauthorized access; and that data are transported and stored securely, including the removal of identifying information (e.g., names, birthdates) from individual-level data, along with any characteristics that may identify groups or communities in community studies. If the evaluation is longitudinal, a “key” file should be created to link data collected at different points in time, and it should be stored separately from the evaluation data. An extra layer of protection for confidential data can be secured with a certificate of confidentiality, which protects the researcher from having confidential data subpoenaed in court. In the United States, these are typically obtained from the funding office or through research offices located at most universities.
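The key-file arrangement described in this note can be sketched briefly. The following is an illustrative Python sketch with hypothetical field names, not a prescribed procedure:

```python
import csv
import secrets

def deidentify(records, key_path, data_path):
    """Split identifying information from study data. The key file
    (study_id -> identity) links waves of a longitudinal evaluation
    and must be stored separately from the evaluation data."""
    with open(key_path, "w", newline="") as kf, open(data_path, "w", newline="") as df:
        key_w, data_w = csv.writer(kf), csv.writer(df)
        key_w.writerow(["study_id", "name", "birthdate"])   # identifiers only
        data_w.writerow(["study_id", "wave", "use_30day"])  # de-identified data
        for rec in records:
            # Random study ID: not derived from the participant's identity
            sid = secrets.token_hex(4)
            key_w.writerow([sid, rec["name"], rec["birthdate"]])
            data_w.writerow([sid, rec["wave"], rec["use_30day"]])
```

Because the study IDs are random rather than derived from identity, disclosure of the data file alone reveals nothing about who participated; re-linking waves requires the separately stored key file.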

  6.

    A review of existing literature should produce a variety of measures of substance use risk and protective factors and behavioral use measures to test program theory (Arthur et al. 2002) and program processes to evaluate implementation fidelity (Dusenbury et al. 2003).

  7.

    Pedhazur and Schmelkin (1991) provide an elaborate description of measurement validity, reliability, and approaches to testing these characteristics, which could be consulted to gain a foundational understanding of measurement theory and practice.

  8.

    Choice of approach has implications not only for the basic analytic strategy, but also for data coding procedures and for methods of data verification and reporting (Patton 1999; Thomas 2006).

  9.

    Descriptions of these skills are available on the Robert Wood Johnson Foundation website: http://www.rwjf.org/en/library/research/2009/06/the-adolescent-substance-abuse-prevention-study.html.

  10.

    The model also included the constructs of implementation fidelity and exposure. Both of these constructs were thought to act as moderators of the effect of the intervention on the student’s normative beliefs, skills, and personal attitudes.

  11.

    New York City had already signed on as the core school district in New York, but the September 11, 2001 attacks on the World Trade Center occurred while the other New York City schools were being recruited, resulting in a loss of potential and current schools in the study. To increase the available sampling frame, the sampling area was expanded to include Newark and surrounding schools in New Jersey.

  12.

    Survey items also measured decision-making and communication skills. Decision-making skills were measured using items drawn from a scale developed by Goldstein and McGinnis (1997). Scores reflected the student's rating (on a scale from 1 = never to 5 = always) of statements such as, “Before making a decision, I think about all the things that may happen as a result of that decision” (alpha > 0.70). A composite measure of communication skills was taken from the Social Orientation Scale (Cegala 1981). Items measured perceptions of interpersonal communication confidence and competency with statements such as, “I feel confident of what to say and do during conversations.” Responses were on a Likert scale of 1 (disagree) to 5 (agree) (alpha > 0.70).
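The scale construction described in this note can be illustrated generically. The sketch below computes a mean-of-items composite and Cronbach's alpha in plain Python; it is not the study's analysis code:

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Internal consistency of a multi-item scale.
    rows: one list of item scores per respondent."""
    k = len(rows[0])
    # Sum of per-item variances, computed column-wise across respondents
    item_vars = sum(pvariance(col) for col in zip(*rows))
    # Variance of each respondent's total score
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_score(rows):
    """Mean-of-items composite, one score per respondent."""
    return [sum(r) / len(r) for r in rows]
```

Scales were retained when alpha exceeded 0.70; on toy data where the items move in lockstep, alpha reaches its maximum of 1.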

  13.

    For further reading on the problem of nested/complex sampling and standard errors, please consult Raudenbush and Bryk (2002).
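The practical consequence of nesting (students within classrooms within schools) can be shown with the standard design-effect correction. This is a minimal sketch assuming equal cluster sizes:

```python
import math

def design_effect(cluster_size, icc):
    """DEFF = 1 + (m - 1) * ICC for clusters of size m, where ICC is the
    intraclass correlation (share of variance between clusters)."""
    return 1 + (cluster_size - 1) * icc

def adjusted_se(se_srs, cluster_size, icc):
    """Inflate a simple-random-sample standard error to account for clustering."""
    return se_srs * math.sqrt(design_effect(cluster_size, icc))
```

For example, with 25 students per classroom and an intraclass correlation of 0.05, the design effect is 2.2, so a naive standard error understates uncertainty by a factor of about 1.48; multilevel models address this nesting directly.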

References

  • Ajzen, I. (2002). Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior. Journal of Applied Social Psychology, 32, 665–683.


  • Arthur, M. W., & Blitz, C. (2000). Bridging the gap between science and practice in drug abuse prevention through needs assessment and strategic community planning. Journal of Community Psychology, 28, 241–255.


  • Arthur, M. W., Hawkins, J. D., Pollard, J., Catalano, R. F., & Baglioni, A. J. (2002). Measuring risk and protective factors for substance use, delinquency and other adolescent problem behaviors: The Communities that Care Youth Survey. Evaluation Review, 26, 575–601.


  • Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Freeman.


  • Baranowski, T., & Stables, G. (2000). Process evaluation of the 5-a-day projects. Health Education and Behavior, 27, 157–166.


  • Belmont Report. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Available at: http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html

  • Bickman, L. (1987). The functions of program theory. New Directions for Program Evaluation, 33, 5–18.


  • Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (Eds.). (1956). Taxonomy of educational objectives—The classification of educational goals—Handbook 1: Cognitive domain. London: Longmans, Green & Co. Ltd.


  • Botvin, G. J., & Griffin, K. W. (2003). Drug abuse prevention curricula in schools. In Z. Sloboda & W. J. Bukoski (Eds.), Handbook of drug abuse prevention: Theory, science, and practice. New York, NY: Springer.


  • Brown, C. H. (2006). Design principles and their application in preventive field trials. In Z. Sloboda & W. J. Bukoski (Eds.), Handbook of drug abuse prevention: Theory, science, and practice. New York, NY: Springer.


  • Brown, C. H., Wang, W., Kellam, S. G., Muthen, B. O., Petras, H., Toyinbo, P., et al. (2008). Methods for testing theory and evaluating impact in randomized field trials: Intent-to-treat analyses for integrating the perspectives of person, place, and time. Drug and Alcohol Dependence, 95, S74–S104.


  • Bukoski, W. J. (2003). The emerging science of drug abuse prevention. In Z. Sloboda & W. J. Bukoski (Eds.), Handbook of drug abuse prevention: Theory, science, and practice. New York, NY: Springer.


  • Caracelli, V. J., & Greene, J. C. (1993). Data analysis strategies for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 15, 195–207.


  • Carson, K. V., Brinn, M. P., Labiszewski, N. A., Esterman, A. J., Chang, A. B., & Smith, B. J. (2011). Community interventions for preventing smoking in young people. Cochrane Database of Systematic Reviews, Issue 7.


  • Castro, F. G., Morera, O. S., Kellison, J. G., & Aguirre, K. M. (2014). Mixed methods research design for prevention science: Methods, critiques, and recommendations. In Z. Sloboda & H. Petras (Eds.), Defining prevention science. New York, NY: Springer.


  • Cegala, D. J. (1981). Interaction involvement: A cognitive dimension of communicative competence. Communication Education, 30, 109–121.


  • Centers for Disease Control and Prevention. (2004). Methodology of the Youth Risk Behavior Surveillance System. Morbidity and Mortality Weekly Report, 53 (No. RR-12).


  • Centers for Disease Control and Prevention, Office of the Director, Office of Strategy and Innovation. (2011). Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention.


  • Center for Substance Abuse Prevention (CSAP). (1999). Core measure initiative Phase I recommendations, December 1999. Washington, DC: Substance Abuse and Mental Health Services Administration.


  • Collins, L. M. (1994). Some design, measurement, and analysis pitfalls in drug abuse prevention research and how to avoid them: Let your model be your guide. In A. Cázares & L. A. Beatty (Eds.), Scientific methods for prevention intervention research. National Institute on Drug Abuse Research Monograph Series Number 139. Rockville, MD: National Institute on Drug Abuse.


  • Collins, L. M., & Flaherty, B. P. (2006). Methodological considerations in prevention research. In Z. Sloboda & H. Petras (Eds.), Defining prevention science. New York, NY: Springer.


  • Conley-Tyler, M. (2005). A fundamental choice: Internal or external evaluation? Evaluation Journal of Australasia, 4, 3–11.


  • Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Boston, MA: Houghton Mifflin.


  • DesJarlais, D. C., Sloboda, Z., Friedman, S. R., Tempalski, B., McKnight, C., & Braine, N. (2006). Diffusion of the D.A.R.E. and syringe exchange programs. American Journal of Public Health, 96, 1354–1358.


  • Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Education Research: Theory & Practice, 18, 237–256.


  • Ennett, S. T., Haws, S., Ringwalt, C. L., Vincus, A. A., Hanley, S., Bowling, J. M., et al. (2011). Evidence-based practice in school substance use prevention: Fidelity of implementation under real-world conditions. Health Education Research, 26, 361–371.


  • Ennett, S. T., Ringwalt, C. L., Thorne, J., Rohrbach, L. A., Vincus, A. A., Simons-Rudolph, A., et al. (2003). A comparison of current practice in school-based substance use prevention programs with meta-analysis findings. Prevention Science, 4, 1–14.


  • Fishbein, D. H., & Ridenour, T. A. (2013). Advancing transdisciplinary translation for prevention of high-risk behaviors: Introduction to the special issue. Prevention Science, 14, 201–205.


  • Flay, B. R., Snyder, F., & Petraitis, J. (2009). The theory of triadic influence. In R. J. DiClemente, M. C. Kegler, & R. A. Crosby (Eds.), Emerging theories in health promotion practice and research (2nd ed.). New York, NY: Jossey-Bass.


  • Foxcroft D. R., & Tsertsvadze A. (2011). Universal multi-component prevention programs for alcohol misuse in young people. Cochrane Database of Systematic Reviews, Issue 9.


  • Gantt, H. L. (1974). Work, wages and profits (Reprint of the 1910 edition published by The Engineering Magazine, New York). Easton, PA: Hive Publishing Company.


  • Garbarino, J. (1978). The role of schools in socialization to adulthood. Education Forum, 42, 169–182.


  • Goldstein, A., & McGinnis, E. (1997). Skillstreaming the adolescent: New strategies and perspectives for teaching prosocial skills. Champaign, IL: Research Press.


  • Gorman, D. M., & Conde, E. (2007). Conflict of interest in the evaluation and dissemination of “model” school-based drug and violence prevention programs. Evaluation and Program Planning, 30, 422–429.


  • Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549–576.


  • Hallfors, D., & Godette, D. (2002). Will the ‘principles of effectiveness’ improve prevention practice? Early findings from a diffusion study. Health Education Research, 17, 461–470.


  • Hammond, A., Sloboda, Z., Tonkin, P., Stephens, R. C., Teasdale, B., Grey, S. F., et al. (2008). Do adolescents perceive police officers as credible instructors of substance abuse prevention programs? Health Education Research, 23, 682–696.


  • Hatry, H. P., Wholey, J. S., & Newcomer, K. E. (2010). Evaluation challenges, issues, and trends. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation. San Francisco, CA: Jossey Bass.


  • Hawkins, J. D., Catalano, R. F., & Miller, J. Y. (1992). Risk and protective factors for alcohol and other drug problems in adolescence and early adulthood: Implications for substance abuse prevention. Psychological Bulletin, 112, 64–105.


  • Jackson, C., Gesses, R., Haw, S., & Frank, J. (2012). Interventions to prevent substance use and risky sexual behavior in young people: A systematic review. Addiction, 107, 733–747.


  • Johnston, L. D., O’Malley, P. M., Bachman, J. G., & Schulenberg, J. E. (2013). Monitoring the future national survey results on drug use, 1975–2012: Volume I, secondary school students. Ann Arbor, MI: Institute for Social Research, The University of Michigan.


  • Kumar, R., O’Malley, P. M., Johnston, L. D., & Laetz, V. B. (2013). Alcohol, tobacco, and other drug use prevention programs in U.S. schools: A descriptive summary. Prevention Science, 14, 581–592.


  • Kumpfer, K. L., & Alvarado, R. (2003). Family-strengthening approaches for the prevention of youth problem behaviors. American Psychologist, 58(6-7), 457–465.


  • MacKinnon, D. P., Fairchild, A. J., & Fritz, M. S. (2007). Mediation analysis. Annual Review of Psychology, 58, 593–614.


  • McLaughlin, J. A., & Jordan, G. B. (2004). Using logic models. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation. San Francisco, CA: Jossey Bass.


  • Merrill, J. C., Pinsky, L., Killeya-Jones, L. A., Sloboda, Z., & Dilascio, T. (2006). Substance abuse prevention infrastructure: A survey-based study of the organizational structure and function of the D.A.R.E. program. Substance Abuse Treatment, Prevention, and Policy, 6, 1–25.


  • Mohr, L. B. (1995). Impact analysis for program evaluation. Thousand Oaks, CA: SAGE.


  • National Institute on Drug Abuse. (2003). Preventing drug use among children and adolescents: A research-based guide. NIH Publication No. 04-4212(A). Bethesda, MD: National Institute on Drug Abuse.


  • O’Connor, T. G., & Rutter, M. (1996). Risk mechanisms in development: Some conceptual and methodological considerations. Developmental Psychology, 32, 787–795.


  • Pandiani, J. A., Banks, S. M., & Schacht, L. M. (1998). Personal privacy versus public accountability: A technological solution to an ethical dilemma. The Journal of Behavioral Health Services & Research, 25, 456–463.


  • Patton, M. Q. (1999). Enhancing the quality and credibility of qualitative analysis. Health Services Research, 34, 1189–1208.


  • Pedhazur, E. J., & Schmelkin, L. (1991). Measurement, design, and analysis: An integrated approach. New York, NY: Psychology Press, Taylor & Francis.


  • Petras, H., & Sloboda, Z. (2014). A conceptual foundation for prevention. In Z. Sloboda & H. Petras (Eds.), Advances in prevention science. Volume 1: Defining prevention science. New York, NY: Springer.


  • Piaget, J. (1973). Main trends in psychology. London, UK: George Allen & Unwin.


  • Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Thousand Oaks, CA: Sage Publications.


  • Raymond, M. R. (1986). Missing data in evaluation research. Evaluation and the Health Professions, 9, 395–420.


  • Rodi, M. S., & Paget, K. D. (2007). Where local and national evaluators meet: Unintended threats to ethical evaluation practice. Evaluation and Program Planning, 30, 416–421.


  • Schinke, S. P., Botvin, G. J., & Orlandi, M. A. (1991). Substance abuse in children and adolescents. In S. P. Schinke, G. J. Botvin, & M. A. Orlandi (Eds.), Substance abuse in children and adolescents: Evaluation and intervention. Newbury Park, CA: SAGE.


  • Schwandt, T. A. (2007). Expanding the conversation on evaluation ethics. Evaluation and Program Planning, 30, 400–403.


  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.


  • Simons, H. (2006). Ethics in evaluation. In I. Shaw, I. Graham, R. Shaw, J. C. Greene, & M. M. Mark (Eds.), The Sage handbook of evaluation. Thousand Oaks, CA: SAGE Publications Inc.


  • Sloboda, Z. (2009). School prevention. In C. Leukefeld, T. Gullotta, & M. S. Tindall (Eds.), Handbook on adolescent substance abuse prevention and treatment: Evidence-based practices. New York, NY: Springer Academic Publishing.


  • Sloboda, Z. (2015a). “Read my lips”—Empty words: The semantics of institutionalized flawing. Substance Use and Misuse, 16, 1–6.


  • Sloboda, Z. (2015b). Vulnerability and risks: Implications for understanding etiology and drug use prevention. In L. M. Scheier (Ed.), Handbook of adolescent drug use prevention: Research, intervention strategies, and practice. Washington, DC: American Psychological Association.


  • Sloboda, Z., Pyakuryal, A., Stephens, P., Teasdale, B., Forrest, D., Stephens, R. C., et al. (2008). Reports of substance abuse programming available in schools. Prevention Science, 9, 276–287.


  • Sloboda, Z., Stephens, P., Pyakuryal, A., Teasdale, B., Stephens, R. C., Hawthorne, R. D., et al. (2009a). Implementation fidelity: The experience of the adolescent substance abuse prevention study. Health Education Research, 24, 394–406.


  • Sloboda, Z., Stephens, R. C., Stephens, P. C., Grey, S. F., Teasdale, B., Hawthorne, R. D., et al. (2009b). The adolescent substance abuse prevention study: A randomized field trial of a universal substance abuse prevention program. Drug and Alcohol Dependence, 102, 1–10.


  • Stephens, P. C., Sloboda, Z., Stephens, R. C., Marquette, J. F., Hawthorne, R. D., & Williams, J. (2009). Universal school-based substance abuse prevention programs: Modeling targeted mediators and outcomes for adolescent cigarette, alcohol and marijuana use. Drug and Alcohol Dependence, 102, 19–29.


  • Stephens, R. C., Thibodeaux, L., Sloboda, Z., & Tonkin, P. (2007). Research note: An empirical study of adolescent student attrition. Journal of Drug Issues, 37, 475–488.


  • Teasdale, B., Stephens, P. C., Sloboda, Z., Grey, S. F., & Stephens, R. C. (2009). The influence of program mediators on outcomes for substance users and non-users at baseline. Drug and Alcohol Dependence, 102, 11–18.


  • Teasdale, B., Stephens, P. C., Sloboda, Z., Stephens, R. C., & Grey, S. F. (2013). The effect of Hurricane Katrina on adolescent feelings of social isolation. Social Science Quarterly, 94, 490–505.


  • The Robert Wood Johnson Foundation. RWJ website: http://www.rwjf.org/en/library/research/2009/06/the-adolescent-substance-abuse-prevention-study.html

  • Thomas, D. R. (2006). A general inductive approach for analyzing qualitative evaluation data. American Journal of Evaluation, 27, 237–246.


  • Tonkin, P., Sloboda, Z., Stephens, R. C., Teasdale, B., & Grey, S. F. (2008). Is the receptivity of substance abuse prevention programming impacted by students’ perceptions of the instructor? Health Education and Behavior, 36, 724–745.


  • Torres, R. T. (1991). Improving the quality of internal evaluation: The evaluator as consultant-mediator. Evaluation and Program Planning, 14, 189–198.


  • United Nations Office on Drugs and Crime (UNODC). (2013). International standards for drug use prevention. http://www.unodc.org/unodc/en/prevention/prevention-standards.html


  • W. K. Kellogg Foundation. (2004a). Logic model development guide. W. K. Kellogg Foundation, Battle Creek, Michigan. Accessed June 2, 2014 at: http://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide

  • W. K. Kellogg Foundation (2004b). W. K. Kellogg Foundation evaluation handbook. Battle Creek, MI. downloaded 5/21/2014 at: http://www.wkkf.org/resource-directory/resource/2010/w-k-kellogg-foundation-evaluation-handbook

  • Weiss, C. H. (1998). Evaluation: Methods for studying programs and policies (2nd ed.). New Jersey: Prentice Hall.



Author information

Correspondence to Peggy Stephens.

Appendices

Appendix: Application Example—The Adolescent Substance Abuse Prevention Study (ASAPS)

The evaluation of the Adolescent Substance Abuse Prevention Study (ASAPS) was conducted to assess the implementation and effectiveness of a two-component, universal, school-based substance abuse prevention curriculum delivered by police officers who had previously been trained in, and were currently teaching, the Drug Abuse Resistance Education (D.A.R.E.) program. The research team for the evaluation was located at The University of Akron Institute for Health and Social Policy in Ohio. Stakeholders who collaborated with the team included D.A.R.E. America leadership, trainers, officers, and students; substance abuse prevention researchers, research methodologists, and statisticians from across the United States; curriculum specialists and teachers; and the Robert Wood Johnson Foundation (RWJF).

Evaluation planning began in 1999 when RWJF funded a project to revise and evaluate the D.A.R.E. curricula then being implemented in schools. At that time, the principal investigator invited stakeholders, researchers, and educators to participate in two planning groups: (1) a curriculum workgroup and (2) a research design workgroup. The groups worked concurrently on planning curriculum revisions and the study's evaluation design. This process took nearly 2 years to complete and included pilot studies, capacity building for the administration of the study, and school and police department recruitment in six large cities across the United States. The new curriculum, Take Charge of Your Life (TCYL), was implemented in 120 middle schools in six sites across the continental U.S., and data collection for the outcome evaluation study began in the fall of 2001. The 9th grade component was delivered to the same cohort of students in the 2003/2004 academic year. Students were followed annually for data collection until they were in the 11th grade (2005/2006).

Defining Program Goals and Processes of the ASAPS

The overarching goal of the new curriculum was to delay substance use initiation or reduce current levels of use in ethnically and socially diverse U.S. middle and high school populations. Curriculum materials were developed by prevention experts using criteria for effective prevention programming derived from existing meta-analyses and reviews of the literature. The curriculum workgroup used these recommendations to develop the new middle and high school curricula in a problem-driven format based on authentic dilemmas and issues faced by teens as they are pressured or tempted to experiment with or use tobacco, alcohol, illegal drugs, or inhalants. The primary instructional strategy used student-to-student engagement through instructor-guided in-depth discussions, role-playing of skills and concepts, and small-group problem-solving.

The chosen framework enables students to actively utilize the intended ideas and skills as they develop their own understandings and capacities to be in control of situations where they are pressured to use tobacco, alcohol, and drugs. In order to attain these objectives, the curriculum focused on the following specific targets/constructs for change (immediate and intermediate outcomes) at the student level:

  • Consequences of substance use: understand the nature of and risks (personal, physical, social, legal) associated with the use of alcohol, drugs, tobacco, and inhalants.

  • Beliefs and attitudes toward substance use: examine and understand their own beliefs and attitudes related to alcohol, drug, tobacco, and inhalant use.

  • Normative beliefs regarding peer use of substances: correct misperceptions about the rates of substance use by same-age adolescents.

  • Decision-making skills: make positive quality of life decisions.

  • Communication skills: communicate clearly and interact positively in social and interpersonal situations.

  • Resistance/refusal skills: develop and use resistance skills (see note 9).

The final curriculum was composed of ten 40-minute lessons administered in the 7th grade and seven 40-minute sessions delivered to the same students (cohort) when they were in the 9th grade. These targeted constructs were understood to operate within a larger, ecological set of influences ranging from the family to the community level. The result of the research and development of the two-part curricula was a theoretical model that guided both the intervention as a whole and the research design for evaluating the program. This theoretical model is depicted in Fig. 20.2.

Fig. 20.2 Program theory for adolescent substance abuse prevention program

In this model, substance use is seen as a direct outcome of students' knowledge and beliefs about the nature and effects of various dangerous substances, their attitudes toward the use and the consequences of abuse of those substances, and the level of personal skills that may allow them to effectively resist pressure to use any illicit substance. Thus, the intervention targeted for change the student's normative beliefs and skills (immediate outcomes) and personal attitudes and values (intermediate outcomes), which in turn were expected to reduce the student's intentions to use substances (intermediate outcome) and substance use itself (ultimate outcome). As can be seen in Fig. 20.2, these constructs are conceptualized as mediating the effects of the intervention on substance use. For this reason, the mediating constructs/variables are treated as proximal/immediate or intermediate outcomes of the intervention, while substance use is viewed as the distal/ultimate outcome (see note 10). To create a more comprehensive framework for a substance use prevention program, the model also included constructs thought to directly influence the student's normative beliefs, communication skills, and personal attitudes and values, and to indirectly influence the student's substance use. These school, community, and student risk indicators are represented in Fig. 20.2 as social bonding, school, and community risk factors (National Institute on Drug Abuse 1999).
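The mediated paths in this kind of model can be illustrated with a minimal product-of-coefficients sketch: the indirect effect is the slope of the mediator on the intervention (a) times the partial slope of the outcome on the mediator controlling for the intervention (b). The toy code below uses hand-rolled least squares on centered variables and is only a conceptual illustration, not the analytic approach used in ASAPS:

```python
def mediation(x, m, y):
    """Return (indirect effect a*b, direct effect c') for a single mediator.
    x: intervention scores, m: mediator scores, y: outcome scores."""
    n = len(x)
    cx = [v - sum(x) / n for v in x]   # centered intervention
    cm = [v - sum(m) / n for v in m]   # centered mediator
    cy = [v - sum(y) / n for v in y]   # centered outcome
    sxx = sum(v * v for v in cx)
    smm = sum(v * v for v in cm)
    sxm = sum(p * q for p, q in zip(cx, cm))
    sxy = sum(p * q for p, q in zip(cx, cy))
    smy = sum(p * q for p, q in zip(cm, cy))
    a = sxm / sxx                                   # path X -> M
    det = sxx * smm - sxm * sxm                     # normal equations for Y ~ X + M
    direct = (sxy * smm - sxm * smy) / det          # c': X -> Y controlling for M
    b = (sxx * smy - sxm * sxy) / det               # path M -> Y controlling for X
    return a * b, direct
```

Under full mediation, the direct path shrinks to zero and the indirect effect equals the total effect of the intervention on substance use.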

Formative Evaluation

The formative evaluation of the new 7th grade curriculum began with separate focus groups of 7th grade students, their parents, middle school teachers, and D.A.R.E. police officers to elicit feedback on the curriculum goals, content, and materials. Students were asked to suggest revisions to the problematic situations presented and to identify any other problems they felt were relevant to same-age adolescents. Students were also queried about their opinions on having police officers teach in the classroom. Parents' attitudes about drug abuse, prevention programming, and having police officers in the classroom were also explored. The same process was followed with local police officers who had experience with the D.A.R.E. program. Feedback was solicited on the ability and motivation of officers to implement this type of lesson and on their ability to involve students in active discussion of the problems presented. The officers were also asked to evaluate how realistic each problem and its accompanying activities were and to provide insight into problematic situations they had encountered in the D.A.R.E. classroom. These problems were incorporated into the lesson plans. Finally, middle school teachers were recruited to review the lesson plans and the instructor training manual.

The first trial delivery of the middle school curriculum (pilot study I) took place in one urban middle school. All 7th grade health or social studies classes received the middle school curriculum (n = 153). All data collection instruments were tested in this feasibility study, including measures of outcome and mediating constructs, fidelity of implementation, and exposure. Two local D.A.R.E. officers participated in a two-day training on the curriculum theory, content, intended delivery procedures, and background information; each was responsible for half of the 7th grade classes. To gather as much information as possible, each class was observed by trained raters from the research institute. Officers were observed by one or more members of the curriculum workgroup during delivery of each lesson. Feedback was exchanged after each lesson, and revisions were made to the lessons based on these sessions and the feedback from the focus groups.

To test the specific objectives of the study, a number of study instruments were developed, including instructor observation sheets; short, lesson-specific evaluation surveys for instructors and students; and a student survey with measures of the targeted outcomes and mediators, risk and protective factors, and demographic information. These instruments relied heavily on existing measurement instruments, especially those measuring students' substance use as well as normative beliefs; decision, communication, and refusal skills; and personal attitudes and values related to substance use. An extensive review of the literature made it evident that, in addition to using many existing measures, new or adapted measures of many of the study constructs would be necessary. These newly developed measures were pre-tested in the trial delivery of the middle school curriculum by the institute's research staff.

A second pilot study was conducted in a convenience sample of nine middle schools (urban, suburban, and rural public schools and one private Catholic school) using a non-randomized design with no comparison group. Data on mediators and outcomes were collected from 462 students. The results of the second pilot study were promising, with significant changes in the expected direction for the immediate outcomes of normative beliefs, refusal skills, and perceptions of harm for ATOD use. Instructors implemented the program with high fidelity, and feedback was positive regarding the feasibility of implementing the program on a larger scale. The positive results of the two pilot studies indicated the curriculum and officer training were ready for the larger efficacy study.

Outcome Evaluation

The research workgroup implemented a randomized, longitudinal controlled trial to evaluate the intervention. School districts (middle and high school clusters) were randomly assigned to either a treatment condition (intervention schools that would receive the new TCYL curricula) or a control condition. However, ethical concerns prevented the study group from requiring that control-condition schools refrain from implementing any prevention programming; consequently, the control condition became one of 'prevention programming as usual.'

Sampling

High schools and all their feeder middle schools were selected as the clusters of interest, with the school district becoming the unit of sampling. A sample size of approximately 40 school districts per condition was expected to provide statistical power of 0.80, assuming an alpha level of 0.05 and a two-tailed test of hypothesis. To ensure diversity in ethnicity and geographic location, six large metropolitan areas were selected for recruitment, with the 'core' central city school district identified as the center point for a 50-mile sampling radius (with the goal of including urban, suburban, and rural school districts) in each region of the United States. These regions were centered in the following metropolitan areas: (1) New York City, NY/Newark, NJ,Footnote 11 (2) Detroit, MI, (3) St. Louis, MO, (4) New Orleans, LA, (5) Houston, TX, and (6) Los Angeles, CA. A sampling frame was developed for the area surrounding each 'core' district, with the inclusion criteria of public school districts with middle schools that housed 7th and 8th graders and four-year high schools. To ensure diversity across schools, the six sampling frames were stratified by risk status, which was calculated as a function of the proportion of students eligible for free lunch (an indicator of SES/poverty) and minority enrollment in the school.
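The risk-status stratification described above can be sketched in code. The equal weighting of the two indicators and the stratum cutoffs below are hypothetical, since the chapter says only that risk was 'a function of' free-lunch eligibility and minority enrollment:

```python
def risk_score(prop_free_lunch, prop_minority, w_lunch=0.5, w_minority=0.5):
    """Combine two school-level proportions into a single risk score.

    The equal weights are an illustrative assumption, not the study's
    actual (unpublished) function.
    """
    return w_lunch * prop_free_lunch + w_minority * prop_minority


def risk_stratum(score, cutoffs=(0.33, 0.66)):
    """Assign a school to a low/medium/high risk stratum (hypothetical cutoffs)."""
    if score < cutoffs[0]:
        return "low"
    if score < cutoffs[1]:
        return "medium"
    return "high"
```

Schools in each sampling frame would then be grouped by stratum before districts were drawn, so that each condition contained schools across the risk range.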

Organizational Structure of the Study

Given that this was a large evaluation study that spanned six metropolitan regions of the United States, a well-planned administrative structure was a critical component for a successful implementation of the study plan. Two principal investigators oversaw all the study activities, a senior researcher oversaw the sampling and assignment of districts to intervention condition, another senior researcher oversaw curriculum implementation and monitoring, and a senior researcher oversaw data analysis. A project manager coordinated the recruitment of school districts and police departments and training of police officer/instructors, and a data manager oversaw data collection and processing. In addition, regional coordinators were hired to monitor on-site recruitment, communication, data collection, and retention of schools and subjects. They also supervised full-time site coordinators for each of the six regions who oversaw part- and/or full-time data collection personnel and acted as liaisons between the schools, police, and community, while supervising curricula delivery and data collection procedures.

Recruitment and Retention of Schools

The local recruiting process was directed by the site coordinator in each region. An explanatory brochure and a cover letter were sent to the selected school district superintendents requesting their participation in the study. They were informed that their agreement to participate would not guarantee that their schools would receive the program, and that if they were included in the control group, they would not be able to deliver this program for two years, to allow the 7th grade cohort to transition to high school. The mailings were followed with a telephone call from the Project Manager at the Institute; this correspondence identified for the coordinator the gatekeepers and appropriate person(s) with whom to continue recruitment of the district schools. The actual recruitment activities for each school district varied; however, the process generally included site visits by senior research staff members to explain the study to district personnel, principals, teachers, and parents/PTA associations, along with visits to the local police/sheriff's department to recruit local officers to train as instructors for the TCYL curricula.

School districts were required to agree to be assigned to either the control or intervention (treatment) condition in order to participate in the study. Once district personnel agreed to participate, similar materials and a letter of support from the district superintendent were sent, and phone calls were made, to the principals of the middle schools and high schools to elicit their cooperation. In recognition of the costs associated with participation in the study, each school was offered $500 per year of participation, along with resource materials related to substance abuse prevention, as incentives to participate. They were also promised copies of the final study report.

After all agreement letters were signed, each district was randomly assigned to either the intervention/treatment or control condition. The core districts were randomly assigned first (to achieve a balance of control and intervention inner-city districts), and the districts surrounding each 'core' district were assigned randomly without any attempt to achieve a balance of control and intervention districts (although the sample was fairly balanced within each region).
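The two-stage assignment can be sketched as follows. This is an illustrative reconstruction, not the study's actual procedure; the seed parameter is included only for reproducibility:

```python
import random


def assign_districts(core_districts, surrounding_districts, seed=None):
    """Assign districts to conditions as described in the study design:
    core districts balanced between arms, surrounding districts assigned
    by simple (unbalanced) randomization."""
    rng = random.Random(seed)
    assignment = {}
    shuffled = list(core_districts)
    rng.shuffle(shuffled)
    # Alternating conditions after a shuffle yields a random but balanced split.
    for i, district in enumerate(shuffled):
        assignment[district] = "intervention" if i % 2 == 0 else "control"
    for district in surrounding_districts:
        assignment[district] = rng.choice(["intervention", "control"])
    return assignment
```

With an even number of core districts this guarantees the inner-city balance the study sought, while leaving the surrounding districts to chance.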

Retention of schools was one of the primary roles of the site coordinators. In addition to the incentives, each regional site coordinator convened a ‘Community Advisory Group,’ consisting of local community leaders representing the schools, business community, local political groups, social service agencies, community coalitions, and/or law enforcement. The Advisory Group’s role included supporting the project within the community, assisting in the interpretation of study findings, and serving as a support network to establish the program in the middle and high schools after the completion of the research project in both the experimental and control communities.

Recruitment, Retention, and Tracking of Students

Parents and students in both the intervention and control schools were required to sign a consent form to participate in the survey administration (all intervention students received the curriculum, regardless of survey consent status). Parents and students were also asked to provide contact information for persons who would know the whereabouts of the student for follow-up purposes. Parents and students were reminded about the need for signed consent forms; to increase the return of signed forms, incentives such as classroom-wide activities (e.g., pizza parties) were offered to students. The consent forms were available in both English and Spanish.

One week before the first intervention class (7th and 9th grades), a baseline survey was administered to students who had provided appropriate consent. To protect confidentiality, each student was assigned a unique identifier. This key file was then stored separately from any student data for use in matching coded student surveys to the data file at each data collection point. Once the students completed the survey, they placed the coded survey forms in a blank envelope, sealed the envelope, and deposited it through a slit in a sealed box located at the front of the classroom; this procedure was used at each data collection point.

The site coordinator and team members were responsible for student tracking. Students who were not present at a data collection session but were still enrolled in school were approached individually by the site coordinator and asked to complete the forms in an empty classroom or other private place to ensure confidentiality. Students who had left the district were lost to follow-up. The decision not to follow up students who left the school district was made after an attrition pilot study conducted by the research group found that the cost of locating these students was prohibitive (Stephens et al. 2007).

Data Collection and Measures

The data collected can be summarized by five general construct categories: (1) substance use outcomes, (2) mediating outcomes, (3) risk factors, (4) moderators of the intervention, and (5) indicators of implementation fidelity. The outcomes and individual risk factors were measured as part of the main student survey; other variables required data collection from additional and sometimes multiple sources, such as the officer/instructors, curriculum trainers, site coordinators and ethnographer’s interviews with students, and key community and school informants. As there were many sources of data collected for this evaluation, this example will focus only on data collected from student surveys (outcome evaluation) and implementation fidelity observations (process evaluation).

Baseline student surveys were administered immediately prior to program delivery in 7th grade, and a post-test was administered approximately 90 days after completion of the ten-session curriculum. A third post-test was administered in the 8th grade, and pre- and post-tests were administered before and after the implementation of the 9th grade curriculum (seven sessions). Post-tests in the 10th and 11th grades were administered approximately one and two years after the 9th grade post-test. Neither the officer/instructor nor the classroom teacher was present during data collection, to assure students that their responses on the survey would be confidential. To maintain comparability with national data systems and other prevention program evaluations, survey measures were adapted wherever possible from ongoing studies, such as the Monitoring the Future study (Johnston et al. 2013), the Center for Substance Abuse Prevention's Core Measures (1999), and prior studies of D.A.R.E. Any changes to the original measures were tested for validity and reliability in the feasibility study.

Since the curriculum specifically aimed at reducing or preventing the use of alcohol, tobacco, marijuana, and inhalants, this construct was measured through students' self-reports of their use of tobacco, alcohol, marijuana, and inhalants on a confidential, paper-and-pencil student survey at seven points in time. The age of first use, pattern of use (lifetime, last 12 months, and last 30 days), and amount of use where appropriate (binge drinking) were measured using questions taken from the Monitoring the Future survey. To measure age at first use of substances, students were asked, for example, "How old were you the first time you had a full drink of an alcoholic beverage?" with ordinal response options ranging from "never" to "15 or 16." Substance use was operationalized with questions such as, "How many TIMES (if any) have you had alcoholic beverages to drink (more than just a few sips)…" "during the last 12 months" or "during the last 30 days." Seven response options ranged from "never" (coded as zero) to "40 or more times" (coded as 6).
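Coding the frequency-of-use items might look like the sketch below. Only the endpoint labels and codes ("never" = 0, "40 or more times" = 6) are given in the text, so the intermediate categories are assumed from the Monitoring the Future format:

```python
# Ordinal codes for the seven frequency-of-use response options.
# Only the endpoints are specified in the chapter; the middle
# categories are assumed from the Monitoring the Future convention.
FREQ_CODES = {
    "never": 0,
    "1-2 times": 1,
    "3-5 times": 2,
    "6-9 times": 3,
    "10-19 times": 4,
    "20-39 times": 5,
    "40 or more times": 6,
}


def recode_use(response):
    """Map a survey response label to its ordinal code (None if unrecognized)."""
    return FREQ_CODES.get(response.strip().lower())
```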

With regard to mediating variables, intentions to use substances were measured with drug-specific questions asking how likely the student was to try alcohol, tobacco, or marijuana in the next 12 months. Responses ranged from 1 (definitely will) to 5 (definitely will not). Attitudes toward the use of tobacco, alcohol, and marijuana were measured by two items for each drug. Students selected a response to complete the stem, "I think it is okay for students my age to…" Response items included "smoke cigarettes once in a while," "drink alcohol almost every weekend," and "smoke marijuana once in a while." Responses ranged from 1 (agree) to 5 (disagree). Normative beliefs were measured by three questions that asked students how many 10th graders they believed had used tobacco, alcohol, or marijuana in the last 30 days. Response categories ranged from 1 (more than 75%) to 5 (10% or less). Perceptions of harmful consequences resulting from substance use were measured by three items that asked students how much they thought the use of a particular substance (alcohol, tobacco, or marijuana) affects how the brain works. The response categories ranged from 1 (none) to 5 (a lot).Footnote 12

Refusal skills were assessed by responses to three hypothetical scenarios involving the opportunity to use tobacco, alcohol, or marijuana. Students were asked to read a scenario in which a substance was offered by a peer. Each student selected the best refusal response to that offer from a list of possible responses, given that the person being offered the substance does not want to use it. Responses were weighted according to the level of assertiveness demonstrated. For example, a response of "no, maybe later" was assigned a lower score than a response of "no thanks, I don't want to smoke." Scores ranged from 0 (least appropriate response) to 2 (best response chosen). The student survey also included risk and protective factors and demographic characteristics, including self-reported age, sex, race/ethnicity, and family composition.
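The assertiveness weighting of refusal responses can be illustrated as below. Apart from the two responses quoted above, the response options and their weights are hypothetical:

```python
# Illustrative weights: 0 = least appropriate, 2 = best refusal response.
# Only the relative ordering of the two quoted responses comes from the text;
# the first option and all numeric values are assumptions.
REFUSAL_WEIGHTS = {
    "okay, just this once": 0,
    "no, maybe later": 1,
    "no thanks, i don't want to smoke": 2,
}


def score_refusals(responses):
    """Sum the assertiveness weights across the three scenario responses."""
    return sum(REFUSAL_WEIGHTS.get(r.strip().lower(), 0) for r in responses)
```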

Implementation Fidelity

Fidelity of implementation was measured by the amount of intervention material actually covered in each session, the number of intervention sessions completed, the amount of role-playing, demonstration, and discussion that occurred during a sample of two sessions, the number of times the instructor reinvented or altered the material or delivery, and overall quality of delivery. These constructs fit with the recommendations of Baranowski and Stables (2000) that implementation and reach are critical components of process evaluation. These data came from four sources: (1) independent classroom observation checklists completed by trained site staff, (2) post-instruction self-report surveys by officer/instructors, (3) post-instruction self-report surveys by students, and (4) attendance records for each student in each of the intervention sessions.

Data Processing and Analysis

Data collection was overseen by the site coordinators and their team members. Student surveys were scored/scanned electronically and sent to a master database maintained by the Institute to assure the highest possible quality. Data were checked for completeness, for responses outside the valid range of each variable, and for internal consistency. Particular attention was paid to assuring that the longitudinal data on the cohort sample could be linked across all waves of data. As soon as data were received, linkages were established and site coordinators were contacted to remedy any discrepancies. Data from observation forms and implementation fidelity surveys were also scanned and cleaned at the Institute and stored electronically.

Data Analysis

The unit of sampling was the school district, but the curricula were delivered in classrooms within schools, and the surveys were administered to individual students. Hence, students were nested within classrooms, schools, and districts. This complex sampling design presented the potential for biased estimates of the standard errors of any hypothesis tests. Therefore, all analyses were conducted using statistical procedures that adjusted the standard errors.Footnote 13 Logit and weighted least squares models were utilized for binomial outcomes, while maximum likelihood estimation was used for path models and structural equation models with ordinal- and interval-level outcomes.
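The rationale for adjusting standard errors can be illustrated with the standard design-effect formula for cluster samples, DEFF = 1 + (m - 1) * ICC; the cluster size and intraclass correlation below are hypothetical, not values from the study:

```python
def design_effect(cluster_size, icc):
    """Variance inflation factor for clusters of size m with intraclass
    correlation icc: DEFF = 1 + (m - 1) * icc."""
    return 1 + (cluster_size - 1) * icc


def effective_n(total_n, cluster_size, icc):
    """Effective sample size after accounting for clustering."""
    return total_n / design_effect(cluster_size, icc)


# With 50 students per cluster and a modest ICC of 0.02 (hypothetical
# values), the variance of an unadjusted estimate is understated by
# nearly a factor of two.
deff = design_effect(50, 0.02)  # 1.98
```

Ignoring the nesting would thus make hypothesis tests anti-conservative, which is why cluster-adjusted estimators were used throughout.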

Evaluation Study Findings

Process Evaluation (Implementation Fidelity)

The process evaluation reported here focuses on the implementation fidelity of the officer/instructors delivering the new curriculum in the 7th and 9th grades and on students' exposure to the curriculum (as measured by lessons attended). Classroom observations conducted in the 7th and 9th grades showed high implementation fidelity. Coverage of lesson content in the 7th grade ranged from 34 to 100%, with medians of 72 and 81% for the two lessons observed. For the 9th grade, the proportion of content covered ranged from 12 to 100%, with medians of 70 and 78% for the two observed lessons. Instructors' use of the appropriate instructional activity was somewhat lower, with medians of 63 and 44% for the two observed lessons in the 7th grade and 50 and 60% in the 9th grade (Sloboda et al. 2009a, b). While these numbers may seem low, they were relatively high compared to other studies reported in the literature (Ennett et al. 2011; Hallfors and Godette 2002; Ennett et al. 2003).

Attendance records indicated adequate exposure to the curricula, with 69% of 7th grade intervention students attending all ten lessons and another 27% attending eight or more of the ten lessons. In the 9th grade, 44% of the intervention students attended all seven lessons and another 17% attended at least five (71%) of the seven lessons.

Outcome Evaluation

The outcome evaluation was completed when the cohort of students reached the 11th grade. The data-analytic procedures and findings are reported in detail in Sloboda et al. (2009a, b). Forty-two high schools and their 59 feeder middle schools were assigned to the intervention condition (the TCYL curriculum), and 41 high schools with their 63 feeder middle schools were assigned to the control condition. A total of 19,529 students completed consent forms prior to the administration of the 7th grade pre-test (intervention group n = 11,314; control group n = 8215). Baseline surveys were completed by 10,028 intervention students and 7302 control students. During the course of the study, three high schools were lost to follow-up: two were destroyed in Hurricane Katrina and one opted out of the study. Baseline data were used to check the randomization process; demographic characteristics and substance use outcomes showed no significant differences between the intervention and control schools, confirming that the randomization procedure was successful. Attrition analyses at the 11th grade post-test showed that, overall, older students, female students, non-White students, and students who reported the use of alcohol, tobacco, or marijuana were more likely to drop out of the study. The only evidence of differential attrition was that students who identified as "other-race" were more likely to drop out of the control group than the intervention group (Sloboda et al. 2009a, b; Teasdale et al. 2009).

The primary goal of the intervention was to reduce or delay the onset of substance use among the cohort of students who received the TCYL curricula. The program theory proposed that the curricula would change the targeted normative beliefs, perceptions of the consequences of substance use, attitudes toward substance use, and refusal, decision-making, and communication skills (immediate outcomes/mediators). Changes in these constructs would strengthen students' intentions to avoid substance use (intermediate outcome), and changes in intentions would result in lower use of tobacco, alcohol, and marijuana (ultimate outcome). The findings did not support the hypothesis that the curriculum would have an impact on these outcomes. In fact, although there had been intervention effects on normative beliefs and refusal skills when students were in middle school, these effects were no longer significant in the 11th grade. There was no intervention effect on the past 12-month outcomes of alcohol use, getting drunk on alcohol, or marijuana use. Surprisingly, alcohol use and getting drunk on alcohol in the past 30 days showed a significant effect in the direction opposite to that expected, as did 30-day use of tobacco and marijuana and self-reported binge drinking in the past two weeks. That is, the intervention group reported, on average, using tobacco, alcohol, and marijuana at higher rates than the control group in the 30 days prior to the 11th grade survey administration. The risk ratios for these effects ranged from 1.09 for alcohol use to 1.21 for cigarette smoking. To further understand these puzzling findings, subgroup analyses were conducted to test the moderating effects of prior substance use (as reported at baseline), sex, and race/ethnicity.
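A risk ratio of this kind is simply the outcome rate in the intervention group divided by the rate in the control group. The counts below are invented for illustration and are not the study's data:

```python
def risk_ratio(events_treat, n_treat, events_control, n_control):
    """Risk ratio: outcome rate in the intervention arm divided by the
    rate in the control arm. A value above 1 means the intervention
    group fared worse on that outcome."""
    return (events_treat / n_treat) / (events_control / n_control)


# Hypothetical counts reproducing the reported 1.21 ratio for smoking:
# 242 of 1,000 intervention students vs. 200 of 1,000 controls.
rr_smoking = risk_ratio(242, 1000, 200, 1000)  # 1.21
```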

The analyses by baseline substance use status, sex, and race/ethnicity did provide the evaluators with some insight into for whom the program was and was not successful in reducing substance use. Males in the intervention group appeared to be driving the higher rates of alcohol use, getting drunk, and bingeing on alcohol, while females in the intervention group were more likely than their control group counterparts to binge drink and smoke cigarettes. Non-White students in the intervention group had higher rates of cigarette use than non-White students in the control group, and White students in the intervention group had significantly worse alcohol outcomes than their White counterparts in the control group. The most surprising findings emerged when nonusers at baseline were compared to substance users at baseline. Students in the intervention group who reported no use of alcohol at baseline were more likely than nonusers of alcohol in the control group to report binge drinking in the past two weeks and alcohol use and getting drunk on alcohol in the past 30 days. Similarly, nonsmokers in the intervention group were more likely than nonsmokers in the control group to report smoking cigarettes at the 11th grade post-test (Sloboda et al. 2009a, b).

A single finding regarding marijuana use was in the expected direction for the intervention group: students in the intervention group who reported using marijuana at baseline had significantly lower rates of use in the 11th grade than did marijuana users in the control group. In summary, the intervention appeared to work only among early marijuana users, reducing their marijuana use. The intervention also appeared to have the effect of increasing smoking among females and non-White students, increasing drinking among Whites and males, increasing problem drinking among females, and increasing smoking, alcohol use, and problem drinking among students who were nonusers in the 7th grade (Sloboda et al. 2009a, b).

What Went Wrong?

Two analytical questions were explored in an effort to provide insight into the study results. First, the basic program model of immediate, intermediate, and ultimate program targets was assessed to determine if the curriculum was targeting constructs that, if changed, would change the ultimate outcome of substance use. Path modeling was used to examine the relationships among normative beliefs, attitudes toward use, perceptions of consequences of substance use and refusal, communication, and decision-making skills on the intentions to use and actual use of cigarettes, alcohol, and marijuana. The findings proved interesting in that the effects (direct and indirect) of these mediators on each of the outcomes differed slightly; for example, refusal skills worked to reduce intentions to use cigarettes and marijuana by helping students to utilize decision-making skills to form intentions, but impacted alcohol use directly and possibly only for students who already had formed negative attitudes toward alcohol use. However, with the exception of communication skills (which appeared to increase intentions to use alcohol, and had no effect on cigarette or marijuana use), all program targets had a significant indirect or direct effect on each of the substances, with normative beliefs having the largest total (indirect + direct) effect on each substance. The effect of each of the program targets on substance use was small, and therefore the program would have to produce large changes to actually have an effect on the substance use outcomes (Stephens et al. 2009).

In fact, as noted above, while the program showed early effects (in middle school) on normative beliefs, perceptions of consequences, and refusal skills, by the 11th grade these effects had disappeared and none of the targeted mediators differed between the control and intervention groups. The second set of analyses used path modeling to examine the relationships among the program model constructs for baseline users and nonusers; these analyses were done to explore the single positive effect of the program on baseline marijuana users. The results of these models showed several significant effects of the treatment variable on the targeted mediators that had not been seen in the intervention group overall. For nonusers, the intervention had a significant effect on only two marijuana-specific mediators: nonusers in the intervention group scored significantly higher on 9th grade marijuana refusal skills and marijuana-specific normative beliefs than nonusers in the control group. For baseline users, intervention effects were shown on mediators for each of the three substances. Baseline cigarette users in the intervention group were significantly higher than baseline users in the control group on perceptions of harm for cigarette use. Baseline alcohol users in the intervention group were significantly higher than baseline users in the control group on normative beliefs about alcohol use and perceptions of harm for using alcohol. Finally, baseline marijuana users in the intervention group were higher on their intentions not to use marijuana, marijuana refusal skills, and normative beliefs about marijuana use.

The TCYL intervention appeared to have no effect on the cigarette-specific mediators in the nonuser group, but the program did appear to have a significant direct effect on cigarette use, in the direction of higher use, for TCYL baseline nonusers. There was, however, a significant program effect for the baseline user group on cigarette-specific perceptions of consequences. The findings were similar for alcohol use, with the addition of a significant program effect on normative beliefs surrounding alcohol use for students who were baseline users of alcohol. The results for the model of marijuana use were similar for both baseline users and nonusers. The main effect of the program on marijuana use and on intentions not to use marijuana for baseline users became nonsignificant when the mediators were included in the same model, indicating full mediation of the program through marijuana-specific refusal skills and normative beliefs, both of which showed positive program effects for baseline nonusers and users. Neither baseline users nor baseline nonusers showed any program effects on the global measures of communication or decision-making skills (Teasdale et al. 2009).

These findings partially explained why the intervention had beneficial impacts on marijuana use for baseline users: the TCYL intervention reduced beliefs about the normative nature of marijuana use and increased refusal skills, compared to control students. In contrast, no explanation was found for why the program had negative impacts for nonusers. It is interesting to note that students in the TCYL program did not report increased intentions to use alcohol, tobacco, and marijuana relative to control students. Based on the program theory, any impacts of the program should have worked through the targeted mediators (normative beliefs, consequences, and skills) and behavioral intentions. This was not the case, leaving open the question of what the intervention did to create the negative outcomes for baseline nonusers. If it was not the proposed theoretical mediators that influenced these outcomes, what components of the intervention impacted substance use? The evaluation group and other researchers continue to explore these findings to determine what went wrong and whether the intervention can be improved to change the targeted constructs.

Lessons Learned from the ASAPS

There are many lessons to be learned from the ASAPS, but this chapter focuses on those regarding the evaluation process. First, the ASAPS as an evaluation undertaking was a success in that the evaluation team collaborated with a multitude of stakeholders to come to agreement on program goals and processes, as well as how to evaluate those processes and outcomes. The shared responsibility and input of these stakeholders provided the framework for a comprehensive evaluation, the results of which continue to be utilized by substance abuse prevention specialists, educators, and public health professionals.

Second, this study illustrates the importance of incorporating program theory into the evaluation process and of measuring and analyzing the constructs and processes that compose that theory. While the evaluators expected the program goals of reduced substance use to be met, they were, with only one exception, not achieved. To understand this contradictory finding, the evaluators had measures of the program theory and implementation procedures to examine in determining why the intervention did not work as anticipated. They found that while the program did impact some of the proposed mediators of substance use, the changes were not consistent or large enough to have an effect on substance use across subpopulations of students. The program was implemented with fidelity, so implementation did not appear to be the weakness. Perhaps the content of the curricula was not powerful enough to change the targeted constructs. The lesson learned is that the program should be revised to strengthen its impact on these constructs before being implemented in middle and high schools.

Finally, this study also illustrates the importance of utilizing program evaluation findings. As a result of these analyses, the evaluators came to two important conclusions: (1) the TCYL curriculum was not a universal curriculum and would be more appropriate for high-risk students, and (2) it may be better to focus on targeted subpopulations of students (users or nonusers) rather than delivering "universal" programs to a diverse population of students. These conclusions were taken seriously by the decision-maker/stakeholders (D.A.R.E. America), who decided not to implement the TCYL program until further revisions and testing had been done to ensure the intervention achieved the primary goal of reducing substance use in adolescents.

Dissemination of Findings

The findings of any program evaluation should be broadly disseminated to facilitate decision-making by other program implementers, stakeholders, and researchers. The ASAPS study dissemination process included annual reports and presentations to RWJF and other stakeholders. Presentations were made at professional conferences and a large number of peer-reviewed articles on the study have been published (in addition to those articles cited in this chapter, please see Brown et al. 2008; DesJarlais et al. 2006; Hammond et al. 2008; Merrill et al. 2006; Sloboda et al. 2008; Teasdale et al. 2013; Tonkin et al. 2008).

All student survey data were also made available to the public through the Inter-university Consortium for Political and Social Research (ICPSR) and are available to researchers and students at: https://www.icpsr.umich.edu/icpsrweb/landing.jsp. By providing other evaluators, program implementers, and researchers access to the data and findings for this evaluation, the evaluators anticipate improvements not only in substance abuse prevention programming but in evaluation research as well.

Copyright information

© 2017 Springer International Publishing AG

Cite this chapter

Stephens, P., Sloboda, Z., Kenne, D. (2017). Evaluation of Substance Abuse Prevention and Treatment Programs. In: VanGeest, J., Johnson, T., Alemagno, S. (eds) Research Methods in the Study of Substance Abuse. Springer, Cham. https://doi.org/10.1007/978-3-319-55980-3_20


  • Print ISBN: 978-3-319-55978-0

  • Online ISBN: 978-3-319-55980-3