
The Relative Importance of Provider, Program, School, and Community Predictors of the Implementation Quality of School-Based Prevention Programs

Published in: Prevention Science

Abstract

Previous research has demonstrated the influence of a variety of factors on the implementation of school-based prevention programs, specifically characteristics of program providers, program structure, school climate, and school and community structure. The current study extends this research by examining the relationships between all of these factors and implementation quality in a series of multilevel models. Using data from a nationally representative sample of 3,730 program providers surveyed in 544 schools, it was found that program structure characteristics were more important in predicting high-quality implementation than were characteristics of the program providers, school climate, and school and community structure. Implications of these findings are discussed.


Notes

  1. The original researchers used materials from government agencies, technical assistance providers, professional organizations, program marketers, and the scientific literature to develop a list of 20 categories of activities and strategies to prevent and/or reduce delinquency (Gottfredson and Gottfredson 2002). These categories were intended to describe every important aspect of any prevention program or activity. Principals were then asked to name up to five specific programs or activities for each category; thus, the classification of each program was decided by the principal. One program or activity could be listed under multiple categories; in the original study, such multi-component programs made up 17% of the activities identified by principals (Gottfredson et al. 2000). The original researchers then randomly sampled one program or activity in each of 14 categories per school and sent the Activity Coordinator survey, which asks for more detailed information about the program, to the individual identified as responsible for the activity. If the same individual was named by the principal multiple times and was selected multiple times in the random sampling process (as could occur with multi-component programs), the principal was asked to identify a different person as responsible for one of the activities. If this was not possible, the original researchers re-sampled the activities (Gottfredson et al. 2000). In the current sample, multi-component programs were found in 63 schools (12%). To ensure that the inclusion of these programs did not alter the results, the models were rerun without these schools; results were very similar to those reported here, displaying significant relationships in the same direction and of the same strength.

  2. Generally, all teachers in participating schools were sampled, and a sufficient number of students were sampled to produce an estimated 50 respondents per school. When a student roster containing student gender was available, students were systematically sampled within gender. Otherwise, students were stratified by grade level and sampled.
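The stratified, systematic student sampling described above can be illustrated with a minimal sketch. This is not the original study's sampling code; the roster fields (`id`, `gender`) and the proportional allocation across strata are illustrative assumptions.

```python
import random

def systematic_sample(roster, k):
    """Systematically sample k members: sort order is assumed fixed;
    pick a random start, then take every (n/k)-th member."""
    n = len(roster)
    step = n / k
    start = random.uniform(0, step)
    return [roster[int(start + i * step)] for i in range(k)]

def sample_students(students, per_school=50):
    """Stratify by gender (hypothetical roster field), then sample each
    stratum systematically, allocating slots proportionally to produce
    roughly `per_school` respondents."""
    by_gender = {}
    for s in students:
        by_gender.setdefault(s["gender"], []).append(s)
    sample = []
    for group in by_gender.values():
        k = round(per_school * len(group) / len(students))
        sample.extend(systematic_sample(sorted(group, key=lambda s: s["id"]), k))
    return sample
```

With a 500-student roster split evenly by gender, this yields 25 sampled students per stratum.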

  3. The final sample of 544 schools is 42.27% of the original 1,287 schools and 64.15% of the 848 schools that responded in Phase One. Student enrollment in these 544 schools ranged from 97 to 2,912, with a mean of 790.31 and a standard deviation of 478.40.

  4. Measurement scales are based on scales developed and copyrighted by Gary Gottfredson (see Gottfredson et al. 2000).

  5. Regression imputation is a standard process for dealing with missing data (Raghunathan et al. 2001). Given the significant correlations among the various indicators of implementation quality (ranging from 0.110 to 0.518, p < 0.01), using three or four indicators to predict a program’s score on the remaining one or two falls within the acceptable guidelines of this practice.
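The basic mechanics of regression imputation can be sketched as follows. This is a minimal illustration, not the authors' actual procedure (which followed the sequential regression approach of Raghunathan et al. 2001): fit ordinary least squares on complete cases and use the fitted model to predict the missing indicator.

```python
import numpy as np

def regression_impute(X, target_col):
    """Fill missing (NaN) entries in one column of X by regressing that
    column on the remaining columns, using complete cases to fit the model."""
    X = X.astype(float).copy()
    missing = np.isnan(X[:, target_col])
    other = [j for j in range(X.shape[1]) if j != target_col]
    complete = ~np.isnan(X).any(axis=1)
    # Design matrix with intercept, complete cases only
    A = np.column_stack([np.ones(complete.sum()), X[np.ix_(complete, other)]])
    y = X[complete, target_col]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    # Predict for rows missing only the target column
    rows = missing & ~np.isnan(X[:, other]).any(axis=1)
    A_miss = np.column_stack([np.ones(rows.sum()), X[np.ix_(rows, other)]])
    X[rows, target_col] = A_miss @ beta
    return X
```

For example, with indicator columns related by y = 1 + 2x on the complete cases, a row missing y with x = 4 is imputed as 9.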

  6. As can be seen in Table 2, Conscientiousness shows limited variability and a positive skew, which could affect this study’s findings. Self-report questionnaires that ask respondents about their own performance can threaten the reliability and validity of outcome measures through social desirability bias. Although such threats have generally been found to be limited, social desirability could still affect outcome measures by producing spurious results, suppressing real results, or moderating the relationships between the independent and dependent variables (Ganster et al. 1983).

  7. Results of exploratory factor analyses from which these factors were created can be seen in Payne et al. (2006).

  8. The following Census variables are markers for the Community Poverty factor: welfare (average household public assistance income), female-headed households (ratio of single females with children under 18 to married couples with children under 18), median income (proportion of households with income below $27,499), poverty (ratio of persons below the 1.24 poverty level to persons above), divorce rate (ratio of persons over 15 years who are married to those who are separated, divorced, or have a spouse absent), and unemployment (proportion of unemployed males/females in the labor force) (Simonsen 1998).

  9. The following variables are markers for the Urbanicity factor: population size (total population), urban level (city level type), and urbanicity (the proportion of people living within an urban area) (Simonsen 1998).

  10. Regression imputation is a standard process for dealing with missing data (Raghunathan et al. 2001). Ten different census variables were used for imputation. For each imputed variable, those census variables with the largest correlations with the variable to be imputed were used. Between 1 and 128 schools required imputation for exogenous variables taken from sources other than the teacher surveys. The two exogenous variables that are taken from the teacher survey (Percentage Teachers African–American and Number of Different Students Taught) were missing data for 221 and 220 schools, respectively. Because imputation was required for a large number of schools, results were examined for possible changes when these two variables were not included in the model; no significant changes were seen.

  11. The generic level-one model is Yij = β0j + β1jX1ij + … + βkjXkij + rij, where Yij is the outcome for the ith individual in the jth school, β0j is the intercept (the mean level of the outcome in school j), βkj is the regression coefficient for the effect on the outcome of the level-one predictor Xkij, and rij is the level-one error term. This is also known as the within-school equation. The generic level-two model is

      β0j = γ00 + γ01W1j + γ02W2j + … + γ0qWqj + u0j
      β1j = γ10 + γ11W1j + γ12W2j + … + γ1qWqj + u1j
      ⋮
      βkj = γk0 + γk1W1j + γk2W2j + … + γkqWqj + ukj

      where β0j is the school-level mean of the outcome from the level-one equation, γ00 is the level-two intercept (the mean level of the level-one outcome for the entire sample), γ0q is the regression coefficient for the effect on the school-level mean of the level-two predictor Wqj, and u0j is the level-two error term. As in the level-one model, βkj is the coefficient for a level-one predictor of Yij, the individual-level outcome; in level two, however, βkj becomes an outcome itself, predicted by the level-two predictors Wqj. This is also known as the between-school equation.
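The within- and between-school equations can be illustrated with a small simulation, a hedged sketch rather than the study's actual HLM analysis (which used HLM 6; Raudenbush et al. 2004). With a random intercept only, substituting level two into level one gives the combined model Yij = γ00 + γ01Wj + β1Xij + u0j + rij; the chosen coefficient values below are arbitrary, and pooled OLS is used only to show that the fixed effects are recoverable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_programs = 200, 8

# Level-two predictor W_j and random-intercept error u_0j, one per school
W = rng.normal(size=n_schools)
u0 = rng.normal(scale=0.5, size=n_schools)

# Fixed effects: gamma00 (intercept), gamma01 (effect of W), beta1 (effect of X)
gamma00, gamma01, beta1 = 2.0, 0.8, 0.4

# Level-one predictor X_ij and level-one error r_ij, one per program
X = rng.normal(size=(n_schools, n_programs))
r = rng.normal(size=(n_schools, n_programs))

# Combined model: Y_ij = gamma00 + gamma01*W_j + beta1*X_ij + u_0j + r_ij
Y = gamma00 + gamma01 * W[:, None] + beta1 * X + u0[:, None] + r

# Pooled OLS recovers the fixed-effect point estimates (correct standard
# errors would require modeling the multilevel variance structure)
A = np.column_stack([np.ones(Y.size), np.repeat(W, n_programs), X.ravel()])
est, *_ = np.linalg.lstsq(A, Y.ravel(), rcond=None)
```

With 200 simulated schools of 8 programs each, `est` lands close to (2.0, 0.8, 0.4).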

  12. Interval-level control variables were centered, which adjusts the outcome (at either level) so that it represents the predicted value for a program or school at the mean of each centered variable. Binary variables were left uncentered; therefore, in the level-one equation the outcome is the predicted value for a program whose value is zero on the binary variable, while in the level-two equation it is the predicted value for a school whose value is zero on the binary variable.
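The effect of centering on the intercept's interpretation can be sketched numerically (illustrative data, not the study's): centering an interval predictor leaves the slopes unchanged and shifts the intercept to the predicted outcome at that predictor's mean, with the uncentered binary variable still held at zero.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=200)   # interval-level predictor
d = rng.integers(0, 2, size=200)               # binary predictor, left uncentered
y = 1.0 + 0.5 * x + 0.3 * d + rng.normal(size=200)

def ols(design, y):
    """OLS with an intercept prepended; returns [intercept, slopes...]."""
    b, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(len(y)), design]), y, rcond=None)
    return b

b_raw = ols(np.column_stack([x, d]), y)                 # x uncentered
b_cen = ols(np.column_stack([x - x.mean(), d]), y)      # x mean-centered
# With x centered, the intercept b_cen[0] is the predicted outcome for a
# case at the mean of x with d = 0; slopes are identical in both fits.
```

Algebraically, centering is a reparameterization: the centered intercept equals the raw intercept plus the x slope times the mean of x, exactly.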

  13. It is unlikely, however, that the basic results of our study would change had more of these schools been included. Exploratory analyses of potential biases introduced by the limited response rates examined participating schools located in communities similar to those of the majority of non-participating schools and found results similar to those reported here. It therefore seems likely that including the non-participating schools would, if anything, have intensified the relationships reported in this study. Of course, it is possible that the relationships of interest are not linear in the region of the distribution in which the non-participating schools fall, or that some characteristic of those schools might alter the relationships. However, the linear relationships among this study’s measures suggest that, if anything, the results presented here are conservative estimates.

References

  • Battistich, V., Schaps, E., Watson, M., & Solomon, D. (1996). Prevention effects of the Child Development Project: Early findings from an ongoing multi-site demonstration trial. Journal of Adolescent Research, 11, 12–35.

  • Berends, M., Bodilly, S.J., & Kirby, S.N. (2002). Facing the challenges of whole-school reform. New American schools after a decade. Santa Monica, CA: RAND.

  • Berman, P., & McLaughlin, M.W. (1978). Federal programs supporting educational change, Vol VIII: Implementing and sustaining innovations. Santa Monica, CA: RAND.

  • Bernd, M. (1992). Shared decision making requires effective instructional leadership. NASSP Bulletin, 76, 64–69.

  • Boerm, M., Gingiss, P., & Roberts-Gray, C. (2007). Association of the presence of state and district health education policies with school tobacco prevention program practices. Journal of School Health, 77, 207–214.

  • Bosworth, K., Gingiss, P., Potthoff, S., & Roberts-Gray, C. (1999). A Bayesian model to predict the success of the implementation of a health education innovation. Evaluation and Program Planning, 22, 1–11.

  • Botvin, G.J. (1990). Substance abuse prevention: Theory, practice, and effectiveness. In M. Tonry & J.Q. Wilson (Eds.), Drugs and crime (pp. 461–520). Chicago: Chicago University Press.

  • Botvin, G.J. (2003). From research to policy: Advancing prevention science and practice. Presidential Address presented at the 11th Annual Meeting of the Society for Prevention Research. Washington, D.C.

  • Botvin, G.J., Batson, H.W., Witts-Vitale, S., Bess, V., Baker, E., & Dusenbury, L. (1989a). A psychological approach to smoking prevention for urban Black youth. Public Health Reports, 12, 279–296.

  • Botvin, G.J., Dusenbury, L., James-Ortiz, S., & Kerner, J. (1989b). A skills training approach to smoking prevention among Hispanic youth. Journal of Behavioral Medicine, 12, 279–296.

  • Botvin, G.J., Baker, E., Dusenbury, L., Tortu, S., & Botvin, E.M. (1990). Preventing adolescent drug abuse through a multi-modal cognitive-behavioral approach: Results of a 3-year study. Journal of Consulting and Clinical Psychology, 58, 437–446.

  • Botvin, G.J., Baker, E., Dusenbury, L., Botvin, E.M., & Diaz, T. (1995a). Long-term follow-up results of a randomized drug abuse prevention trial in a white middle-class population. Journal of the American Medical Association, 273, 1106–1112.

  • Botvin, G.J., Schinke, S., & Orlandi, M.A. (1995b). School-based health promotion: Substance abuse and sexual behavior. Applied and Preventive Psychology, 4, 167–184.

  • Boyd, V. (1992). School context: Bridge or barrier to change? Office of Educational Research and Improvement Contract No. RP 1002003. Austin, TX: Southwest Educational Development Library.

  • Brink, S.G., Basen-Engquist, K.M., O’Hara-Tompkins, N.M., Parcel, G.S., Gottlieb, N.H., & Lovato, C.Y. (1995). Diffusion of an effective tobacco prevention program. Part I: Evaluation of the dissemination phase. Health Education Research, 10, 283–295.

  • Catalano, R.F., Arthur, M.W., Hawkins, J.D., Berglund, M.L., & Olson, J.J. (1998). Comprehensive community and school-based interventions to prevent antisocial behavior. In R. Loeber & D.P. Farrington (Eds.), Serious and violent juvenile offenders (pp. 248–283). Thousand Oaks, CA: Sage.

  • Dane, A.V., & Schneider, B.H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45.

  • Elliot, D.S., & Mihalic, S. (2004). Issues in disseminating and replicating effective prevention programs. Prevention Science, 5, 47–53.

  • Ennett, S.T., Tobler, N.S., Ringwalt, C.L., & Flewelling, R.L. (1994). How effective is drug abuse resistance education? A meta-analysis of Project DARE outcome evaluations. American Journal of Public Health, 84, 1394–1404.

  • Ennett, S.T., Ringwalt, C.L., Thorne, J., Rohrbach, L.A., Vincus, A., & Simons-Rudolph, A. (2003). A comparison of current practice in school-based substance use prevention programs with meta-analysis findings. Prevention Science, 4, 1–14.

  • Farrell, A.D., Meyer, A.L., Kung, E.M., & Sullivan, T.N. (2001). Development and evaluation of school-based violence prevention programs. Journal of Clinical Child Psychology, 31, 235–253.

  • Fullan, M.G. (1991). The new meaning of educational change. New York: Teachers College Press.

  • Fullan, M.G. (1992). Successful school improvement: The implementation perspective and beyond. Philadelphia: Open University Press.

  • Fullan, M.G., & Pomfret, A. (1977). Research on curriculum and instruction implementation. Review of Educational Research, 47, 335–397.

  • Gager, P.J., & Elias, M.J. (1997). Implementing prevention programs in high-risk environments: Application of the Resiliency Paradigm. American Journal of Orthopsychiatry, 67, 363–373.

  • Ganster, D.C., Hennessey, H.W., & Luthans, F. (1983). Social desirability response effects: Three alternative models. Academy of Management Journal, 26, 321–331.

  • Gingiss, P., Roberts-Gray, C., & Boerm, M. (2006). Bridge-it: A system for predicting implementation fidelity for school-based tobacco prevention programs. Prevention Science, 7, 197–207.

  • Glasgow, R.E., Lichtenstein, E., & Marcus, A.C. (2003). Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health, 93, 1261–1267.

  • Goldstein, A.P., & Sorcher, M. (1973). Changing managerial behavior by applied learning techniques. Training and Development Journal, 27, 36–39.

  • Gottfredson, D.C. (2001). Delinquency and schools. New York: Cambridge University Press.

  • Gottfredson, D.C., & Gottfredson, G.D. (2002). Quality of school-based prevention programs: Results from a national survey. Journal of Research in Crime and Delinquency, 39, 3–36.

  • Gottfredson, D.C., Gottfredson, G.D., & Hybl, L.G. (1993). Managing adolescent behavior: A multi-year, multi-school study. American Educational Research Journal, 30, 179–215.

  • Gottfredson, D.C., Gottfredson, G.D., & Skroban, S. (1996). A multimodal school-based prevention demonstration. Journal of Adolescent Research, 11, 97–116.

  • Gottfredson, G.D., Gottfredson, D.C., Czeh, E., Cantor, D., Crosse, S., & Hantman, I. (2000). National study of delinquency prevention in schools: Final report. Ellicott City, MD: Gottfredson Associates, Inc.

  • Gottfredson, D.C., Wilson, D.B., & Najaka, S.S. (2002). School-based crime prevention. In D.P. Farrington, L.W. Sherman, & B. Welsh (Eds.), Evidence based crime prevention (pp. 56–164). London: Harwood.

  • Hawkins, J.D., Arthur, M.W., & Catalano, R.F. (1995). Preventing substance abuse. In M. Tonry & D.P. Farrington (Eds.), Building a safer society: Strategic approaches to crime prevention (pp. 343–428). Chicago: University of Chicago Press.

  • Hunter, L., Elias, M.J., & Norris, J. (2001). School-based violence prevention: Challenges and lessons learned from an action research project. Journal of School Psychology, 39, 161–175.

  • Jaycox, L.H., McCaffrey, D.F., Weidmer, B., Shelley, G.A., Blake, S.M., & Peterson, D.J. (2006). Challenges in the evaluation and implementation of school-based prevention and intervention programs on sensitive topics. American Journal of Evaluation, 27, 320–336.

  • Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

  • Kam, C.M., Greenberg, M.T., & Walls, C.T. (2003). Examining the role of implementation quality in school-based prevention using the PATHS curriculum. Prevention Science, 4, 55–63.

  • Kegler, M.C., Steckler, A., Malek, S.H., & McLeroy, K. (1998). A multiple case study of implementation in 10 local Project ASSIST coalitions in North Carolina. Health Education Research, 13, 225–238.

  • Lipsey, M.W. (1992). Juvenile delinquency treatment: A meta-analytic inquiry into the variability of effects. In T.D. Cook, H. Cooper, D.S. Cordray, H. Hartmann, L.V. Hedges, R.J. Light, et al. (Eds.), Meta-analysis for explanation: A casebook (pp. 83–127). New York: Sage.

  • Lipsey, M.W., & Derzon, J.H. (1998). Predictors of violent or serious delinquency in adolescence and early adulthood: A synthesis of longitudinal research. In R. Loeber & D.P. Farrington (Eds.), Serious and violent juvenile offenders: Risk factors and successful interventions (pp. 86–105). Thousand Oaks, CA: Sage.

  • McCormick, L.K., Steckler, A., & McLeroy, K.R. (1995). Diffusion of innovations in schools: A study of adoption and implementation of school-based tobacco prevention curricula. American Journal of Health Promotion, 9, 210–219.

  • McLaughlin, M.W. (1990). The RAND change agent study revisited: Macro-perspective and micro realities. Educational Researcher, 19, 11–16.

  • Mihalic, S.F., Fagan, A.A., & Argamaso, S. (2008). Implementing the LifeSkills Training drug prevention program: Factors related to implementation fidelity. Implementation Science, 3, 1–16.

  • National Institute on Drug Abuse. (2003). What do schools really think about prevention research? Blending research and reality. Bethesda, MD: Author.

  • Nunnery, J., Slavin, R.E., Madden, N.A., Ross, S., Smith, L., Hunter, P., et al. (1997). Effects of full and partial implementations of Success For All on student reading achievement in English and Spanish. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.

  • Olds, D.L. (2002). Prenatal and infancy home visiting by nurses: From randomized trials to community replication. Prevention Science, 3, 153–172.

  • Parcel, G.S., O’Hara-Tompkins, N.M., Harrist, R.B., Basen-Engquist, K.M., McCormick, L.K., Gottlieb, N.H., et al. (1995). Diffusion of an effective tobacco prevention program. Part II: Evaluation of the adoption phase. Health Education Research, 10, 297–307.

  • Payne, A.A. (2009). Do predictors of implementation quality of school-based prevention programs differ by program type? Prevention Science, 10, 151–167.

  • Payne, A.A., Gottfredson, D.C., & Gottfredson, G.D. (2006). School predictors of the intensity of implementation of school-based prevention programs: Results from a national study. Prevention Science, 7, 225–237.

  • Petersilia, J. (1990). Conditions that permit intensive supervision programs to survive. Crime and Delinquency, 36, 126–156.

  • Raghunathan, T., Lepkowski, J., Van Hoewyk, J., & Solenberger, P. (2001). A multivariate technique for multiply imputing missing values using a sequence of regression models. Survey Methodology, 27, 85–95.

  • Raudenbush, S.W., Bryk, A.S., Cheong, Y.F., & Congdon, R.T. (2004). HLM 6: Hierarchical linear and nonlinear modeling. Lincolnwood, IL: Scientific Software International.

  • Rohrbach, L.A., Graham, J.W., & Hansen, W.B. (1993). Diffusion of a school-based substance abuse prevention program: Predictors of program implementation. Preventive Medicine, 22, 237–260.

  • Rohrbach, L.A., D’Onofrio, C.N., Backer, T.E., & Montgomery, S.B. (1996). Diffusion of substance abuse prevention programs. American Behavioral Scientist, 39, 919–934.

  • Rohrbach, L.A., Grana, R., Sussman, S., & Valente, T.W. (2006). Type II translation: Transporting prevention interventions from research to real-world settings. Evaluation and the Health Professions, 29, 302–333.

  • Schoenwald, S.K., & Hoagwood, K. (2001). Effectiveness, transportability, and dissemination of interventions: What matters when? Psychiatric Services, 52, 1190–1196.

  • Silvia, E.S., & Thorne, J. (1997). School-based drug prevention programs: A longitudinal study in selected school districts. Research Triangle Park, NC: Research Triangle Institute.

  • Simonsen, A.A. (1998). The effects of community disorganization on school administrative practices: Implications for delinquency prevention practice. Unpublished master’s thesis, University of Maryland, College Park.

  • Smith, D.W., McCormick, L.K., Steckler, A.B., & McLeroy, K.R. (1993). Teachers’ use of health curricula: Implementation of growing health, project SMART, and the teenage health teaching modules. Journal of School Health, 63, 349–354.

  • Smith, L., Ross, S., & Nunnery, J. (1997). Increasing the chances of Success For All: The relationship between program implementation quality and student achievement at eight inner-city schools. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.

  • Sobeck, J.L., Abbey, A., & Agius, E. (2006). Lessons learned from implementing school-based substance abuse prevention curriculums. Children and Schools, 28, 77–85.

  • Stoll, L., & Fink, D. (1996). Changing our schools: Linking school effectiveness and school improvement. Philadelphia: Open University Press.

  • Tobler, N.S. (1992). Drug prevention programs can work: Research findings. Journal of Addictive Diseases, 11, 1–28.

  • Tobler, N.S. (2000). Lessons learned. Journal of Primary Prevention, 20, 261–274.

  • Walker, E.M. (2004). The impacts of state policies and actions on local implementation efforts: A study of whole school reform in New Jersey. Educational Policy, 18, 338–363.

  • Wilson, S.J., Lipsey, M.W., & Derzon, J.H. (2003). The effects of school-based intervention programs on aggressive behavior: A meta-analysis. Journal of Consulting and Clinical Psychology, 71, 136–150.

  • Young, R.L., deMoor, C., Wilder, M.B., Gully, S., Hovell, M.F., & Elder, J.P. (1990). Correlates of health facilitator performance in a tobacco-use prevention program: Implications for recruitment. Journal of School Health, 60, 463–467.

  • Zins, J.E., Elias, M.J., Greenberg, M.T., & Pruett, M.K. (2000). Special issue: Implementation of prevention programs. Journal of Educational and Psychological Consultation, 11, 1–172.


Author information


Corresponding author

Correspondence to Allison Ann Payne.

Additional information

The authors would like to extend a special thank you to Gary and Denise Gottfredson for the generous use of their National Study of Delinquency Prevention in Schools data.

Appendix


Table 7 Correlations between implementation quality indicators and predictors
Table 8 Correlations between implementation quality indicators and predictors


Cite this article

Payne, A.A., Eckert, R. The Relative Importance of Provider, Program, School, and Community Predictors of the Implementation Quality of School-Based Prevention Programs. Prev Sci 11, 126–141 (2010). https://doi.org/10.1007/s11121-009-0157-6

