Prevention Science, Volume 9, Issue 4, pp 264–275

School Climate and Teachers’ Beliefs and Attitudes Associated with Implementation of the Positive Action Program: A Diffusion of Innovations Model

  • Michael W. Beets (corresponding author)
  • Brian R. Flay
  • Samuel Vuchinich
  • Alan C. Acock
  • Kin-Kit Li
  • Carol Allred

Abstract

Teacher- and school-level factors influence the fidelity of implementation of school-based prevention and social and character development (SACD) programs. Using a diffusion of innovations framework, two cross-sectional mediation models of program implementation specified the relationships among teachers’ beliefs about SACD, their attitudes toward a prevention/SACD program, and two characteristics of a school’s climate: administrative support and perceptions of school connectedness. Implementation was defined as the amount of the program’s curriculum delivered (e.g., lessons taught) and the use of program-specific materials in the classroom (e.g., ICU boxes and notes) and in school-wide activities (e.g., participation in assemblies). Teachers from 10 elementary schools completed year-end process evaluation reports for years 2 (N = 171) and 3 (N = 191) of a multi-year trial. Classroom and school-wide material usage were each favorably associated with the amount of the curriculum delivered, which was associated with teachers’ attitudes toward the program; these attitudes, in turn, were related to teachers’ beliefs about SACD, which were associated with teachers’ perceptions of school climate. Perceptions of school climate were indirectly related to classroom material usage and both directly and indirectly related to the use of school-wide activities. Program developers need to consider the importance of a supportive environment for program implementation and should incorporate models of successful school leadership and collaboration among teachers that foster a climate promoting cohesiveness, shared vision, and support.
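
As a minimal sketch (not the authors’ actual analysis), the mediation chain described above can be expressed as a path model in lavaan-style syntax, here using the Python package semopy. All variable names and the input file ("teacher_reports.csv") are hypothetical stand-ins for the measures described in the abstract.

    import pandas as pd
    import semopy

    # Hypothesized chain from the abstract: school climate -> SACD beliefs
    # -> program attitudes -> curriculum delivered -> material usage,
    # plus a direct path from climate to school-wide activities.
    MODEL_DESC = """
    beliefs ~ climate
    attitudes ~ beliefs
    curriculum ~ attitudes
    classroom_materials ~ curriculum
    schoolwide_activities ~ curriculum + climate
    """

    # Hypothetical teacher-level process-evaluation data, one row per teacher.
    data = pd.read_csv("teacher_reports.csv")

    model = semopy.Model(MODEL_DESC)
    model.fit(data)                  # maximum-likelihood estimation by default
    print(model.inspect())           # path coefficients, SEs, p-values
    print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA

Each "y ~ x" line is a regression path, so an indirect effect (e.g., of climate on classroom material usage) is the product of the coefficients along the chain.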

Keywords

Fidelity · Primary prevention · Elementary · Children

Notes

Acknowledgements

Two of the authors, Flay and Allred, are married.

Funding Disclosure

This project was funded by the National Institute on Drug Abuse, grant #R01-DA13474.

Copyright information

© Society for Prevention Research 2008

Authors and Affiliations

  • Michael W. Beets (3)
  • Brian R. Flay (1)
  • Samuel Vuchinich (1)
  • Alan C. Acock (1)
  • Kin-Kit Li (1)
  • Carol Allred (2)

  1. Department of Public Health, Oregon State University, Corvallis, USA
  2. Positive Action, Inc., Twin Falls, USA
  3. Department of Exercise Science, Arnold School of Public Health, University of South Carolina, Columbia, USA