Prevention Science (2009) 10:100
DOI: 10.1007/s11121-008-0114-9

Altering School Climate through School-Wide Positive Behavioral Interventions and Supports: Findings from a Group-Randomized Effectiveness Trial

  • Catherine P. Bradshaw
  • Christine W. Koth
  • Leslie A. Thornton
  • Philip J. Leaf

Cite this article as:
Bradshaw, C.P., Koth, C.W., Thornton, L.A. et al. Prev Sci (2009) 10: 100. doi:10.1007/s11121-008-0114-9

Abstract

Positive Behavioral Interventions and Supports (PBIS) is a universal, school-wide prevention strategy that is currently implemented in over 7,500 schools to reduce disruptive behavior problems. The present study examines the impact of PBIS on staff reports of school organizational health using data from a group-randomized controlled effectiveness trial of PBIS conducted in 37 elementary schools. Longitudinal multilevel analyses on data from 2,596 staff revealed a significant effect of PBIS on the schools’ overall organizational health, resource influence, staff affiliation, and academic emphasis over the 5-year trial; the effects on collegial leadership and institutional integrity were significant when implementation fidelity was included in the model. Trained schools that adopted PBIS the fastest tended to have higher levels of organizational health at baseline, but the later-implementing schools tended to experience the greatest improvements in organizational health after implementing PBIS. This study indicated that changes in school organizational health are important consequences of the PBIS whole-school prevention model, and may in turn be a potential contextual mediator of the effect of PBIS on student performance.

Keywords

School climate · Organization · Fidelity · Positive behavioral interventions and support

The increasing legislative demands on schools to provide safe and orderly learning environments have resulted in greater emphasis on the use of school-based prevention programs. Local school districts and prevention scientists are turning to school-wide prevention models, such as Positive Behavioral Interventions and Supports (PBIS; Sugai and Horner 2006), to promote a positive school climate and reduce behavior problems. This universal prevention model aims to systematically and consistently manage student behavior problems by creating a school-wide program that clearly articulates positive behavioral expectations, provides incentives to students meeting expectations, and encourages data-based decision-making. It is estimated that PBIS is currently implemented in over 7,500 schools in at least 44 states (Sugai 2008) and several other countries around the world (Sprague 2008).

While there is growing interest in school-wide PBIS among policymakers, researchers, and educators, there has been relatively limited systematic research on the impact of PBIS using randomized controlled trial designs. The current study used data from a randomized controlled effectiveness trial of PBIS conducted in 37 elementary schools to determine the impact of school-wide PBIS training on the staff members’ perceptions of schools’ organizational health. We also explored how the effects on the schools’ organizational health varied as a function of baseline or “naturally occurring” elements of PBIS and the speed with which schools implemented PBIS with high fidelity.

School-wide Prevention Programs

A number of universal, whole-school preventive interventions have been developed over the past 20 years, including the Child Development Project (Battistich et al. 1996); the Olweus Bullying Prevention Program (Olweus et al. 2007); Project Achieve (Knoff 2000); Preventing, Acting Upon, and Resolving (PAR) Comprehensive Behavior Management System (Rosenberg and Jackman 2003); and PBIS (Sugai and Horner 2006) (also see Gottfredson 1986, 1987; Gottfredson et al. 1993). Although the specific models vary in terms of their theoretical orientation, activities, and specific foci, they share a common emphasis on altering the school context in order to influence children’s behavior and academic performance. Most of the whole-school strategies seek to provide staff and students with clearly articulated rules and consequences for behavior, and well-established processes and procedures for problem solving (Knoff 2000; Sugai and Horner 2006). These programs are attractive to school districts and administrators because they typically can be implemented by teachers rather than specialists (e.g., psychologists), can be adapted to be consistent with a school’s culture and climate, and typically require fewer resources than other standardized curricula. Despite the growing use and acceptance of these whole-school reform initiatives, there have been few rigorous studies of the effects of school-wide behavioral interventions on school climate. In addition, little attention has been paid to the fidelity with which these programs are implemented when widely disseminated.

Positive Behavioral Interventions and Supports

PBIS is one such whole-school prevention model that is increasingly popular among schools in the U.S. PBIS is a non-curricular universal prevention strategy that aims to modify the school environment by creating improved systems (e.g., discipline, reinforcement, data management) and procedures (e.g., office referral, reinforcement, training, leadership) that promote positive change in staff and student behaviors. The program draws upon behavioral, social learning, and organizational behavioral principles (Lewis and Sugai 1999; Lindsley 1992) traditionally used with individual students, and extends them to an entire student body across all school contexts. Seeking to prevent disruptive behavior and enhance the school’s organizational climate, this multi-tiered strategy creates and sustains primary (school-wide/universal), secondary (targeted/selective), and tertiary (individual/indicated) systems of support. The three-tiered prevention model follows a public health approach (Mrazek and Haggerty 1994), whereby two levels of targeted and indicated programs are implemented to complement the universal school-wide components of the model (for a review, see Horner et al. 2005; Sugai and Horner 2006; Sugai et al. 2000). While the multi-tiered PBIS model presents an opportunity for implementing more intensive preventive interventions for children who do not respond adequately to the universal program, the current paper focuses solely on the universal school-wide PBIS model.

Initial results from non-randomized studies indicate that implementation of school-wide PBIS was associated with a reduction in office discipline referrals (Taylor-Greene et al. 1997) and suspensions (Horner et al. 2005), as well as improvements in student academic performance (Sugai and Horner 2006). The developers of PBIS recently conducted a 3-year randomized trial of school-wide PBIS using a waitlist design. Findings from this study indicated that PBIS was associated with improvements in students’ perceptions of safety at school and an increase in third grade reading performance (Horner et al. 2008). Furthermore, preliminary findings from the current 5-year longitudinal randomized controlled trial of PBIS in 37 elementary schools indicated that training in PBIS was associated with significant reductions in suspensions and office referrals (Bradshaw et al. 2008b) and improvements in the school staff members’ perceptions of the schools’ organizational health (Bradshaw et al. 2008a). Yet the issue of program fidelity is of particular relevance for non-curricular and non-manualized school-based programs like PBIS, which are intended to be adaptable in different school contexts to address varying cultures, climates, and work attitudes (Bradshaw et al. 2008). Consequently, the developers of PBIS created a multi-component implementation quality measure called the School-wide Evaluation Tool (SET; Horner et al. 2004), which can be used to monitor program fidelity over multiple years of implementation.

Implementation Quality and School Climate

Implementation fidelity is defined as the extent to which a program is implemented as specified by its developers (Dusenbury et al. 2003; Hill et al. 2007; Mowbray et al. 2003). Common indicators of fidelity include program adherence, dosage, quality of program delivery, and participant responsiveness (Dane and Schneider 1998; Durlak and Dupree 2008; Dusenbury et al. 2003; Fixsen et al. 2005). Although most researchers acknowledge the importance of implementation fidelity and its association with program outcomes (Durlak and Dupree 2008), few studies have included measures of program fidelity across multiple time points to systematically document the association, especially among both intervention and comparison schools (Domitrovich et al. 2008).

There is also increasing interest in the association between school contextual factors and the implementation quality of prevention programs (Domitrovich et al. 2008; Payne et al. 2006). Prior research in clinical settings suggests a potential link between the organizational context (e.g., culture or climate) in which an intervention is implemented and implementation quality (Glisson 2002; Hohmann and Shear 2002; Schoenwald and Hoagwood 2001), as well as program outcomes (Glisson and Hemmelgarn 1998; Hemmelgarn et al. 2006). Payne et al. (2006) also found significant associations between school contextual factors (e.g., principal support, organizational capacity) and the implementation quality of school-based prevention programs. However, few studies have empirically examined the extent to which school contextual factors are influenced by universal prevention programs, and whether this association varies as a function of implementation fidelity.

Organizational health is an important aspect of school climate, which includes an emphasis on academic achievement, friendly and collegial relationships among staff, respect for all members of the school community, supportive administrative leadership, consistent discipline policies, attention to safety issues, and family and community involvement (Hoy and Tarter 1997). Prior research on school organizational health indicates that it is positively associated with multiple indicators of student performance and negatively associated with student absenteeism and school suspensions (Bevans et al. 2007; Gottfredson et al. 2005). The association between school health and positive student and staff outcomes suggests that school organizational health would be an important target for school improvement initiatives and programs (Bevans et al. 2007). Yet schools that are less organizationally healthy or more chaotic, and whose staff feel they have limited efficacy, may be at increased risk for poor implementation fidelity (Domitrovich et al. 2008; Gottfredson et al. 2002; Lochman 2003). Consequently, the association between school climate and implementation quality is likely complex, as organizational health is potentially both a predictor of implementation quality and an outcome of PBIS.

Overview of the Current Study

The present study uses data from a randomized controlled effectiveness trial of school-wide PBIS in elementary schools to better understand the complex association between PBIS, school climate, and implementation quality. We extended our prior work regarding the proximal impact of PBIS on organizational health (Bradshaw et al. 2008a) by examining the intent-to-treat outcomes after 4 years of PBIS implementation. We then explored the influence of implementation quality on school climate. Since PBIS is a confluence of several effective school and behavior management strategies (Bradshaw et al. 2008), we first explored whether the presence of PBIS components prior to formal training was associated with either the intercept or slope of organizational health. More specifically, one might predict that high levels of naturally occurring elements of PBIS would be associated with a more organizationally healthy school environment prior to training. Furthermore, higher levels of naturally occurring elements of PBIS may attenuate the impact of formal training in PBIS. Finally, we examined the effects of PBIS on school climate, taking into consideration the speed with which schools adopted the program; this allowed us to determine if there were differences in the impact of PBIS on school climate for early versus late implementers of PBIS.

Method

Data

Data for the current study come from a 5-year group randomized trial of school-wide PBIS conducted between 2002 and 2007. Thirty-seven Maryland public elementary schools from five school districts volunteered to participate in the trial. Because PBIS is implemented universally, a prerequisite for program implementation is that the majority of staff display a willingness and commitment to adopting the program (Horner et al. 2005). The schools were matched on select baseline demographics (e.g., percentage of students receiving free or reduced meals, school enrollment, percentage of students suspended). Once matched, 21 schools were randomized to the intervention condition (“PBIS”) and 16 were assigned to the comparison condition (“Comparison”). The Comparison schools agreed to refrain from implementing PBIS for the duration of the study.

Training

Each of the 21 schools assigned to receive PBIS training formed internal PBIS teams composed of five to six core members (teachers, administrators) who attended an initial 2-day summer training led by Dr. George Sugai, one of the developers of PBIS. Training was provided on the following seven critical features of PBIS: (1) formation and functioning of a collaborative PBIS team; (2) technical assistance from a behavioral support “coach”; (3) clear definition of expectations for positive student behavior; (4) teaching of the behavioral expectations to all students; (5) development of a school-wide system for rewarding students exhibiting expected positive behaviors; (6) creation of an agreed-upon system for responding to behavioral violations; and (7) development of a formal system for collecting, analyzing, and using disciplinary data. Ongoing support and technical assistance was provided to the schools by local PBIS behavior support coaches, a regional coordinator, and a state leadership team. To ensure and maintain consistently high levels of implementation fidelity, PBIS school teams attended annual 2-day summer booster training events. Additional supports and professional development were provided to the schools’ behavior support coaches through state training events four times each year. The initial training and all subsequent booster training and technical assistance events were coordinated and led by the PBIS State Leadership Team. These sessions were also attended by other PBIS teams from across the state (for additional information on the training and support infrastructure, see Barrett, Bradshaw, and Lewis-Palmer 2008).

Participating staff

Data for the present study were collected from 2,596 school staff members across 37 elementary schools. The participating school staff included general education teachers (n = 1,437; 55.35%) and student support staff (e.g., school psychologists, counselors, teaching assistants, office staff, resource teachers) (n = 1,159; 44.65%). Of the participating staff, 91.29% were female, 86.56% were Caucasian, and 13.44% were African American. Approximately 32.05% of the participating staff were in their twenties, 23.92% in their thirties, 23.15% in their forties, 18.10% in their fifties, and 2.77% were 60 years of age or older. The average number of teachers per school (i.e., average cluster size) was 68.49 (range 36–107).

Participating schools

The sample of participating elementary schools was diverse, and matching analyses revealed few differences between the schools participating in the trial and the non-urban schools in the state that were not participating (Stuart and Leaf 2007). Nearly half of the participating schools received Title I support, and 48% were located in suburban communities, 41% in urban fringe communities, and 11% in rural communities. Baseline school-level demographic characteristics are provided in Table 1. An overall MANOVA on the school-level characteristics indicated no significant difference between schools trained in PBIS and the comparison schools at baseline (Wilks’ Λ = .674, F(9, 19) = 1.022, p = .46).
Table 1

School characteristics and unadjusted mean Organizational Health Inventory (OHI) subscale scores for PBIS and Comparison schools at baseline

                                      PBIS (n = 21 schools)    Comparison (n = 16 schools)
                                      Mean        SD           Mean        SD
School characteristics
  School Enrollment                   471.76      132.78       505.50      188.57
  Student to Teacher Ratio            18.48       4.33         18.61       4.69
  Free/Reduced Meals (%)              42.93       19.22        36.25       20.93
  Special Education Students (%)      13.24       4.27         15.08       6.66
  Caucasian Students (%)              53.81       33.16        67.51       28.99
  Student Mobility (%)                25.88       8.24         20.51       7.19
  Suspension (%)                      7.73        7.43         5.06        4.73
  Math Performance (%)a               47.20       22.37        46.96       19.05
  Reading Performance (%)a            50.66       19.32        52.94       16.43
OHI subscale scores
  Resource Influence                  2.85        .28          2.91        .37
  Staff Affiliation                   3.10        .34          3.19        .28
  Academic Emphasis                   2.42        .32          2.58        .31
  Institutional Integrity             2.80        .33          2.78        .24
  Collegial Leadership                3.21        .36          3.21        .43
  Overall OHI                         2.95        .24          3.01        .26

aPercentage of 5th grade students who scored in the “proficient” or “advanced” range on the state’s standardized test. An overall MANOVA on the school-level characteristics indicated no significant difference between schools trained in PBIS and the Comparison schools at baseline, Wilks’ Λ = .674, F(9, 19) = 1.022, p = .46. The means for OHI subscale scores were not adjusted for individual- or school-level covariates. A MANOVA on the five OHI subscale scores indicated no overall significant difference between PBIS and comparison schools at baseline, Wilks’ Λ = .89, F(5, 31) = .76, p = .58

Measures

Staff characteristics

Staff members completed a brief demographics questionnaire that included questions regarding their sex (0 = female, 1 = male), race/ethnicity (0 = Caucasian; 1 = non-Caucasian), age group (ordinal variable with 0 = 20–30; 1 = 31–40; 2 = 41–50; 3 = 51–60; and 4 = 60 and over), and occupational role (0 = general educator; 1 = student support staff (e.g., special education, student services) in the school).

School characteristics

Baseline school-level characteristics were obtained from the State’s Department of Education, including student enrollment (number of students enrolled in the school), faculty turnover (percentage of faculty new to the school for that school year), student mobility (percentage of students migrating in plus the percentage migrating out), and percentage of students receiving free or reduced cost meals (percentage of children eligible to receive free or reduced priced meals, an indicator of student poverty).

Organizational health

The Organizational Health Inventory for Elementary Schools (OHI; Hoy and Feldman 1987) is a widely used, previously validated measure of staff reports of the schools’ organizational health (Hoy and Tarter 1997; Hoy et al. 1991). The OHI consists of 37 items that measure the five aspects of a healthy functioning school: institutional integrity (the school’s ability to cope successfully with destructive outside forces, teachers are protected from unreasonable community and parental demands), staff affiliation (warm and friendly interactions, positive feelings about colleagues, commitment to students, trust and confidence among the staff, and sense of accomplishment), academic emphasis (students are cooperative in the classroom, respectful of other students who get good grades, and are driven to improve their skills), collegial leadership (principal’s behavior is friendly, supportive, open, egalitarian, and neither directive nor restrictive), and resource influence (principal’s ability to lobby for resources for the school and positively influence the allocation of district resources). Participants responded to all items on a four-point scale from “rarely occurs” to “very frequently occurs.” Items were scored such that a higher score indicated a healthier school environment. An overall OHI score (referred to as “overall OHI”) was calculated by averaging all the items on the measure. Prior analyses on the current data revealed a five-factor structure similar to the one originated by Hoy and Miskel (1996) and demonstrated that the subscales had moderate to high internal reliabilities (Cronbach alphas ranged from .73 to .95) (see Bevans et al. 2007). A MANOVA on the five OHI subscale scores indicated no overall significant difference between PBIS and Comparison schools at baseline (Wilks’ Λ = .89, F (5, 31) = .76, p = .58).
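Because the OHI scoring just described is simple (item means on a four-point scale, with internal consistency summarized by Cronbach's alpha), it can be sketched in a few lines of Python. This is an illustrative sketch only, not the study's analysis code, and the example responses are hypothetical.

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a set of respondents' item responses.

    item_scores: list of respondents, each a list of k item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(item_scores[0])
    item_vars = sum(
        variance([resp[i] for resp in item_scores]) for i in range(k)
    )
    total_var = variance([sum(resp) for resp in item_scores])
    return k / (k - 1) * (1 - item_vars / total_var)

def subscale_scores(item_scores):
    """Mean of each respondent's 1-4 item responses; higher = healthier."""
    return [sum(resp) / len(resp) for resp in item_scores]

# Hypothetical responses from three staff members to a two-item subscale
responses = [[1, 1], [2, 2], [3, 3]]
alpha = cronbach_alpha(responses)  # perfectly consistent items -> alpha = 1
```

The overall OHI described in the text is then just the mean taken over all 37 items rather than over one subscale's items.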

Implementation quality

The School-Wide Evaluation Tool (SET; Sugai et al. 2001) was used to assess universal, school-wide PBIS implementation quality. The instrument was completed each spring by an independent trained rater who was unaware of the school’s PBIS implementation status. The SET consists of 29 items (scored on a three-point scale with 0 = not implemented, 1 = partial implementation, and 2 = full implementation) organized into the seven subscales that represent the seven key features of school-wide PBIS described above. During the assessment, the external observer assessed the degree to which a school has each of the model’s seven critical features in place by reviewing written materials and established discipline procedures (e.g., discipline handbook, school improvement goals, behavioral incident summaries); noting visual displays of expected behaviors posted in various locations throughout the school; and interviewing administrators, teachers, and students about school procedures, policies, standards, and consequences for positive behavior and rule infractions. The SET has strong psychometric properties, including high internal consistency (Cronbach’s alpha = .96), high inter-observer reliability, and strong test-retest reliability (Horner et al. 2004). The developers of PBIS posited that the intended benefits of the program occur when the overall summary score on the SET (average score for all seven key features) reaches 80% (Horner et al. 2004). Both the continuous overall SET score (see Fig. 1) and a dichotomous variable indicating high or low fidelity (i.e., >80% and <80%, respectively) were used as indicators of implementation quality in the current paper.
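The SET scoring logic described above can be sketched as follows. The per-subscale arithmetic is an assumption for illustration (each subscale percentage taken as points earned over points possible, and the overall summary score as the mean of the seven subscale percentages, dichotomized at the 80% threshold); consult Horner et al. (2004) for the instrument's actual scoring rules.

```python
def subscale_percent(item_scores):
    """Percentage of possible points for one SET subscale.

    Items are scored 0 (not implemented), 1 (partial), 2 (full),
    so the maximum is 2 points per item.
    """
    return 100.0 * sum(item_scores) / (2 * len(item_scores))

def overall_set(subscales):
    """Overall SET summary score: mean of the seven subscale percentages."""
    return sum(subscale_percent(s) for s in subscales) / len(subscales)

def high_fidelity(subscales, threshold=80.0):
    """Dichotomous fidelity indicator used in the paper (>80% = high)."""
    return overall_set(subscales) > threshold
```

For example, a school rated 2 (full implementation) on every item would score 100% overall and be classified as a high-fidelity implementer.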
Fig. 1

Unadjusted mean SET scores for PBIS and Comparison schools at baseline and Years 1 through 4

Procedure

The data for this study were collected on an annual basis, beginning in May prior to randomization and participation in the initial July PBIS training event, and thereafter in May before the subsequent summer booster training event in July. Staff reports of the schools’ organizational health were collected via an individually-addressed survey packet. The survey packets were mailed in bulk to the school and distributed to the school staff by the principal, school psychologist, or administrative assistant in their faculty mailboxes. Staff participation was voluntary, and participants provided written consent. To ensure confidentiality, staff members completed the study materials on their own time and returned the materials directly to the researchers through the U.S. mail in the self-addressed, stamped envelope provided by the researchers. Each staff questionnaire packet included a small incentive (e.g., a disposable ballpoint pen or bookmark valued at less than one dollar). The staff response rate ranged from 80–86% of all eligible staff in the schools across the 5 years of the project. Baseline data, along with 4 subsequent years of data, were included in the current paper. The Committee on Human Subjects Research at the researchers’ institution provided approval for this study.

Analyses

Preliminary descriptive analyses and analyses on the school-level SET data were conducted in SPSS 15.0. To examine our primary research questions regarding the impact of training in school-wide PBIS on the growth in the different facets of organizational health, longitudinal analyses were conducted using a three-level approach in Mplus 5.1 (Muthén and Muthén 1998). The Mplus software utilizes a general structural equation modeling framework to estimate the latent intercept and slope (growth) parameters. Within a single analysis model, the program simultaneously estimates the continuous latent intercept and slope parameters, as well as the random effects (Muthén and Muthén 1998). Maximum likelihood estimation with robust standard errors was used to estimate the parameters.

A multilevel approach was selected for the current study because both the data (staff nested within schools) and hypotheses (the impact of PBIS on changes in staff perceptions) are multilevel in nature (Luke 2004; Raudenbush and Bryk 2002). More specifically, a multilevel modeling approach allows adjustment for covariates at both the staff-level and school-level that have been linked in prior research with variation in perceptions of the school climate (Bevans et al. 2007; Koth et al. 2008). Although the staff members’ individual perceptions of the school tended to be highly correlated (ICCs ranged from .18 for institutional integrity to .32 for overall OHI), indicating a relatively high degree of within-group agreement, aggregating these data at the school level can mask systematic variation in perceptions (Raudenbush and Bryk 2002). In fact, our prior research with the OHI indicated that there were systematic differences in perceptions based on factors such as role, gender, and age (Bevans et al. 2007), which can be included as covariates in a multilevel modeling approach. Furthermore, the items on the OHI focused on individual perceptions of the school’s organizational context, rather than group-level perceptions (Klein and Kozlowski 2000).
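The intraclass correlations cited above can be estimated from a one-way ANOVA decomposition of staff scores by school. A minimal sketch with made-up, balanced data (the paper's actual ICCs come from the multilevel models, not this shortcut):

```python
def icc1(groups):
    """ICC(1) from one-way ANOVA mean squares.

    groups: list of schools, each a list of staff scores
    (a balanced design, i.e. equal cluster sizes, is assumed).
    ICC(1) = (MSB - MSW) / (MSB + (n - 1) * MSW)
    """
    n = len(groups[0])                      # staff per school
    k = len(groups)                         # number of schools
    grand = sum(sum(g) for g in groups) / (n * k)
    msb = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(
        (x - sum(g) / n) ** 2 for g in groups for x in g
    ) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)
```

When all within-school variation disappears (every staff member in a school agrees exactly), the ICC reaches 1; values like the .18-.32 reported here indicate moderate clustering, enough to warrant multilevel modeling.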

As noted above, all multilevel models controlled for the following school-level characteristics, which have been previously linked with perceptions of organizational health (Bevans et al. 2007): percentage of students receiving free and reduced meals (FARMs), student mobility, faculty turnover, and (the natural log of) school enrollment. Similarly, individual-level variation in perceptions of organizational health was controlled for by including the following four staff-level characteristics in the models as covariates: sex, race (Caucasian vs. non-Caucasian), role in school (e.g., general education teacher vs. student support staff), and age. These variables were included as potential covariates on both the intercept and the slope of organizational health in all multilevel models. Given the group randomized controlled trial design, intervention status (PBIS vs. Comparison) and implementation quality (performance on the SET) were modeled as school-level variables. We explored whether inclusion of a quadratic term improved model fit. The linear model fit the data very well (e.g., overall OHI: CFI = .97; TLI = .97; RMSEA = .04) and although the addition of the quadratic term slightly improved the fit (e.g., overall OHI: CFI = .99; TLI = .99; RMSEA = .03), the improvement was not sufficient to warrant the extra parameter. Therefore, we presented the results for the more parsimonious linear models. The level for statistical significance was set at p < .05; however, given the relatively small number of schools in the group randomized trial (n = 37) and included in the stratified analyses (n = 21), p-values of less than .10 are considered marginally significant and discussed as trends (Murray 1998).
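The latent growth models were estimated in Mplus; as a rough, illustrative analogue only (not the paper's estimation method, and omitting all covariates and standard errors), the intent-to-treat slope contrast can be pictured as a two-stage computation: fit each school's OLS slope of organizational health on time, then compare mean slopes across conditions. The school scores below are hypothetical.

```python
def ols_slope(times, scores):
    """Least-squares slope of one school's scores on time."""
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(scores) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(times, scores))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

def mean_slope_difference(schools):
    """schools: list of (condition, yearly_scores) pairs, with
    condition 'PBIS' or 'Comparison' and one score per year 0-4."""
    years = list(range(5))
    slopes = {"PBIS": [], "Comparison": []}
    for condition, scores in schools:
        slopes[condition].append(ols_slope(years, scores))
    return (sum(slopes["PBIS"]) / len(slopes["PBIS"])
            - sum(slopes["Comparison"]) / len(slopes["Comparison"]))
```

A positive difference corresponds to the positive PBIS slope coefficients reported in the Results; the multilevel models additionally adjust these slopes for staff- and school-level covariates and model the nesting explicitly.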

We tested a series of three models for each of the six outcomes (overall OHI, collegial leadership, institutional integrity, resource influence, staff affiliation, and academic emphasis). Model 1 examined the intent-to-treat effect of PBIS on the six outcomes, adjusting for the staff- and school-level covariates. Model 2 included the baseline SET score (continuous) to explore if the effect of training in PBIS was attenuated by the inclusion of baseline levels of naturally occurring elements of PBIS, and to determine if these baseline levels of PBIS were associated with organizational health (either intercept or slope). Finally, based on the work of Horner et al. (2004) regarding the 80% threshold for high fidelity PBIS implementation as measured by the SET, Model 3 included the dichotomous high fidelity indicator (i.e., Overall SET score >80% = 1; Overall SET score <80% = 0) at the end of the first year of the trial (post-training) to determine whether early (i.e., those meeting the 80% threshold) or late implementers of PBIS (i.e., those not meeting the 80% threshold) evinced greater improvements in organizational health as a result of training in PBIS. We also explored the effect of PBIS on changes in organizational health, irrespective of formal training in the model.

Results

Intent to Treat Analyses

Inspection of the slope coefficients in the models (see Model 1 in Tables 2, 3, 4, 5, 6, 7) indicated that, after adjusting for the staff- and school-level covariates, there was a significant positive intervention effect on overall OHI, resource influence, staff affiliation, and academic emphasis (p < .05) across the 4 years of the trial (see Fig. 2a–f). There were no statistically significant intervention effects on growth in collegial leadership or institutional integrity. Consistent with prior work by Muthén and Muthén (2000), we re-centered the intercept to be at Years 2, 3, and 4 to determine at what point (post-training) the PBIS-trained schools differed from the Comparison schools in their (school- and staff-level covariate) adjusted mean score on organizational health. These sensitivity analyses indicated that the adjusted mean scores for the two groups differed significantly at Year 3 on collegial leadership, overall OHI, and staff affiliation, but not until Year 4 for resource influence and academic emphasis. By the end of Year 4, there were still no significant differences between PBIS and Comparison schools in the adjusted mean scores on institutional integrity, although it approached statistical significance (z = 1.80, p = .072). We calculated the effect size estimates for the change or growth in organizational health between baseline and Year 4 (Hedges 2007). The effect size was .29 for overall OHI, .24 for staff affiliation, .22 for academic emphasis, .21 for resource influence, .20 for collegial leadership, and .16 for institutional integrity.
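The effect sizes above were computed with Hedges' (2007) methods for group-randomized designs. As a simplified illustration only, the core quantity is a standardized mean difference; the clustering and small-sample corrections that Hedges (2007) applies are deliberately omitted from this sketch, and the example values are hypothetical.

```python
from math import sqrt
from statistics import mean, variance

def standardized_mean_difference(treatment, comparison):
    """Unadjusted pooled-SD standardized mean difference (Cohen's d).

    Hedges (2007) additionally corrects this quantity for clustering
    (via the ICC) and for small-sample bias; those corrections are
    omitted here.
    """
    n_t, n_c = len(treatment), len(comparison)
    pooled_var = ((n_t - 1) * variance(treatment)
                  + (n_c - 1) * variance(comparison)) / (n_t + n_c - 2)
    return (mean(treatment) - mean(comparison)) / sqrt(pooled_var)
```

Applied to school-level change scores, positive values indicate greater improvement in the PBIS schools, in line with the .16-.29 range reported above.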
Fig. 2

Intent to treat effects of PBIS on organizational health. a overall OHI, b collegial leadership, c institutional integrity, d resource influence, e staff affiliation, f academic emphasis

Table 2

Multilevel results indicating impact of PBIS on overall organizational health

                             Model 1                       Model 2                       Model 3
                             Intent to treat               Continuous SET score          Dichotomous SET score
                                                           measured at baseline          measured at end of Year 1
                             Intercept      Slope          Intercept      Slope          Intercept      Slope
                             Coef (se)      Coef (se)      Coef (se)      Coef (se)      Coef (se)      Coef (se)
Staff characteristicsa
 Sex                         −.074 (.042)   .016 (.013)
 Minority Status             −.003 (.038)   .002 (.008)
 Role                        .022 (.017)    −.001 (.007)
 Age                         .020 (.011)    .006 (.003)*
School characteristics
 PBIS Intervention Status    −.007 (.087)   .059 (.022)**  −.007 (.086)   .059 (.022)**  −.037 (.109)   .079 (.019)**
 Faculty Turnover            −.012 (.005)*  0.0 (.002)     −.013 (.005)*  0.0 (.002)     −.012 (.005)*  0.0 (.001)
 Student Mobility            0.0 (.005)     −.002 (.001)*  0.0 (.006)     −.002 (.001)*  0.0 (.005)     −.002 (.001)*
 Free/Reduced Meals (%)      −.003 (.002)   .001 (0.0)     −.003 (.002)   .001 (0.0)     −.003 (.002)   .001 (0.0)
 Enrollment                  −.187 (.129)   .076 (.024)**  −.186 (.135)   .076 (.027)**  −.190 (.128)   .077 (.024)**
 SET Score                                                 0.0 (.002)     0.0 (.001)     .049 (.096)    −.032 (.014)*
 SS BIC                      4514.650                      4523.964                      4520.908
 RMSEA                       .043                          .042                          .042

PBIS Intervention Status was coded 0 (Comparison) and 1 (PBIS). Age was coded as an ordinal variable from 0 (age 20–29) to 4 (age 60 and over), minority status was coded 0 (Caucasian) and 1 (non-Caucasian), sex was coded 0 (female) and 1 (male), and role was coded 0 (general educator) and 1 (student support staff). Free/reduced meals indicates percentage of students in the school receiving free or reduced-cost meals. Enrollment is the natural log of school enrollment, included in all multilevel analyses to facilitate interpretation of the estimates. SET School-wide Evaluation Tool. aThe staff-level covariates were included in all three models, but the coefficients are only reported for Model 1, as they did not change considerably across models. SS BIC sample-size adjusted Bayesian information criterion. RMSEA root mean square error of approximation

*p ≤ .05; **p ≤ .01

Table 3

Multilevel results indicating impact of PBIS on collegial leadership

Model 1 = intent to treat; Model 2 = continuous SET score measured at baseline; Model 3 = dichotomous SET score measured at the end of Year 1. Cells show Coef (SE).

| | Model 1: Intercept | Model 1: Slope | Model 2: Intercept | Model 2: Slope | Model 3: Intercept | Model 3: Slope |
|---|---|---|---|---|---|---|
| Staff characteristicsᵃ | | | | | | |
| Sex | −.054 (.054) | .035 (.022) | | | | |
| Minority Status | .032 (.059) | −.009 (.015) | | | | |
| Role | .060 (.041) | .003 (.015) | | | | |
| Age | −.010 (.016) | .007 (.006) | | | | |
| School characteristics | | | | | | |
| PBIS Intervention Status | .043 (.148) | .073 (.041) | .043 (.147) | .072 (.042) | −.038 (.198) | .119 (.039)** |
| Faculty Turnover | −.017 (.009)* | 0.0 (.003) | −.017 (.008)* | −.001 (.003) | −.017 (.009) | 0.0 (.003) |
| Student Mobility | −.002 (.009) | .001 (.002) | −.001 (.010) | .001 (.002) | −.002 (.009) | .001 (.002) |
| Free/Reduced Meals (%) | −.001 (.003) | −.001 (.001) | −.001 (.003) | −.001 (.001) | −.001 (.003) | −.001 (.001) |
| Enrollment | −.142 (.194) | .058 (.047) | −.145 (.206) | .068 (.053) | −.148 (.195) | .062 (.046) |
| SET Score | | | 0.0 (.004) | −.001 (.001) | .131 (.160) | −.075 (.033)* |
| SS BIC | 10941.882 | | 10941.882 | | 10947.969 | |
| RMSEA | .051 | | .050 | | .050 | |

PBIS Intervention Status was coded 0 (Comparison) and 1 (PBIS). Age was coded as an ordinal variable from 0 (age 20–29) to 4 (age 60 and over), minority status was coded 0 (Caucasian) and 1 (non-Caucasian), sex was coded 0 (female) and 1 (male), and role was coded 0 (general educator) and 1 (student support staff). Free/reduced meals indicates the percentage of students in the school receiving free or reduced-cost meals. Enrollment is the log of school enrollment, which was used in all multilevel analyses to facilitate interpretation of the estimates. SET School-wide Evaluation Tool. ᵃThe staff-level covariates were included in all three models, but the coefficients are reported only for Model 1, as they did not change considerably across models. SS BIC sample-size adjusted Bayesian information criterion. RMSEA root mean square error of approximation

* p ≤ .05; ** p ≤ .01

Table 4

Multilevel results indicating impact of PBIS on institutional integrity

Model 1 = intent to treat; Model 2 = continuous SET score measured at baseline; Model 3 = dichotomous SET score measured at the end of Year 1. Cells show Coef (SE).

| | Model 1: Intercept | Model 1: Slope | Model 2: Intercept | Model 2: Slope | Model 3: Intercept | Model 3: Slope |
|---|---|---|---|---|---|---|
| Staff characteristicsᵃ | | | | | | |
| Sex | −.009 (.060) | −.001 (.019) | | | | |
| Minority Status | −.031 (.035) | .015 (.016) | | | | |
| Role | .060 (.031) | −.009 (.010) | | | | |
| Age | .028 (.014) | .011 (.005)* | | | | |
| School characteristics | | | | | | |
| PBIS Intervention Status | −.010 (.101) | .049 (.036) | −.009 (.101) | .047 (.038) | −.049 (.079) | .068 (.033)* |
| Faculty Turnover | −.002 (.005) | −.002 (.002) | −.002 (.006) | −.002 (.002) | −.002 (.005) | −.002 (.002) |
| Student Mobility | .004 (.004) | −.003 (.002) | .005 (.004) | −.004 (.002)* | .004 (.004) | −.003 (.002) |
| Free/Reduced Meals (%) | .001 (.003) | .001 (.001) | .001 (.003) | .002 (.001) | .001 (.003) | .001 (.001) |
| Enrollment | −.259 (.104)* | .093 (.038)* | −.268 (.123)* | .104 (.041)* | −.262 (.103)* | .095 (.038)* |
| SET Score | | | 0.0 (.003) | 0.0 (.001) | .065 (.082) | −.032 (.027) |
| SS BIC | 9725.528 | | 9725.528 | | 9733.776 | |
| RMSEA | .020 | | .020 | | .019 | |

PBIS Intervention Status was coded 0 (Comparison) and 1 (PBIS). Age was coded as an ordinal variable from 0 (age 20–29) to 4 (age 60 and over), minority status was coded 0 (Caucasian) and 1 (non-Caucasian), sex was coded 0 (female) and 1 (male), and role was coded 0 (general educator) and 1 (student support staff). Free/reduced meals indicates the percentage of students in the school receiving free or reduced-cost meals. Enrollment is the log of school enrollment, which was used in all multilevel analyses to facilitate interpretation of the estimates. SET School-wide Evaluation Tool. ᵃThe staff-level covariates were included in all three models, but the coefficients are reported only for Model 1, as they did not change considerably across models. SS BIC sample-size adjusted Bayesian information criterion. RMSEA root mean square error of approximation

* p ≤ .05; ** p ≤ .01

Table 5

Multilevel results indicating impact of PBIS on resource influence

Model 1 = intent to treat; Model 2 = continuous SET score measured at baseline; Model 3 = dichotomous SET score measured at the end of Year 1. Cells show Coef (SE).

| | Model 1: Intercept | Model 1: Slope | Model 2: Intercept | Model 2: Slope | Model 3: Intercept | Model 3: Slope |
|---|---|---|---|---|---|---|
| Staff characteristicsᵃ | | | | | | |
| Sex | −.074 (.057) | .028 (.019) | | | | |
| Minority Status | .018 (.042) | −.008 (.011) | | | | |
| Role | .034 (.027) | .001 (.012) | | | | |
| Age | .011 (.014) | 0.0 (.004) | | | | |
| School characteristics | | | | | | |
| PBIS Intervention Status | −.048 (.116) | .066 (.031)* | −.049 (.115) | .067 (.030)* | −.074 (.134) | .084 (.028) |
| Faculty Turnover | −.004 (.008) | −.002 (.002) | −.004 (.008) | −.002 (.002) | −.004 (.008) | −.002 (.002) |
| Student Mobility | .001 (.008) | −.002 (.001) | .001 (.008) | −.002 (.001) | .001 (.008) | −.002 (.002) |
| Free/Reduced Meals (%) | −.006 (.003) | .001 (.001)* | −.006 (.003) | .001 (.001)* | −.006 (.003) | .001 (.001)* |
| Enrollment | .011 (.115) | .085 (.035)* | .023 (.162) | .081 (.038)* | .009 (.154) | .086 (.034)* |
| SET Score | | | −.001 (.003) | 0.0 (.001) | .042 (.129) | −.029 (.024) |
| SS BIC | 8982.412 | | 8991.454 | | 8990.596 | |
| RMSEA | .032 | | .032 | | .033 | |

PBIS Intervention Status was coded 0 (Comparison) and 1 (PBIS). Age was coded as an ordinal variable from 0 (age 20–29) to 4 (age 60 and over), minority status was coded 0 (Caucasian) and 1 (non-Caucasian), sex was coded 0 (female) and 1 (male), and role was coded 0 (general educator) and 1 (student support staff). Free/reduced meals indicates the percentage of students in the school receiving free or reduced-cost meals. Enrollment is the log of school enrollment, which was used in all multilevel analyses to facilitate interpretation of the estimates. SET School-wide Evaluation Tool. ᵃThe staff-level covariates were included in all three models, but the coefficients are reported only for Model 1, as they did not change considerably across models. SS BIC sample-size adjusted Bayesian information criterion. RMSEA root mean square error of approximation

* p ≤ .05; ** p ≤ .01

Table 6

Multilevel results indicating impact of PBIS on staff affiliation

Model 1 = intent to treat; Model 2 = continuous SET score measured at baseline; Model 3 = dichotomous SET score measured at the end of Year 1. Cells show Coef (SE).

| | Model 1: Intercept | Model 1: Slope | Model 2: Intercept | Model 2: Slope | Model 3: Intercept | Model 3: Slope |
|---|---|---|---|---|---|---|
| Staff characteristicsᵃ | | | | | | |
| Sex | −.099 (.053)* | .008 (.018) | | | | |
| Minority Status | −.106 (.047) | .009 (.013) | | | | |
| Role | −.035 (.021) | −.002 (.009) | | | | |
| Age | .043 (.014)** | .006 (.004) | | | | |
| School characteristics | | | | | | |
| PBIS Intervention Status | −.041 (.080) | .068 (.023)** | −.040 (.080) | .069 (.021)** | −.048 (.119) | .083 (.027)** |
| Faculty Turnover | −.018 (.006)** | .002 (.002) | −.018 (.006)** | .002 (.002) | −.018 (.006)** | .001 (.002) |
| Student Mobility | .001 (.005) | −.002 (.001) | .001 (.005) | −.002 (.001) | .001 (.005) | −.002 (.001) |
| Free/Reduced Meals (%) | −.004 (.002)* | 0.0 (.001) | −.004 (.002)* | 0.0 (0.0) | −.004 (.002)* | 0.0 (.001) |
| Enrollment | −.331 (.143)* | .074 (.034)* | −.330 (.146)* | .065 (.034) | −.332 (.144)* | .076 (.033)* |
| SET Score | | | 0.0 (.002) | .001 (0.0) | .012 (.109) | −.026 (.031) |
| SS BIC | 7637.094 | | 7642.688 | | 7644.813 | |
| RMSEA | .036 | | .035 | | .035 | |

PBIS Intervention Status was coded 0 (Comparison) and 1 (PBIS). Age was coded as an ordinal variable from 0 (age 20–29) to 4 (age 60 and over), minority status was coded 0 (Caucasian) and 1 (non-Caucasian), sex was coded 0 (female) and 1 (male), and role was coded 0 (general educator) and 1 (student support staff). Free/reduced meals indicates the percentage of students in the school receiving free or reduced-cost meals. Enrollment is the log of school enrollment, which was used in all multilevel analyses to facilitate interpretation of the estimates. SET School-wide Evaluation Tool. ᵃThe staff-level covariates were included in all three models, but the coefficients are reported only for Model 1, as they did not change considerably across models. SS BIC sample-size adjusted Bayesian information criterion. RMSEA root mean square error of approximation

* p ≤ .05; ** p ≤ .01

Table 7

Multilevel results indicating impact of PBIS on academic emphasis

Model 1 = intent to treat; Model 2 = continuous SET score measured at baseline; Model 3 = dichotomous SET score measured at the end of Year 1. Cells show Coef (SE).

| | Model 1: Intercept | Model 1: Slope | Model 2: Intercept | Model 2: Slope | Model 3: Intercept | Model 3: Slope |
|---|---|---|---|---|---|---|
| Staff characteristicsᵃ | | | | | | |
| Sex | −.181 (.049)** | .032 (.014)* | | | | |
| Minority Status | −.123 (.057)* | −.013 (.016) | | | | |
| Role | −.085 (.034)* | .014 (.010) | | | | |
| Age | .012 (.012) | .006 (.004) | | | | |
| School characteristics | | | | | | |
| PBIS Intervention Status | −.031 (.090) | .043 (.017)* | −.031 (.087) | .044 (.015)** | .016 (.092) | .030 (.019) |
| Faculty Turnover | −.014 (.005)* | −.001 (.001) | −.014 (.006)** | 0.0 (.001) | −.014 (.005)** | 0.0 (.001) |
| Student Mobility | −.012 (.007) | −.002 (.001) | −.012 (.007) | −.002 (.001) | −.012 (.007) | −.002 (.001) |
| Free/Reduced Meals (%) | −.004 (.002) | 0.0 (.001) | −.004 (.002) | 0.0 (0.0) | −.004 (.002) | 0.0 (.001) |
| Enrollment | −.264 (.152) | .070 (.021)** | −.248 (.150) | .061 (.020)** | −.259 (.152) | .068 (.021)** |
| SET Score | | | −.001 (.002) | .001 (0.0)* | −.077 (.082) | .022 (.011)* |
| SS BIC | 7607.107 | | 7611.537 | | 7614.561 | |
| RMSEA | .016 | | .018 | | .016 | |

PBIS Intervention Status was coded 0 (Comparison) and 1 (PBIS). Age was coded as an ordinal variable from 0 (age 20–29) to 4 (age 60 and over), minority status was coded 0 (Caucasian) and 1 (non-Caucasian), sex was coded 0 (female) and 1 (male), and role was coded 0 (general educator) and 1 (student support staff). Free/reduced meals indicates the percentage of students in the school receiving free or reduced-cost meals. Enrollment is the log of school enrollment, which was used in all multilevel analyses to facilitate interpretation of the estimates. SET School-wide Evaluation Tool. ᵃThe staff-level covariates were included in all three models, but the coefficients are reported only for Model 1, as they did not change considerably across models. SS BIC sample-size adjusted Bayesian information criterion. RMSEA root mean square error of approximation

* p ≤ .05; ** p ≤ .01

Finally, we examined the correlation between the intercept and slope for each of the models and found a significant negative association for resource influence (z = −4.10, p < .01), collegial leadership (z = −2.27, p < .01), overall OHI (z = −2.70, p < .01), and academic emphasis (z = −2.63, p < .01). These findings suggest that schools with lower baseline levels of these elements of organizational health tended to improve the most over the course of the study. The correlation between the slope and intercept was, however, non-significant for staff affiliation and institutional integrity.
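An intercept–slope association of this kind is read off the random-effects covariance matrix of the growth model: the covariance is rescaled by the two standard deviations to yield a correlation. A small sketch of that conversion, using hypothetical variance components (not the study's estimates):

```python
import math

def intercept_slope_corr(var_intercept, var_slope, cov_is):
    """Correlation between school-level random intercepts and slopes,
    computed from the variance components of a latent growth model:
    r = cov(intercept, slope) / (sd_intercept * sd_slope)."""
    return cov_is / math.sqrt(var_intercept * var_slope)

# Hypothetical variance components for illustration:
r = intercept_slope_corr(var_intercept=0.16, var_slope=0.01, cov_is=-0.02)
print(round(r, 2))  # prints -0.5
```

A negative value, as here, corresponds to the pattern reported above: schools starting lower tend to gain more.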

Exploration of Implementation Quality

Effect of training in PBIS on SET scores

We conducted preliminary analyses on the SET fidelity scores for schools randomized to the PBIS and Comparison conditions across each study year (see Fig. 1). A MANOVA indicated no significant differences between the PBIS and Comparison schools on the continuous SET scores at baseline (Wilks’ Λ = .795, F(8,20) = .645, p = .73). Analysis of the continuous SET data using a repeated-measures general linear model suggested a significant intervention effect (i.e., an intervention condition × time interaction) for the overall SET score (Wilks’ Λ = .38, F(4,32) = 13.36, p = .001, η2 = .63, d = 3.22). Using the 80% SET score threshold identified by Horner et al. (2004) as the criterion for high implementation quality, we found that none of the schools met this level at baseline, but 66.7% (n = 14) of the PBIS and 6.3% (n = 1) of the Comparison schools met the criterion at the end of Year 1, and 85.7% (n = 18) of the PBIS and 18.8% (n = 3) of the Comparison schools met it at the end of Year 2. Similarly, 95.2% (n = 20) of the PBIS and 6.3% (n = 1) of the Comparison schools did so by the end of Year 3, and 100% (n = 21) of the PBIS and 6.3% (n = 1) of the Comparison schools by the end of Year 4. Across all years of the trial, 100% of the PBIS and 31.25% (n = 5) of the Comparison schools ever met the 80% SET criterion.
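The fidelity classification described above reduces to applying the 80% overall-SET threshold each year and tallying the share of schools per condition that meet it. A minimal sketch (the function name and the scores are ours, invented for illustration):

```python
def high_fidelity_share(set_scores, threshold=80.0):
    """Fraction of schools whose overall SET score meets the
    high-implementation threshold (80%, per Horner et al., 2004)."""
    meets = [score >= threshold for score in set_scores]
    return sum(meets) / len(meets)

# Made-up Year 1 overall SET scores for six hypothetical schools:
pbis_year1 = [92, 85, 78, 88, 95, 81]
print(high_fidelity_share(pbis_year1))  # 5 of 6 meet the criterion
```

Repeating this per condition and per year yields the proportions reported in the text (e.g., 14 of 21 PBIS schools at the end of Year 1).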

Baseline SET as a predictor of growth in organizational health

We included the baseline (continuous) SET score in the multilevel models for each outcome (see Model 2 in Tables 2, 3, 4, 5, 6, 7) to explore whether naturally occurring levels of PBIS prior to training influenced growth in organizational health. The effect of baseline SET on growth in organizational health was significant only for the academic emphasis subscale (see Model 2 of Table 7); however, the effect on the slope was relatively small (coef = .001, p < .05). This suggests that, aside from the modest effect on academic emphasis, baseline levels of PBIS were not associated with significant improvements in organizational health over and above randomization to the intervention condition. Furthermore, the coefficient for the intervention effect (PBIS vs. Comparison) remained consistent and did not vary substantially once the baseline SET score was added to the model (i.e., comparing Models 1 and 2). We also explored whether the baseline SET score, with intervention status removed from the model, was associated with increased growth, and found a similar effect of the baseline SET score on academic emphasis (coef = .001, p < .05) but not on any of the other five outcomes. In addition, we did not observe a significant association between baseline SET scores and the OHI intercepts, which suggests that “naturally occurring” PBIS was unrelated to the schools’ organizational context.

Early versus late implementers of PBIS

Our final series of analyses focused on the SET scores at the end of Year 1 of the trial, to explore whether the effects achieved through PBIS varied as a function of the speed with which schools reached high-fidelity implementation. We added the dichotomous SET score (i.e., ≥80% vs. <80%) to the model for each outcome (see Model 3, Tables 2, 3, 4, 5, 6, 7). As noted above, one Comparison school and 14 of the PBIS-trained schools achieved this level by the end of Year 1. Including the end-of-Year-1 dichotomous SET score appeared to strengthen the effects on staff affiliation and overall OHI slightly, as compared with the intervention effects (coefficients) reported in Model 1. The intervention effects for collegial leadership and institutional integrity became significant in Model 3, when implementation fidelity at the end of Year 1 was considered, as compared with Model 1, which simply examined the intent-to-treat effect. Interestingly, the dichotomous SET variable was a significant negative predictor of growth in overall OHI (see Model 3 in Table 2) and collegial leadership (see Model 3 in Table 3), such that schools meeting the high-implementation criterion by the end of Year 1 (i.e., early implementers) tended to improve less than schools not reaching high implementation by the end of Year 1 (i.e., late implementers). In contrast, meeting the implementation criterion by the end of Year 1 was positively associated with growth in academic emphasis (see Model 3 in Table 7). We then reran these models with intervention status removed and found that the end-of-Year-1 dichotomous SET score was positively associated with the slope of academic emphasis (coef = .035, p < .001), but not with any of the other five OHI outcomes.

We further explored this trend by eliminating the non-trained schools from the analyses and observed that, among the 21 schools trained in PBIS, those that met the 80% SET criterion by the end of Year 1 (n = 14) tended to have a higher intercept (although not significantly so) than the schools that did not meet this criterion (n = 7); the effect for collegial leadership was, however, marginally significant (z = 1.732, coef = .244, p = .08). Furthermore, the PBIS schools that met the 80% criterion tended to improve less (i.e., had a flatter slope) than the PBIS schools that did not.
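The marginal effect reported here (z = 1.732, p = .08) is a standard two-sided Wald test of a coefficient against its standard error; as a check, the p-value can be recovered from the normal distribution using only the reported z statistic. A quick sketch (the function name is ours):

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value for a Wald z-test against zero,
    using the standard normal survival function via erfc."""
    return math.erfc(abs(z) / math.sqrt(2))

# z reported for collegial leadership in the stratified analysis:
print(round(two_sided_p_from_z(1.732), 2))  # prints 0.08
```

This reproduces the reported p = .08 and confirms the effect falls just short of the conventional .05 threshold.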

Finally, we explored the effect of ever achieving the 80% SET criterion across all years of the study (i.e., 5 of 16 Comparison schools and all 21 PBIS schools), and found that ever meeting the criterion was not significantly associated with growth in any aspect of organizational health when the intervention status variable was included in the model. However, when we removed intervention status from the model (essentially ignoring the randomization), we observed a significant effect of ever reaching the high-implementation level on the slope of overall OHI (coef = .046, p < .01), academic emphasis (coef = .042, p < .001), and staff affiliation (coef = .051, p < .01).

Discussion

The current study examined the impact of PBIS training on improvements in school climate, as measured by staff reports of the school’s organizational health. The intent-to-treat analyses indicated that PBIS training was associated with significant improvements in resource influence, staff affiliation, academic emphasis, and the overall OHI score. The effects for institutional integrity and collegial leadership became significant when the end of Year 1 dichotomous SET score was included as a covariate in the model. It is not clear why there were no main effects of PBIS on institutional integrity and collegial leadership; however, the PBIS model did not specifically target principals or their management style. It is possible that PBIS affected some aspects of principal leadership not assessed through the OHI (e.g., attributes of effective leadership directly related to the implementation of PBIS or other programs). Similarly, training in PBIS did not appear to influence institutional integrity, or the schools’ sensitivity to unreasonable community demands. Although community and parent involvement in PBIS is strongly encouraged, in the context of increasing accountability, school district and legislative requirements might have greater influence on school policies than do unreasonable community demands. Additional research is needed to clarify why the intervention effects on these outcomes only became significant when implementation quality was included in the analyses.

Our sensitivity analyses showed that the intervention effects on growth in several aspects of organizational health reached significance at the end of the third year of implementation, and were sustained through the end of the trial. Whereas the current analyses on the effects of PBIS through Year 4 indicated that the effect sizes for the growth in organizational health ranged from .21 for resource influence to .29 for overall OHI, our previous work on the OHI outcomes through the end of Year 3 indicated that the effect sizes were slightly higher (ranging from .24 for academic emphasis to .34 for resource influence) (Bradshaw et al. 2008a). This slight attenuation of the effect sizes at Year 4 as compared to Year 3 suggests that the impact of PBIS on school climate may peak around the third year, and level off or possibly decline slightly thereafter. As noted above, we explored the inclusion of a quadratic term in the current models; however, the linear model fit the data as well as the model with the quadratic term. It is possible that schools implementing universal PBIS may get initial benefits in school climate as a result of implementing the school-wide program, but may require secondary prevention efforts to meet the needs of children not responding adequately to the universal program (Sugai and Horner 2006) in order to continue to experience growth in organizational health in subsequent years. Additional research is needed beyond 4 years of implementation to determine the long-term growth trajectory of organizational health among schools implementing PBIS.

We analyzed the SET data to examine our second and third research questions related to the influence of implementation quality on improvements in school climate. Although schools across the two conditions did not differ in their SET scores at baseline, the PBIS-trained schools implemented PBIS with higher fidelity than the Comparison schools. As anticipated, all of the trained schools implemented PBIS with high fidelity within the 4 years of post-training data collection. Whereas the developers of PBIS anticipated that it would take schools 3 to 5 years to implement all critical elements of the model with high fidelity (Sugai and Horner 2006), our SET data indicated that two-thirds of the trained schools met this level within the first year, and all but one school had met it by the end of the third year of implementation. This suggests that the training procedures used within the state, and thus in the effectiveness trial, can bring about efficient, high-quality implementation of PBIS. It is important to note that the PBIS training and support efforts examined in the current study were coordinated, facilitated, and funded by the PBIS state team, not the researchers. The lack of a standardized PBIS training format, intensity, and duration across states and/or schools likely contributes to variations in both implementation quality and outcomes. Additional research is needed to determine the most efficient and effective training and support structure to maximize implementation quality and optimize outcomes (Domitrovich et al. 2008).

Although the high-quality implementation (eventually) achieved in all of the trained schools suggested that the intent-to-treat approach adequately captured the main effects of the intervention on the outcomes, we believed it would be potentially instructive to explore whether the inclusion of the SET data would attenuate the effects of the intervention. Surprisingly, we found that the baseline continuous SET score was not associated with either the intercept or growth in organizational health, aside from a significant (albeit very small) positive effect on the slope of academic emphasis (regardless of whether intervention status was included in the model). Furthermore, the inclusion of the baseline SET score had little impact on the intervention effects (see Model 2 of Tables 2–7). This finding suggests that baseline SET scores are not particularly informative in predicting growth in organizational health, nor were they associated with the intercept of organizational health. Furthermore, our prior research suggested that baseline SET scores were not predictive of the speed with which schools implemented PBIS with high fidelity (Bradshaw et al. 2008). It is possible that the presence of some elements of PBIS at baseline is not sufficient to improve organizational health. Those elements might be seen as disjointed components which, taken alone, have little effect on school climate; the confluence of the core elements of the model, implemented at a high level, may be necessary to produce a favorable school climate. Additional research is needed with larger, more diverse samples to determine the precise threshold of PBIS implementation associated with significant improvements in organizational health.

Surprisingly, we found that the end-of-Year-1 dichotomous SET score was negatively associated with growth in some aspects of organizational health (e.g., overall OHI); it attenuated some of the intervention effects (e.g., academic emphasis) and slightly strengthened the effects on other outcomes (i.e., collegial leadership and institutional integrity). Further inspection of the negative associations between the end-of-Year-1 80% SET criterion and growth in organizational health in the stratified analyses (n = 21 trained schools) suggested that the trained schools that implemented PBIS the fastest (i.e., early implementers) tended to be more organizationally healthy prior to receiving training, but tended to improve less than the trained schools that took longer to implement the program with fidelity. This inverse relationship, although not reaching statistical significance, is consistent with the significant negative correlations between the intercept and slope terms for several of the outcomes (e.g., overall OHI and academic emphasis) that emerged in the intent-to-treat analyses. The diminished power in the stratified analysis (n = 21 schools) likely limited our ability to detect a significant difference in slopes between the early and late implementers of PBIS (Murray 1998). Regardless, these trends suggest that schools that implemented the program faster tended to start off more organizationally healthy but benefitted slightly less, with regard to organizational health, than trained schools that took a few years longer to reach high implementation quality. The effects of PBIS on school climate may therefore be strongest in schools that are less organizationally healthy, and it may be efficient to target training efforts toward the schools that would benefit the most from the model. However, we again caution readers regarding this finding, as it is only marginally significant in the current sample of just 21 intervention schools.

Finally, we explored the effect of ever meeting the high-fidelity SET criterion, irrespective of formal training. We found that, after removing the intervention status variable, implementation fidelity was associated with improvements in some aspects of organizational health (i.e., overall OHI, academic emphasis, and staff affiliation); however, we cannot interpret the association between PBIS implementation and growth in organizational health as causal, since ever reaching the fidelity level was not randomly assigned. It is important to note that when we controlled for the (randomized) PBIS training, the effect of ever meeting the SET fidelity criterion on growth in all aspects of OHI became non-significant. We also considered other analytic approaches, such as complier average causal effect estimation, to explore the causal impact of training in PBIS while accounting for school-level treatment compliance and noncompliance (Jo 2002). However, we were limited by the relatively small number of schools enrolled in the trial and by the fact that all of the trained schools eventually complied with the treatment within the 4-year post-training period. Further research with this sample will explore the potential utility of these and other causal modeling approaches.

Limitations

The reliance on staff self-report measures may have influenced the pattern of findings, as many staff were likely knowledgeable about the purpose of the study and their school’s intervention status. Although the OHI survey materials were distributed to the staff at school in self-addressed survey packets, it is possible that school administrators or other school staff could have imposed demand characteristics that influenced the participants’ responses on the OHI. The overall response rate was relatively high for a study of this scale; however, it is possible that the staff who chose to participate in the study differed in some way from those who did not. The participating schools were diverse with regard to size, location (rural/suburban), and student characteristics. As noted above, analyses by Stuart and Leaf (2007) using matching procedures indicated that the schools in the trial did not differ significantly from other non-urban elementary schools in the state with regard to baseline school-level characteristics (e.g., achievement, school size). It was not feasible to randomly select schools from the state; therefore, we relied on voluntary schools, which could have differed from non-voluntary schools in the level of organizational health. Different effects might be observed in schools that are resistant to adopting PBIS.

Further work is also needed to examine the effect of PBIS on organizational factors in urban schools, as well as middle and high schools, as implementation of PBIS in these contexts might be associated with varying levels of implementation quality and/or rates of growth in organizational health. Although we controlled for covariates at the school- and staff-levels previously shown to be associated with organizational health (Bevans et al. 2007), additional research is needed to determine if these and other factors moderate the impact of PBIS on staff and student outcomes. The limited number of schools in the trial limits our power to examine interactions. The strong PBIS training and sustainability network present in Maryland is not available in all states (see Barrett et al. 2008); thus, it is unclear whether similar effects will occur in other contexts where the training and support network is less extensive.

The current study only examined the impact of training in the universal PBIS model. While some PBIS schools may have implemented additional supports or programs for students who did not respond to the universal model, these efforts were not systematic across schools. Furthermore, schools in the Comparison condition were also implementing other programs, which may have influenced the findings. In fact, prior work by Gottfredson and Gottfredson (2002) indicates that most schools are implementing multiple programs simultaneously, yet few prevention programs are implemented with high quality outside of research studies (also see Domitrovich et al. 2008). We attempted to monitor the number and type of different programs implemented in the 37 trial schools and found that the number of programs (other than PBIS) implemented in the schools tended to increase over the course of the trial. For example, the Comparison schools reported implementing an average of 1.3 programs to prevent behavior and social-emotional problems in the first year of the trial and 5.8 programs in the final year. Similarly, among the PBIS schools, an average of 1.5 additional prevention programs were implemented in the first year, whereas an average of 5.1 programs was implemented in the schools in the final year. The most common programs focused on character education/development, social-emotional/social skills, bullying prevention, drug prevention (e.g., D.A.R.E.), and conflict resolution/peer mediation; however, the implementation quality of these other programs is unknown. Anecdotal evidence suggests that the implementation quality of these programs varied considerably across schools.

General Conclusion

Taken together, the findings of the current study suggest that training in school-wide PBIS is associated with significant changes in the schools’ organizational context. Although there was some variation in implementation quality in both the PBIS and Comparison schools, analyses indicated that the effects tended to be greatest around 3 years post-training. Furthermore, the diffusion of PBIS in the Comparison schools did not translate into significant improvements in school climate. Schools with lower levels of organizational health at baseline appeared to benefit the most from the model. Given the association between organizational health and positive student and staff outcomes observed in previous studies (Hoy and Tarter 1997), we anticipate that improvements in the schools’ organizational health will prove to be an important mediator of the effect of PBIS on multiple outcomes. High-quality implementation of PBIS and enhancements in the schools’ organizational health may also increase the capacity of the staff and school environment to implement other preventive interventions for children not responding to the universal model (Aarons and Sawitzky 2006).

Acknowledgements

Support for this project comes from grants from the Centers for Disease Control and Prevention (R49/CCR318627, 1U49CE000728, K01CE001333-01) and the National Institute of Mental Health (1R01MH67948-1A and T32 MH19545-11). The authors are particularly grateful to Susan Barrett and Jerry Bloom of the Sheppard Pratt Health System, Milton McKenna and Andrea Alexander of the Maryland State Department of Education, and the other members of the PBIS Maryland State Leadership for their support of this project.

Copyright information

© Society for Prevention Research 2008

Authors and Affiliations

Catherine P. Bradshaw, Christine W. Koth, Leslie A. Thornton, Philip J. Leaf

Department of Mental Health, Johns Hopkins Center for the Prevention of Youth Violence, Johns Hopkins Bloomberg School of Public Health, Baltimore, USA