The research-to-practice gap is a pervasive reality for schools and limits the effectiveness of evidence-based practices (EBPs) for improving students’ social, emotional, behavioral, and academic outcomes (Sanetti & Collier-Meek, 2019). Multi-tiered Systems of Support (MTSS) is a widely used prevention framework that uses data to facilitate schools’ selection of appropriate EBPs aligned with student need (e.g., Tier 1 is universal programming delivered to all students), monitor students’ progress, and adjust service delivery based on that progress (Fuchs & Fuchs, 2006).

However, low-quality implementation attenuates students’ gains, making clear that schools need ways to be continuously responsive to faltering implementation. Access to data can be a powerful tool in this regard. School teams equipped with information about their school’s implementation context can direct resources toward enhancing aspects of that context (e.g., leadership behaviors, climate, citizenship behaviors) known to undergird high-quality implementation (Williams et al., 2020). However, data-driven decision-making often is constrained because school teams lack the specialized knowledge required to effectively collect, analyze, and interpret data (Kippers et al., 2018).

Human-centered design (HCD) is a useful approach to improve educators’ data-driven decision-making. HCD leverages the strengths, limitations, and experiences of the people who use a product (e.g., a data report) to inform redesign of that product to better meet users’ needs (Lyon et al., 2020a, 2020b). Including key stakeholders in the process of developing tools like data reports is crucial to the effective use of those tools to enhance implementation.

Data-Driven Decision-Making

MTSS offers a way to organize the selection and delivery of EBPs (Sugai & Horner, 2009) and includes four key elements: screening, progress monitoring, multiple tiers of support based on student need, and data-driven decision-making (Chard et al., 2008). Central to the success of MTSS is making timely, data-driven decisions to prevent and address social, emotional, behavioral, and academic struggles (McIntosh & Goodman, 2016). While student data are commonly used to monitor students’ progress and adapt service delivery (Buzhardt et al., 2020), data also can be used to inform and support implementers’ use of EBPs within an MTSS framework (Sanetti & Collier-Meek, 2015). The Consolidated Framework for Implementation Research (CFIR; Damschroder et al., 2009) is widely applied to understand EBP implementation and outlines a wide range of implementation determinants, including those at the level of the inner setting (e.g., school implementation climate). Key implementation determinants can be measured to provide formative feedback that school-based implementation teams can use to guide, adapt, and monitor EBP implementation. However, several barriers constrain schools’ ability to engage in the data-driven decision-making that would best support EBP implementation.

Barriers to Data-Driven Decision-Making

Many school teams make use of distributed leadership structures that gather and use data to inform efforts to improve aspects of the school organizational context in relation to implementation. However, there are currently no established procedures that provide teams with data on implementation-specific organizational factors as part of a structured decision-making and action planning process. In general, school teams need to have the right members (e.g., formal and informal leaders) to create distributed leadership and establish key norms (e.g., adopt solution-focused conversations, limit side-bar conversations, engage in respectful communication) that enable effective collaboration and productivity (Salas et al., 2018). These alone, however, are insufficient to promote a system-level assessment-to-action process that drives improvements in implementation-specific organizational factors (Rosenfield et al., 2018). School teams also need feasible, low-cost, yet effective supports that enable them to gather reliable and valid data on implementation-specific organizational factors and to use these data to inform decisions to develop, deploy, and monitor the impact of action plans (i.e., detailed plans regarding what, for whom and by whom, when, and to achieve what goal) tailored to context-specific needs in their school. However, it is difficult to set up systems that collect, analyze, and present data in a user-friendly way. This limits the extent to which data can be used to support the high-quality implementation necessary for EBPs to have their intended effect. Fortunately, HCD can help to identify aspects of this process that can be optimized to best meet the needs of school teams.

Human-Centered Design

Problematic design can undermine otherwise appealing and effective products (Karsh, 2004). HCD is dedicated to the development of products that are compelling, intuitive, and effective (International Organization for Standardization [ISO], 1999), and includes methods that ground the development process in information about the needs and desires of the people who will ultimately use the product (Courage & Baxter, 2005; Norman & Draper, 1986). HCD processes typically involve: understanding users, their needs, and their contexts/constraints; iteratively developing design solutions; and evaluating the extent to which those solutions meet identified needs (ISO, 1999; Lyon et al., 2020a, 2020b). These steps are intended to result in parsimonious and accessible designs that are usable by the broadest possible range of stakeholders. There has been a recent surge in the application of HCD principles and methods to the development and implementation of psychosocial interventions for use in applied clinical and community settings (Altman et al., 2018; Lyon et al., 2021, 2020a, 2020b). To ensure a product’s appropriateness, development should incorporate key processes, including iterative prototyping and frequent feedback from users in the context of ongoing usability testing (Rubin et al., 2008; Zhou, 2007).

Although they bring novel methods, HCD processes are highly compatible with existing implementation frameworks. In particular, multilevel implementation determinants such as those identified in the CFIR are useful for supporting early steps of the HCD process and surfacing information about users, needs, and contexts that can drive subsequent redesign efforts (Lyon et al., 2019). While innovation-level determinants (e.g., design quality) might lead directly to high-priority usability issues (Munson et al., 2022), individual, inner setting, and outer setting factors can yield important design constraints that design solutions should be able to address.

Appropriateness of products is crucial to high-quality implementation in schools. Once evaluations end, permanent products are left in the implementation setting (e.g., schools, classrooms) without research-related resources to support ongoing use. EBPs often are implemented without the collaboration of stakeholders in the setting in which the EBP will be used. Stakeholders are infrequently invited to partner in the implementation effort to offer input and feedback as to whether an EBP is usable, feasible, or appropriate for schools, which leads to low use and sustainment (Triplett et al., 2021). Local stakeholders are uniquely situated to provide feedback about aspects of the implementation context that facilitate or inhibit successful EBP implementation; they can offer insight into the leadership practices (e.g., proactively removing barriers to implementing an EBP), climate (e.g., recognition of staff expertise related to EBP use), and citizenship behaviors (e.g., helping others to ensure proper implementation of an EBP) known to support implementation (Lyon et al., 2018; Williams et al., 2020). As frontline implementers, local stakeholders also are uniquely situated to provide feedback on whether a product intended to support implementation efforts (e.g., an implementation context feedback report) is usable in the context of active EBP implementation.

Present Study

This study occurred as part of a larger federally funded project, conducted in the Northwestern and Midwestern United States, that examined the iterative adaptation and validation of measures capturing key constructs of the school organizational implementation context (Lyon et al., under review). Teachers were asked to complete a suite of measures that capture key organizational factors in a school that influence educators’ EBP adoption, delivery, and sustainment. The measures assessed implementation leadership, implementation climate, and implementation citizenship behavior (Lyon et al., 2022; Thayer et al., 2022). Data from the original study were aggregated at the school and district levels, and a feedback report was generated. The purpose of this qualitative case study was to gather stakeholder feedback from multiple perspectives (expert intermediaries, district administrators, and school principals) about the feedback report to inform the best ways to use data to drive MTSS implementation in public schools. This study serves as an example of how implementation scientists can gather formative feedback from implementation practitioners and other key stakeholders to strengthen various aspects of implementation processes, including data-driven implementation strategies, an area that remains a gap in the literature. We addressed the following research question: How does including school-based implementation practitioners’ feedback support the development of a meaningful and usable end product (e.g., a data report)?

Methods

Setting and Participants

The University of Washington institutional review board approved the study (Study No. 52311). Participants were purposively recruited in two ways. First, principals (n = 52) and district administrators (n = 11) of schools that participated in the larger study were eligible to participate. Second, principals and district administrators nominated other principals and district administrators. Eight expert intermediaries, defined here as individuals from organizations that disseminate information about and/or provide support for the adoption of specific EBPs (Proctor et al., 2019), were recruited from two prominent purveyor organizations for the selected EBPs (i.e., Positive Behavioral Interventions and Supports [PBIS] and Promoting Alternative Thinking Strategies [PATHS]). PBIS (Sugai & Horner, 2009) and PATHS (Kusché & Greenberg, 2005) were selected in the parent grant so that two different evidence-based practices could be used to demonstrate that the measures developed—the School Implementation Climate Scale (Thayer et al., 2022), School Implementation Leadership Scale (Lyon et al., 2022), and School Implementation Citizenship Behavior Scale (Ehrhart et al., 2015; Lyon et al., 2018)—were agnostic to the specific practice being implemented. Expert intermediaries were prioritized because they work to implement interventions in schools and districts across the United States, which gives them a unique perspective on the barriers to educators’ use of data to support effective EBP implementation. The data report generated for this study is intended for use by school leaders and district administrators (for a similar example, see Skar et al., 2022), which is why other school and district personnel were not recruited.

Potential participants were contacted via email or phone, invited to participate in a 60–90-min interview, and informed of the $100 gift card incentive. This recruitment process resulted in a total of 30 participants (n = 4 expert intermediaries, n = 8 district administrators, and n = 18 principals). All principals and four district administrators were from the parent project and represented four relatively large school districts in the Northwestern United States and two large school districts in the Midwestern United States. Four district administrators from three additional Northwestern districts were successfully recruited via nomination from other district administrators. Thirty-seven principal and district administrator participants from the parent study did not respond to this study’s invitation. See Table 1 for recruitment rates by participant type and district. All participants provided informed consent. Qualitative research prioritizes saturation of information over a specific number of respondents; however, there is growing consensus that 20–30 semi-structured interviews are needed to reach saturation, though the number could be as low as 9 for a homogeneous target population (Boddy, 2016; Hennink & Kaiser, 2022).

Table 1 Recruitment and participation rates by district and participant type

Principals and district administrators worked in districts that were socioeconomically (approximately 55.2% identified as low income) and racially/ethnically diverse (34.8% White; 23.7% Hispanic/Latinx; 18.9% Black/African American; 11.2% Asian; 8.1% Multiethnic; 1.4% Native Hawaiian/Pacific Islander; 1.0% American Indian or Alaskan Native). Just over half of the sample was female (n = 15, 51.7%), and the average age range was 35–44 years. Participants’ racial/ethnic backgrounds were: 82.8% White, 6.9% Asian, 3.4% African American, and 3.4% multiracial/multiethnic. Their highest educational attainment was: 31.0% doctoral degree, 62.1% master’s degree, and 3.4% bachelor’s degree. See Table 2.

Table 2 Demographics

Procedures

As described above, data from the original study were aggregated at the school and district levels to generate a feedback report. During the week prior to the interview, principals and district administrators were emailed a report with aggregated data for their individual school or district, respectively; expert intermediaries were sent a district-level report as an example. The report included: (1) mean ratings for each construct (school implementation leadership [SILS], school implementation climate [SICS], and school implementation citizenship behavior [S-ICBS]) for that school or district; and (2) benchmark ratings. Organizational theory (e.g., theory of organizational readiness for change; theory of strategic implementation leadership and implementation climate) suggests that these aspects of the implementation context should be similar among implementers and consistently enacted to meaningfully influence EBP implementation (Aarons et al., 2011; Weiner et al., 2011; Williams et al., 2020). When aspects of the implementation context are similarly shared among implementers, there is alignment in perceptions of the organization, which suggests a more positive and conducive implementation context. Empirical evidence shows organizational averages of these constructs range from 1.93 (Ehrhart et al., 2014) to 2.42 (Aarons et al., 2014), suggesting a benchmark would need to be set higher to ensure room for improvement (the scale range for these constructs is 0–4). As such, benchmarks were set to “3-To a Very Great Extent” for implementation leadership and climate and “3-Fairly Often” for S-ICBS (see Figs. 1–4 in the supplementary materials for an excerpt from a sample data report).
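To make the aggregation and benchmarking logic behind these reports concrete, the sketch below shows one way such a report could be assembled. It is a minimal illustration rather than the project’s actual pipeline: the table layout and column names (district, school, construct, rating) are hypothetical, and the benchmark of 3 on the 0–4 response scale follows the description above.

```python
# Minimal sketch (not the project's actual pipeline): aggregating teacher
# ratings into school- and district-level construct means and flagging
# scores that fall below a benchmark of 3 on the 0-4 response scale.
# Column names ("district", "school", "construct", "rating") are hypothetical.
import pandas as pd

BENCHMARK = 3.0  # "To a Very Great Extent" / "Fairly Often"

ratings = pd.DataFrame({
    "district":  ["D1", "D1", "D1", "D1", "D1", "D1"],
    "school":    ["S1", "S1", "S1", "S2", "S2", "S2"],
    "construct": ["SILS", "SICS", "S-ICBS", "SILS", "SICS", "S-ICBS"],
    "rating":    [3.2, 2.1, 2.8, 3.5, 3.1, 2.4],
})

# School-level mean for each construct
school_means = (ratings
                .groupby(["district", "school", "construct"])["rating"]
                .mean()
                .rename("school_mean")
                .reset_index())

# District-level mean for each construct
district_means = (ratings
                  .groupby(["district", "construct"])["rating"]
                  .mean()
                  .rename("district_mean")
                  .reset_index())

# Merge and flag construct scores falling below the benchmark
report = school_means.merge(district_means, on=["district", "construct"])
report["below_benchmark"] = report["school_mean"] < BENCHMARK
print(report)
```

In practice, item responses would typically first be averaged into subscale and construct scores for each respondent before being aggregated across respondents within a school or district.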

It was infeasible to have all respondents provide feedback on each of the 18 subscales included across the SILS (n = 7), SICS (n = 7), and S-ICBS (n = 4) (see Table 3 for a description of each subscale). Therefore, to ensure adequate information was gathered for each subscale while not compromising data integrity due to interviewee fatigue, participants were randomly assigned subscales to review for each construct. Participants also were allowed to select subscales that seemed important to them based on the data. Selections were well distributed, with at least two participants opting to focus their planning efforts on each available subscale, which ensured that every subscale was reviewed and discussed. Individual semi-structured interviews were held with each stakeholder type to allow for comparisons across groups.
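As a simple illustration of this kind of random subscale assignment, the hedged sketch below draws a few subscales from each measure for a given interviewee. The number of subscales assigned per measure and the placeholder subscale names are hypothetical; the study’s actual assignment procedure is described only at the level of detail given above.

```python
# Hypothetical sketch of random subscale assignment (not the study's actual
# procedure): each participant reviews a random subset of subscales from each
# measure so that, across participants, all 18 subscales receive feedback.
import random

# Placeholder subscale names; see Table 3 for the real subscale descriptions.
SUBSCALES = {
    "SILS":   [f"SILS subscale {i}" for i in range(1, 8)],    # 7 subscales
    "SICS":   [f"SICS subscale {i}" for i in range(1, 8)],    # 7 subscales
    "S-ICBS": [f"S-ICBS subscale {i}" for i in range(1, 5)],  # 4 subscales
}

def assign_subscales(per_measure=2, seed=None):
    """Randomly select `per_measure` subscales from each measure for one participant."""
    rng = random.Random(seed)
    return {measure: rng.sample(names, per_measure)
            for measure, names in SUBSCALES.items()}

# Example: assignment for one interviewee (two subscales per measure is illustrative).
print(assign_subscales(per_measure=2, seed=42))
```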

Table 3 Description of School Implementation Leadership, Climate, and Implementation Citizenship Behavior subscales

We used a systematic and comprehensive interview guide to capture feedback on the data report. The interview guide reviewed definitions of the organizational factors, checked for comprehension, and followed a think-aloud protocol (Benbunan-Fich, 2001) in which participants told the interviewer what they were thinking as they processed the report. Interview questions elicited open-ended responses about: (1) interpretation of the data (e.g., “On the report, please review one graph under implementation climate by describing what you are looking at and how you are interpreting what you are seeing.”); (2) accuracy of the report (e.g., “How accurately does this report reflect implementation leadership in your school?”); and (3) use of the report (e.g., “Please describe some strategies you might use to address this issue.”). Three female research study coordinators with BA or higher degrees and at least one year of qualitative research experience conducted all interviews via videoconference. Interviewers had no prior interactions with participants, and only the interviewer and the interviewee were present. At the start of the interview, interviewers explained the purpose of the research study. All interviews were audio recorded and lasted approximately 60 min.

Data Analysis

Interviews were transcribed and uploaded to NVivo QSR 12 for data management. The coding scheme was developed through a rigorous, systematic, transparent, and iterative approach with the following steps. First, four members of the research team independently coded two initial transcripts to identify recurring codes. Second, they met as a group to discuss recurring codes and developed a codebook using an integrated approach: certain codes were conceptualized during interview guide development (i.e., a deductive approach), and other codes were developed through a close reading of the two transcripts (i.e., an inductive approach; Bradley et al., 2007; Neale, 2016). Next, the research team met to discuss common codes interpreted from the transcripts to include in the final codebook. Then, operational definitions of each code were documented, along with examples of when to use and when not to use each code. See Table 4.

Table 4 Definitions of Codes

The coding scheme was applied to the data to produce a descriptive analysis of each code and was refined throughout the data analytic process (Bradley et al., 2007). Two members of the research team coded all data and overlapped on 20% of randomly selected transcripts to determine inter-rater reliability. They met weekly to discuss, clarify, verify, and compare emerging codes to ensure consensus. Agreement between raters was excellent; percent agreement, calculated as the proportion of coded units on which the two independent judges agreed, was 92.1% for expert intermediaries, 94.7% for district administrators, and 96.3% for principals. Data saturation was reached at the point at which no new insights were obtained and no new themes were identified when the codebook was applied across the text segments (Guest et al., 2006; Saunders et al., 2018).
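For readers unfamiliar with the metric, the short sketch below shows how percent agreement between two coders can be computed. It is a generic, hypothetical helper rather than the study’s analysis code; the code labels and segment counts are illustrative only.

```python
# Hypothetical helper (not the study's analysis code): percent agreement is the
# share of coded units on which two independent coders applied the same code.
from typing import Sequence

def percent_agreement(coder_a: Sequence[str], coder_b: Sequence[str]) -> float:
    """Return the percentage of units coded identically by both coders."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must have coded the same units.")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Illustrative data only: codes applied to ten transcript segments.
coder_a = ["interpretation", "reaction", "use", "use", "application",
           "reaction", "interpretation", "use", "application", "reaction"]
coder_b = ["interpretation", "reaction", "use", "application", "application",
           "reaction", "interpretation", "use", "application", "reaction"]

print(f"Percent agreement: {percent_agreement(coder_a, coder_b):.1f}%")  # 90.0%
```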

Results

Interpretation/Description of the Report/Feedback

Many participants felt the data report was useful, well organized, straightforward, and easily digestible, with a visually appealing layout. The majority preferred the consistent use of color and formatting (spacing, font size) on all graphs as well as consistent presentation of data (e.g., scales, number of respondents, median vs. mean reporting). One principal noted, “the line graphs clearly show where the median is and where my school compares to others—which is very easy to decipher.” Participants also noted legends were helpful. Highlights of the data reports included the discussion/interpretation blurbs accompanying the data. An expert participant commented, “…above the bar graph there’s the reminder, ‘Implementation citizen behavior scale assesses…’ So, [I know] what am I looking at. I think that the keys and the summaries under the item graph is helpful.” And a district administrator noted:

The plot shows how the example school’s leader is perceived to engage in these behaviors.’ So that’s cool, I like that. I felt like I wanted that above some of my tables before. I’m actually looking at those to see if I missed it, because I really like that, it reminds me about what the components were. I like that there’s no acronyms, but everything’s completely written out. It’s really helpful.

Participants highlighted the visual representation of individual schools in relation to other schools in their district and the benchmark ratings in terms of “areas of growth for the school” as indicated by “handy little messages down there that tell me—‘zero percent of the respondents answered favorably.’ So, I need to pay attention to this.” One expert participant noted that the visual representation allowed for easy comparisons between the school and the benchmark ratings:

[It’s] easy to compare the schools to one another within the district, I’d be really interested in what this looks like over time because this is a single snapshot…it’s really easy to look at that school and say oh, my gosh, we got to do something for them. And it’s easy to look and say overall, we’re doing pretty well in this area, and maybe room to grow in other ones.

Participants, especially principals, also explicitly noted the readability of each graph, as the layout and distribution allowed for a “very quick skim[ming] across to draw some conclusions from, whereas a lot of times when you have a lot of data that’s compiled into one report it can be a little bit overwhelming.” Other participants noted that it was important to receive “legitimate staff feedback” on EBP implementation and school practices. One principal said, “It’s a great way to kind of assess where we’re at.”

Interestingly, participants noted a number of points of confusion related to each of the constructs and recommended that clear definitions of the constructs be included and repeated in all relevant sections. One principal suggested including specific school examples underneath each item. Many participants, regardless of role, requested modifications to the stylistic visualization of the data, including color choice (avoid yellow and colors with a traditionally negative connotation, such as “red,” which can be associated with “incorrect/wrong”) and font size. District participants also noted that data points on individual graphs should correspond to specific school sites to aid in data interpretation and application. One district administrator commented, “I’m struggling because I would like to know what schools they are, so if you could ideally put schools, that would give me a little bit more context…” Many participants also preferred more detailed information with specific recommendations to address areas of improvement. One expert participant commented,

…I would love to see the district averages for those because for this school that I’m looking at, it looks like ‘Rewards’ is incredibly low…I’d want to know if that’s consistent across the district or is this a particular area that we want to focus on.

Participants recommended that: (1) important information (e.g., strengths and areas of improvement) be highlighted to draw the reader’s attention; (2) the number of graphs be limited to maintain the reader’s attention; (3) information at the school and district level be presented in a readily digestible form; and (4) specific, actionable strategies be provided to address the constructs moving forward. With regard to the latter, participants noted that specific strategies that may have worked for other, higher-scoring schools were of interest.

Application of Data to District/School in District

Participants’ descriptions of the application of the data to the overall district and/or individual schools varied. Some participants noted that the constructs represented may be inappropriate in some domains. For example, the data highlighted the disconnect between staff EBP implementation and rewards (e.g., promotion and other incentives). A number of participants commented on how schools were not able to provide time off as a reward or other traditional incentives such as bonuses and raises. A district administrator commented,

I definitely would throw out, after looking at “Rewards,” and going back to the definition of what rewards is, the use of financial incentives, bonus and raises. This isn’t even applicable really to public schools because we don’t do bonuses and raises associated with the use of anything. There could be some comp time, potentially associated with some of the work, but that varies so much by building. So, I’m not going to focus as much [on Rewards], even though that’s the lowest [score].

Principals also commented on the reason for low “Rewards” scores:

I look at “Rewards” as being so low, and I’d want to look into what’s the definition of “Rewards” to make sure, are we really that bad? ‘Our school provides perks, incentives, coffee cards [to teachers/staff] who use evidence-based practices.’ I would refuse to do that ever. We’re not allowed to gift public funds, and we are a taxpayer institution, so the only time they get perks or incentives are if our community organizations donate.

Another expert participant highlighted that schools may include some personnel who use the data and others who do not, which may be a challenge for applicability. Several participants observed that the data were positive, as illustrated here:

As I was going through the list here I see that “Advocacy” is an area that I think there’s some good positives across the board-- moderate to great and higher. And so that shows that we can build from the fact that people here do want kids to be successful, that there’s a strong sense of advocacy which is why they’re working in a Title I school with high mobility and high free and reduced lunch. So, that’s a thing that brings people to work in this particular environment.

Other participants illuminated areas that they were not aware were an issue (e.g., schedules need to be amended so teachers have more time for collaboration and training, communication between the district and school buildings with regard to EBP implementation). One principal remarked,

So, ‘this school uses professional development time to support staff to use evidence-based practices over time.’ My first thought is I’ve done nothing but use evidence-based practices in both academics and behavior, so those numbers make me go, “what!?” Yeah. Every single thing that we do, it comes out of research, or I don’t do it. So that means that the respondent lens—I’m not communicating effectively enough that it’s all research-based. That’s what that tells me.

Across all levels, participants provided explanations for low scores. These included a heavy workload, school and district size, and communication barriers between district administrators and principals as well as between principals and staff. Principals reflected on data accuracy and how the data reports met their expectations. A few principals were surprised by their data, citing existing systems they thought were in place that would apply to items on each scale. For example, one principal commented:

The school connects implementation of EBP to teacher’s school performance evaluations, 30 percent say “not at all”. Only 40 percent say to a “great extent”. And then another 30 are in the yellow. So, on this graph, there’s red now for “not at all”, which I’m actually kind of surprised at those numbers, based on what I know our evaluation tool is. So, I’m really surprised at the red and the “slight extent”. I mean, we’re evaluating teachers, and knowing that best practices are part of their evaluation.

Overall, principals felt that the data report provided evidence that current internal programs, changes, and growth plans were working and that these should continue in subsequent years. Some principals said that the report gave them information on how to change for the next year, including highlighting lower-scoring areas that needed improvement and building on higher-scoring items. For example, one principal commented:

It makes me smile inwardly, the difference on the last graph—‘teachers/school staff here are proponents of evidence-based practices’ and ‘very great’ slanted all the way over from ‘moderate’ all the way to ‘very great extent’ to the implementations on the far left side—‘the teachers and school staff advocate for EBP implementations in their interactions with other staff.’ So, I’m doing it, but I’m not sure if my colleagues are doing it, and we don’t talk about it, so we can’t tell if we’re doing it or not. Just kind of calls to the work that we’re doing as this being my second year as principal at [school] around the culture of collaboration and just general culture of—culture and climate of the staff.

Reactions to the Report

Participants reacted to the data report in both positive and negative ways. Some participants were glad that schools were at or around the benchmark for each construct and were encouraged by strong displays of leadership in some schools. One principal commented, “Regardless of whatever the baseline was or who set it, I notice, oh, good, my school is above and below the mean of the other schools.” A number of principals reported that they were motivated to “jump off items for improvements (e.g., team effort, growth, and communication) moving forward.” Others noted the strong ratings of principals and attributed those ratings to principals’ approachability/collaborative problem solving and buy-in of EBPs and various school initiatives.

Some participants had negative reactions to the data report, primarily concerning low scores, interpretation of the data, and the need for growth in areas that were scored less favorably. One expert participant noted the disconnect between data collection and data dissemination to teachers, as seen here:

This one is better than we see in a lot of schools, because a lot of schools collect data, but a lot of them don’t know how to use it. So, the fact that a high percentage already are seen as ‘moderate or above,’ I think is pretty good. We need to know, are there tools that they need to do better decision-making with the data? And then, ‘this school collects data about how well the EBP is being implemented.’ 12.5 percent say, ‘very great.’ So, my guess is they’re doing it, but most people don’t know about it, so how is it they can let people know about it. And then, ‘this school provides data-driven feedback to all staff about their delivery.’ So, with this one, it’s more a negative than a positive. But data-driven feedback—it could be that staff don’t know if the feedback they’re getting is data-driven. So maybe when they’re presenting more of their results, they need to talk about how those are come up with, because some people say to a “great extent.”

Some district administrators felt concerned and disappointed that certain schools fell below the benchmark, that teachers/staff reported low confidence in and support from leadership, and that EBP use was low in some schools. One district administrator commented,

Wow. This is bad. This means that it’s the perception of the people who are supposed to be implementing the behavior-based practices—and no one believes that they’re getting support or recognition for it. They’re not using data to support it. And it doesn’t look like it’s being integrated with any kind of fidelity, because it’s well-below the benchmark.

Many principals were shocked, hurt, disappointed, and concerned because of low scores on the report and/or the mismatch between their ratings and those of comparable schools in their district. One principal noted,

The school continues to improve in effort… I am pleased that 63 percent say ‘to a great extent’ that we have continuous improvement efforts. So, that’s good. I’m glad that they recognize that. Concerning though, that there’s quite a larger red section in the next one, ‘school connects implementation to teacher school staff performance evaluation.’

Use of Reports

Participants noted that the data reports were helpful in identifying strengths and areas of improvement for individual schools. One principal commented that it was helpful to see the implementation climate scores, as they “were a little more on the lower side,” suggesting a more significant area of focus for future work. The data reports also spurred thinking around initiatives that could be implemented district-wide. For example, one district administrator commented:

I think in order for adults to change behavior, they need time and support to do that. And I think a little bit of recognition goes a long way. So, I would start and focus on those because that might also bring up the delivery of evidence-based practices, which could improve data or the use of data. That people see the benefit of what they’re doing and people notice and recognize the hard work it takes.

Other participants said the data reports would help generate action plans for further data collection, monitoring, and analysis. For example, one principal noted,

But I think “Use of Data” would probably be the area I would do specific work. I mean, all of it needs to be improved, to be honest. But I think the data piece is central to some of the other things—like the recognition, the rewards and even communicating things that come up in other areas. I think the data piece is central for that to be effective.

Participants also recommended that the data reports could be used to improve communication: among school staff to increase buy-in around EBP use; between staff and the community as a means to disseminate information; and across the district to inform schools about the status of other schools, which could be an opportunity to learn and grow from higher-performing schools. In addition, the data reports could be a useful tool for discussion among individual school teams. One principal said, “I will share [these data] with the PBIS leadership team…and then ask them to tell me what we’re going to do.” Lastly, participants said the data reports would be helpful for identifying what individual school staff want and need for successful EBP implementation. The data reports also generated strategies for areas of improvement, including communication, proactiveness, and availability. Some strategies included coordinating changes with the district office; sharing data and emphasizing outcomes with all staff; involving the greater community (families, students); and utilizing existing supports. One district administrator noted where they would start based on these data:

If I go back up to the top, looking at other districts surveyed and where we’re at, even with the highest, I do think it starts with leadership. So again, if conditions aren’t right—even though again that’s the highest, I don’t think we can just go to the teachers and say okay, we need to work on this without actually doing some very strategic work. Leadership, both at the school level and also at the district level, and how we’re aligning our supports and services –that’s probably where I would want to start.

Discussion

Participants provided a rich description of the ways in which educators interpret and process research data as well as concrete recommendations to make data reports more usable and meaningful for supporting EBP implementation in public schools. The results of this qualitative case study: (1) point to the importance of incorporating stakeholder feedback as a methodology to ensure the end product (e.g., a data report) is meaningful and applicable to the setting; and (2) have direct implications for how to incorporate stakeholder feedback to help shape and improve data visualization and interpretation for better use in schools’ decision-making processes to support MTSS and other EBP implementation. The most practical implication of these results is the well-defined list of recommendations that educators offered to support the utility and use of data in schools (see Table 5).

Table 5 Stakeholder recommendations by qualitative code

The US Department of Education (2008) reports that how confident educators feel about their knowledge and skills in data analysis and interpretation affects data collection and the prevalence of data-informed decision-making in public schools. Stakeholders highlighted many elements that could potentially ease the burden of data interpretation and use. For example, stylistic elements such as font choice and size and the color of graphical representations helped ease educators’ review and processing of the data. In addition, stakeholders recommended that data reports be simple, with limited information (e.g., a few visualizations or graphs/images) that draws the reader to what to focus on (e.g., strengths and areas of improvement), and be clear and interpretable (e.g., all graphical representations should include defined scales, legends, and a brief description of how to disentangle school-specific vs. district-level data). Stakeholders also highlighted the importance of data reports clearly providing definitions of the constructs (e.g., implementation leadership, implementation climate, and citizenship), school-specific examples to illustrate those constructs, and recommendations for how to bolster areas of improvement. These stakeholder-generated suggestions for redesign also reflect the advantages of including stakeholders in the development of tools they will be expected to use. Guided by principles of HCD (Lyon et al., 2021), we now have information to improve the design of these reports in ways that are likely to promote principals’, administrators’, and districts’ use of them to guide data-driven decision-making in support of high-quality EBP implementation.

Limitations and Future Directions

First, participants were from six school districts in the Northwestern and Midwestern United States, which limits the geographic generalizability of the findings. Future studies should include a more representative sample, given that school systems may vary in how they use data to support EBP implementation in their local contexts. Second, the sample represented a subset of the principals and administrators who participated in the original study. It is possible that those who agreed to participate in the present study differed from those who did not in ways that artificially constrained the information provided (e.g., those with more familiarity with data may have been more motivated to participate). Moreover, while this study included multiple stakeholder groups involved with EBP implementation, it did not examine the perspectives of teachers, who also play an influential role. Teachers often are asked to lead EBP implementation efforts in their classrooms, and their perspectives on the use of data would be helpful in ensuring accurate interpretation and application of data to drive implementation and sustainment. Lastly, while we did incorporate stakeholder feedback, there was no member checking of results, which can strengthen the accuracy and credibility of qualitative data.

Future research should examine the utility and use of data in schools to inform implementation efforts. There may be a practical and empirical need for a school-level, assessment-to-action organizational implementation strategy or system that informs educators’ data-driven decisions and actions within a continuous improvement process. A functional and scalable process that capitalizes on school-wide collaboration and commitment to support EBP implementation may foster a school organizational context more conducive to high-quality implementation and better outcomes for students. Given the importance and underutilization of data-driven decision-making in schools, an organizational implementation strategy built around a repeated quality improvement cycle, one that generates site-specific action plans with implementation strategies tailored to cultivate and maintain a supportive organizational implementation context, seems worthy of future exploration and research.

Conclusion

This study illustrates the importance of gathering and organizing stakeholder input on structured data reports to support the interpretation and application of data in driving MTSS implementation efforts in public schools. The results have important implications for the design and application of data reports to increase the use of research data in supporting educators’ delivery of MTSS, and potentially other EBPs, in public school settings. These findings demonstrate that frontline implementers can interpret and use data about their school or district’s implementation context to identify ways to improve it. While promising, these findings also point to improvements that can make the data more visually appealing and easier to understand, increasing the likelihood that the reports will be helpful to implementers. Such insights underscore the importance of inviting feedback from those who will need to digest and act upon data reports to support MTSS and other EBP implementation in schools.