Introduction

Although the evidence base regarding the outcomes of Schools as Community Hubs (SaCH) in Australia is still limited, there is growing research on the factors necessary for success (Maier et al., 2018). An example of this is the Building Connections ‘How to Hub Australia’ framework, which lists 12 important factors to consider in the development and implementation of SaCH (Cleveland et al., 2022). ‘Evaluation and evidence’ is identified as one of the important factors in this framework, as well as by others working in the field (Carpenter et al., 2011; Department of Education and Child Development [SA], 2017). However, this extends beyond evaluation alone to the need for SaCH to be learning organisations that continually reflect, adapt, and progress, with an organisation, leaders, and staff that think and act evaluatively, as detailed by Clinton et al. (2023) in this edited book. While the evidence is currently limited, evaluative thinking (ET) is starting to emerge as a key factor that assists organisations such as schools and SaCH to be learning organisations, prioritising evaluation and evidence-based decision making (Kuji-Shikatani et al., 2015; Malloy et al., 2016). The focus of this chapter, therefore, is to synthesise the current literature on ET and apply it to the context of SaCH.

Evaluative Thinking

ET is an area of increasing interest in the evaluation literature, as it is theorised to be a crucial factor in the successful implementation and achievement of intended outcomes for initiatives and organisations (Earl & Timperley, 2015; Lu et al., 2019). Due to its emerging nature, the empirical research base is small but growing. Existing studies suggest that ET explains some of the positive impacts of programs and initiatives (Clinton, 2014; Grinó et al., 2014; Wyatt, 2017), demonstrating that this area is worthy of increased research focus.

What is Evaluative Thinking?

ET, which has been described as “a habit of mind, motivated by a never-satiated desire for evidence” (Buckley, n.d., para. 2), is the set of skills and mindsets necessary for a person or organisation to engage in and realise the benefits of evaluation (Buckley et al., 2015; Earl & Timperley, 2015; Grinó et al., 2014). It is closely linked to critical thinking as well as reflective practices. Associated behaviours and skills include data collection and analysis, systematic questioning, problem-solving, reflecting, and making evidence-based decisions (Fierro et al., 2018; Vo, 2013). A belief in the value of evaluation and evidence, inquisitiveness, a willingness to test assumptions, and being open to change are some of the mindsets and attitudes associated with ET (Archibald et al., 2011; Vo et al., 2018).

The debate in the literature regarding exactly what ET entails is ongoing, and there is not yet one widely accepted definition (McIntosh et al., 2020; Patton, 2018). However, one definition cited by a growing number of authors (see King, 2020; Lu et al., 2019; McFadden & Williams, 2020) is that of Buckley et al. (2015, p. 378), which states that ET is:

Critical thinking applied in the context of evaluation, motivated by an attitude of inquisitiveness and belief in the value of evidence, pursuing deeper understanding through reflection and perspective-taking, and informing decisions in preparation for action.

Specifically, we can argue that evaluative thinkers in education demonstrate behaviours and skills such as setting clear goals, collecting and analysing data, adapting based on evidence, reflecting and seeking feedback, and making evidence-informed decisions (Clinton, 2021).

Why is Evaluative Thinking Important?

ET is increasingly acknowledged as a crucial factor in developing an organisation's evaluative culture (Fierro et al., 2018; McIntosh et al., 2020). An evaluative culture is linked to an organisation's capacity to conduct and use evaluation, which in turn supports higher quality implementation of initiatives and interventions, leading to improved outcomes. This is especially relevant for education initiatives that are innovative or are adapted to local contexts. Earl and Timperley (2015) suggest that traditional evaluation activities are often difficult and less productive in these situations, due to regular revisions of the initiative design, the implementation, and the intended outcomes.

Several projects in NGOs and community-based organisations have, however, shown that ET can be developed within programs and initiatives, and can positively impact implementation (Baker et al., 2006; Lu et al., 2019). A study of ET development in international NGOs found that, although “embracing ET required a shift in practices and investment of time, human resources, and money, the benefits they gained from it justified the costs” (Grinó et al., 2014, p. 60). In one of the NGOs in the study, implementing ET approaches, which included closer reviews of program data, led to the realisation that an intervention they had believed to be successful was in fact not, and may even have been producing adverse outcomes (Grinó et al., 2014).

An investigation of the effect of evaluation engagement on the outcomes of public health interventions found that evaluation can provide “reasonably unique contributions to the overall program outcomes” (Clinton, 2014, p. 1). Evaluation use therefore plays a vital role in initiatives and programs achieving their intended outcomes, and this furthers the argument that organisations should be motivated to think evaluatively and engage in evaluation (Buckley et al., 2015; Hattie & Smith, 2021). However, US-based research shows that only about 20% of evaluations conducted in community-based organisations are performed by professional evaluators (Janzen et al., 2017). This indicates that most evaluation work is completed by internal evaluators and non-evaluation staff, who often have no formal evaluation qualifications and limited evaluation skills and experience. Developing a culture of ET in these organisations would therefore increase the effectiveness and value of the evaluation work they are already doing.

Developing ET can be challenging, especially when strong organisational cultures that are distrustful of evaluation must be overcome. Lu et al. (2019) recently identified facilitators of and barriers to developing ET in NGOs. This work reinforces the idea that ET is more than merely doing evaluation and that engaging in ET needs to be intentional. Potential facilitators include transparency, structured reporting processes, a desire for measurement, and learning how to improve outcomes. Potential barriers include limited funding, overburdened staff, and a lack of strategic planning (Lu et al., 2019).

The Centre for Educational Statistics and Evaluation (CESE), in the NSW Department of Education, has a focus on developing the ET of teachers and school leaders to improve school quality, and therefore the outcomes of students (CESE, 2015). This is done in several ways, including providing resources on its website, running professional development for school leaders, offering coaching by experienced evaluators, and incorporating the building of ET mindsets and skills into system-wide improvement strategies. One successful CESE initiative found that teachers’ ET capacity could be built when they were supported by experienced instructional leaders, provided with the necessary tools, and given both professional learning and the time and opportunity to put it into practice (Wyatt, 2017).

Therefore, research shows that ET is potentially a critical factor in the success of programs and initiatives and should be considered when developing interventions, including those involving schools. It is also a skill that can be developed by school leaders and staff and within community-based programs. However, there is currently little documented evidence of ET being explicitly considered in the design or practice of initiatives and programs, especially in community-focused organisations or schools.

Evaluative Thinking in Schools as Community Hubs

Education is the primary field in which the modern discipline of program evaluation developed and expanded (Hogan, 2007; Madaus et al., 1983), and explicit discussion of the potential impact of evaluation and ET in schools, and on student outcomes, is beginning to appear in the literature (Cheng & King, 2017; Clinton, 2021). However, there is still limited understanding of the extent of evaluation use and ET in schools, and of how these affect program implementation and outcomes, especially for innovative programs such as SaCH (Earl & Timperley, 2015).

Evaluation practice and ET have been identified as essential to school improvement, with many of the largest effect sizes for improving teacher practice related to evaluation and ET (Clinton et al., 2015; Hattie & Smith, 2021). Evaluative practices are likely to be even more important for SaCH, given their aim of addressing ‘wicked’ problems through a Collective Impact approach (Fry, 2019; Smart, 2017). Evaluation and ET are necessary for SaCH because of the complexity of the implementation process: implementation is never complete in a school hub, as it must continuously adapt to data and evidence and to the changing needs of users (Clinton et al., 2023). In the design and early implementation phases, decisions must be made about what is most appropriate for the hub, based on the local context. Once implementation commences, data needs to be collected regularly so that the outcomes of those decisions can be investigated, providing an evidence base from which informed decisions can be made and implementation adapted as necessary. Because this cycle never ends, ongoing data collection, monitoring and evaluation are required to ensure continued effectiveness and that changing contexts are noticed and acted upon (Clinton et al., 2023).

Existing Research on Evaluative Thinking and Schools as Community Hubs

Although ET appears to be an important factor in the successful implementation and achievement of outcomes in community-based programs, it is under-researched, especially in relation to SaCH. The lack of focus on ET in the SaCH literature is not surprising. Despite appearing to be a natural fit with the work being done in most schools, especially those with an improvement focus (Earl & Timperley, 2015; Hattie & Zierer, 2017), evaluation is still missing from most school-based work. This is a longstanding issue: Cousins et al. (2006) identified limited prior experience with evaluation and systematic inquiry as one of the most significant barriers to evaluation and evaluative inquiry in schools. These barriers persist in schools in general (Earl & Timperley, 2015; Piggot-Irvine, 2009), and in SaCH in particular (Kerr & Dyson, 2019; Provinzano et al., 2020).

This limited engagement with evaluation and ET by hub schools is slowly changing, especially in organisations that work to support SaCH, known as ‘backbone organisations’ (Kania & Kramer, 2011, 2013). Several Australian and US SaCH initiatives are supported by backbone organisations that are district or education department based, or are funded by philanthropists. These include Our Place (2022) in Victoria, Community Hubs Australia (2019), which operates across four Australian states, City Connects in Boston (Bowden et al., 2020), and the Chicago Community Schools Initiative (Ray & Egner, 2019).

One example of a backbone organisation is the New York City Community Schools program, which has been running since 2014 and by 2019 was supporting more than 200 SaCH with a budget of $195 million (Jacobson, 2019). A community schools office in the Department of Education supports the schools and hubs. A theory of change has been developed for this program, showing an explicit engagement with evaluation (Johnston et al., 2017). The model includes four key pillars, which are evidence-based but allow for flexibility and adaptability to each local context. “The use of data to inform continuous improvement is also a core component” of the New York City programs, with all schools having access to real-time data to inform decision making (Johnston et al., 2020, p. 10). There is therefore significant engagement with evaluation and ET at the system level; however, it is unknown to what extent this has flowed through to individual schools and hubs.

Evaluation has been identified by researchers and practitioners in the US as an important factor in delivering quality after-school programs, an activity in many school hubs (Russell & Newhouse, 2021). ET is noted as an important factor for success, demonstrated when “staff and leaders think critically about data, are curious about the conditions under which the results emerged, and are genuinely interested and motivated to use evaluation data to inform, launch, and execute program improvement efforts” (Berry & Sloper, 2021, p. 168). The focus is not just on collecting data, but on engaging in critical and evaluative thinking to ensure data is used for continuous improvement. The authors suggest that building relationships, capitalising on the curiosity of staff, understanding the program logic, understanding what data is collected and how it is used, and developing strategic plans are all important steps in building the ET of staff (Berry & Sloper, 2021).

SaCH backbone organisations in Australia are also making progress on integrating evaluation and ET into their ways of working, including Our Place (2022), Community Hubs Australia (2015, 2019) and Logan Together (2017, 2018). For example, Our Place (2020, 2022) produces annual progress reports, along with reports detailing the research and evidence behind their approach (McLoughlin et al., 2020). These documents show that Our Place values evaluation, data, and evidence-based decision making, as demonstrated in Fig. 1, the Our Place implementation framework (Our Place, 2020).

Fig. 1 Our Place implementation framework (Our Place, 2020, p. 7), showing the phases of establishment and engagement, initial implementation, sustained implementation, and review.

The organisation is demonstrating ET, even if it does not identify it by name, as its publications describe many aspects of ET in its work. These include the use of evaluation, evaluation frameworks and theories of change, a focus on collecting data and tracking outcomes, the sharing of results, and a focus on building organisational capacity and capabilities.

Our Place has demonstrated success with its first SaCH, Doveton College, which opened in 2012 after five years of planning. Positive outcomes achieved by the school and hub include increased school-readiness among children who attend the on-site early childhood centre, increased school attendance, improved standardised testing results in years 7 and 9, and significant engagement by the community with the adult learning programs offered at the college (Doveton College, 2014; Glover, 2020; Our Place, 2019). However, Our Place has supported only 10 SaCH [which Our Place (2021) describes as place-based approaches that utilise the universal platform of a school], all located in Victoria and many still in the early stages of development, and it relies on significant philanthropic investment to do so (Our Place, 2022). This model, of a backbone organisation funded primarily by philanthropy, appears to be successful in individual sites but is not replicable at scale, nor is it reflective of the broader field of SaCH in Australia.

Instead, many hubs appear to be working independently, operating without the support of a backbone organisation and sometimes not even aware that they are operating as a hub. Often, these hubs develop haphazardly through engagement with individual partners and by offering specific activities, rather than through a strategic approach to support students and the community (Sanjeevan et al., 2016). ET is therefore important for these hubs, to ensure that they are asking the right questions about their programs and collecting the data needed to answer them. This ensures that necessary adaptation can occur, informed by evidence. It also allows hubs to demonstrate their impact, improving their ability to attract ongoing funding, an issue identified by many working in the field (Chandler & Cleveland, 2021).

In Australia, federal, state, and local governments have been responsible for funding various programs to support the development of SaCH. Although evaluations have sometimes been conducted on these models (Department of Education and Training [Vic], 2015; Jose et al., 2019; Press et al., 2015), they are usually conducted early in the implementation process (Sanjeevan et al., 2016). This is often too soon to identify outcomes, which can take a long time to become detectable, as is common in Collective Impact interventions aiming to address wicked problems (Fry, 2019; Zuckerman, 2022); this is particularly true of academic outcomes at the whole-school level (Heers et al., 2016; Provinzano et al., 2020). Evaluation therefore needs to be an ongoing process that hubs engage in, allowing outcomes to be determined along the journey and used to demonstrate that the implementation is effective.

However, there is still no consensus in the literature on the most appropriate outcomes by which to determine the success of SaCH, and these may vary between hubs implementing different programs and initiatives, at different scales, with different target users and large differences in resources (Jacobson, 2016; Sondergeld et al., 2020). Each hub therefore needs to decide on the approach best suited to its context, and ET is needed to ensure this is done effectively and efficiently. This allows hub schools to evaluate programs and activities according to their own model and context, in line with their proposed theory of change. Further, increased sharing of the findings of these internal and external evaluations should allow a knowledge base to be built across the field regarding what outcomes are achievable, and what success looks like for SaCH in different contexts.

Conclusion

Evaluation is not currently an area of focus in much of the SaCH field, especially in Australia, despite it being identified as a likely factor required for success (Cleveland et al., 2022). This means that the benefits of evaluation are largely unrealised, making the path to successful implementation and achievement of intended outcomes more difficult than necessary (Clinton et al., 2023). There are, however, many identified barriers to conducting formal evaluations in most SaCH. Therefore, ET needs to be explored as a way for SaCH to access the benefits of evaluation in a more user-friendly and cost-effective manner. There is currently little research in this area; however, some organisations working in the field appear to have ET as a core part of their ways of working, even if they do not identify it explicitly as ET.

The link between ET and successful SaCH is not yet proven, although it is plausible and supported by a small but growing body of research in SaCH and related fields (Berry & Sloper, 2021; Piggot-Irvine, 2009). Further research in this area is therefore required. If the link is established, a focus on developing ET in SaCH and their staff can begin. This should increase the likelihood of successful implementation and achievement of intended outcomes by SaCH, thereby increasing the return on investment for governments, schools, and the community (see Aston et al., 2023).