
Introduction

Schools as Community Hubs (SaCH) are a type of school-community partnership that aims to improve outcomes in both the school and the community (Jacobson, 2016; Maier et al., 2017). They are often defined as:

… schools which act as a focal point for a range of family, community and health services for their students, families, staff and the wider population. They are likely to have community facilities located on site and to offer community access throughout the school day and out of school hours. They are also likely to work with local partners to deliver services such as childcare, health and social services, adult education and family learning, sports or arts activities. (Dyson et al., 2002, p. iv)

Each SaCH is unique, due to the local context and adaptations over time to respond to community needs. However, in Australia, they typically involve the co-location of facilities or services on a school site and/or the sharing of school facilities with government agencies, non-government organisations (NGOs), service providers and the community, allowing services to be offered beyond the typical capacity of schools (Black et al., 2010; Cleveland, 2016).

SaCH aim to address some of society's most complex, or ‘wicked’, problems (Fry, 2019), such as inequities in social, economic, and educational outcomes. Trying to address wicked problems through individual programs or initiatives has demonstrated limited success (Hanleybrown et al., 2012; The World Bank, 2020). There is increasing acknowledgement that working collectively and collaboratively to address the complex underlying causes of inequality is likely to be the only way to achieve lasting impact (Byron, 2010; Fry, 2019; Kania & Kramer, 2011). SaCH are one such attempt to implement a Collective Impact approach, allowing for the integration of services in one location targeted to the community's needs (Logan Together, 2018; Moore, 2014). Demonstrating the merit and worth of community hubs is challenging, as generalisable and reproducible population evidence for hubs has yet to be realised. Furthermore, evaluating these initiatives is a complex and contextually bound undertaking.

Policies, resourcing, and strategy building cannot proceed on illusions and anecdotal evidence. Those who advocate for and see merit in SaCH are required to provide evidence for or against the effect on student outcomes and benefits for the community at large. In some ways evaluating SaCH is a 'wicked problem.'

While there is overwhelming agreement in the literature on the importance of evaluating SaCH, there is some contention about the nature of what constitutes credible evidence. For example, evidence suggests that community engagement can generally impact the quality of life or lifelong engagement in education. But assessing that impact is difficult when approaches to community engagement vary widely (Bolam et al., 2006; Milton et al., 2012; Popay, 2006). The literature has focused on a wide range of interventions, with tremendous diversity in terms of definitions of community engagement and evaluative methods. However, there is no substantive evidence of positive impacts on populations in broad areas of health, education, and social development measures. Thus, there are important questions related to the nature of evaluation, assessment, implementation, and what constitutes credible evidence for SaCH.

There are numerous challenges to understanding the implementation and impact of SaCH. These include the duration of initiatives, levels of collaboration, varying levels of implementation across different contexts and initiatives, and the collection of information about implementation. Similarly, in the evaluation realm, identifying indicators of success and pragmatically determining attribution or contribution is challenging given the varied stakeholders in the school context. It is vital to evaluate success across the various perspectives (Fig. 1) of the complex education ecosystem in which SaCH are located.

Fig. 1
A stacked Venn diagram that represents the education ecosystem and specifies the various stakeholders in the school context: student and family, teacher and classroom, school, community, systems, and society, along with the presence of SaCH.

The education ecosystem (Image by the authors)

SaCH as Place-Based Initiatives

To assist in understanding the place and worth of evaluation and the development of evaluation frameworks, it is helpful to view SaCH as place-based initiatives. The purpose of SaCH is to engage other stakeholders in this place called 'school.' Many different terms appear in the literature for place-based approaches, including area-based approaches, comprehensive community initiatives, and collective impact initiatives (Bellefontaine & Wisener, 2011). Various definitions of place-based approaches have also been offered (Bellefontaine & Wisener, 2011; Moore & Fry, 2011), and they can be broadly defined as “stakeholders engaging in a collaborative process to address issues as they are experienced within a geographic space, be it a neighbourhood, a region or an ecosystem” (Bellefontaine & Wisener, 2011).

Recently, a particular form of place-based approach that focuses on results and shared effort between various groups has emerged. The idea is that groups within the community come together to pursue a collective impact on the whole community. Collective Impact (CI) initiatives aim to create interdependent, often overlapping and related solutions to major social problems (Kania & Kramer, 2011). Rather than working in isolation, and sometimes at cross purposes, in CI initiatives groups of key stakeholders work together with shared agendas and measurement systems, undertake “mutually reinforcing activities” (p. 39) and ongoing communication, and have a specifically created “backbone support organisation” (p. 39) that coordinates their activities.

The impact of these initiatives can be evaluated by measuring performance or outcomes across multiple organisations. The developed measures may be organisation-specific but can become part of a common reporting platform, so each organisation's performance and outcomes can be benchmarked and compared across participating bodies. All participating bodies can use common indicators and data collection methods, and extensive training and support are provided to enable the collection of high-quality data and interpretations (Kramer, Parkhurst & Vaidyanathan, 2009). Seeing SaCH as structures designed for collective impact in context provides an opportunity to consider evaluation for the whole organisation and assists in orchestrating an evaluative process.

Considering place-based initiatives and collective impact allows the application of simple rules for evaluation. These rules use evaluation to enable rather than limit strategic learning and planning. Figure 2 provides a ‘conceptual cube’ that shows the multi-dimensional foundations for evaluating place-based delivery approaches, highlighting the relationship between growth over time, the context, and the different phases of implementation (Dart, 2018, p. 2).

Fig. 2
A 3-dimensional diagram of the conceptual cube represents the multi-dimensional foundations for evaluating place-based delivery approaches. It highlights the relationship of growth over time, the context, including changes in focus, and key evaluation criteria.

Foundations of place-based initiatives over time (Dart, 2018, p. 2)

Fry (2019, p. 55) used evidence to identify four interconnected and interdependent practices that need to be in place for place-based initiatives:

  • Collaborate. Relate, connect and collaborate across sectors.

  • Community engagement. Engage and empower the community.

  • Holistic thinking. Think and act holistically.

  • Adaptation. Take an adaptive and responsive approach.

Similar to the Dart model, Fry suggests that the maturity of the development must be considered in the evaluation process. Considering SaCH as place-based initiatives that are built on the premise of collective impact provides the backdrop to consider an approach to evaluation.

SaCH and Evaluation

Evaluations conducted within complex contexts must account for multiple interconnected elements such as policy, guidelines, organisational responsibilities, people, and resources. Even so, evaluations can generate credible assessments of success. The contention is that utilising an embedded evaluation process can yield the evidence needed to support progress towards related goals and the sustainability of projects by ensuring the flow and use of evaluative information (Clinton, 2014). The claim is that when evaluation is embedded in the community, and hence part of the education ecosystem from the outset, these initiatives have the greatest probability of impact. This is the challenge, given the nature of schools and the education system.

Evaluations make claims about a particular program or other entity's “value, merit, worth, significance, or quality” (Fournier, 2005, pp. 139–140). Evaluation can help communities, policymakers, program designers, and funders determine which interventions work best and under what conditions, and identify the innovations that should be stopped, modified, scaled up or replicated in other communities (Lee & Chavis, 2015). The evaluation discipline also highlights the importance of testing whether theories and approaches are working and of building the evidence base for what works in the context of the education ecosystem.

Evaluations can measure performance by, for example, monitoring inputs, activities, and outputs. They can also measure outcomes within a given period and evaluate impact, such as the long-term changes attributable to the school and community activities (Kramer, Parkhurst & Vaidyanathan, 2009). There are several different types of evaluation. For example, a needs analysis is used to learn what the people or communities might need in general or concerning a specific issue. Process evaluation or formative evaluation tells how the project is operating, whether it is being implemented the way it was planned, and whether problems in implementation have emerged. Finally, an outcome evaluation examines the extent to which a project has achieved the outcomes set at the outset, examines the overall effectiveness and impact of a project and its quality, and can provide evidence about the cost–benefit, effectiveness, or value for investment.

While many evaluation approaches exist, no one method is best for all situations. Instead, the best approach varies according to factors such as fit with fundamental values, the intent of the evaluation, the nature of critical stakeholders, and available resources. Regardless of the approach, there is a large degree of overlap in the suggested purposes and methods; the steps of any particular approach differ in the nature of the methods and tasks related to each step. Many descriptions of the steps emphasise their iterative nature and suggest that a particular order is not always followed.

Evaluation frameworks facilitate a systematic approach to evaluation and enable multiple stakeholders to understand the fit between the program and the evaluation process while assisting in identifying and agreeing on appropriate objectives and approaches. Therefore, an evaluation framework is suggested to guide a way of working and an implementation framework or model to specify the process and assessment activity required to access evaluative information (Arbour, 2020). Figure 3 outlines an iterative process that enables questions about what we should measure and how we might understand the impact across the education ecosystem.

Fig. 3
A continuous cycle diagram represents the process of integrating an evaluation framework. It exhibits a process that enables understanding the conceptual measurement and evaluating the impact across the education ecosystem.

A process for embedding an evaluation framework (Diagram by the authors)

An Evaluation Framework for SaCH

An evaluation framework can guide a way of working. It must meet the evaluation standards, support the development of a rigorous methodology that is fit for purpose, and allow for the development and implementation of evaluation and assessment activities. Furthermore, every step of this process needs to be transparent and reproducible.

Schools are considered participatory and often utilise existing community strengths, groups, and relationships to increase engagement and action. SaCH aim for schools to partner with communities in shared design and to maximise accountability for outcomes (Allen-Keeling, 2020). This may involve utilising and valuing local and cultural knowledge in the evaluation process and engaging with community leaders, citizens, and local groups about the findings and the recommended actions. There can also be greater and faster learning from evaluations when more of the community actively engages in a shared evaluation approach.

Like many organisations, schools are awash with data. The issue is how to interpret, use and find value and purpose in the data. Thus, the claim is that what is needed is an evaluative framework to support the interpretation and flow of information. That is, the aim is not to use evaluation to collect more data but to develop and enable all participants within the organisation to think and act evaluatively (Buckley et al., 2015).

The CDC evaluation framework meets all the requirements and yet allows organisations to build the context into the framework to ensure that the community's view is represented, and that the evaluation process is fit for purpose. The diagram below (Fig. 4) sets the evaluation framework within a community's worldview and suggests the importance of continuous consultation and feedback.

Fig. 4
A cycle arrow diagram represents the evaluation framework within a community’s worldview and suggests the importance of continuous consultation, feedback, and engagement of stakeholders.

CDC&P Framework for Public Health Evaluation (2000) (Figure by authors)

This framework has several components: engaging stakeholders, considering the program context and theory of action, focusing on appropriate methods, gathering credible evidence, justifying conclusions, and finally utilising lessons learned. The diagram above illustrates this process. The steps in this model allow for the development of evaluative approaches, measurements, infrastructure, and information management processes and, importantly, for interpretations that translate results to all corners of the community.

Stakeholder Engagement

Determining a view of success, and being able to articulate the key factors that influence and contribute to that success, is what evaluation is all about. It requires an evaluation team to demonstrate evaluative thinking and assist in interpreting evidence leading to action. The concern is that community participants, policymakers, researchers, educators, practitioners, urban designers and planners are likely to have varying views of success and hence require different answers and sometimes different data. There can also be much variance in what is considered credible evidence. For example, some stakeholders may focus on the economic impact of student outcomes and others on engagement in the activities. Considering these multiple notions of success is critical for a thriving and flourishing SaCH.

This initial phase fosters transparency about the evaluation's purpose and identifies the audience of the evaluation. Most significantly, it clarifies the primary and secondary intended users. In relation to place-based initiatives for SaCH, conducting a stakeholder analysis within the education ecosystem is essential to inform the design phase, consider school and community needs, and ensure the community, physical and infrastructure, and organisational strengths are identified and built upon. The many different audiences and stakeholders in schools as community hubs need to be identified early. It is critical to understand the needs of those stakeholders—the policymakers, the providers, the participants, the researchers—their view of success, and the information required to determine ongoing engagement.

Program Description

This phase provides the opportunity to build a shared understanding of the theory of change underlying the initiative. This will often include the development of a logic model and a description of the longitudinal stages of development of the program. Program Logics are dynamic or living documents used to help guide expectations and what needs to be measured (Funnell, 2000).

Developing a program logic requires working through the SaCH theory of change or action by identifying the links between the resources available within the program, the activities that were undertaken, the outputs, and the short, intermediate, and long-term outcomes. Program logic recognises the relationships between different levels of the program and the multiple stakeholders and accommodates the complexity of implementation (Funnell, 1997, 2000). While the development of program logic is often a collaborative and interactive process comprising representatives of all stakeholders, the use of existing evidence is also often brought to the fore. This approach enables stakeholders to gain ownership of the program, work together to understand the activities undertaken and the resources available within a program, and consider the factors that influence outcomes (Funnell, 1997, 2000). Developing a program logic with hub stakeholders is a valuable way to work with them to clarify the intended outcomes and key evaluation questions.
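Although the chapter prescribes no particular tooling, the chain a program logic captures, from resources through activities and outputs to short, intermediate, and long-term outcomes, can be sketched as a simple data structure that stakeholders could populate and review together. This is a minimal illustration only; the element names and the example hub program below are hypothetical, not drawn from any actual SaCH:

```python
from dataclasses import dataclass, field

@dataclass
class ProgramLogic:
    """A minimal program logic: each level should link back to the one before it."""
    resources: list[str] = field(default_factory=list)      # inputs available to the program
    activities: list[str] = field(default_factory=list)     # what the program does
    outputs: list[str] = field(default_factory=list)        # direct products of activities
    outcomes_short: list[str] = field(default_factory=list)
    outcomes_intermediate: list[str] = field(default_factory=list)
    outcomes_long: list[str] = field(default_factory=list)

    def levels(self) -> list[tuple[str, list[str]]]:
        """Return the levels in causal order, for review with stakeholders."""
        return [
            ("resources", self.resources),
            ("activities", self.activities),
            ("outputs", self.outputs),
            ("short-term outcomes", self.outcomes_short),
            ("intermediate outcomes", self.outcomes_intermediate),
            ("long-term outcomes", self.outcomes_long),
        ]

# A hypothetical hub program, for illustration only.
logic = ProgramLogic(
    resources=["school facilities", "NGO health partner"],
    activities=["after-hours family health clinic"],
    outputs=["number of family visits per term"],
    outcomes_short=["increased service awareness"],
    outcomes_intermediate=["improved family engagement with the school"],
    outcomes_long=["improved community health and wellbeing"],
)

for name, items in logic.levels():
    print(f"{name}: {', '.join(items)}")
```

Walking stakeholders through each level in causal order, and asking how each entry leads to the next, is one pragmatic way to surface the intended outcomes and the key evaluation questions described above.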

Evaluation Focus

This phase provides an opportunity to narrow and prioritise outcomes for measurement. This step entails considering ‘the what and the how’ of the various parts of the logic model that can be measured and in what order. Working collaboratively to prioritise the evaluation based on a shared understanding of the theory of change identified in the logic model is essential.

It simply is not possible—or useful—for an evaluation to try to answer all questions for all stakeholders. Instead, there must be a focus and debate about priorities. Focusing on the evaluation design means undertaking planning about where the evaluation is headed and what steps will be taken to get there. For example, after data collection begins, changing procedures might be difficult or impossible, even if better methods become apparent. A thorough plan anticipates intended uses and creates an evaluation strategy that has the greatest chance of being effective. Among the items to consider when focusing on an evaluation are its purpose, users, uses, questions, methods, and the agreements that summarise roles, responsibilities, budgets, and deliverables for those who will conduct the evaluation. Establishing and prioritising evaluation questions are key components. These questions relate to the development of the program logic as determined by the stakeholders. At this juncture, the focus shifts to models or approaches to evaluation activity. Paproth et al. (2023) consider some key factors in understanding success and, in some cases, the factors that will mediate success along the implementation path, including thinking and acting evaluatively. Cleveland et al. (2022) have developed a framework to support the development, implementation and sustainability of SaCH. While providing insights into the key factors that need to be considered in evaluation, the model demonstrates the complicated and complex nature of SaCH. Each element can offer key questions for an evaluation of the effectiveness and efficiency of the SaCH.

In addition, Clinton (2014) demonstrated key components in understanding the impact and sustainability of long-term initiatives. Across several evaluations utilising structural equation modelling, Clinton illustrated six factors (Fig. 5) that causally influence the success of programs or initiatives. For example, the level of implementation of any service, such as the number of children who use a swimming pool on a school campus, will influence the degree of program success. Similarly, it is important to consider levels of collaboration, as these are essential for successful place-based initiatives that seek a collective impact. Therefore, it is argued that these components must be assessed in any evaluation.

Fig. 5
A block diagram represents the key factors that influence the success of the initiatives. The key factors include the meeting of KPIs, degree of implementation, adaptation, evaluation, organisational development, and collaboration.

Key Factors relating to successful program development (Image by lead author)

Without community engagement, much of the work of community hubs can fall short of desired impact (Preskill, 2017). Ensuring a continuous feedback loop utilising rich stories that bring the key stakeholders together can enhance ongoing engagement in SaCH. Similarly, mapping implementation and adaptation across time will allow for a longitudinal consideration of the merit and worth of the hub's programs (Fernandez et al., 2019).

Understanding the value of long-term participation is critical for success. This notion is developed in the corporate world via Customer Lifetime Value, which estimates the total value of a customer relationship from the number and value of transactions over a period of time. This allows an organisation to predict the value for participants and subsequently consider where effort should be placed. For SaCH, this approach to valuing is much more appropriate and beneficial than single measures over time. These factors would form a useful starting place for developing a measurement model of indicators for monitoring influencing factors and outcomes.
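As a back-of-envelope illustration of the lifetime-value idea (all figures here are invented for the example, not drawn from the literature), participation can be valued as value per contact multiplied by contact frequency and expected duration, which is why sustained engagement can outweigh a larger one-off contact:

```python
def lifetime_value(value_per_contact: float,
                   contacts_per_year: float,
                   expected_years: float) -> float:
    """Undiscounted lifetime value: value per contact x frequency x duration."""
    return value_per_contact * contacts_per_year * expected_years

# Hypothetical comparison: a family attending a weekly hub program during
# term time for five years versus a single large community event.
sustained = lifetime_value(value_per_contact=1.0, contacts_per_year=40, expected_years=5)
one_off = lifetime_value(value_per_contact=50.0, contacts_per_year=1, expected_years=1)
print(sustained, one_off)  # 200.0 50.0
```

On these invented numbers the sustained, low-intensity engagement accumulates four times the contact value of the one-off event, which is the intuition behind preferring lifetime measures over single snapshots.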

Gathering Credible Evidence

This step puts the evaluation plan into action by considering how credible evidence will be gathered. Credible data is the basis of a good evaluation. This step covers the plan for the evaluation and monitoring program, the intended uses, and feasibility issues. This means thinking broadly about what counts as “evidence”—it could, for example, be the results of a formal experiment or a set of systematic observations. It depends on the questions posed and what kind of information the stakeholders will find credible.

This phase identifies evaluation indicators and performance measures, data sources and methods, as well as roles and responsibilities. There are several mediating short, medium, and long-term factors that require the administration of outcome measures. These need to be monitored through the life cycle of the SaCH initiative. The methods must be appropriate for the school and community. A mixed-methods approach (quantitative and qualitative) is often conducted to gather information to determine the level of implementation and impact.

In this phase, the methods employed and the data gathered must be fit for purpose and hence must be seen as believable, trustworthy, and relevant by all stakeholders. This relates to the evaluation standards as illustrated by the suggested evaluation model, and is also at the heart of the Fry (2019) and Dart (2018) models. This step entails considering collaboratively what really counts as ‘credible evidence.’

Justifying Conclusions

The evidence collected in an evaluation must be analysed, interpreted, and triangulated. The interpretation of data has to be considered from several different stakeholder and systems perspectives to reach justified judgments. These judgments relate to the evidence gathered and are aligned with benchmarks set by the stakeholders. According to Milstein and Wetterhall (2000), this involves (a) analysis to synthesise the findings, (b) interpretation to determine what those findings mean, (c) judgments to determine how the findings should be valued against the selected indicators or benchmarks, and (d) recommendations to determine what claims, if any, are indicated. The power of evaluation allows an understanding of the lifetime value of exposure to SaCH. Such processes support the development of a system for continuous quality improvement, and sharing information for learning is a powerful vehicle for the sustainability and scaling of SaCH.
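At its simplest, the judgment step (c) above, valuing findings against benchmarks agreed with stakeholders, amounts to comparing each synthesised indicator with its threshold. The sketch below is schematic only; the indicator names, findings, and benchmark values are invented for illustration:

```python
def judge(findings: dict[str, float], benchmarks: dict[str, float]) -> dict[str, str]:
    """Value each synthesised finding against the benchmark set by stakeholders."""
    return {
        indicator: ("met" if findings[indicator] >= benchmark else "not met")
        for indicator, benchmark in benchmarks.items()
    }

# Hypothetical synthesised findings and stakeholder-set benchmarks.
findings = {"family_visits_per_term": 120, "adult_education_enrolments": 18}
benchmarks = {"family_visits_per_term": 100, "adult_education_enrolments": 25}
print(judge(findings, benchmarks))
# {'family_visits_per_term': 'met', 'adult_education_enrolments': 'not met'}
```

In practice the judgment is rarely a single threshold comparison; the point of the sketch is that benchmarks must be explicit and agreed in advance for the resulting judgments, and the recommendations that follow, to be justified.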

Ensure Use and Share Lessons Learned

The last step is perhaps the most important—to ensure the use of the evaluation and share its lessons learned. The evaluation framework needs to describe plans for using evaluation results and disseminating findings. Clear, specific plans for evaluation use should be discussed from the beginning. This could include a broad overview of how findings are to be used and more detailed information about the intended methods for sharing results with stakeholders. This is a critical and often neglected section of the evaluation plan.

What is essential here is articulating the planned outcomes over time and then considering the levels of evidence required to evaluate implementation fidelity and adaptation that leads to considering sustainability and scale. These are dynamic elements that will change over time, and it is this change that needs to be considered and built into an evaluation plan. Ensuring that the original intention of the SaCH is present but considering adaptation and organisational development along the way is essential but also needs evaluating as part of the process.

Evaluations are undertaken to judge and improve the effectiveness of interventions. Some activities that promote use and dissemination include: designing the evaluation from the start to achieve intended uses; preparing stakeholders for eventual use, interpretations, and adaptations; providing continuous feedback to stakeholders; scheduling follow-up meetings with intended users to facilitate the transfer of conclusions into appropriate actions or decisions; and disseminating lessons to those who have a need, a right to know, or an interest in the project.

Recognising the Role of the Evaluation Standards as Key Values in Evaluating SaCH

The international program evaluation standards from the Joint Committee on Standards for Education Evaluation (Yarbrough et al., 2010) provide values or guidelines to follow when developing evaluation plans (see Table 1). These standards are designed to ensure the integrity and worth of the evaluation. The evaluation standards also provide indicators to judge the quality of an evaluation system.

Table 1 The International Program Evaluation Standards from the Joint Committee on Standards for Education Evaluation (Yarbrough et al., 2010)

Many organisations and evaluation associations have contextually based guidelines that address issues of quality and ethics together; hence, multiple resources are available. In this case, the CDC framework employs the Standards for Program Evaluation (2010): utility, feasibility, propriety, and accuracy, with the later addition of evaluation accountability.

Final Word

Ensuring that the discipline of evaluation is front and centre when developing SaCH is core to the success of SaCH interventions. The chapter has presented an overview of evaluation as a support vehicle for the successful implementation, improvement, and scalability of great ideas. Evaluation not only provides an understanding of what works (or not) but also provides a mechanism to support the ongoing sustainability of organisational processes and infrastructures—as well as increasing the probability of sustainable impact.

The suggestion is that evaluation activity should provide ways to continuously document the work of evaluation to understand the nature of value and answer not only questions of what worked and for whom, in what circumstances, but also what comes next. Stakeholders want to know what was done, what was achieved and understand the value relative to investment.

We have argued that to achieve this impact, there needs to be a shift away from a traditional focus on measuring change and seeking a linear cause-and-effect relationship. Gates and Fils-Aime (2022) suggest “reshaping evaluation from rendering discrete assessments of performance to facilitating ongoing evaluative processes and deliberation amongst those involved and affected about the value of what they are up to and what should be done next.” This shift in mindset, embedding evaluative thinking and evaluative activity into a system, is a continuous process that requires maintenance and reflection.