Introduction

Educational organizations and educators around the world are constantly seeking to improve and advance education, so as to provide students with the best education possible. That effort itself takes place in a changing world—from global warming, pandemics, and wars to shifting social realities. Educational change is therefore more a daily reality than a choice. Or, as Stoll (2020) stated, this “isn’t a time for cruise control (…) we have to shift gear” (p. 428) to achieve deep and meaningful change for students.

Unfortunately, educational change is often not sustainable (Askel-Williams & Koh, 2020; Cohen & Mehta, 2017). Changes to education are complex, dynamic, and emerge through interactions in context (Spillane, 2012). Rapid digital, economic, and societal changes lead to complex problems that are influenced by many interdependent factors and are hard to disentangle. Finding effective and sustainable solutions requires a deep understanding of problems and their causes (Ramley, 2014).

In today’s world, educators need to find solutions for these complex problems in a rapidly changing context. These are not easy processes for schools to navigate. Using evidence to untangle and manage complex problems has shown promise, as it is crucial for developing targeted approaches in a complex and dynamic environment such as education (Hargreaves & Shirley, 2009). More specifically, using evidence helps with critically analyzing assumptions and identifying possible blind spots related to change initiatives (Vanlommel, 2022), for example, by identifying the problem within the school organization—instead of relying on a gut feeling—and showing whether the focus of the change matches the problem that needs to be managed. In that way, evidence use can help break existing patterns and routines by providing educational organizations with new insights to drive their complex change processes.

However, different interrelated factors influence the complexity of educational change. Effective educational change must attend to structure and culture, as well as to capacities in different parts of the system, in alignment with the outside world (Bates, 2013). Problems cannot be solved by fixing one element. Inequality in education, for example, cannot be solved by developing a new vision or setting up a new curriculum if the underlying norms and beliefs are not addressed. Finegood (2012), amongst others, warned against reductionist approaches that focus on changing one part of the system. Educational change should be studied and developed based on the interdependencies “between individuals, organizations, or levels in the system” (Finegood, 2012, p. 125), and solutions should target improvements to processes instead of outcomes. In complex environments, change initiatives that are approached in terms of the interdependencies between parts of the system are more likely to lead to effective and sustainable change (Finegood, 2012; Fullan, 2006). To understand the different elements in schools and the broader educational system that influence complex change initiatives, a systems theory lens thus seems inevitable.

From a systems perspective, schools can be seen as dynamic collections of tightly or loosely coupled subsystems or of people who constantly interact and co-evolve (Behrens & Foster-Fishman, 2007; DiMaggio & Powell, 1983; Monahan et al., 1994). Accordingly, when this perspective is applied to educational change, different stakeholders and different elements in schools mutually influence the change process. That means that teachers or school leaders hold important roles in initiating, diffusing, and adopting educational change, which will be affected by their competences and the extent to which they use their energies, passion, and time to promote that change (Barrenechea et al., 2023). Moreover, teachers or leaders may influence change beyond the boundaries of their teams or schools and positively affect the larger system through “building bridges” (Barrenechea et al., 2023). Teachers or leaders can act as change agents and cross boundaries within and across schools. This is important, because the complexity of educational change makes it hard for individual teachers, leaders, and schools to address complex problems and challenges individually (Stoll, 2020).

In sum, the changes education faces are growing more and more complex, challenging educators to gain a deep understanding of problems and solutions and how different elements in their context are interrelated. Using evidence to inform change—or evidence-informed change (EIC), as we will call it from here on—can help untangle the wickedness and interrelatedness of problems. Thus far, however, there seems to be a lack of conceptual clarity regarding what counts as evidence. Moreover, scholars have shown that schools still find it challenging to use multiple sources of evidence in their change processes (e.g., Austin & Claasen, 2013; Grol & Wensing, 2004; Vanlommel, 2022; Wubbels & Van Tartwijk, 2017). To enhance the effectiveness of educational change that aims to enhance teaching and learning, in this review we want to develop a conceptualization of EIC (RQ 1) and to identify factors that stimulate EIC (RQ 2). To do so, we will review multidisciplinary literature on this topic, as other sectors such as health and management also have longstanding traditions of working with evidence. As sustainable change requires a systems perspective (Finegood, 2012; Fullan, 2006), the potentially influential factors will not be investigated as independent elements. In this study, we focus on explaining the interdependencies of these factors in relation to the system’s readiness, capacity, and alignment regarding EIC.

Theoretical framework

Evidence: What is it and how can it be used to support educational change?

Attention to the use of evidence to drive practice in schools and other organizations has been steadily increasing. The main reason for this trend is the need to adapt to the rapidly changing environment we live in, and to adapt organizational outcomes accordingly (e.g., Austin & Claasen, 2013; Grol & Wensing, 2004). Evidence is used in this way across the globe—from Australia to the USA, and from the UK to South Africa—and across different sectors—such as health, social work, management, and education.

“Evidence” is a term that authors use easily, but its meaning is often left unclear. From an etymological perspective, “evidence” can be traced back to its Latin root, “evidentia”, which is a compound of “ex-”, meaning out of, or from, and “videre”, meaning to see. Based on that etymology, evidence started out as referring to something that you have seen (clearly) yourself. According to frequently used dictionaries, evidence is now seen as “facts, information, documents, etc.” (Cambridge Dictionary) or “the facts, signs, or objects” (Oxford Dictionary) that give you reason to believe that something is true (both dictionaries). When talking about evidence, one thus needs to keep in mind that evidence has a dual role; it informs and supports drawing a conclusion. Additionally, evidence needs to have a certain “quality” for it to be called evidence. The quality of evidence depends on one’s confidence in the evidence and the evidence’s validity for the use that is being made of it. Validity in (qualitative) research requires careful collection and documentation of evidence that allows transparent insight into its source. Validity also refers to the extent to which the data collected are appropriate for answering the guiding research question (FitzPatrick, 2019). Searching for criteria that help us define the quality of evidence, we say that evidence needs to be suited for its purpose (Does the evidence to be used match its intended aim?) and needs to be transparent (Can the evidence be investigated and discussed by other people?). Integrating the etymology of the word evidence and the idea of the quality of evidence, in this article we define evidence as an information source that is transparent, appropriate for its intended purpose, and permits drawing a conclusion.

When investigating evidence use in education, a shift over time can be observed. At first, organizations focused on making their work evidence-based. This meant that research, and more specifically scientific or formal research, was used as the source of evidence on which to base practices. Research was long considered the best available evidence, as it was systematically collected, empirically grounded, and clearly showed what works for improving practice (e.g., Davies, 1999). A recent study by Groß Ophoff et al. (2023) showed that schools that reported more research-informed activities demonstrated better results in the average student performance assessment at the end of the school year.

However, implementing scientific evidence in practice has been considered problematic; practitioners seldom use formal research (Grol & Wensing, 2004). This has to do with the intended users’ skills in meaningfully engaging with research, but also with the way research is produced, executed, and communicated (Biesta, 2007; Graves & Moore, 2018). Research “rarely translates into simple, linear changes in practice” (Brown & Greany, 2018, p. 2). Both the context in which one works and the context in which the research is conducted influence how research might be applicable to practice. Simply stated, “what works” does not necessarily work for each school (Biesta, 2007; Bryk et al., 2015; Graves & Moore, 2018). An approach is needed in which the technical question of “what works” can be addressed in close connection with normative and contextual questions about what is desirable (Biesta, 2007).

Attention to the use of other evidence sources increased. The importance of using data was stressed more often (cf. Schildkamp, 2019). Data are pieces of information that are systematically collected and organized to represent some aspect of the organization (Wayman et al., 2012). Data are collected within the context of the organization and are therefore contextualized. An example is feedback from stakeholders, such as students. A third source of evidence that was found to be important is expertise or personal knowledge, defined as the knowledge and competences needed to perform a certain job, gathered over the years that people perform their jobs (Smith, 2005). Sometimes, personal knowledge is so strongly grounded in experience that it cannot be fully expressed (Tschannen-Moran & Nestor-Baker, 2004). This type of evidence helps professionals understand what data and research mean within their context (Vanlommel et al., 2018). However, the use of data or personal knowledge as evidence has disadvantages as well. Using only personal knowledge often leads to biased judgements and stereotyping (Brookhart, 2003; Feinberg & Shapiro, 2009; Vanlommel & Schildkamp, 2019). Moreover, decisions based on personal knowledge often provide superficial, practical solutions for complex situations (Schildkamp & Kuiper, 2010). Data alone do not provide solutions, either. Without personal knowledge to understand what data mean and what data are needed, data can remain meaningless (Brown et al., 2017) or educators can be overwhelmed by all the data that surround them (Vanlommel, 2022).

Because the use of any one source of evidence has disadvantages, the evidence-informed movement emerged. To overcome the obstacles associated with individual sources of evidence, the importance of combining evidence sources is stressed in the evidence-informed movement (Brown et al., 2017; Vanlommel, 2022). The rapid and complex changes in education call for deeper questioning of the assumptions and beliefs that frame both the problems and the solutions pertaining to educational change. Using different sources of evidence can help bring actual causes to the surface and interrupt routines and patterns that hinder the effectiveness and sustainability of change (Vanlommel, 2022). Evidence-informed in this article therefore refers to the use of a combination of multiple sources of evidence to inform organizational processes.

Scholars mainly focus on evidence-informed practice, decision-making, or policy, but it is important to gain better insight into the vital role evidence can play in change processes in organizations. Evidence-informed change might strengthen the effectiveness, sustainability, and innovative capacity of change initiatives in education. Up till now, educational change has often failed to break existing patterns and routines (Bates, 2013). A reason may be that educational change is ideologically based (Verbiest, 2021) or mostly implemented based on intuition or "gut feeling," without a clear vision or strategy (Vanlommel et al., 2018). Due to a common lack of time and money, innovations are also rarely evaluated and not further institutionalized (Kirschner et al., 2004). Wubbels and Van Tartwijk (2017, p. 9) found that “evidence on the potential effects of innovations (…) was not used at all”. As a result, the outcomes of educational change disappear and the impact remains unclear (Verbiest, 2021). Using evidence to monitor and evaluate processes of change can contribute to their effectiveness, and the systematic approach of EIC might also support their sustainability.

It thus seems crucial that educational organizations are willing and able to use evidence suitable for their context and combine it with other evidence sources to support their change processes. If they can do that, schools can adapt to our fast-changing society and focus on managing complex problems in a sustainable way. Evidence-informed change (EIC) thus has a lot of potential. But how can EIC be understood and supported in (educational) organizations?

A systems approach to supporting evidence-informed change

Educational change is complex and dynamic (Hargreaves, 2003). Processes of educational change are influenced by many interrelated factors, such as the socio-economic context of students and parents, the resources of schools, relations between teachers and students, and growing ethnic diversity. Fast-paced changes in the environment, such as the growing influence of artificial intelligence, also make it impossible to follow blueprints and linear change models. The dynamic and complex nature of educational change requires more complex and adaptive approaches to change.

Complexity science offers insights into the challenges associated with understanding such complex system dynamics (Castellani & Hafferty, 2009) that need to be adaptive and responsive (Paina & Peters, 2012). The interrelatedness between different elements in schools, universities, and the broader educational system requires an understanding of the interdependencies of different parts of the system (Lanham et al., 2013). Berta et al. (2014) used a whole-system approach to change to investigate what factors contribute to the development and best use of evidence and how evidence-informed innovations could diffuse beyond the boundaries of the subsystems in which they originate. They found three key elements that strengthen the effectiveness and sustainability of change in social systems: the system’s readiness, capacity, and alignment. Below, we will discuss these elements in relation to (evidence-informed) change, as they will be used as part of the organizational framework in our literature review.

The system’s readiness for change

A system’s readiness for change is an important precursor of effective and sustainable change (Jones et al., 2005). Translated to the educational context, this concept refers to the overall readiness of schools, teachers, and leaders to make changes in an evidence-informed way. The levels of readiness can vary across people, teams, or subsystems, and can change over time (Wang et al., 2023). Readiness for change is a precondition for accepting and adopting a particular plan (e.g., to make evidence-informed change), purposefully interrupting the status quo and moving forward (Wang et al., 2023). Readiness does not just refer to how ready educators are to implement EIC developed by others. An important aspect is educators’ readiness to read research, collect data, combine and weigh evidence, and seek new ways of and ideas for adapting their practices. Creating readiness for change might require “unfreezing” existing mind-sets and creating motivation and energy for EIC (Cummings et al., 2016).

Schools’ and educators’ readiness for change are influenced simultaneously by different factors (Holt et al., 2007). At the individual level, readiness reflects the extent to which teachers and leaders are cognitively and emotionally ready to change. It is influenced by beliefs among educators that (a) they are capable of implementing a proposed change (i.e., change-specific efficacy); (b) the proposed change is appropriate in their school context (i.e., appropriateness); (c) school leaders are committed to the proposed change; and (d) the proposed change is beneficial (i.e., personal valence; Holt et al., 2007). Emotions that come with change, such as enjoyment, anxiety, or anger, impact an educator’s readiness for EIC (Ittner et al., 2019).

At the organizational level, the research evidence about system readiness for change in schools is still limited. Agnew and Van Balkom (2009) found that cultural readiness for change is influenced by the congruency between espoused and enacted values, between the values of (evidence-informed) change and the broader (school) vision, and by the political forces that articulate EIC as a priority. Further, trust in schools supports teachers’ readiness for EIC by reducing change-related resistance and stress (Zayim & Kondakci, 2015).

It is difficult to implement changes if the people who are most affected are not involved (Armenakis & Harris, 2002). Teacher agency in change processes can contribute to a greater commitment to the change and will help teachers make meaning of change (Datnow, 2020; Monahan et al., 1994). However, educational change does not result from individual actions, but through interactions within an organizational context (Datnow & Stringfield, 2014). Readiness for change in schools is often characterized by processes of negotiation, interaction, resistance, and compromise seeking and is strongly determined by the level of participation in the change process (Wang et al., 2023). Participation and collaboration do not always have a positive effect on teachers’ readiness for change, however. If teachers have the feeling that they are asked to participate or collaborate without having any real influence on the change process, they are likely to resist the change (Hargreaves & Shirley, 2020). This might also affect future readiness, as teachers might feel cynical about subsequent change processes—also known as aversion to future change.

The system’s capacity for change

Even though (school) organizations are faced with similar forces for change, their responses can differ (Hargreaves, 2023). The extent to which teachers, teams, and schools are able to enact evidence-informed change largely depends on the system’s capacity for change. This refers to a combination of attributes at the individual, collective and organizational levels that allow (school) organizations to implement changes without compromising daily practice or subsequent change processes (Meyer & Stensaker, 2006). Capacity building is an essential component of any successful change process or improvement strategy (Fullan, 2006).

At the organizational level, the presence or absence of resources and of structural or cultural elements may hinder or support change. For example, structures can sustain disciplinary silos and inhibit co-construction that is based on a broad array of evidence from different perspectives and stakeholders. A culture of openness toward interdisciplinary and transdisciplinary approaches and toward other systems of thought and sources of knowledge is a driver for change (O'Brien et al., 2013).

The system’s capacity for EIC also calls upon the collective capacity of professionals working together to make evidence-informed changes to improve practice through mutual support, accountability, and challenge (Fullan, 2006; Harris, 2011). Collective dispositions, shared feelings, emotions, and relations among team members define a social system’s capacity for change (Fullan, 2010). For example, teachers’ collective efficacy (a team’s feeling that together they can handle change) greatly influences the acceptance, implementation, and sustainability of change initiatives (Monahan et al., 1994; Vanlommel et al., 2023). Collective capacity may also be understood from a social capital perspective (Hargreaves & Fullan, 2015), insofar as it includes collaboration, shared decision-making, collective responsibility for teaching and learning, collaborative inquiry, and mutual trust.

Individual capacity for EIC demands a range of knowledge and dispositions. For example, educators need to be able to make sense of evidence and use the evidence to improve teaching and learning in meaningful ways (Datnow & Hubbard, 2016). Educators often need new knowledge and skills to collect, combine, and use evidence such as data (Gummer & Mandinach, 2015).

The system’s capacity for change is more than just the capability to change important elements of teaching and learning that need to be improved; insight into what is already good and needs to be protected is also necessary (Monahan et al., 1994). Capacity for change requires an understanding of strengths, weaknesses, and the balance between change and stability that the context requires.

The system’s alignment for change

The system’s alignment for change refers to the extent to which the strategy, structure, and culture of a system are aligned with the environment and the change initiative to create a synergetic whole that makes change possible. System alignment is not an absolute measure. It can range from complete opposition or misalignment to perfect synergy and alignment (Semler, 1997). The alignment or misalignment of a system greatly influences the diffusion, adoption, and acceptance of change. (Mis)alignment is a consequence of similarities or differences in understandings, expectations, goals, beliefs, values, and opinions among stakeholders engaged in various stages of educational change.

The broad concept of organizational alignment comprises distinct but interrelated aspects. For one thing, there needs to be alignment between the strategy, goals, and different activities related to educational change within an organizational structure. This is also called structural alignment. When processes and activities are well aligned, the output of each process in the subsystem contributes to goal attainment for the larger system (Swanson, 1994). In education, the achievement of the goals of the individual teacher should contribute to the goals of the team to which the teacher belongs, and the realization of team goals should in turn contribute to improvement of the school. With the aim of enhancing EIC in schools, for example, the goals of evidence use need to be aligned across teachers, teams, and the school organization.

Further, there should be a match between the change and the norms, beliefs, and routines that define the school culture. This is what is called cultural alignment. Cultural values have a strong effect on the behavior of people in a change process (Hargreaves, 2003) and act as a filter through which change is interpreted (Kelchtermans, 2009). Agreement between the cultural values of the school and the (implicit) values that underlie (evidence-informed) change greatly influences the success or failure of change initiatives. Finally, there needs to be alignment between the inside and the outside world (i.e., environmental alignment). Change initiatives should be aligned with both the internal goals of schools and the needs and demands of the broader societal, economic, and political environment (Sahlberg, 2021). The effectiveness and sustainability of EIC depends on the degree of congruence, consistency, or fit both within and between each of these components of alignment (Moss et al., 2022).

Present study

The use of a combination of evidence sources is valuable for managing the complex problems education faces, as it helps with developing targeted approaches to educational change in a complex and dynamic environment (Hargreaves & Shirley, 2009). Using evidence (information sources that are transparent, appropriate for their intended purpose, and that permit drawing a conclusion) can inform and support change processes in education. Schools that aim to change programs or practices of teaching and learning need a clear understanding of context, problems, and solutions. Further, evidence can help monitor and evaluate (a lack of) progress and results in processes of change. Evidence-informed change (EIC) can help school organizations with realizing sustainable change. While EIC is an emergent phenomenon, there seems to be a lack of conceptual clarity about what counts as evidence. Moreover, scholars have shown that schools still find it challenging to use multiple sources of evidence in their change processes (e.g., Austin & Claasen, 2013; Grol & Wensing, 2004; Vanlommel, 2022; Wubbels & Van Tartwijk, 2017). To provide clarity around the concept of EIC and insight into factors that can help schools stimulate EIC, our literature review focuses on the following questions:

  1. How can evidence-informed change (EIC) be conceptualized? (RQ 1)

  2. What factors stimulate evidence-informed change (EIC)? (RQ 2)

Because change initiatives that take an approach based on the interdependencies between parts of the system are more likely to lead to sustainable change in complex systems (Finegood, 2012; Fullan, 2006), a whole-system approach will be used in our literature review. Based on our theoretical framework, we created a research model (see Fig. 1) that will be used to organize our findings.

Fig. 1. Research model for our literature review

Method

As our study aims to clarify a key concept in the literature (RQ 1) as well as characteristics influencing EIC (RQ 2), and aims to identify and analyze knowledge gaps, we conducted a scoping review (Munn et al., 2018). A scoping review is “a form of knowledge synthesis that addresses an exploratory research question aimed at mapping key concepts, types of evidence, and gaps in research related to a defined area or field by systematically searching, selecting, and synthesizing existing knowledge” (Colquhoun et al., 2014, p. 1294). Therefore, databases were searched to identify relevant studies, studies were selected based on inclusion criteria, experts were consulted, and the selected articles were then analyzed and synthesized with a data-extraction form (Arksey & O’Malley, 2005; Colquhoun et al., 2014; Levac et al., 2010; Tricco et al., 2018). The steps will be discussed in detail below.

Literature search

The literature was searched and publications collected for this review in October 2022. A library professional was consulted to advise on our literature search. We used a combination of two search strings. The first search string included the search terms “evidence-informed” and “evidence informed”. The second search string included the search terms “change”, “reform”, “improvement”, and “innovation”. The combination of these search strings was used in four online databases: ERIC, PsycINFO, Scopus, and Web of Science. These databases were chosen to include articles from several fields and sciences. The search was limited to sources published from 2000 onwards. Table 1 presents an overview of the number of studies obtained for each search string and the combination of search strings per database. The total number of studies retrieved was 7168 (Step 1 in the Flow Chart of the Search Process in Fig. 2). After removing duplicates, 5769 articles remained (Step 2 in Fig. 2).
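To make the query logic explicit, the sketch below shows one way the two search strings could have been combined. This is our illustration, not the exact syntax submitted to the databases; it assumes that terms within a string were joined with OR and that the two strings were combined with AND, and the precise operators and field codes differ across ERIC, PsycINFO, Scopus, and Web of Science.

    # Illustrative sketch (Python); not the exact database syntax used in the review.
    # Assumption: terms within a search string are joined with OR, and the two
    # strings are combined with AND, as described in the text above.
    evidence_terms = ['"evidence-informed"', '"evidence informed"']
    change_terms = ['"change"', '"reform"', '"improvement"', '"innovation"']

    query = "({}) AND ({})".format(" OR ".join(evidence_terms),
                                   " OR ".join(change_terms))
    print(query)
    # ("evidence-informed" OR "evidence informed") AND ("change" OR "reform" OR "improvement" OR "innovation")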

Table 1 Search string results
Fig. 2. Flow chart of the search strategy for this literature review

Literature selection

To ensure the relevance of the selected literature, we applied four criteria during the selection process. First, the article had to be published in a peer-reviewed journal. Second, the article had to be empirical. Third, the article had to be about using more than one source of evidence (e.g., articles only about stimulating research use in organizations were not included). Fourth, the article was about (an) evidence-informed change (process), not just about implementing an evidence-informed tool, program, intervention, or the like (e.g., articles about implementing evidence-informed programs against bullying in schools were not included).

We reviewed the total of 5769 articles manually by reading the titles and abstracts (Step 3 in Fig. 2). After this step, 45 articles remained. We read the full texts of these remaining articles to determine whether they matched our criteria (Step 4 in Fig. 2). After this step, three articles remained. It was quite striking that just three empirical articles focused on evidence-informed change as such.

Because literature selection in a scoping review is not linear, but rather an iterative process that involves searching the literature, refining the search strategy, and reviewing articles for inclusion (Levac et al., 2010), we discussed our search strategy again. This discussion led to a refinement in two areas. First, we decided that we were too strict in the exclusion of some of the articles. More specifically, we excluded some articles because they were about the content of an innovation, not about the process of change. Carefully reading the articles, we concluded that they were about stimulating the use of evidence within organizations. This could also be seen as an evidence-informed change process: organizations were moving toward using evidence to inform their work, which calls for changes on different levels (Austin & Claassen, 2008). We therefore decided to re-analyze the excluded articles and include those about stimulating evidence use in organizations. Second, we only wanted to include empirical articles to have a sound foundation for our findings related to RQ 2. For the conceptualization of evidence-informed change, this was not a prerequisite. We therefore decided to include conceptual articles as well, but to use them only to answer RQ 1.

Based on this discussion, we re-reviewed our collected articles by reading titles and if necessary abstracts. We read 52 additional full texts, resulting in the inclusion of 16 additional articles (Step 5 in Fig. 2).

Additional literature selection

The set of articles that remained after reading the full texts (Step 5 in Fig. 2) was presented to an expert peer group for consultation (cf. Arksey & O’Malley, 2005; Colquhoun et al., 2014). The group consisted of three experts in both the field of innovation and evidence-informed approaches. We consulted them in a digital meeting and discussed the final set of articles with them. We asked them for additional articles that were missing in this set and for insights beyond those in the literature. Additionally, all reference lists from the articles were analyzed (snowballing). Based on these two steps, we added four additional articles (Step 6 in Fig. 2). This led us to our final set of 23 articles.

Analysis and synthesis of the literature

Our full final set of articles was used for answering RQ 1, which focused on conceptualizing evidence-informed change processes. As the final set included both conceptual and empirical articles, we decided to use only the 14 empirical studies to answer RQ 2, which focused on the factors influencing the extent to which change processes were evidence-informed. The main reason for this decision was that we wanted to present an empirical foundation for our findings related to what influences evidence-informed change.

To analyze data from these 23 articles and gain in-depth insight into evidence-informed change conceptualizations and processes, we designed a data-extraction form. The use of such a form ensured that comparable data could be gathered from the selected publications (Arksey & O’Malley, 2005; Petticrew & Roberts, 2006). The form contained 22 questions (Questions 1–14 focused on general characteristics of the studies, Questions 15–18 focused on the evidence sources used, Question 19 focused on the definition of change, and Questions 20–22 on the influential factors). We used direct quotes from the articles to fill in the data-extraction form to stay faithful to what was written by the authors, for objectivity purposes.
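As an illustration of how such a form can be operationalized for comparable extraction across articles, the sketch below groups the 22 questions into the four clusters named above. The field names and types are hypothetical and do not reproduce the authors' instrument.

    # Hypothetical sketch of a data-extraction record; the question groupings
    # follow the text above, but field names and types are our own illustration.
    from dataclasses import dataclass, field

    @dataclass
    class ExtractionRecord:
        general_characteristics: dict = field(default_factory=dict)  # Questions 1-14
        evidence_sources: dict = field(default_factory=dict)         # Questions 15-18
        definition_of_change: str = ""                                # Question 19
        influential_factors: dict = field(default_factory=dict)      # Questions 20-22

    # One record per included article, filled with direct quotes from that article.
    record = ExtractionRecord()
    record.evidence_sources["types"] = "…direct quote from the article…"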

After filling in the data-extraction form, we coded the data per research question, using sensitizing concepts derived from our conceptual model. This provided us with a general sense of reference and guidance in approaching our data (e.g., G. Bowen, 2006). Both authors coded part of the data and discussed the codes together until consensus was reached.

Results

Below we present the general characteristics of the included studies, and then address the research questions on the conceptualization of evidence-informed change (RQ 1) as well as factors stimulating evidence-informed change (RQ 2).

General characteristics of the included articles

In Table 2, an overview of the included articles is presented. Four initial findings could be reached by looking at their general characteristics. First, 14 empirical and 9 conceptual articles were included, indicating that information about evidence-informed change in general is scarce, but empirical data on the matter is scarcer. Second, most studies were conducted in Europe (n = 10), followed by North America (n = 9). Only a few studies (n = 4) were conducted elsewhere, in Australia and New Zealand. Third, the bulk of the studies were conducted in the health sector (n = 9), followed by education (n = 7) and social work (n = 4). Three studies were conducted in other sectors. Fourth, of the 23 included studies, 12 studies were published after 2015, four studies between 2010 and 2015, and six studies before 2005—with the oldest publication coming from 2004. This indicates that evidence-informed change is a relative newcomer on the scene.

Table 2 Overview of included papers

Conceptualization of evidence-informed change (RQ 1)

To analyze the way in which the included articles conceptualized EIC, we used our full final set of articles (n = 23) and analyzed them in two steps. First, we looked at how change was conceptualized. Second, we looked at how evidence that could inform change was conceptualized. The results are described below.

Conceptualization of change in the included articles

All but two articles in this review focused on the realization of organizational change, as the aim was to stimulate all actors in the organization to use evidence for change. The other two articles (Brown et al., 2017; Klinner et al., 2015) mainly focused on change at the individual (practitioner) level. Six studies specifically focused on evidence-informed change (Austin, 2008; Bamber, 2015; Brown & Greany, 2018; Brown et al., 2017, 2018; Evans, 2020). Others, for example, mentioned that they wanted to stimulate the use of evidence-informed practice in making changes (Bowen & Zwi, 2005; Brown, 2017; Hodson & Cooke, 2004; Hole et al., 2016; Klinner et al., 2015; Lwin & Beltrano, 2022; Lwin et al., 2022; McEwen et al., 2008; Newhouse et al., 2007; Peters et al., 2016), stimulate evidence-informed (organizational) decision-making for change (Bowen et al., 2009; Conway et al., 2019; Hardy et al., 2015; Rousseau, 2020; Semenic et al., 2015), or evidence-informed inquiry for change (Timperley & Parr, 2007). Nevertheless, all articles aimed at or advocated for achieving change (also described as transformation or improvement) through stimulating the use of evidence. Improving practice was the goal of change in all articles, independent of the sector concerned.

Conceptualization of evidence in the included articles

In Table 3, the conceptualization of evidence per article is given. The concept of evidence was rarely defined. Only the study by Bowen and Zwi (2005) did so. The authors mostly used examples to describe what they meant by evidence that could inform the change process that was central in their studies. Hardy et al. (2015, p. 1) mentioned “existing evidence” as an example of evidence. As this description was too vague, we could not code this in a specific category. All other examples could be categorized into the following categories: research, data, expertise, and other.

Table 3 Conceptualization of evidence per article
Research

In 21 out of the 23 studies, research was used as a source of evidence. Most authors (n = 12) were very brief in describing what they meant by research (Austin, 2008; Bamber, 2015; Bowen et al., 2009; Conway et al., 2019; Evans, 2020; Jansson & Forsberg, 2016; Klinner et al., 2015; Lwin & Beltrano, 2022; Lwin et al., 2022; McEwen et al., 2008; Rousseau, 2020; Semenic et al., 2015; Timperley & Parr, 2007). Others (n = 3) added descriptive but broad terms such as “soundly conducted” or “the best available, current, valid, and relevant” to describe what they saw as research evidence (Hodson & Cooke, 2004; Hole et al., 2016; Peters et al., 2016). In four studies, authors described what types of research they considered as evidence, including both quantitative and qualitative research findings, as well as findings from systematic research conducted by researchers and practitioner inquiry (Bowen & Zwi, 2005; Brown, 2017; Brown & Greany, 2018; Brown et al., 2017; Newhouse et al., 2007).

Data

In 18 out of the 23 studies, data were mentioned as a source of evidence. As was the case with research, most authors (n = 12) were very brief in describing what they considered as data, for example, by describing them as “feedback”, “preferences”, “sources” or “views” from different stakeholders (Hodson & Cooke, 2004; Jansson & Forsberg, 2016; Lwin & Beltrano, 2022; Lwin et al., 2022; McEwen et al., 2008; Peters et al., 2016; Semenic et al., 2015), or mentioning specific types of data, such as administrative, evaluation, financial, organizational, or student achievement data (Austin, 2008; Bamber, 2015; Brown & Greany, 2018; Rousseau, 2020; Timperley & Parr, 2007). Others (n = 6) were brief, too, but used a combination of both descriptions (Bowen & Zwi, 2005; S. Bowen et al., 2009; Brown et al., 2017; Brown et al., 2018; Evans, 2020; Newhouse et al., 2007).

Expertise

In 21 out of the 23 studies, expertise was discussed as a source of evidence. This was described as “expertise” (Brown, 2017; Brown & Greany, 2018; Hardy et al., 2015; Hodson & Cooke, 2004; Hole et al., 2016; Jansson & Forsberg, 2016; Newhouse et al., 2007), “expert knowledge” (Bowen & Zwi, 2005), “expert opinion” (Rousseau, 2020), “practitioner wisdom” or “practitioner sources” (Bamber, 2015; Evans, 2020). Others (n = 8) used a combination of those terms and other descriptions such as “competencies”, “concerns”, “tacit knowledge”, “experiences”, and “professional judgment” (Austin, 2008; Bowen et al., 2009; Brown et al., 2017; Lwin & Beltrano, 2022; Lwin et al., 2022; McEwen et al., 2008; Peters et al., 2016; Semenic et al., 2015). In two articles (Brown et al., 2018; Conway et al., 2019), expertise was not defined, but they described how it was gained and used in the process. Although different concepts were used, all authors focused on evidence derived from experience gained by an individual in their studies or career.

Other

In 11 articles, other sources than research, data, or expertise were mentioned as evidence. These were case context (Conway et al., 2019; Lwin & Beltrano, 2022; Lwin et al., 2022), feasibility (Semenic et al., 2015), politics (Bowen & Zwi, 2005), and stakeholder values (Bowen et al., 2009; Brown et al., 2017; Hole et al., 2016; Jansson & Forsberg, 2016; Lwin & Beltrano, 2022; Lwin et al., 2022; McEwen et al., 2008; Peters et al., 2016).

Factors stimulating evidence-informed change (EIC) (RQ 2)

Our second research question aims at understanding factors that can stimulate EIC. For this purpose, we used only the empirical articles included in our final set (n = 14). Our research model was based on a whole-system approach, arguing that the factors influencing EIC are interdependent and contribute to the readiness, capacity, and alignment of the whole system for EIC. We used these concepts to organize our results and have clustered all identified factors under these three main categories. An overview can be found in Table 4. In the subsequent section, we discuss each category with its identified factors in detail.

Table 4 Overview of factors influencing evidence-informed change

Factors supporting the system’s readiness for EIC

System readiness refers to the overall readiness of people, processes, and resources in systems and is a critical precursor to successful EIC. In 11 studies, factors related to system readiness were mentioned. First, our literature review identified factors describing people’s readiness for change. This starts with people's awareness that EIC is valuable and necessary, and that multiple sources of evidence are needed to provide a clear and complete picture of performance and outcomes at all levels of the system (Brown, 2017; Conway et al., 2019; Hole et al., 2016; Lwin et al., 2022; McEwen et al., 2008; Semenic et al., 2015). Individuals must also understand the relevance of EIC to their work and feel motivated to use evidence (Hardy et al., 2015; McEwen et al., 2008). Finally, the readiness of the system for EIC depends on the shared understanding among individuals that all stakeholders have a role in EIC (Brown, 2017; Conway et al., 2019; Semenic et al., 2015).

Second, factors related to organizational processes contribute to the system’s readiness for EIC. Implementation needs to be carefully sequenced and paced (Conway et al., 2019). Moreover, systems (e.g., an organization’s intranet) must be set up to provide (advice on) evidence in general and EIC in particular (McEwen et al., 2008). Guidelines need to be written or updated as well (Conway et al., 2019). Additionally, organizational processes for training and education for practitioners related to evidence use and EIC need to be set up (Newhouse et al., 2007). Finally, adding expectations related to the use of evidence and EIC to job descriptions would also contribute to system readiness (Newhouse et al., 2007).

Third, certain resources also define a system’s readiness for EIC. In addition to commonly described resources such as time (Bowen et al., 2009; Brown, 2017; Conway et al., 2019; Hardy et al., 2015; McEwen et al., 2008) and adequate finances to get started with EIC (Brown, 2017; Hardy et al., 2015), other resources were also identified. It is important for professionals to have access to various sources of evidence (Bowen et al., 2009; Brown, 2017; Conway et al., 2019; McEwen et al., 2008; Newhouse et al., 2007). Resources in the form of tools to support the use of evidence contribute to system readiness (Bowen et al., 2009; Newhouse et al., 2007), as do professionals who can support and assist in obtaining and using evidence (Bowen et al., 2009).

Finally, two other, more general factors were found to contribute to system readiness for EIC: supportive organizational structures (i.e., clear responsibilities and the existence of networks; Bowen et al., 2009) and an organizational policy and climate that supports the use of evidence and EIC (Bowen et al., 2009; Brown & Greany, 2018; Conway et al., 2019; Timperley & Parr, 2007).

Factors contributing to the system’s capacity for EIC

System capacity refers to individual, collective, and organizational capacity for EIC. In all 14 empirical studies, factors related to system capacity were mentioned as important influences for successful EIC. First, factors related to individuals' capacity for EIC were identified. It seems important to have sufficient knowledge and analytical competencies to use evidence for EIC (Conway et al., 2019; Hole et al., 2016; McEwen et al., 2008). Individuals must also feel confident to engage with evidence, as well as believe they are capable of applying EIC in their work (Conway et al., 2019; Hole et al., 2016; Lwin et al., 2022; McEwen et al., 2008). Moreover, when individuals act as change agents by driving EIC in their organization, this in turn benefits the organization's capacity for EIC (Hole et al., 2016).

Second, our literature review revealed factors related to the organizational capacity for EIC. For example, it is important for organizations to collect, offer and disseminate relevant evidence to professionals in a timely manner (Bowen et al., 2009). Here, establishing networks or professional learning communities in the organization could play an important role (Brown & Greany, 2018; Hole et al., 2016). Clear communication about EIC is also a factor that contributes to organizational capacity (Bowen et al., 2009; Conway et al., 2019; Hardy et al., 2015; Semenic et al., 2015). Communication should be about not only the use and availability of evidence, but also the process of change: what is the current state of affairs, what problems are being encountered and what is the next step? Finally, a system’s capacity for EIC depends on the organization's ability to prioritize and focus (Bowen et al., 2009; Hardy et al., 2015; McEwen et al., 2008); otherwise people are "too busy dealing with the urgent that they can’t get to the important" (Bowen et al., 2009, p. 95). This can lead to a general feeling among staff that there are too many organizational issues to address, and that EIC is therefore not a priority.

We found little evidence on collective factors that influenced a system’s capacity for EIC. Our review did identify factors specifically related to leadership capacity for EIC. Therefore, we dropped ‘collective’ and added ‘leadership’. Leaders must not only have the necessary knowledge and skills to implement EIC (Hodson & Cooke, 2004; Lwin et al., 2022; Timperley & Parr, 2007), but they must also have the competencies to communicate clearly about EIC (Bowen et al., 2009; Hardy et al., 2015; Hodson & Cooke, 2004; McEwen et al., 2008). In addition, their professional credibility and the importance they place on using evidence also contribute to the system's capacity for EIC (Hodson & Cooke, 2004). Furthermore, it is important that leaders act as role models for EIC and behave accordingly (Conway et al., 2019; Hardy et al., 2015; Hodson & Cooke, 2004; Lwin et al., 2022), for example, by applying EIC themselves, being (closely) involved in it, or by acting as an EIC “opinion leader” or champion. Leaders who provide positive feedback encouraging EIC also make a positive contribution to EIC in their organization (Brown, 2017; Hardy et al., 2015; Hodson & Cooke, 2004; Lwin et al., 2022). When leaders direct, by specifically asking professionals to use evidence (Jansson & Forsberg, 2016) and providing opportunities for professionals to expand their knowledge and skills related to evidence-informed change (McEwen et al., 2008), this also influences EIC in the organization. In addition, it is important that leaders properly manage and provide the resources needed for EIC (Newhouse et al., 2007).

Factors contributing to the system’s alignment for EIC

System alignment refers to the similarities or differences in understandings, expectations, goals, beliefs, values, and opinions among stakeholders or subsystems engaged in various stages of EIC. In eight studies, factors related to a system’s alignment for EIC were mentioned. First, studies showed the importance of structural alignment. This involves connecting the goals of different change initiatives within the organizational structure. The goals of EIC need to be linked with the broader organizational goals (Brown & Greany, 2018; Hardy et al., 2015; McEwen et al., 2008; Timperley & Parr, 2007). Initiatives should be taken to bridge differences regarding EIC between different parts of the system (e.g., teams, departments). Furthermore, organizing interactions between leaders and practitioners could contribute to structural alignment within the system (Hodson & Cooke, 2004; McEwen et al., 2008). Scheduling moments, such as during a team meeting, to regularly discuss ideas related to and the progress of EIC also contributes to structural alignment (Hardy et al., 2015).

Second, factors related to cultural alignment within the system influence EIC. To this end, the included articles mainly mentioned the importance of a shared vision regarding EIC (Hodson & Cooke, 2004; Lwin et al., 2022).

Third, factors related to environmental alignment are important for EIC. Organizing interactions between users and "providers" of evidence, and between researchers and practitioners can improve this type of alignment (Hodson & Cooke, 2004; McEwen et al., 2008; Peters et al., 2016; Semenic et al., 2015). Moreover, encouraging interprofessional collaboration across disciplines (Conway et al., 2019) and establishing and participating in networks that focus on the use of evidence and EIC (Hodson & Cooke, 2004; McEwen et al., 2008) can also help with environmental alignment.

Overview

An overview of the results of our literature review as incorporated in our research model is presented in Fig. 3. This figure provides a conceptualization of EIC as an outcome of our review study. The influencing factor ‘collective’ from our initial research model is replaced by ‘leadership’ based on our results.

Fig. 3. Summary of the outcomes based on our research model

Discussion

The use of evidence can be a strong lever enhancing the effectiveness and sustainability of complex educational change, yet there has been little shared understanding of what it means to change in an evidence-informed way, what counts as evidence, and how using it can be supported in practice. A striking first finding of our review is the limited conceptualization of what counts as evidence-informed and what sources of evidence can be used to guide change in schools. Our study showed that many sources of evidence can be used to support educational change, although clear conceptualizations were often lacking in the articles. Data were sometimes described very broadly, such as sources from different stakeholders, whereas other authors delimited them to specific sources such as administrative or evaluative data. Although the definitions of expertise were often vague, all authors in our study saw experience as a form of tacit or expert knowledge that is gained by an individual throughout their studies or career. Results showed different types of research that can be taken into account, including both quantitative and qualitative research findings, as well as findings from systematic research conducted by researchers and practitioner inquiry, as long as it is soundly conducted, valid, relevant, and the best research available. We also found a few sources of evidence that did not fit into the categories data, research, or expertise, for example, stakeholder values or politics. Depending on the context, other sources of information might be relevant to use in the decision process. We started from a broad view on evidence, defining it as an information source that is transparent, appropriate for its intended purpose, and permits drawing a conclusion. Differing decision processes in differing contexts might start from a critical reflection on what counts as evidence.

Our results have led to a refined model that enabled us to clarify EIC as the combination of evidence sources (research, data, expertise) that are suitable for the intended purpose and transparent, used to inform a change process that is focused on improving practices for its stakeholders (e.g., students, teachers). Stimulating EIC in education should be approached from a systems perspective, focusing on the system’s readiness, capacity, and alignment for change.

An educational system’s capacity to make evidence-informed change largely depends on people’s awareness, ability, and motivation to use different sorts of evidence to get a rich, fine-grained, and unbiased picture of the question and context. Given the complexity and dynamics of educational change, we argue that it is impossible to rely on only one source of evidence, such as research. Choosing and using research that is appropriate for the context requires data, for example, indicators of the socio-economic status of students in the educational system at hand. Transforming data into useable knowledge requires expertise to understand what data mean in the context (Vanlommel & Schildkamp, 2019). Expertise does not stand alone either, but needs to be understood in the light of local data, for example, about the history of the school. This has led to a definition of EIC as the combination of evidence sources (research, data, expertise) that are suitable for the intended purpose and transparent, used to inform a change process that is focused on improving practice for its stakeholders (e.g., students, teachers).

A system’s readiness for change largely depends on individuals’ awareness of the merits of EIC and the pitfalls of decision bias when relying on one-sided evidence. Educational leaders should model critical thinking and support strategic partnerships between researchers and practitioners. Educational systems also need the capacity to change in an evidence-informed manner. A school’s capacity depends not only on individual skills and dispositions, but also on a team’s collective efficacy regarding EIC and a culture of support, reflection, and constructive conflict. Leaders need resilience and persistence, because EIC breaches the DNA of education. Promoting EIC asks educators to challenge their assumptions, become aware of their implicit knowledge and inferences, and change their routines. It is likely to make people feel insecure or questioned as professionals. Asking educators to change habits can make them feel as if they have been doing things wrong for years (Kelchtermans, 2009). Evidence-informed change requires a shift in mind-set and culture, which is likely to bring insecurity and resistance. Leaders have the important task of providing supportive feedback and showing consideration for teachers’ feelings. In times of change, leaders’ influence on teacher change is largely exerted through the relational path (Vanlommel et al., 2023).

Organizations need the capacity to collect, provide, and disseminate relevant evidence among staff in a timely manner, but most importantly, they need to prioritize. Otherwise, people are so busy dealing with the urgent that they cannot get to the important. Enhancing evidence-informed change in education is not easy and requires deliberate attention to and alignment between the interrelated elements of the educational system, for example, between practice, research, and policy, between structure and culture, or between teachers, students, and parents.

In systems such as education, such alignment often exists only to a limited extent. Enhancing evidence-informed change in education will require targeted interventions and a stronger alignment between research, policy, and practice. Policymakers and educational leaders need to be very clear about the reasoning and rationale behind changes in order to create understanding and motivation. Clear communication can create a feeling of urgency and insight into the need to change and why evidence is important. Understanding the risk of relying on one-sided or unsubstantiated evidence helps to create motivation for EIC. In many educational systems around the world, educators have little insight into human judgement and decision-making theory. Starting with high commitment toward their students, building on long-standing experience, and being absorbed by everyday practice, fast and intuitive decision-making is often more of a habit than a deliberate choice. When educators learn to understand the strengths and pitfalls of information sources and how evidence can help improve practice, intrinsic motivation for EIC is likely to grow. Good examples and collegial support can also help in that regard. Innovation champions with high individual capacity for EIC, for example, can support and stimulate EIC in their schools. They can disseminate knowledge about EIC, facilitate understanding, and provide targeted support to peers. Most importantly, they can also serve as boundary-crossers between different parts of the system, by facilitating knowledge brokerage and alignment of values and meaning (Vanlommel, 2022). Networks within and between different parts of the educational system might provide the supportive infrastructure needed for this (Poortman et al., 2022).

We must mention some limitations of this study. Despite the broad search, including studies from different fields, only 14 articles could be used to identify factors influencing EIC. Most of these were qualitative case studies. The recent publication dates show that EIC is an emerging field in which most insights are built on conceptual articles. These findings stress the need for more empirical research to understand and explain how EIC can be enhanced in practice. Future research should include more quantitative empirical studies with broader and more generalizable insights on EIC. Despite our search for articles that combined more than one source of evidence, and although many articles promised a broad view in the title or abstract, evidence-informed change was often limited to the use of research evidence. Given that different sources of evidence are needed and intertwined in practice, future research should start from a broader view on EIC to move the field forward. Our conceptualization can be a valuable starting point in that regard.

For practice, our findings highlight the need to invest in schools’ and educators’ readiness and capacity for evidence-informed change. Stronger alignment between different parts of the system, between policy and practice, and between researchers and practitioners is therefore needed. For policy, the supporting factors we identified from a systems perspective can guide targeted policy development.

Change processes in education are complex and dynamic. The motivation and capacity to use a broad array of evidence can help intentionally interrupt routines and overcome individual and/or cultural decision bias, opening the door for effective and sustainable educational change.