Our realist RCT of the Learning Together intervention proceeds through a number of stages, only the first of which has so far been completed. A summary of the staged theoretical and methodological framework for realist RCTs is provided in Fig. 1, and each stage is outlined in detail below. Our first stage, undertaken at the start of the current phase III trial, consisted of developing various a priori hypotheses about how intervention mechanisms might interact differentially with context to produce outcomes (CMO configurations). These CMO hypotheses were informed by the findings of the prior pilot trial and an earlier feasibility study [15, 16], as well as by the sociological theory that informed the original intervention logic model and design [17] and empirical evidence regarding the processes of school effects on health [10, 18]. The second stage will involve refining or augmenting this limited list of a priori hypotheses prior to the collection of quantitative follow-up data, informed by findings emerging from the process evaluation that is integral to the current phase III RCT. In the third stage, we will test our hypotheses using a combination of process and outcome data from the phase III trial: for example, to examine moderation and mediation. Our aim is thus not merely to assess whether LT is an effective intervention, but to develop empirically informed mid-range theory [6, 19] (i.e. theory about empirical phenomena that can be verified with data) about school processes, how these may be modified to reduce aggression and bullying, and how this is shaped by context.
Stage 1: pre-hypothesised theory of change and CMO configurations
Our original theory of change drew predominantly upon sociological theory, focusing on system-level change. Before the earlier pilot trial, we described this initial theory of change using a diagrammatic logic model (Fig. 2). Informed by Markham and Aveyard’s [17] theory of human functioning and school organisation, it started from the theoretical position that schools have a wide-ranging influence on students’ attitudes and actions. Students’ capacities for practical reasoning (thinking) and affiliation (forming relationships), which are essential to their choosing to avoid bullying, aggression and other risk behaviours, are theorised as being facilitated by commitment to a school’s ‘instructional’ and ‘regulatory’ orders. Based on Bernstein’s work [20], the instructional order refers to school processes that enable student learning. The regulatory order focuses on how the school inculcates student values and a sense of belonging.
The theory suggests that schools build student commitment to these instructional and regulatory orders by modifying ‘boundaries’ and ‘framing’ [17]. It is theorised that if schools reduce barriers between the school and the community it serves, between students and teachers, between student groups, and between academic subjects, and if they increase the extent to which provision is framed in terms of the needs and preferences of students, then more students will become committed to the instructional and regulatory orders of the school [17] and will develop the practical reasoning and affiliations necessary to avoid risk behaviours.
The LT intervention aims to reduce bullying and aggression by eroding boundaries and increasing student-centred ‘framing’ of school provision, thereby promoting students’ commitment to the school’s instructional and regulatory orders. The intervention is a means of triggering these mechanisms, which in turn encourage students to make healthier choices. The original theory of change was presented in a diagrammatic logic model of intervention processes, mechanisms and outcomes at the end of the pilot trial (Fig. 2). The model at this stage was linear, specifying hypothesised causal pathways that would lead to intended health benefits in ideal circumstances. This linearity reflected our initial lack of engagement with realist evaluation, as well as our desire to present a relatively clear two-dimensional diagrammatic summary of the intervention to schools. The linear logic model was, however, a useful starting point in setting out the main mechanisms to be examined in the current realist RCT (boundary erosion, student-centred framing and student commitment to the instructional and regulatory orders), which we hope will occur in most schools and so enable intervention benefits to be realised.
Pre-hypothesised intervention mechanisms
To test this empirically, we will examine various mediation hypotheses suggested by this logic model and the intervention theory underlying it:
▪ Hypothesis 1: LT schools will report reduced student-student, student-staff and academic-broader learning boundaries and increased student-centred framing at follow-up.
▪ Hypothesis 2: LT schools will report higher rates of student commitment to the schools’ instructional and regulatory orders by follow-up.
▪ Hypothesis 3: LT schools will report higher rates of student life skills and positive, trusting and empathetic relationships, and lower rates of student involvement in anti-school peer groups, by follow-up.
▪ Hypothesis 4: intervention effects on primary and secondary trial outcomes will be mediated by these reductions in boundaries, increases in student-centred framing, student commitment, skills and relationships, and decreases in involvement in anti-school peer groups.
Pre-hypothesised contextual barriers and facilitators
The logic model as developed at the end of the pilot trial (presented in Fig. 2) was also limited by not engaging with how contextual factors might modify intervention implementation and mechanisms. We will develop hypotheses about this in the course of the evaluation (see stage 2, below) but we started with a number of a priori hypotheses about how context might moderate intervention mechanisms and outcomes, informed by literature on the sociology of education, youth and health and previous similar interventions (cited below).
▪ Hypothesis 1: the intervention will be more acceptable and implemented with better fidelity when it is in line with existing school institutional approaches and teacher practices that already aim to erode the boundaries the intervention addresses [18, 21].
▪ Hypothesis 2: the intervention will be implemented with better fidelity when the school has the capacity to implement it properly, in terms of: the action team being chaired or otherwise led by a person with real authority in the school; the action team involving other individuals so that it is taken seriously by both staff and students; the action team being formally linked into the school’s decision-making structures (e.g. through involvement of a senior leader); and the school being a generally functional institution (e.g. stable staffing, not in crisis with respect to targets and inspections) [15, 18, 21].
▪ Hypothesis 3: the intervention will be implemented with better fidelity in schools that include students with varying degrees of educational engagement in intervention activities (e.g. action groups), including students who have a history of, or are considered likely to be involved in, bullying behaviours [15, 16].
▪ Hypothesis 4: drawing on evidence from cross-sectional studies from the United States (US) [22, 23], we hypothesise that restorative approaches will be implemented with less fidelity of function in schools with higher proportions of African Caribbean or other minority ethnic students. This hypothesis suggests that schools may be unconsciously prejudiced in their practice. One proposed theoretical explanation is that school institutions (like criminal justice systems) perceive a higher proportion of minority ethnic students as a threat and respond with greater punitiveness. These processes may play out differently in the UK, which has a different social and historical context from the US.
▪ Hypothesis 5: the intervention will be more effective in schools with more students from low socio-economic status (SES) backgrounds, since eroding boundaries is hypothesised to be more important for these students [17].
Regarding gender, a similar whole-school intervention addressing bullying and aggression in the US [24] reported a range of benefits for boys but not girls. However, in the absence of a process evaluation, the reasons for these differential effects are unclear. Therefore, before developing hypotheses regarding the gendered nature of effects, we will examine emerging data from our process evaluation with a focus on gender, so that at stage 2 (below) we can hypothesise whether and how the intervention may be implemented for, and received by, girls and boys differently.
This development of CMO configurations meant that our theory of change moved from being simplistic, linear and acontextual to being realist and contextual. However, we retained our original, linear logic model, since incorporating all of these CMO configurations into a single diagram would have required mapping a complex array of processes in a three-dimensional model, sacrificing clarity. The logic model, though acontextual, nonetheless suggests how the intervention might proceed under ideal conditions and is also useful for communicating the basic intervention theory to schools, young people, parents and other groups.
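To make the structure of these configurations concrete, the sketch below records one of the above hypotheses (hypothesis 5, on socio-economic status) as a simple context-mechanism-outcome record. This is purely illustrative: the field names and wording are ours and are not part of the trial's formal documentation.

```python
# Illustrative sketch: one pre-hypothesised CMO configuration held as a
# structured record. Field names and wording are our own, not the trial's.
from dataclasses import dataclass

@dataclass
class CMOConfiguration:
    context: str    # setting condition thought to moderate the mechanism
    mechanism: str  # process through which the intervention is expected to work
    outcome: str    # result expected when the mechanism fires in that context

ses_cmo = CMOConfiguration(
    context="School with a high proportion of students from low-SES backgrounds",
    mechanism="Boundary erosion builds commitment to instructional and regulatory orders",
    outcome="Larger reductions in bullying and aggression at follow-up",
)
print(ses_cmo)
```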
Stage 2: empirical process evaluation to refine CMO hypotheses
In this stage, which is ongoing, we will refine and augment the theory and hypotheses developed in stage 1 by drawing on empirical evidence. In our phase III trial’s integral process evaluation, we are collecting data on intervention implementation, receipt, acceptability and normalisation (i.e. sustainability), as well as mechanisms and context. In traditional trials, process evaluation is used to examine intervention fidelity so that it can be determined whether any limitations in effectiveness reflect a true lack of effectiveness of the intervention model or merely a failure of implementation of that model. However, our process evaluation will go beyond this to explore intervention mechanisms and how these interact with context to generate outcomes (or fail to do so).
Data are being collected via:
▪ diaries completed by intervention deliverers (trainers and external facilitators working in schools to establish and support action groups);
▪ researcher observations;
▪ interviews with school staff, students and intervention deliverers;
▪ surveys that monitor satisfaction and implementation of core components of the intervention; and
▪ in-depth case studies involving participant observation, focus groups and interviews in a selection of intervention schools.
Qualitative research captures research participants’ own meanings, their sense of agency and how this inter-relates with the social structure of the intervention context. It can thus suggest hypotheses about the complex mechanisms by which our intervention might work, including those not originally anticipated by us in stage 1 and those issues that were under-theorised at stage 1 (e.g. gender). Qualitative data may provide important insight into contexts and unintended pathways that can then be tested via quantitative mediation and moderation analyses. While qualitative research can also be conducted after quantitative analyses to try to explain unexpected findings, this is less than ideal: it may mean that quantitative analyses are insufficiently focused on testing hypotheses (and so vulnerable to charges of ‘data dredging’) and that qualitative analyses are biased towards searching for data that confirm quantitative findings. Thus, during the course of the trial, the qualitative data described above will be analysed using framework analysis and grounded theory methods [25] to refine our theory and CMO hypotheses prior to quantitative testing in stage 3.
Stage 3: testing mid-level CMO theories
In this stage, we will test hypotheses derived in stages 1 and 2 via quantitative analyses of effect mediation (to examine mechanisms) and moderation (to examine contextual contingencies). There are concerns among trialists that multiple analyses within trials can lead to false positive results [26]. However, our approach of grounding these analyses in hypothesis testing minimises these risks and ensures transparency. The use of prior theory and empirical evidence derived from stages 1 and 2 means that the quantitative tests described below will be limited in number and focused on promising hypotheses, rather than being unfocused ‘fishing expeditions’ likely to produce as many false as real positive results [27–29]. Combining process and outcome data enables us to develop empirically informed mid-range theory [6, 19] about school processes, how these may be modified by the intervention, and the extent to which LT may be transferable to a range of populations and contexts.
Causal mediation analysis helps to identify process or mediating variables that lie on the causal pathways between the treatment and the outcome [30, 31]. Mediators are post-baseline measures of interim effects which may or may not account for intervention effects on end-outcomes. In our phase III trial, mediation analysis will assess whether intervention effects on aggression and bullying might be accounted for by intermediate effects on: boundaries between and among students and staff, and between students’ academic and broader learning; the student-centred framing of provision; student commitment to schools’ instructional and regulatory orders; student life skills and affiliation; and student involvement in anti-school peer groups. For example, we will examine whether boundaries between staff and students mediate any effects of the LT intervention on bullying and aggression in schools. Within the context of evaluating social interventions such as this, measuring underlying change mechanisms (i.e. mediators) as well as outcomes provides information about which mechanisms are critical for influencing outcomes. This information will enable evaluators conducting realist RCTs to refine the theory of change: focusing on effective intervention components, specifying causal pathways, and removing ineffective components and mechanisms that do not account for change. Table 1 lists the variables and scales used to test mediators in the LT trial. These map directly onto the pathways and constructs presented in the logic model in Fig. 2. Ultimately the analysis will assess whether these appear to mediate beneficial intervention effects on primary and secondary trial outcomes.
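As a simple illustration of the product-of-coefficients logic that underlies such a mediation analysis, the sketch below estimates an indirect (mediated) effect on simulated data. The variable names (treat, boundaries, bullying) are illustrative rather than the trial’s actual measures, and school-level clustering and covariate adjustment are omitted for brevity.

```python
# Minimal mediation sketch on simulated data (illustrative names only;
# clustering by school and covariates are ignored for simplicity).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
treat = rng.integers(0, 2, n)                    # 1 = intervention school
boundaries = -0.5 * treat + rng.normal(size=n)   # hypothesised mediator
bullying = -0.4 * boundaries + rng.normal(size=n)  # outcome

# a path: effect of intervention on the mediator
a_model = sm.OLS(boundaries, sm.add_constant(treat)).fit()
# b path: effect of mediator on outcome, adjusting for intervention
X = sm.add_constant(np.column_stack([treat, boundaries]))
b_model = sm.OLS(bullying, X).fit()

a = a_model.params[1]        # intervention -> mediator
b = b_model.params[2]        # mediator -> outcome
indirect = a * b             # mediated (indirect) effect
direct = b_model.params[1]   # remaining direct effect of the intervention
print(f"indirect effect = {indirect:.3f}, direct effect = {direct:.3f}")
```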
Table 1 School-level and individual-level pre-hypothesised mediators of Learning Together

Whereas mediators establish ‘how’ or ‘why’ one variable predicts or causes a change in an outcome variable, moderators address ‘when’ or ‘for whom’ a predictor is more strongly related to an outcome [31]. Operationally, a moderator is a baseline variable that alters the direction or strength of the relationship between a predictor and an outcome [31]. This allows evaluators to investigate not only the general effectiveness of an intervention, but which interventions work best for which people and in which settings [31].
Informed by previous whole-school interventions [21] and the LT pilot trial [16], we hypothesise that schools already aiming to erode staff-student boundaries before the intervention commences are more likely to implement school-level actions via the school action group meetings to change school boundaries, and thus, ultimately, to report lower levels of bullying and aggression at follow-up. We also hypothesise that schools with higher proportions of students of low socio-economic status will report lower levels of bullying and aggression at follow-up. Thus, schools’ priorities (assessed via baseline interviews with school staff) and students’ socio-economic status are both introduced as moderators of the relationship between the intervention and levels of bullying and aggressive behaviour. These interaction effects (i.e. moderators) are important because they allow evaluators to assess whether the intervention is inappropriate, or even potentially harmful, in a particular time and context. In our trial, we will examine various school-level and individual-level characteristics that may moderate implementation and mechanisms; Table 2 summarises the data sources used for this examination. With only 20 intervention schools we are unlikely to be able to test statistically whether implementation and effectiveness are significantly better in the types of school contexts set out in the above CMO configurations, so most of these data and analyses will be qualitative, supporting hypothesis building rather than testing. Given these constraints on testing CMO hypotheses, we allow for the possibility that the results of these analyses may be indeterminate or inconsistent with the theory of change.
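To illustrate how such an interaction effect can be estimated, the sketch below fits a regression with a treatment-by-moderator interaction term on simulated data. The low_ses moderator and the simulated effect sizes are hypothetical and do not correspond to the trial’s actual measures or results; clustering by school is again ignored for simplicity.

```python
# Minimal moderation (interaction) sketch on simulated data
# (illustrative variable names; no clustering or covariates).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),    # 1 = intervention school
    "low_ses": rng.integers(0, 2, n),  # 1 = low socio-economic status
})
# Simulate a larger benefit (lower bullying score) for low-SES students
df["bullying"] = (-0.2 * df.treat
                  - 0.3 * df.treat * df.low_ses
                  + rng.normal(size=n))

# The treat:low_ses coefficient estimates how the intervention effect
# differs by SES, i.e. the moderation (interaction) effect.
model = smf.ols("bullying ~ treat * low_ses", data=df).fit()
print(model.params[["treat", "treat:low_ses"]])
```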
Table 2 Exploring pre-hypothesised moderators of Learning Together (LT)

Ethical issues
The Learning Together trial, which is used as a worked example in this manuscript, was approved by the Institute of Education Research Ethics Committee (18 November 2013, ref. FCL 566) and the University College London Research Ethics Committee (30 January 2014, Project ID: 5248/001).