Abstract
Under-identification of mental health difficulties (MHD) in children and young people contributes to the significant unmet need for mental health care. School-based programmes have the potential to improve identification rates. This systematic review aimed to determine the feasibility of various models of school-based identification of MHD. We conducted systematic searches in Medline, Embase, PsycINFO, ERIC, British Education Index, and ASSIA using terms for mental health combined with terms for school-based identification. We included studies that assessed feasibility of school-based identification of students in formal education aged 3–18 with MHD, symptomatology of MHD, or exposed to risks for MHD. Feasibility was defined in terms of (1) intervention fit, (2) cost and resource implications, (3) intervention complexity, flexibility, manualisation, and time concerns, and (4) adverse events. Thirty-three studies met inclusion criteria. The majority focused on behavioural and socioemotional problems or suicide risk, examined universal screening models, and used cross-sectional designs. In general, school-based programmes for identifying MHD aligned with schools’ priorities, but their appropriateness for students varied by condition. Time, resource, and cost concerns were the most common barriers to feasibility across models and conditions. The evidence base regarding feasibility is limited, and study heterogeneity prohibits definitive conclusions about the feasibility of different identification models. Education, health, and government agencies must determine how to allocate available resources to make the widespread adoption of school-based identification programmes more feasible. Furthermore, the definition and measurement of feasibility must be standardised to promote any future comparison between models and conditions.
Background
Mental health difficulties (MHD) in children and young people (CYP) are an important public health challenge globally (Patel et al. 2007). MHD, including diagnosed psychiatric disorders, as well as subclinical symptoms of poor mental health (e.g. behavioural and socioemotional problems), are associated with a number of negative short- and long-term social, health, academic, and economic outcomes (Belfer 2008; Breslau et al. 2008; Green et al. 2005; Jokela et al. 2009). Whilst several early intervention strategies have shown success in reducing the burden of MHD in CYP (Children and Young People’s Mental Health and Wellbeing Taskforce 2015; Fazel et al. 2014; National Health Service England 2016), only about 15–30% of CYP with MHD receive any treatment (Burns et al. 1995; Eklund and Dowdy 2014; Kohn et al. 2004). Under-identification contributes to this gap: frontline gatekeepers such as teachers or primary care providers only identify 0.6–16% of CYP with MHD (Jensen et al. 2011; Levitt et al. 2007).
Improving rates of identification is important for increasing access to care and support for CYP with MHD. Schools are well placed to identify and support CYP with MHD due to their near universal access to CYP, high number of contact hours, close relationships with students and families, and the fact that the majority of MHD begin during the schooling years (Department of Health and Department for Education 2017; Humphrey and Wigelsworth 2016; Weist et al. 2007; Williams 2013). Furthermore, the recent UK Government Green Paper on CYP’s Mental Health sets expectations for schools to take a central role in the identification of and response to MHD (Department of Health and Department for Education 2017). Yet, despite these expectations, many school staff members feel unprepared to recognise MHD in their students (Day et al. 2017; Evans et al. 2016).
Schools currently use four main models to identify students with MHD (Panel 1). Universal screening refers to the assessment of all students using self-, teacher-, or parent-report measures. Selective screening is similar to universal screening but only assesses students with certain identifiable risk factors. Staff in-service training refers to increasing school staff’s knowledge and building capacity to recognise and refer students at-risk of or experiencing MHD. Curriculum-based models centre around educating students about MHD and rely on students to identify MHD and communicate concerns appropriately. In the UK, over 80% of schools currently rely on ad hoc identification of MHD. Systematic approaches are far less common; for example, only 15% of schools use universal screening, and a quarter use selective screening (NatCen Social Research and The National Children’s Bureau Research and Policy Team 2017).
School-based programmes have the potential to improve rates of MHD identification in CYP (Anderson et al. 2018). In the design and implementation of such programmes, it is important to consider not only effectiveness, but also social validity (Craig et al. 2008; Humphrey and Wigelsworth 2016). Social validity refers to ‘social importance’, or how much value society ascribes to the goals, procedures, and effects of a given programme (Wolf 1978). The social validity of an identification programme may refer to its feasibility, acceptability, and utility (Humphrey and Wigelsworth 2016), and is key for promoting successful implementation and long-term sustainability.
In this review, ‘feasibility’ refers to the impact of factors that affect programme implementation, including demand, ease of delivery, practicality, flexibility, and some aspects of acceptability (Bird et al. 2014; Bowen et al. 2009). Barriers and facilitators at the intervention level as well as the larger context in which an intervention takes place (e.g. school or policy context) can affect implementation and sustainability (Domitrovich et al. 2008; Ozer 2006). Understanding these barriers and facilitators is important for scaling up evidence-based mental health interventions in schools (Fazel et al. 2014). To be sustainable, programmes must be feasible for all stakeholders, including students, parents, school staff, and mental health professionals.
Despite the clear importance of, and recent policy focus on, early identification of MHD, there is a paucity of evidence for the effectiveness, feasibility, and acceptability of school-based identification models, especially within the UK (Fazel et al. 2014; Humphrey and Wigelsworth 2016). A recent systematic review (Anderson et al. 2018) examined the effectiveness of school-based models of identification. The present linked review sought to determine the feasibility of various models of school-based MHD identification.
Methods
The protocol for this review is registered with the International Prospective Register of Systematic Reviews (PROSPERO; https://www.crd.york.ac.uk/prospero; registration number: 42016053084 (18 January 2017 version)).
Definition of Feasibility
At the outset of this review, a literature search returned only one systematic tool for measuring the feasibility of mental health interventions: the Structured Assessment of FEasibility (SAFE) tool (Bird et al. 2014) (additional frameworks have been developed and tested since; see Weiner et al. (2017)). The SAFE tool features sixteen different aspects of feasibility, which we adapted (excluding ‘effectiveness’, ‘pilotable’, and ‘reversible’ criteria) to fit our research questions. To facilitate concise reporting, we further grouped the aspects of feasibility into four headings:
1. Intervention fit: matches prioritised goals, applicable to population of interest
2. Cost and resource implications: costly setup, cost-saving, additional human resources, additional material resources, staff training, on-going supervision
3. Intervention complexity, flexibility, manualisation, and time concerns
4. Adverse events
Inclusion and Exclusion Criteria
We included studies that assessed feasibility of school-based interventions to identify students aged 3–18 years with (1) diagnosable MHD, (2) symptoms of mental ill health, or (3) exposure to psychosocial risk leading to increased risk of MHD. Most of the above feasibility categories are inherently informative; for example, whether a programme was perceived as too complex has clear, direct implications for feasibility. However, for three categories (staff training, on-going supervision, and manualisation), we required more than a statement of whether a programme included these elements, namely a comment on their implications for a programme’s implementation or sustainability. We excluded studies that focused on cost-effectiveness, as this outcome is included in our linked review (Anderson et al. 2018). We placed no restriction on who provided information about feasibility. We included studies that did not explicitly state that they were measuring feasibility but reported on at least one of the outcomes listed above, as well as studies that reported in-principle findings (i.e. did not examine a specific intervention). We excluded studies that focused on global or specific learning disabilities or on the psychometric properties of an identification measure. We did not restrict by study design.
Search Strategy
We searched the following electronic databases in May and June 2017 and again in July 2018: Medline and Embase via OvidSP; PsycINFO, ERIC, and British Education Index via EBSCOhost; and ASSIA via ProQuest. The search strategy (Supplementary Table 1) included two domains: school-based identification and mental health. We combined our search terms with subject heading terms in each database. We collected additional citations through hand-searching reference lists of key publications and relevant journals.
Study Selection and Data Extraction
Independent reviewers (ES, JKA, EH) double screened studies in three stages: (1) reviewers screened titles and removed obviously irrelevant citations; (2) reviewers judged abstracts against inclusion/exclusion criteria; and (3) reviewers examined full texts of potentially relevant citations against inclusion/exclusion criteria. We resolved disagreements by discussion. Two reviewers (ES, JKA) piloted data extraction tables with three studies to ensure they captured all relevant information. The reviewers independently extracted data; disagreements were resolved by discussion and, if necessary, by a third reviewer (EH). We extracted information on study design, study aims, school level(s), identification measures, informants, programme descriptions, and sample characteristics. For feasibility, we extracted outcomes according to the SAFE categories, the feasibility informants, and the method of determining feasibility.
Critical Appraisal
We assessed quantitative studies using the Canadian Effective Public Health Practice Project (EPHPP) Quality Assessment Tool for Quantitative Studies (Armijo-Olivo et al. 2012) and qualitative studies using the Critical Appraisal Skills Programme (CASP) Qualitative Research Checklist (Critical Appraisal Skills Programme (CASP) 2013). We used both tools to assess mixed methods studies.
Synthesis of Results
We provide a numerical account of included studies and employ narrative synthesis to present results, with studies grouped by the type of identification model evaluated. We follow Popay et al.’s (2006) guidance on narrative synthesis to structure reporting and provide a summary and conclusions in the Discussion.
Results
Thirty-three studies met inclusion criteria (see Figure 1 for PRISMA flowchart and Supplementary Table 2 for an account of studies). The vast majority were conducted after the year 2000 (n = 29) and were from the United States (n = 27). Most studies used a cross-sectional design to assess feasibility (n = 26) and examined universal screening (n = 30). Behavioural and socioemotional problems (n = 14) and suicide risk (n = 11) were the most-studied conditions.
Quality of Included Studies
Quantitative Studies
We provide quality ratings in Supplementary Table 3. The overall methodological quality of included studies was low. The majority were rated ‘weak’ in study design (n = 25) and had ‘moderate’ risk of selection bias (n = 21). Only seven studies used validated and reliable tools to measure feasibility. Drop-out rates varied between studies.
Qualitative Studies
Four of the studies with qualitative elements (D’Souza et al. 2005; Kirk 2014; Nadeem et al. 2016; Whitney et al. 2011) scored well on the CASP tool, indicating appropriate research design, recruitment, data collection, and data analysis. Each study also included a clear statement of aims and findings and added value to the evidence base. One study (Gilmore et al. 2004) did not score as highly, due to lack of a clear aim and insufficiently rigorous data analysis.
Feasibility of School-Based Identification of Mental Health Difficulties
We present characteristics of included studies in Table 1. We present feasibility findings and feasibility reporting by study in Supplementary Tables 4 and 5, respectively.
Universal and Selective Screening
Thirty studies reported on the feasibility of universal or selective screening (Supplementary Table 5). Fourteen reported on screening programmes for behavioural and socioemotional problems (Bruhn et al. 2014; Chartier et al. 2008; Davis 2014; Donohue et al. 2015; Edmunds et al. 2005; Gilmore et al. 2004; Kirk 2014; McManus 2009; Nemeroff et al. 2008; Poulsen et al. 2015; Romer 2012; Shortt et al. 2006; Vander Stoep et al. 2005; Walker et al. 1994), eight on suicide risk (Eckert et al. 2003; Fox et al. 2013; Gould et al. 2005; Hallfors et al. 2006a; Miller et al. 1999; Robinson et al. 2011; Scherff et al. 2005; Whitney et al. 2011), four on substance abuse (Chatterji et al. 2004; Curtis et al. 2014; Hallfors et al. 2006b; Hallfors et al. 2000), three on depression (Chatterji et al. 2004; Fox et al. 2013; Lyon et al. 2016), and one each on ADHD (Barry et al. 2016), anxiety (Chatterji et al. 2004), and eating disorders (D’Souza et al. 2005).
Intervention Fit
Twenty-two studies considered whether screening programmes were applicable to students and fit with prioritised goals. From the perspective of school staff, screening for behavioural and socioemotional problems (Davis 2014; Gilmore et al. 2004; Kirk 2014; McManus 2009; Romer 2012; Shortt et al. 2006; Walker et al. 1994) and eating disorders (D’Souza et al. 2005) matched school priorities in practice. However, when asked about in-principle feasibility, staff did not view identification of such problems as a school responsibility (Bruhn et al. 2014). Similarly, four in-principle studies comparing different identification models found that school staff were not persuaded screening for suicide risk was beneficial or acceptable (Eckert et al. 2003; Miller et al. 1999; Scherff et al. 2005; Whitney et al. 2011), and questioned whether screening was within schools’ remit (Whitney et al. 2011). In these studies, staff preferred in-service training and curriculum-based models over screening. In practice, views on screening for suicide risk were mixed, with some staff finding it an acceptable model and others feeling it was not beneficial (Hallfors et al. 2006a; Robinson et al. 2011). Support from teachers and superintendents increased student participation in screening, particularly for programmes that parents did not view as important (Barry et al. 2016). In general, parental support for screening was strong; nearly all parents (84–89%) supported screening for depression and suicide risk (although support differed by ethnicity and parental history of mental illness) (Fox et al. 2013) and over 99% of parents were satisfied with a post-disaster screening programme for behavioural and socioemotional problems (Poulsen et al. 2015). Similarly, students and mental health professionals found it important to screen for risk for behavioural and socioemotional problems (Romer 2012; Shortt et al. 2006).
In terms of relevance for students, school staff and parents generally viewed screening programmes for behavioural and socioemotional problems favourably (Davis 2014; Kirk 2014; McManus 2009; Nemeroff et al. 2008). In contrast, staff raised concerns about the applicability of programmes for suicide risk (Hallfors et al. 2006a; Miller et al. 1999) and eating disorders (D’Souza et al. 2005), believing that students would not take these screenings seriously. Indeed, the programme for eating disorders was more effective for female students than for male students, and boys generally viewed the programme less favourably than did girls (D’Souza et al. 2005). Similarly, in a programme for suicide risk, those at highest risk were less likely to find the programme helpful (Robinson et al. 2011). Notably, the conditions for which screening received less support were also less prevalent among students than behavioural and socioemotional problems. For these less common conditions, selective screening of smaller, higher-risk groups had greater acceptance (D’Souza et al. 2005; Hallfors et al. 2006b).
Cost and Resource Implications
Thirteen studies considered cost and resource implications of screening. Regardless of condition screened for, schools were concerned about programme sustainability due to human resource requirements from the data collection stage to the provision of on-going support for identified students (Bruhn et al. 2014; D’Souza et al. 2005; Donohue et al. 2015; Hallfors et al. 2006a; Hallfors et al. 2000; Vander Stoep et al. 2005; Whitney et al. 2011). These concerns were reflected in a modelling study that found depression screening would require additional mental health professionals in order to accommodate newly identified students (Lyon et al. 2016). Whilst several programmes offered training for school staff, only two reported on training feasibility. One study reported teacher training required only one hour (Barry et al. 2016), although another found that many staff declined training, believing their professional training to be sufficient (Hallfors et al. 2006a). Similarly, several programmes offered supervision to school staff, but only two studies commented on supervision feasibility. These studies found that on-going supervision was crucial and that school staff doubted programme sustainability in the absence of on-going support from research staff (D’Souza et al. 2005; Hallfors et al. 2006a). In terms of additional material resources, schools’ most common concern was about purchasing screening tools or equipment for computerised testing (Bruhn et al. 2014; Donohue et al. 2015; Hallfors et al. 2000; Vander Stoep et al. 2005).
Five studies commented directly on costs of screening with mixed findings. One study reported general concerns about schools’ budgets (Bruhn et al. 2014), whilst four reported absolute costs. Two studies on behavioural and socioemotional screening found relatively low costs: data collection alone cost £4.60 per student (Edmunds et al. 2005) and a full programme (including follow-up support) cost US$9–15 per student (Vander Stoep et al. 2005). However, two other studies reported much higher costs of US$149–194 per student screened (Chatterji et al. 2004; Walker et al. 1994). These costs were not clearly related to the identity of those who delivered the screening programme (i.e. school staff (Edmunds et al. 2005; Walker et al. 1994), research staff (Vander Stoep et al. 2005), or a combination of in-school/external staff (Chatterji et al. 2004)).
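To put these per-student figures in context, the following rough scaling is our own illustration rather than a calculation reported in any included study; the school size of 1,000 students is assumed, and the mixed currencies limit direct comparison:

\[
\begin{aligned}
\text{Data collection only (Edmunds et al. 2005):} \quad & 1{,}000 \times \text{£}4.60 = \text{£}4{,}600\\
\text{Full programme incl. follow-up (Vander Stoep et al. 2005):} \quad & 1{,}000 \times \text{US\$}9\text{–}15 = \text{US\$}9{,}000\text{–}15{,}000\\
\text{Higher-cost programmes (Chatterji et al. 2004; Walker et al. 1994):} \quad & 1{,}000 \times \text{US\$}149\text{–}194 = \text{US\$}149{,}000\text{–}194{,}000
\end{aligned}
\]

Even allowing for the currency difference, the more than tenfold spread between the cheapest and most expensive full programmes illustrates why absolute costs, and not only staff perceptions, shape whether screening is feasible within a school budget.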
Intervention Complexity, Flexibility, Manualisation, and Time Concerns
Thirteen studies commented on the complexity of screening programmes. Not all programmes were perceived as complex (Gilmore et al. 2004; Hallfors et al. 2006b; McManus 2009), but when they were, common sources of difficulty included obtaining consent, persuading teachers to release student time to complete assessments, collecting and analysing data, and integrating programmes into school culture. Schools viewed active parental consent requirements as a significant barrier (Barry et al. 2016; Chartier et al. 2008; Kirk 2014); obtaining consent was particularly difficult for students of lower socioeconomic status (Barry et al. 2016) and for students at high risk for MHD (Chartier et al. 2008). There was no clear consensus regarding the preferred method or mode of data collection. Whilst some questionnaires were easy to complete (McManus 2009), others used difficult wording (Donohue et al. 2015). For school staff, there was also no consensus regarding whether screening was more feasible in computerised or traditional format, but students tended to prefer computerised assessment (Hallfors et al. 2000; Nemeroff et al. 2008). Perceptions of programmes requiring school data varied; some were viewed as complex (Edmunds et al. 2005) but others as relatively easy (Hallfors et al. 2006b), depending on the availability and ease of use of school records. School staff also found it difficult to fully comply with screening protocols (Hallfors et al. 2006a) and to integrate screening into existing structures (D’Souza et al. 2005).
Eighteen studies commented on time concerns. Across conditions, school staff believed programmes were time-prohibitive, and had difficulty finding time to administer questionnaires, enter and analyse data, follow-up with identified students, and integrate programmes into school culture (Barry et al. 2016; D’Souza et al. 2005; Donohue et al. 2015; Eckert et al. 2003; Edmunds et al. 2005; Gilmore et al. 2004; Hallfors et al. 2006a; Hallfors et al. 2000; Kirk 2014; Miller et al. 1999; Nemeroff et al. 2008; Scherff et al. 2005). Programmes that identified large numbers of students were more likely to be viewed as overly time-intensive due to follow-up requirements (Hallfors et al. 2006a). Only four studies (Curtis et al. 2014; Davis 2014; Hallfors et al. 2000; Robinson et al. 2011) reported that staff, parents, and students viewed screening as time-efficient, with computerised assessment helping to reduce time requirements (Hallfors et al. 2000). Four of the five studies that quantified time resources found that completion of questionnaires required 15–50 minutes (Curtis et al. 2014; Edmunds et al. 2005; McManus 2009; Vander Stoep et al. 2005) and follow-up with identified students required 10–30 minutes per student (Curtis et al. 2014), though one programme reported time requirements of 6.43 hours per student (Walker et al. 1994). Whilst school staff generally believed that this was reasonable, large numbers of identified students overwhelmed schools (Hallfors et al. 2006a).
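A simple illustrative calculation, using the per-student figures above but an assumed caseload of 100 identified students (not drawn from any included study), shows how follow-up alone can overwhelm staff time:

\[
\begin{aligned}
\text{Follow-up at 10–30 min per student:} \quad & 100 \times (10\text{–}30)\ \text{min} \approx 17\text{–}50\ \text{staff hours}\\
\text{Follow-up at 6.43 h per student:} \quad & 100 \times 6.43\ \text{h} = 643\ \text{staff hours}
\end{aligned}
\]

Even at the lower per-student estimates, follow-up can consume several days to more than a working week of staff time, which helps explain why programmes identifying many students were viewed as overly time-intensive.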
Four studies evaluated screening programme flexibility, all of which found that school staff valued the ability to tailor programmes to fit schools’ and students’ needs. Schools adapted programmes both in terms of format (Curtis et al. 2014; D’Souza et al. 2005) and target population (Hallfors et al. 2006a; Nemeroff et al. 2008), which increased perceived feasibility.
Adverse Events
Only two studies reported on potential harms of screening, both of which concerned programmes that screened for suicide risk. These studies compared distress and suicidal ideation of students who were or were not exposed to questions about suicide, and found no significant difference in either distress or suicidal ideation between the groups (Gould et al. 2005; Robinson et al. 2011), including for students at high risk for suicide (Gould et al. 2005).
Staff In-service Training
Eight studies reported on the feasibility of staff in-service training, seven of which focused on identification of suicide risk (Eckert et al. 2003; Eckert et al. 2006; Kalafat and Elias 1994; Miller et al. 1999; Nadeem et al. 2016; Scherff et al. 2005; Whitney et al. 2011) and one on ADHD (Sayal et al. 2006). All but two studies (Nadeem et al. 2016; Sayal et al. 2006) examined in-principle feasibility, and all examined the views of school staff.
Intervention Fit
All eight studies commented on intervention fit. In general, teachers perceived in-service training for identifying ADHD as appropriate, relevant, and useful (Sayal et al. 2006). There was no clear consensus on whether in-service training for identifying suicide risk matched school priorities. Although staff questioned whether mental health should be the responsibility of schools (Nadeem et al. 2016; Whitney et al. 2011), they generally viewed in-service training as beneficial for students (Eckert et al. 2003; Miller et al. 1999; Scherff et al. 2005). Furthermore, although school staff believed that in-service training was appropriate for a variety of students (Miller et al. 1999), some evidence suggested that female students may find staff in-service training more acceptable than do males (Eckert et al. 2006). Finally, school staff expressed concern about teacher and parent buy-in (Nadeem et al. 2016; Whitney et al. 2011).
Cost and Resource Implications
Two studies commented on cost and resource implications of staff in-service training. School staff thought that resources for mental health were crucial, particularly for students without access to care outside of school (Nadeem et al. 2016). Staff thought that the training was valuable and that it would help them to identify and support students with mental health needs (Nadeem et al. 2016; Whitney et al. 2011).
Intervention Complexity, Flexibility, Manualisation, and Time Concerns
Two studies reported on complexity of staff in-service training programmes. School staff viewed in-service training as complex due to difficulties communicating with parents about their child’s risk (Nadeem et al. 2016). However, staff also believed that in-service training was easier to implement in comparison with other models of identification (i.e. curriculum-based models and universal screening) (Whitney et al. 2011). Three studies commented on time concerns. School staff viewed in-service training as intrusive into staff and student time (Eckert et al. 2003; Miller et al. 1999), although this was less of a concern for school superintendents (Scherff et al. 2005).
Adverse Events
Not described.
Curriculum-Based Models
Seven studies reported on the feasibility of curriculum-based models for identification of suicide risk (Eckert et al. 2003, 2006; Kalafat and Elias 1994; Miller et al. 1999; Scherff et al. 2005; Whitney et al. 2011), six of which examined in-principle feasibility according to school staff or students and one of which (Kalafat and Elias 1994) assessed in-practice feasibility according to students.
Intervention Fit
All seven studies commented on intervention fit. School staff generally agreed that curriculum-based models were beneficial, helpful, and appropriate for a variety of students (Eckert et al. 2003; Miller et al. 1999; Scherff et al. 2005). However, some doubted the fit for younger students and raised concerns about lack of teacher buy-in and parental objections (Whitney et al. 2011). In-principle perceptions varied among students, with female students finding curriculum-based models more acceptable and less intrusive than male students (Eckert et al. 2006). In practice, however, most students found the curriculum-based approach to identifying suicide risk to be useful, interesting, relevant, and important (Kalafat and Elias 1994).
Cost and Resource Implications
Not described.
Intervention Complexity, Flexibility, Manualisation, and Time Concerns
One study commented on programme complexity and found that school principals appreciated that curriculum-based models were easy to implement, standardised, and deliverable to all students (Whitney et al. 2011). Six studies reported on time concerns. In general, school staff were concerned about curriculum-based models intruding into staff and student time (Eckert et al. 2003; Scherff et al. 2005; Whitney et al. 2011), although school superintendents were less concerned about time requirements (Scherff et al. 2005).
Adverse Events
The only study to report on adverse events found that just 3% of students rated classes about suicide as upsetting (Kalafat and Elias 1994).
Discussion
We identified 33 studies that reported on the feasibility of school-based identification of MHD. Most studies focused on behavioural and socioemotional problems or suicide risk, used cross-sectional designs, and examined feasibility from the perspective of school staff.
Screening programmes were the most common identification model evaluated. Most school staff perceived screening to be aligned with school priorities but viewed programmes that screened for less prevalent conditions (e.g. eating disorders, substance abuse, and suicide risk) as less applicable to all students. Across conditions, school staff were concerned about additional human and material resources, and costs varied widely (from less than £5 per student for data collection to nearly US$200 per student screened). Time concerns were common across models and conditions, and staff doubted whether schools had enough time to complete screening, particularly when the process involved following up with at-risk students. Attainment of consent and communication with parents were significant barriers to feasibility. Flexible programmes were reported as more feasible, particularly when universal screening could be adapted to target higher-risk groups only. No study found evidence of harms resulting from screening.
Staff in-service training and curriculum-based models were evaluated less often, and most of these studies focused on suicide risk. In-service training matched well with school priorities and was seen as helpful in principle, but in practice many school staff doubted whether mental health was their responsibility, which might explain concerns about time and resource requirements. Curriculum-based models also aligned with school priorities and were perceived as helpful, standardised, and easy to implement. School staff generally viewed both models as intrusive into staff and student time.
Suicide risk and ADHD were the only conditions represented across two or more identification models, thereby providing opportunity for comparison. For suicide risk, school staff preferred in-service training and curriculum-based programmes to universal screening. Compared with screening, these models aligned more with prioritised goals and were perceived as more applicable to a variety of students and easier to implement. Screening was ubiquitously viewed as the most time-intrusive model. ADHD identification had similar trends, whereby school staff and parents viewed staff in-service training as a better fit than screening.
There were important differences between findings from in-principle studies and studies of specific interventions, with the former generally indicating lower feasibility. In-principle studies found that MHD identification was less of a priority and that programmes were less applicable to students. One possible explanation is that studies of specific interventions took place in schools where identification was already enough of a priority to motivate participation in research, so programmes were viewed more favourably. Alternatively, initial concerns may be allayed once a programme is delivered in practice.
Quality of the Evidence
Although the majority of quantitative studies were rated ‘weak’ in terms of study design, Bowen et al. (2009) have argued that several designs besides RCTs are appropriate for assessing feasibility, including cross-sectional and pre-post designs. Studies used a variety of methods to measure feasibility, including authors’ observations, surveys, rating tools, and interviews. However, few used validated and reliable tools to assess feasibility, which is unsurprising given the scarcity of available instruments. The qualitative studies and qualitative elements of mixed methods studies were generally of high quality and examined feasibility in more depth by exploring context as well as the logic and reasoning underlying stakeholder perspectives.
Limitations
We acknowledge several limitations. First, we only included studies published in English. Second, the lack of standardised definition and validated measures of feasibility limited our ability to compare feasibility across studies and identification models. Finally, whilst the SAFE guidance was comprehensive, it would have been useful to compare it with other tools (nearly 40 identified aspects of feasibility were left out of the tool; see Appendix DS1 of Bird et al. 2014). Widening the scope of feasibility criteria would likely have led to inclusion of additional studies.
Implications for Practice
Although the evidence did not indicate one identification model as more feasible than others, we did identify a number of key barriers. Collaboration between schools and mental health professionals, as recommended by the Green Paper on CYP’s Mental Health (Department of Health and Department for Education 2017), may help address some of these concerns. For example, mental health professionals could consult with schools to reduce barriers such as complexity and training/supervision requirements and could further assist in following up with identified students. Sharing the responsibility of identification between health and education sectors would also address schools’ concerns that mental health is not their responsibility. Indeed, several other settings may complement schools in the identification of MHD, including primary care practices.
Cost and resource concerns are perhaps more difficult to address, as schools have limited budgets and resources. However, these concerns can be partially addressed through efficient use of existing resources. For example, using routinely collected school data could help identify specific groups of students at increased risk of MHD (Kuo et al. 2013), thereby increasing the positive predictive value of any screening tool. Furthermore, despite some evidence that programmes can be costly for both schools and society, it is clear that affordable programmes do exist; at an estimated US$9–15 per student for identification and one-to-one follow-up (Vander Stoep et al. 2005), the costs of school-based identification can be much lower than for specialist care (Snell et al. 2013). Such programmes also offer opportunity for early diagnosis and treatment, which can reduce the long-term costs of MHD (Williams 2013). Given the clear benefits of early identification, education, health, and government sectors must collaborate to most effectively allocate existing resources.
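To illustrate the point about positive predictive value (PPV), the standard screening relationship shows how PPV rises with the prevalence of the condition in the group screened; the sensitivity (0.80), specificity (0.90), and two prevalence scenarios below are assumed purely for illustration and are not taken from the included studies:

\[
\mathrm{PPV}=\frac{\text{sensitivity}\times\text{prevalence}}{\text{sensitivity}\times\text{prevalence}+(1-\text{specificity})\times(1-\text{prevalence})}
\]

\[
\mathrm{PPV}_{5\%\ \text{prevalence}}=\frac{0.80\times0.05}{0.80\times0.05+0.10\times0.95}\approx0.30,
\qquad
\mathrm{PPV}_{20\%\ \text{prevalence}}=\frac{0.80\times0.20}{0.80\times0.20+0.10\times0.80}\approx0.67
\]

Under these hypothetical values, screening a higher-risk group identified from routine school data could more than double the proportion of positive screens that reflect genuine need, thereby reducing the follow-up burden created by false positives.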
Finally, in creating feasible programmes, stakeholder participation is crucial. Encouragingly, in this review, the majority of included studies assessed feasibility by directly surveying or interviewing school staff, parents, and/or students. Involving stakeholders in all phases of intervention design, evaluation, and implementation yields better quality research and improved outcomes, and promotes better integration and sustainability, increased ownership, and greater cultural sensitivity (Brett et al. 2014; Wallerstein and Duran 2010).
Directions for Future Research
The first steps toward a better understanding of the feasibility of school-based MHD identification are to (1) create a standardised definition of feasibility and its components and (2) clarify how to reliably measure intervention feasibility. The development of standardised measures (Weiner et al. 2017) is crucial for both assessing feasibility of individual programmes and comparing feasibility across programmes. Furthermore, as feasibility can differ in principle and in practice, it is important to examine the relationship between the two through continued evaluation in all stages of intervention research (e.g. with detailed process evaluations (Craig et al. 2008; Moore et al. 2015)) and to assess feasibility in conjunction with effectiveness.
Furthermore, because programmes that are feasible and effective in one school may not be in others, researchers should explicitly examine school and policy contexts and their interaction with the intervention itself (Domitrovich et al. 2008; Ozer 2006). Such research is likely best conducted through mixed methods approaches and requires a detailed understanding of both intervention components and broader structural factors (Howarth et al. 2016). An examination of service context is particularly needed; most included studies were US-based, limiting generalisability to countries such as the UK, where long wait times often prohibit timely access to services (Frith 2017).
Finally, future research should continue to explicitly examine the potential for harms and unintended consequences related to MHD identification, as many school staff are concerned about the possibility of iatrogenic effects (Evans et al. 2016). Potential harms must be weighed against the benefits of the programme in order to inform practice (Public Health England 2015).
Conclusions
This is the first known systematic review of the feasibility of school-based MHD identification. The evidence base regarding feasibility is not robust enough to support programme scale-up, and between-study variation in definition and measurement of feasibility prohibits definitive conclusions about the most feasible identification model. Time, resource, and cost concerns are the most common barriers to feasibility. Education, health, and government agencies must work together to determine how to best allocate available resources to make the widespread adoption of identification programmes more feasible. Further research is needed regarding other possible contexts for identification, such as primary care or online screening.
References
Anderson, J. K., Ford, T., Soneson, E., Thompson Coon, J., Humphrey, A., Rogers, M., … Howarth, E. (2018). A systematic review of effectiveness and cost-effectiveness of school-based identification of children and young people at risk of, or currently experiencing mental health difficulties. Psychological Medicine, 1–11.
Armijo-Olivo, S., Stiles, C. R., Hagen, N. A., Biondo, P. D., & Cummings, G. G. (2012). Assessment of study quality for systematic reviews: a comparison of the Cochrane collaboration risk of bias tool and the effective public health practice project quality assessment tool: methodological research. Journal of Evaluation in Clinical Practice, 18, 12–18.
Barry, T. D., Sturner, R., Seymour, K., Howard, B., McGoron, L., Bergmann, P., et al. (2016). School-based screening to identify children at risk for attention-deficit/hyperactivity disorder: barriers and implications. Children’s Health Care, 45, 241–265.
Belfer, M. L. (2008). Child and adolescent mental disorders: the magnitude of the problem across the globe. Journal of Child Psychology and Psychiatry, 49, 226–236.
Bird, V. J., Le Boutillier, C., Leamy, M., Williams, J., Bradstreet, S., & Slade, M. (2014). Evaluating the feasibility of complex interventions in mental health services: standardised measure and reporting guidelines. The British Journal of Psychiatry, 204, 316–321. https://doi.org/10.1192/bjp.bp.113.128314.
Bowen, D. J., Kreuter, M., Spring, B., Cofta-Woerpel, L., Linnan, L., Weiner, D., et al. (2009). How we design feasibility studies. American Journal of Preventive Medicine, 36, 452–457. https://doi.org/10.1016/j.amepre.2009.02.002.
Breslau, J., Lane, M., Sampson, N., & Kessler, R. C. (2008). Mental disorders and subsequent educational attainment in a US national sample. Journal of Psychiatric Research, 42, 708–716.
Brett, J., Staniszewska, S., Mockford, C., Herron-Marx, S., Hughes, J., Tysall, C., & Suleman, R. (2014). Mapping the impact of patient and public involvement on health and social care research: a systematic review. Health Expectations, 17, 637–650.
Bruhn, A. L., Woods-Groves, S., & Huddle, S. (2014). A preliminary investigation of emotional and behavioral screening practices in K–12 schools. Education and Treatment of Children, 37, 611–634.
Burns, B. J., Costello, E. J., Angold, A., Tweed, D., Stangl, D., Farmer, E. M., & Erkanli, A. (1995). Children’s mental health service use across service sectors. Health Affairs, 14, 147–159.
Chartier, M., Stoep, A. V., McCauley, E., Herting, J. R., Tracy, M., & Lymp, J. (2008). Passive versus active parental permission: Implications for the ability of school-based depression screening to reach youth at risk. Journal of School Health, 78, 157–164.
Chatterji, P., Caffray, C. M., Crowe, M., Freeman, L., & Jensen, P. (2004). Cost assessment of a school-based mental health screening and treatment program in New York City. Mental Health Services Research, 6, 155–166.
Children and Young People’s Mental Health and Wellbeing Taskforce. (2015). Future in mind: promoting, protecting and improving our children and young people’s mental health and wellbeing. (Department of Health and NHS, Ed.). London.
Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., Petticrew, M., et al. (2008). Developing and evaluating complex interventions: new guidance. BMJ, 337, a1655. https://doi.org/10.1136/bmj.a1655.
Critical Appraisal Skills Programme (CASP). (2013). CASP qualitative checklist. Retrieved from http://www.casp-uk.net/checklists
Curtis, B. L., McLellan, A. T., & Gabellini, B. N. (2014). Translating SBIRT to public school settings: an initial test of feasibility. Journal of Substance Abuse Treatment, 46, 15–21.
D’Souza, C. M., Forman, S. F., & Austin, S. B. (2005). Follow-up evaluation of a high school eating disorders screening program: knowledge, awareness and self-referral. Journal of Adolescent Health, 36, 208–213.
Davis, S. D. (2014). Teacher nominations and the identification of social, emotional, and behavioral concerns in adolescence. PhD Thesis. Brigham Young University, Provo, Utah.
Day, L., Blades, R., Spence, C., & Ronicle, J. (2017). Mental health services and schools link pilots: evaluation report.
Department of Health, & Department for Education. (2017). Transforming children and young people’s mental health provision: a green paper. APS Group.
Domitrovich, C. E., Bradshaw, C. P., Poduska, J. M., Hoagwood, K., Buckley, J. A., Olin, S., et al. (2008). Maximizing the implementation quality of evidence-based preventive interventions in schools: a conceptual framework. Advances in School Mental Health Promotion, 1, 6–28.
Donohue, P., Goodman-Scott, E., & Betters-Bubon, J. (2015). Using universal screening for early identification of students at risk: a case example from the field. Professional School Counseling, 19, 133–143.
Eckert, T. L., Miller, D. N., DuPaul, G. J., & Riley-Tillman, T. C. (2003). Adolescent suicide prevention: school psychologists’ acceptability of school-based programs. School Psychology Review, 32, 57–76.
Eckert, T. L., Miller, D. N., Riley-Tillman, T. C., & DuPaul, G. J. (2006). Adolescent suicide prevention: gender differences in students’ perceptions of the acceptability and intrusiveness of school-based screening programs. Journal of School Psychology, 44, 271–285.
Edmunds, S., Garratt, A., Haines, L., & Blair, M. (2005). Child Health Assessment at School Entry (CHASE) project: evaluation in 10 London primary schools. Child: Care, Health and Development, 31, 143–154.
Eklund, K., & Dowdy, E. (2014). Screening for behavioral and emotional risk versus traditional school identification methods. School Mental Health, 6, 40–49.
Evans, R., Russell, A. E., Mathews, F., Parker, R., The Self-Harm and Suicide in Schools GW4 Research Collaboration, & Janssens, A. (2016). Self-harm in schools: research project summary. Retrieved from http://medicine.exeter.ac.uk/research/healthresearch/childhealth/child-mental-health/
Fazel, M., Hoagwood, K., Stephan, S., & Ford, T. (2014). Mental health interventions in schools in high-income countries. The Lancet Psychiatry, 1, 377–387. https://doi.org/10.1016/S2215-0366(14)70312-8.
Fox, C. K., Eisenberg, M. E., McMorris, B. J., Pettingell, S. L., & Borowsky, I. W. (2013). Survey of Minnesota parent attitudes regarding school-based depression and suicide screening and education. Maternal and Child Health Journal, 17, 456–462.
Frith, E. (2017). Access and waiting times in children and young people’s mental health services. Education Policy Institute.
Gilmore, B., Brown, D. L. R., Van Midden, N., Mead-McEwan, L., Bretherton, M., Broere, C., et al. (2004). Seeds for success He Kakano Ka Puawai: school entry behaviour screening and intervention. Kairaranga, 5, 28–35.
Gould, M. S., Marrocco, F. A., Kleinman, M., Thomas, J. G., Mostkoff, K., Cote, J., & Davies, M. (2005). Evaluating iatrogenic risk of youth suicide screening programs: a randomized controlled trial. JAMA, 293, 1635–1643. https://doi.org/10.1001/jama.293.13.1635.
Green, H., McGinnity, Á., Meltzer, H., Ford, T., & Goodman, R. (2005). Mental health of children and young people in Great Britain, 2004. Basingstoke: Palgrave Macmillan.
Hallfors, D., Brodish, P. H., Khatapoush, S., Sanchez, V., Cho, H., & Steckler, A. (2006a). Feasibility of screening adolescents for suicide risk in “real-world” high school settings. American Journal of Public Health, 96, 282–287.
Hallfors, D., Cho, H., Brodish, P. H., Flewelling, R., & Khatapoush, S. (2006b). Identifying high school students “at risk” for substance use and other behavioral problems: implications for prevention. Substance Use & Misuse, 41, 1–15.
Hallfors, D., Khatapoush, S., Kadushin, C., Watson, K., & Saxe, L. (2000). A comparison of paper vs computer-assisted self interview for school alcohol, tobacco, and other drug surveys. Evaluation and Program Planning, 23, 149–155.
Howarth, E., Devers, K., Moore, G., O’Cathain, A., & Dixon-Woods, M. (2016). Contextual issues and qualitative research. In Raine R., Fitzpatrick R., Barratt H., Bevan G., Black N., Boaden R., … Zwarenstein M. (Eds.), Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. (Vol. 4, pp. 105–120). Health Services and Delivery Research.
Humphrey, N., & Wigelsworth, M. (2016). Making the case for universal school-based mental health screening. Emotional and Behavioural Difficulties, 21, 22–42. https://doi.org/10.1080/13632752.2015.1120051.
Jensen, P. S., Goldman, E., Offord, D., Costello, E. J., Friedman, R., Huff, B., et al. (2011). Overlooked and underserved: “action signs” for identifying children with unmet mental health needs. Pediatrics, 128, 970–979.
Jokela, M., Ferrie, J., & Kivimäki, M. (2009). Childhood problem behaviors and death by midlife: the British National Child Development Study. Journal of the American Academy of Child & Adolescent Psychiatry, 48, 19–24.
Kalafat, J., & Elias, M. (1994). An evaluation of a school-based suicide awareness intervention. Suicide and Life-threatening Behavior, 24, 224–233.
Kirk, M. (2014). Screening for social emotional difficulties among elementary students: a comparison of screening methods and teacher perceptions. Department of Communication Disorders and Counseling, School, and Educational Psychology. Indiana State University, Terre Haute, Indiana, USA.
Kohn, R., Saxena, S., Levav, I., & Saraceno, B. (2004). The treatment gap in mental health care. Bulletin of the World Health Organization, 82, 858–866.
Kuo, E. S., Vander Stoep, A., Herting, J. R., Grupp, K., & McCauley, E. (2013). How to identify students for school-based depression intervention: can school record review be substituted for universal depression screening? Journal of Child and Adolescent Psychiatric Nursing. https://doi.org/10.1111/jcap.12010.
Levitt, J. M., Saka, N., Romanelli, L. H., & Hoagwood, K. (2007). Early identification of mental health problems in schools: the status of instrumentation. Journal of School Psychology, 45, 163–191.
Lyon, A. R., Maras, M. A., Pate, C. M., Igusa, T., & Vander Stoep, A. (2016). Modeling the impact of school-based universal depression screening on additional service capacity needs: a system dynamics approach. Administration and Policy in Mental Health and Mental Health Services Research, 43, 168–188.
McManus, S. B. (2009). Enhancing positive early childhood mental health outcomes in young children. Eugene, OR: University of Oregon.
Miller, D. N., Eckert, T. L., DuPaul, G. J., & White, G. P. (1999). Adolescent suicide prevention: acceptability of school-based programs among secondary school principals. Suicide and Life-threatening Behavior, 29, 72–85.
Moore, G. F., Audrey, S., Barker, M., Bond, L., Bonell, C., Hardeman, W., et al. (2015). Process evaluation of complex interventions: Medical Research Council guidance. BMJ, 350, h1258.
Nadeem, E., Santiago, C. D., Kataoka, S. H., Chang, V. Y., & Stein, B. D. (2016). School personnel experiences in notifying parents about their child’s risk for suicide: lessons learned. Journal of School Health, 86, 3–10.
NatCen Social Research, & The National Children’s Bureau Research and Policy Team. (2017). Supporting mental health in schools and colleges. (Department for Education, Ed.). London. Retrieved from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/634725/Supporting_Mental-Health_synthesis_report.pdf
National Health Service England. (2016). The five year forward view for mental health. a report from the independent Mental Health Taskforce to the NHS in England. (NHS England, Ed.). London.
Nemeroff, R., Levitt, J. M., Faul, L., Wonpat-Borja, A., Bufferd, S., Setterberg, S., & Jensen, P. S. (2008). Establishing ongoing, early identification programs for mental health problems in our schools: a feasibility study. Journal of the American Academy of Child & Adolescent Psychiatry, 47, 328–338.
Ozer, E. J. (2006). Contextual effects in school-based violence prevention programs: a conceptual framework and empirical review. Journal of Primary Prevention, 27, 315–340.
Patel, V., Flisher, A. J., Hetrick, S., & McGorry, P. (2007). Mental health of young people: a global public-health challenge. Lancet, 369, 1302–1313. https://doi.org/10.1016/S0140-6736(07)60368-7.
Popay, J., Roberts, H., Sowden, A., Petticrew, M., Arai, L., Rodgers, M., et al. (2006). Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC Methods Programme (Version 1).
Poulsen, K. M., McDermott, B. M., Wallis, J., & Cobham, V. E. (2015). School-based psychological screening in the aftermath of a disaster: are parents satisfied and do their children access treatment? Journal of Traumatic Stress, 28, 69–72.
Public Health England. (2015). Criteria for appraising the viability, effectiveness and appropriateness of a screening programme. Retrieved from https://www.gov.uk/government/publications/evidence-review-criteria-national-screening-programmes/criteria-for-appraising-the-viability-effectiveness-and-appropriateness-of-a-screening-programme
Robinson, J., Yuen, H. P., Martin, C., Hughes, A., Baksheev, G. N., Dodd, S., et al. (2011). Does screening high school students for psychological distress, deliberate self-harm, or suicidal ideation cause distress - and is it acceptable? Crisis. https://doi.org/10.1027/0227-5910/a000087.
Romer, N. (2012). Mental health screening within a tiered model: investigation of a strength-based approach. Department of Special Education and Clinical Sciences: University of Oregon.
Sayal, K., Hornsey, H., Warren, S., MacDiarmid, F., & Taylor, E. (2006). Identification of children at risk of attention deficit/hyperactivity disorder. Social Psychiatry and Psychiatric Epidemiology, 41, 806–813.
Scherff, A. R., Eckert, T. L., & Miller, D. N. (2005). Youth suicide prevention: a survey of public school superintendents’ acceptability of school-based programs. Suicide and Life-threatening Behavior, 35, 154–169.
Shortt, A. L., Fealy, S., & Toumbourou, J. W. (2006). The mental health risk assessment and management process (RAMP) for schools: II. Process evaluation. Australian E-Journal for the Advancement of Mental Health, 5, 295–306.
Snell, T., Knapp, M., Healey, A., Guglani, S., Evans-Lacko, S., Fernandez, J., et al. (2013). Economic impact of childhood psychiatric disorder on public sector services in Britain: estimates from national survey data. Journal of Child Psychology and Psychiatry, 54, 977–985.
Vander Stoep, A., McCauley, E., Thompson, K. A., Herting, J. R., Kuo, E. S., Stewart, D. G., et al. (2005). Universal emotional health screening at the middle school transition. Journal of Emotional and Behavioral Disorders, 13, 213–223.
Walker, H. M., Severson, H. H., Nicholson, F., Kehle, T., Jenson, W. R., & Clark, E. (1994). Replication of the Systematic Screening for Behavior Disorders (SSBD) procedure for the identification of at-risk children. Journal of Emotional and Behavioral Disorders, 2, 66–77.
Wallerstein, N., & Duran, B. (2010). Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity. American Journal of Public Health. https://doi.org/10.2105/AJPH.2009.184036.
Weiner, B. J., Lewis, C. C., Stanick, C., Powell, B. J., Dorsey, C. N., Clary, A. S., et al. (2017). Psychometric assessment of three newly developed implementation outcome measures. Implementation Science, 12, 108. https://doi.org/10.1186/s13012-017-0635-3.
Weist, M. D., Rubin, M., Moore, E., Adelsheim, S., & Wrobel, G. (2007). Mental health screening in schools. Journal of School Health, 77, 53–58.
Whitney, S. D., Renner, L. M., Pate, C. M., & Jacobs, K. A. (2011). Principals’ perceptions of benefits and barriers to school-based suicide prevention programs. Children and Youth Services Review, 33, 869–877.
Williams, S. N. (2013). Bring in universal mental health checks in schools. BMJ, 347, f5478.
Wolf, M. M. (1978). Social validity: the case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214.
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Research Involving Human Participants and/or Animals
This article does not contain any studies with human participants or animals performed by any of the authors.
Keywords
- Mental health
- Schools
- Identification
- Screening
- Feasibility