Youth mental illness represents an urgent public health concern requiring immediate action (Colizzi et al., 2020; Collishaw & Sellers, 2020). The World Health Organization (2020) estimates that the aggregated global prevalence of mental health disorders among youth and young adults (i.e., those aged 10–25 years) ranges from 10 to 20%. International rates of youth mental health symptoms and disorder have also risen sharply over recent decades (Keyes et al., 2019; Merikangas et al., 2009) and following the COVID-19 pandemic (Power et al., 2020; Stewart et al., 2023). Youth mental illness can result in immediate intrapersonal and interpersonal ramifications and, if unaddressed, can trigger a long-term cascading disability trajectory, resulting in costly personal, social, and economic outcomes (World Health Organization, 2021). Given the emerging and sub-diagnostic nature of many mental illness pathways, adolescence and emerging adulthood are opportune periods for preventative action. However, young adults are less likely to seek professional support for their mental health than those in older age groups (Babajide et al., 2020; Slade et al., 2009).

To effectively address these mental health concerns, there is a growing emphasis on youth-friendly, stigma-free, and accessible digital interventions (Hollis et al., 2017; Lehtimaki et al., 2021). Digital mental health interventions (DMHIs) have emerged as vital resources, especially for young people in remote areas, those new to mental health services, and those seeking privacy and safety (Hollis et al., 2017; Lehtimaki et al., 2021; Pretorius et al., 2019; Schueller & Torous, 2020; Wilson, 2022). These DMHIs, encompassing online psychological interventions for individual or group therapy, and mobile services using calls, video meetings, or messaging, have evolved significantly since the 1980s (Burns et al., 2014; Marsac & Weiss, 2019; McNamee et al., 1989). Today’s platforms offer interactive, personalized content in both synchronous and asynchronous formats, aligning with the tech-savvy nature of today’s youth (Aschbrenner et al., 2019; Lattie et al., 2022; Philippe et al., 2022; Pokowitz et al., 2023).

In the present review, DMHIs refer to psychological interventions, for mental health conditions or symptoms, delivered online individually or to a group. They also include mobile phone services or applications involving voice calls, video meetings or text/chat messaging and can be live, automated, or pre-recorded.

The Rise of Digital Mental Health Interventions

Initially, DMHIs were primarily designed to overcome physical and economic barriers to accessing healthcare while leveraging the ubiquity of internet, mobile phone, and computer access. The COVID-19 pandemic accelerated the expansion and uptake of DMHIs, mainly due to closures of in-person mental health providers (Mahoney et al., 2021). The pandemic also increased the incidence of mental ill-health, placing additional burden on the healthcare system and further driving demand for telemedicine services and DMHIs (McLean et al., 2021). The confluence of these factors has resulted in a substantial increase in the development, uptake, and research of DMHIs during the COVID-19 pandemic and beyond (Celia et al., 2022; Cerutti et al., 2022).

The evidence for the clinical efficacy of DMHIs is promising, with many of these interventions shown to be equivalent to their in-person counterparts (Andrews et al., 2018). Research examining digital mental health platforms suggests promise for improved service accessibility and engagement, with more people being treated at a lower cost (Lattie et al., 2022; Sherifali et al., 2018). However, while the evidence base for the clinical benefits of DMHIs is strong for adults, youth-specific DMHIs remain an emerging field of research, with calls for greater research enquiry (Lattie et al., 2022). DMHIs are especially well-suited to young people, who tend to be technologically savvy and early adopters of such approaches (Aschbrenner et al., 2019; Giovanelli et al., 2020). DMHIs have also been found to be particularly well suited to people deemed to be at ‘less risk’ (i.e., not in an acute psychiatric emergency and not currently meeting clinical diagnostic thresholds) (Paganini et al., 2018; Rigabert et al., 2020), which includes universal, selective, and indicated prevention. DMHIs could also be particularly useful for people who face stigma when accessing mental health services or for youth reluctant to ask parents for consent to access these services (Lattie et al., 2022). Given the promise these digital interventions hold, it is unsurprising that digital mental health is now a burgeoning field of study.

Obstacles to Optimized Digital Health Services

Despite the recent rapid growth and identified benefits of self-guided (i.e., 100% self-guided digital delivery) DMHIs, concerns regarding their sustained usage, appropriate utilization, and ongoing efficacy have been raised (Mehrotra et al., 2017; Opie et al., 2023; Schueller et al., 2017). Self-guided DMHIs appear to have high attrition rates, limiting the impact of such interventions (Alqahtani & Orji, 2019; Karyotaki et al., 2015). Furthermore, there is currently a limited understanding of the factors contributing to intervention attrition and, specifically, of how retention rates can be improved (Alqahtani & Orji, 2019; Schmidt et al., 2019). Ethical concerns pertaining to these DMHIs are also important to consider, including the storage and sharing of personal data and risk management associated with distant, independent access (Galvin & DeMuro, 2020; Wykes et al., 2019). Additionally, person-specific influences, such as motivation and capability, can affect intervention usage (or lack thereof) and remain under-researched (Cross et al., 2022). These influences may include low digital literacy, negative prior user experience, or costs associated with internet or program access. Such limitations may prevent users from reaping the full benefits of these interventions (Schueller et al., 2017).

DMHIs with a Guided Component

To address these problems, researchers have turned to DMHIs with guided support, which embed human contact within intervention delivery. Such guided support aims to enhance socioemotional outcomes and engagement, and to provide clinical and technical support (Heber et al., 2017; Werntz et al., 2023). DMHIs can be partially guided (i.e., a combination of guided and self-guided intervention elements) or completely guided (i.e., 100% delivered by human support). Support can be delivered synchronously (i.e., live support occurring in real-time; e.g., videoconferencing, phone calls) and/or asynchronously (i.e., delayed; e.g., email, text message), by an array of human support providers, including qualified mental health clinicians (e.g., psychologists) and non-clinician or paraprofessional supporters (e.g., lived experience peer support workers, lay counselors, volunteers, or students). Of note, these guided supports are heterogeneous, varying in support content, amount, and timing, for example, which may introduce measurement error when attempting to compare interventions (Harrer et al., 2019).

Existing Systematic and Meta-analytic Reviews

While not youth-specific, prior meta-analytic evidence demonstrates the efficacy of DMHIs with partially and/or fully guided support for depression (Karyotaki et al., 2021), anxiety (Olthuis et al., 2016a), and post-traumatic stress disorder (Olthuis et al., 2016b). Further, meta-analytic evidence has shown DMHIs with human support to be equivalent to their face-to-face counterparts (Andrews et al., 2018; Cuijpers et al., 2019). One meta-analysis compared the efficacy of DMHIs with non-clinician support to that of self-guided and clinician-guided DMHIs (Leung et al., 2022). Notably, the authors reported no significant difference in efficacy between clinician-guided and non-clinician-guided DMHIs. They did, however, find a significant difference between self-guided and non-clinician-guided DMHIs, favoring non-clinician guided support, with non-clinician-guided DMHIs showing significantly greater post-treatment efficacy relative to controls. However, these results were based on studies that included participants aged 16–64 years and thus were not youth-specific.

Youth Populations

When looking at youth populations, meta-analytic and systematic review evidence remains mixed. Meta-analytic evidence has reported varying effect sizes (Hedges’ g range 0.46 to 0.94; Cohen’s d range 0.14 to 0.33) when comparing DMHIs against a control condition (Bennett et al., 2019; Ebert et al., 2015; Garrido et al., 2019; Ma et al., 2021). Systematic reviews have also examined the efficacy of guided, partially guided, and unguided youth-specific DMHIs, with findings indicating overall improvements in depression, stress, and anxiety outcomes (Hollis et al., 2017; Lehtimaki et al., 2021; Zhou et al., 2021); however, inconsistent effects have been identified when factoring in different control conditions (e.g., active controls, which receive an alternative intervention concurrent to the intervention group, versus inactive controls, which receive no intervention beyond treatment as usual) (Hollis et al., 2017; Lehtimaki et al., 2021; Zhou et al., 2021). Additionally, such differences have been attributed to within-study or within-intervention heterogeneity in sampling, delivery, and content (Lehtimaki et al., 2021; Zhou et al., 2021).

Indicated Youth Populations

The scope of youth populations in DMHI research varies. Notably, van Doorn et al. (2021) uniquely concentrated on indicated preventive interventions for youth exhibiting emerging symptoms, unlike other reviews that merged universal and indicated prevention populations (Ebert et al., 2015; Harrer et al., 2019). This approach highlighted that DMHIs have a more pronounced effect on indicated youth with emerging symptoms than on universal youth samples without symptoms (Conley et al., 2016).

Given the mixed and emerging findings from systematic and meta-analytic reviews of youth DMHI efficacy, it is unsurprising that there have been calls for further research into the efficacy of guided human support within DMHIs (Bennett et al., 2019; Ebert et al., 2015; Garrido et al., 2019).

The Need for Further Systematic Examination

Considering the limitations and advantages of such DMHIs, their rapid growth warrants further systematic examination to build upon the existing literature that has supported their efficacy. While DMHIs appear to work better than no intervention to improve depression in young people, they may only be of clinical significance when use is highly supervised (Garrido et al., 2019). The ability of DMHIs to deliver automated and self-directed interventions is frequently argued as a way to improve access to mental health services and avoid stigma; however, inconsistencies in intervention efficacy have been reported (Baumeister et al., 2014; Dear et al., 2016; Hollis et al., 2015; Josephine et al., 2017).

While there is a plethora of research on the benefits and disadvantages of fully self-guided interventions, as described above, further research is needed to understand the efficacy of different types of guided DMHIs, including synchronous and asynchronous delivery methods and the comparative efficacy of programs delivered via various channels (Rogers et al., 2021). Attention to socioemotional data is also needed to provide efficacious and impactful interventions for young people (Garrido et al., 2019; Lehtimaki et al., 2021; Rogers et al., 2021). Taken together, existing systematic reviews have highlighted the importance of guided support in DMHIs for young people.

Previous systematic reviews (Baumeister et al., 2014; Harrer et al., 2019) have also not fully explored the specific elements and characteristics that contribute to the efficacy of DMHIs. Recognizing and understanding these key characteristics is essential for guiding future research, enhancing the effectiveness of current digital tools, and employing the latest technologies more effectively to support this vulnerable population.

As a research priority, there is a recognized need for more systematic research into the impact of human-guided DMHIs. This includes examining the impact of various types of support personnel, including clinicians, trained laypersons, and peers with lived experience, as well as the different levels of guidance they provide, from partially to fully guided support (Hollis et al., 2017; Ma et al., 2021). Additionally, research gaps remain in understanding the effects of synchronous and asynchronous DMHIs on clinical effectiveness and treatment adherence (Hollis et al., 2017). Addressing these gaps and the limitations of previous systematic reviews is essential for the development of effective and accessible mental health care.

The Current Study

To address the limitations identified in existing systematic reviews, as detailed above, the current review expands upon the literature by evaluating the body of research on youth-specific DMHIs that offer some level of guidance. Our approach includes identifying and synthesizing all youth-focused DMHIs that are either fully or partially guided by human support. The objective is to comprehensively report on the socioemotional clinical efficacy outcomes of these guided and partially guided youth DMHIs.

Methods

This systematic review utilized the Joanna Briggs Institute (JBI) methodological framework (Aromataris & Munn, 2020). Our reporting adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA; Page et al., 2021). See Online Resources 1 for a complete PRISMA checklist. A protocol for the present review was prospectively registered in PROSPERO (March 23, 2023; CRD42023405812).

The review methodology was co-designed and conducted alongside Beyond Blue, Australia’s most well-known and visited mental health organization. The review was also conducted by several lived experience (consumer) academics. Thus, this review was informed by consumer principles, acknowledging the meaningful contributions of people with lived experience, whose experiences and perspectives are to be respected and valued. Collectively, the current review aimed to bring together academic, consumer, and mental health service skills, experiences, and voices.

Inclusion Criteria

The Population, Intervention, Comparator, Outcome, and Study design (PICOS) framework (McKenzie et al., 2019) guided the eligibility criteria (see Table 1). If necessary information was not reported in-text, the study was excluded. Only literature written in English was included.

Table 1 PICOS framework

Types of Sources

The search was limited to contemporary literature. Full-text references in English were searched from 14 March 2018 to 14 February 2023. Date restrictions were applied to ensure a contemporary examination of the literature, given rapid recent technological advancements and associated technological redundancies, and because of the dearth of available literature pre-2018. This decision also allowed for evaluations of comparable digital youth-specific interventions.

Search Strategy

We followed a four-step search strategy. An initial limited search of PsycINFO was conducted, followed by analysis of the text contained in the titles and abstracts and of the index terms used to describe the articles. This identified the keywords and index terms used for a second search across all databases covered by this study. The second search was a systematic search of five electronic databases: PsycINFO (Ovid), MEDLINE (Ovid), CINAHL (EBSCO), Cochrane Central Register of Controlled Trials (CENTRAL; via Cochrane Library). See Online Resources 2 for a complete search strategy (concepts and terms) for all included databases. The third search was an examination of unpublished and grey literature, including dissertations and theses identified via ProQuest Dissertations and Theses. Global trial registries were also searched to identify ongoing or completed but unpublished studies; these included the Australian New Zealand Clinical Trials Registry (https://www.anzctr.org.au/) and ClinicalTrials.gov. The first 20 pages of Google search results were also screened. See Online Resources 3 for a complete grey literature search strategy. Finally, to ensure a comprehensive search, reference lists of all eligible studies and pertinent systematic reviews were manually searched to identify further studies meeting inclusion criteria. Authors were not contacted for missing data.

Study Screening and Selection

All records were imported into EndNote (2013), where duplicates were removed. Remaining records were imported into Covidence (Veritas Health Innovation, 2020) and screened at title and abstract level by four reviewers (JO, AV, SM, EW). Studies were then screened at full-text level. At both title and abstract and full-text levels, 75% of records were double screened.

Data Extraction

Data extraction was completed by four independent reviewers (JO, AV, SM, EW) with disagreements resolved through conferencing. Data from each full-text article were charted by one reviewer and checked by a second independent reviewer. Data were extracted into a priori standardized data extraction forms, consistent with Tables 3, 4 and 5.

Quality Assessment

To appraise the methodological quality of included papers, studies were ranked based upon study design. All studies were appraised via the Quality Assessment Tool for Quantitative Studies (EPHPP, 2010); quality appraisal checklist response options were ‘yes’, ‘no’, ‘unclear’, or ‘not applicable’. Grey literature was critically assessed using the Authority, Accuracy, Coverage, Objectivity, Date, and Significance (AACODS) checklist (Tyndall, 2010). Upon appraisal completion, studies were labelled as ‘weak’, ‘moderate’, or ‘high’ in terms of their methodological quality and were subsequently grouped into low risk (> 75% of quality criteria met), moderate risk (50–75% of criteria met), or high risk of bias (< 50% of criteria met). An a priori decision was made not to exclude any record based on study quality. One author assessed study quality for all papers, and a second author independently assessed the study quality of 25% of the papers (IRR = 75% agreement). All disagreements were resolved through conferencing.

Synthesis

Included studies were categorized under sub-headings, consistent with Tables 2, 3, and 4. To identify socioemotional outcome efficacy and user experience outcomes, we collated and categorized the extracted intervention characteristics and outcomes. Outcomes of examination were data-driven, wherein we privileged frequently reported outcomes. Due to data heterogeneity, a meta-analysis was not feasible, and results were narratively synthesized. If two included studies reported on an identical outcome, only data from the study with the largest sample size were included for that outcome. Where a dissertation and a published record reported on an identical study, the published paper was included and the dissertation excluded, as the published paper had passed the peer-review process.

Table 2 Study quality of included studies
Table 3 Characteristics of included studies
Table 4 Characteristics of interventions

Outcomes

Socioemotional outcomes examined were depression, anxiety, stress, wellbeing, mindfulness, and quality of life.

Results

Study selection

The systematic literature search yielded 22,482 records (after removal of duplicates), of which 22,450 were excluded at title/abstract (n = 21,817) and full-text level (n = 633). Double screening at title and abstract level resulted in inter-rater reliability (IRR) of 96% (κ = 0.43) for published literature and 98% (κ = 0.45) for unpublished literature. At full-text level, double-screening IRR was 98% (κ = 0.74) for published literature and 92.31% (κ = 0.75) for unpublished literature. A total of 32 quantitative primary studies met all inclusion criteria and were included in the present review. Figure 1 details the results at each level and reasons for exclusion.
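As a brief illustrative note (the chance-agreement value below is assumed for illustration and is not reported by the review), high percent agreement can coexist with moderate κ values because Cohen’s kappa corrects for chance agreement:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the observed proportion of agreement and $p_e$ is the agreement expected by chance. When one screening decision (exclusion) dominates, $p_e$ is large; for example, with $p_o = 0.96$ and an assumed $p_e = 0.93$, $\kappa = (0.96 - 0.93)/(1 - 0.93) \approx 0.43$.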

Fig. 1
figure 1

PRISMA diagram of the phases of the review process and record selection

Study Quality Assessment

Overall, the quality of included published studies was moderate (n = 15, 50%), with some of high quality (n = 9, 30%) and the remainder of low quality (n = 6, 20%). The quality of included grey literature (n = 2; Koltz, 2022; Wahlund, 2022) was strong (low risk of bias). See Fig. 2 and Table 2 for visual and tabular representations of study quality, respectively.

Fig. 2
figure 2

Visual representation of study quality

Study Characteristics

Most studies were published studies (n = 30) and two were unpublished dissertations (Koltz, 2022; Wahlund, 2022). Table 3 provides a detailed description of included studies. All included studies reported on pre-post intervention outcomes, with 13 studies including additional follow-up assessments. Included studies predominantly followed a randomized controlled trial (RCT) design (n = 18, 56%), with 11 single-arm pre-post experimental studies (34%). Thirteen studies (41%) included a single comparison group (active = 6; inactive = 7), while eight studies (25%) included two or more comparison groups, comprising both inactive and active controls.

Two studies reported on diverse populations. Schueller et al. (2019) included a sample of young people experiencing homelessness who were gender diverse or questioning. The intervention sample in Radovic et al. (2021) unintentionally included approximately one third (n = 6/20) of participants who did not identify as male or female. Of the 32 included studies, only 19% (n = 6) reported on gender diverse communities (e.g., non-binary) and/or sexual orientation. No study focused specifically on under-resourced communities or socioeconomic status.

Studies were most commonly from the United States (n = 7, 22%) and Canada (n = 4, 13%). Three studies each were from China, Finland, Germany, and the Netherlands (9% each), and two each from Australia, Italy, Sweden, and the United Kingdom (6% each), while one study was from Indonesia (3%).

Participant Characteristics

Included study sample size was highly variable, ranging from 4 to 5568 participants, with a mean sample size of 317. Excluding studies that did not report sample age range (n = 7), the mean participant age was 20.14 years (range 12–46). Eight studies included only participants aged ≥ 18 years. Study participants were predominantly female, with a mean of 72.28% female participants across studies (n = 29). All participants displayed emerging subclinical symptomatology.

Intervention Characteristics

From the 32 included studies, we identified 29 unique brief digital mental health interventions that were entirely or partially guided (i.e., ACT guide; BREATHE (6-module version); BREATHE (8-module version); BIP Worry; Entourage; ENJOY + Sense-It; ICare Prevent; Inroads; inSPIRE; I-BA; iSOMA; Tellmi; MIND; Moodpep; Pocket helper + Purple Chill + Slumber time; RESPOND; Rileks; SilverCloud; Step-by-step; StudiCare-M; SOVA; UniWellbeing; WeChat mini; Youth COMPASS; Unnamed [n = 5]). Three interventions were reported on multiple times in separate studies: Youth COMPASS (Lappalainen et al., 2021, 2023), iSOMA-guided (Hennemann et al., 2022a, 2022b), and BREATHE (6-module version) (O'Connor et al., 2022; Radomski et al., 2020). Table 4 provides a detailed description of included interventions.

Intervention engagement periods ranged from 20 days to 12 weeks (M = 7.34 weeks). Where reported, the average number of modules per intervention was 6.22 (range 3–12, n = 22 interventions), and the average number of modules intended to be completed per week was 1.87 (range 1–6, n = 15 interventions). The mean number of sessions completed by study participants was 4.81 modules (n = 6 studies), and the mean rate of completion (i.e., proportion of participants completing all modules) was 42.56% (n = 11 studies). Technology delivery mode was mixed: 14 interventions were web-based, four were mobile app-based (Ravaccia et al., 2022; Schueller et al., 2019; Sit et al., 2022; Sun et al., 2022), two were delivered via telehealth (i.e., Zoom/videoconferencing software; Harra & Vargas, 2023; Novella et al., 2022), and nine via a combination of delivery methods.

Of the 29 guided interventions, four (14%) offered solely human support, while 25 (86%) were partially guided and included a combination of human support and self-directed program elements. Twelve interventions offered human support via asynchronous methods, and four via synchronous contact. The remaining 13 interventions provided human support via a combination of asynchronous and synchronous methods. Of the interventions reported across multiple studies [n = 3 interventions; Youth COMPASS (Lappalainen et al., 2021, 2023), iSOMA-guided (Hennemann et al., 2022a, 2022b), and BREATHE (O'Connor et al., 2022; Radomski et al., 2020)], in no cases were the human support methods compared. Mental health professionals were the primary providers of guided intervention content (n = 12 interventions, 43%), followed by interventions delivered by clinicians and psychology students together (n = 6 interventions, 21%), and peers (n = 3; Harra & Vargas, 2023; Klimczak et al., 2023; Rodriguez et al., 2021). Researchers were the sole human support for two interventions (Lappalainen et al., 2021; Sun et al., 2022). Peers and clinicians together delivered guidance for two interventions (Rice et al., 2020; van Doorn et al., 2022), while clinical psychology students provided guidance in one intervention (Garnefski & Kraaij, 2023).

Regarding theoretical orientation, the most common intervention framework was cognitive behavioral therapy (CBT; n = 16 interventions), followed by acceptance and commitment therapy (ACT; n = 5 interventions), mindfulness (n = 3; Küchler et al., 2023; Rodriguez et al., 2021; Sun et al., 2022), and positive psychology models (n = 2; Schueller et al., 2019; van Doorn et al., 2022). Four interventions used multiple theoretical frameworks (Koltz, 2022; Küchler et al., 2023; Schueller et al., 2019; van Doorn et al., 2022). Two studies did not report on the therapeutic orientation of the intervention (Harra & Vargas, 2023; Ravaccia et al., 2022). A single intervention drew on the frameworks of ACT and mindfulness (Küchler et al., 2023), while others drew on CBT and positive psychology (Schueller et al., 2019; van Doorn et al., 2022) or social cognitive models (Koltz, 2022).

Socioemotional Outcome

Informed by observed outcome frequency, the primary socioemotional outcomes examined were anxiety, depression, stress, wellbeing, mindfulness, and quality of life. Most studies (n = 29/31, 93.55%) examined several socioemotional outcomes; only two studies examined a single primary outcome (Koltz, 2022; Radomski et al., 2020). Anxiety and depression were the most commonly examined socioemotional outcomes (n = 23, 72% each), followed by stress (n = 10, 31%), well-being (n = 5, 16%), mindfulness (n = 3, 9%), and quality of life (n = 3, 9%).

Anxiety Symptoms

Twenty-three studies assessed anxiety symptoms, including three studies that assessed social anxiety (Novella et al., 2022; Rice et al., 2020; Stapinski et al., 2021) and one assessing academic anxiety (Radomski et al., 2020). Studies assessed anxiety symptoms via the GAD (n = 11), DASS-21 (n = 3; Juniar et al., 2020; Klimczak et al., 2023; Rodriguez et al., 2021), MASC (n = 3; O’Connor et al., 2020; O’Connor et al., 2022; Radomski et al., 2020), BAI (n = 2; Cerutti et al., 2022; Novella et al., 2022), STAI (n = 2; Lappalainen et al., 2023; Celia et al., 2022), SIAS (n = 2; Rice et al., 2020; Stapinski et al., 2021), CCAPS (n = 1; Novella et al., 2022), GRCS (n = 1; Radomski et al., 2020), SCID-I (n = 1; Cook et al., 2019), PSWQ (n = 1; Cook et al., 2019), MASQ (n = 1; Harra & Vargas, 2023), LSAS (n = 1; Rice et al., 2020), BFNE (n = 1; Rice et al., 2020), and ASI (n = 1; Rice et al., 2020). The number of intervention sessions ranged from 3 to 12 (M = 6.39). The efficacy of guided and partially guided digital delivery interventions in treating anxiety symptoms was compared to a control group(s) in 17 studies, 16 of which were RCTs and one of which was an experimental study (3-arm; Pescatello et al., 2021). Studies included either inactive controls (n = 7), active controls (n = 5), or a mix of both (n = 5).

A number of studies reported that intervention groups showed significantly greater short- to long-term anxiety symptom reductions when compared to either an inactive (Küchler et al., 2023; O'Connor et al., 2022; Radomski et al., 2020; Radovic et al., 2021) or active control group (Küchler et al., 2023; Stapinski et al., 2021; Sun et al., 2022; p values ≤ 0.001–0.43). In contrast, no significant differences between intervention and active/inactive control on anxiety outcomes were reported for many interventions (Cook et al., 2019; Harra & Vargas, 2023; Hennemann et al., 2022b; Karyotaki et al., 2022; Klimczak et al., 2023; Lappalainen et al., 2023; Novella et al., 2022; Pescatello et al., 2021; Rodriguez et al., 2021), demonstrating substantial heterogeneity across anxiety results.

Six studies used single-arm pre-post designs to examine intervention effects on anxiety symptoms. Five (83.33%) found a significant reduction in anxiety symptoms from pre- to post-intervention (p < 0.001 to p = 0.024), while one study (Rice et al., 2020) found significant reductions in social anxiety across various measures (LSAS, SIAS, BFNE, ASI; p values ≤ 0.001).

When contrasting studies of interventions entirely guided by human support (n = 4; Celia et al., 2022; Cerutti et al., 2022; Harra & Vargas, 2023; Novella et al., 2022) with those of partially guided interventions (n = 19), half of the entirely guided interventions resulted in significant pre- to post-intervention anxiety declines (Celia et al., 2022; Cerutti et al., 2022), while the other half found non-significant group differences in pre- to post-intervention or follow-up changes across various domains of anxiety, including social and generalized anxiety (Harra & Vargas, 2023; Novella et al., 2022). Partially guided interventions (n = 19) also yielded heterogeneous results, with only 53% (n = 10) of studies favoring the intervention over control. Seven interventions were asynchronous, four were synchronous, and eight were both. The use of synchronous or asynchronous guidance did not appear to influence anxiety outcomes.

For studies assessing anxiety symptoms, human support was provided in interventions by either mental health professionals (n = 10), mental health professionals and students together (n = 3; Pescatello et al., 2021; Radovic et al., 2021; Sit et al., 2022), researchers and students together (n = 2; Karyotaki et al., 2022; Lappalainen et al., 2023), peers (n = 3; Harra & Vargas, 2023; Klimczak et al., 2023; Rodriguez et al., 2021), researchers (n = 2; O'Connor et al., 2020; Sun et al., 2022), paraprofessionals or lay workers (n = 2; O'Connor et al., 2022; Radomski et al., 2020), or peers and mental health professionals together (n = 1; Rice et al., 2020). Support personnel did not appear to influence anxiety symptom outcomes.

Depression Symptoms

Of the 23 studies that assessed depressive symptoms, most used the PHQ-9 (n = 14), followed by the DASS-21 (n = 3; Klimczak et al., 2023; Rodriguez et al., 2021; Stapinski et al., 2021) and the DEPS (n = 2; Lappalainen et al., 2021, 2023). The average number of sessions was 6.22 (range 4–12). The efficacy of guided and partially guided digital interventions in treating depression symptoms was compared to a control group(s) in 15 studies, 13 of which were RCTs. Control groups included inactive controls (n = 11) and active controls (n = 12).

While some interventions demonstrated significant reductions in depression when compared to inactive controls, results were mixed. Three studies reported significantly greater depression symptom reduction due to the intervention (Klimczak et al., 2023, p < 0.001; Harra & Vargas, 2023, p < 0.05; Küchler et al., 2023, p = 0.020–0.048). In contrast, five studies found non-significant differences between control and intervention groups in symptom reduction immediately post-intervention (Karyotaki et al., 2022; Lappalainen et al., 2023) or at follow-up (Cook et al., 2019; Karyotaki et al., 2022; Radovic et al., 2021). Peynenburg et al. (2022) identified significant pre-post intervention group differences (p = 0.06), yet these effects were not maintained at follow-up (1 and 3 months, p = 0.25–0.52).

Changes in depressive symptoms were inconsistent and depended markedly on how study data were collected. For example, Grudin et al. (2022) observed significant pre-intervention to follow-up declines in clinician-rated depressive symptoms for both the intervention and active control groups (ps < 0.001), but not the inactive control (p = 0.077), whereas significant declines in self-rated or parent-rated depressive symptoms were observed for all groups (inactive control, intervention, and active controls; all ps < 0.01).

Where guided support was synchronous (n = 2; Cerutti et al., 2022; Harra & Vargas, 2023), reduced depression symptoms were observed. Where guided support was asynchronous (n = 12) or a combination of synchronous and asynchronous (n = 9), results suggested mixed benefits for depressive outcomes. Human support was provided by either mental health professionals (n = 9); a combination of mental health professionals and students (n = 5); peers (n = 3; Harra & Vargas, 2023; Klimczak et al., 2023; Rodriguez et al., 2021); researchers and students (n = 2; Karyotaki et al., 2022; Lappalainen et al., 2023); researchers (n = 2; Lappalainen et al., 2021; Sun et al., 2022); psychology students (n = 1; Garnefski & Kraaij, 2023); or peers and mental health professionals (n = 1; Rice et al., 2020). Human support personnel did not appear to influence depression outcomes.

Stress Symptoms

Of the 10 studies that assessed stress symptoms, the DASS-21 was the most frequently used validated measure (27%, n = 2; Klimczak et al., 2023; Rodriguez et al., 2021). Remaining studies assessed stress via a heterogeneous array of measures (DT: Celia et al., 2022, ELEI: Cook et al., 2019, PASS: Koltz, 2022, PSS-4: Küchler et al., 2023, DASS-42: Juniar et al., 2022, PCL-5: Schueller et al., 2019, PSYCHOLOPS: Sit et al., 2022, and Dutch EMA: Van Doorn et al., 2022). When reported, the average number of sessions was 7 (range 5–12). All included studies that assessed stress were between-group designs and included a control. Of the 10 included studies, four were RCTs and included both inactive and active controls (Cook et al., 2019; Klimczak et al., 2023; Küchler et al., 2023) and active controls (n = 2; Rodriguez et al., 2021; van Doorn et al., 2022).

Five single-arm pre-post studies assessed the impact of an intervention on stress, again yielding inconsistent results. From pre- to post intervention, three studies (Celia et al., 2022; Juniar et al., 2022; Sit et al., 2022) reported a significant reduction in stress (p range < 0.001–0.005), whereas two studies (Koltz, 2022, p NR; Schueller et al., 2019, p > 0.50) reported non-significant changes in academic stress (Koltz, 2022) and general levels of stress (Schueller et al., 2019).

Partially guided interventions (n = 9) yielded mixed results for stress reduction: three studies (Juniar et al., 2022; Klimczak et al., 2023; Sit et al., 2022) reported a significant reduction in stress levels, with reductions particularly evident among those with higher baseline stress (Cook et al., 2019), while five studies reported non-significant stress changes from pre- to post-intervention (Koltz, 2022; Rodriguez et al., 2021; Schueller et al., 2019; van Doorn et al., 2022) or at follow-up (Küchler et al., 2023).

Synchronous guided support (Celia et al., 2022) resulted in a significant reduction in perceived stress (p < 0.001); however, asynchronous guided support yielded mixed results, with some studies identifying a significant stress reduction (Cook et al., 2019; Juniar et al., 2022) and others observing no change (Koltz, 2022) or inconclusive results (Küchler et al., 2023). Providers of guided support varied: synchronous or asynchronous guided support was generally delivered by mental health personnel alone (n = 4; Celia et al., 2022; Cook et al., 2019; Juniar et al., 2022; Küchler et al., 2023), while combined asynchronous and synchronous guided support was delivered by mental health professionals and students together (n = 2; Schueller et al., 2019; Sit et al., 2022), peers (n = 2; Klimczak et al., 2023; Rodriguez et al., 2021), mental health professionals alone (Koltz, 2022), and peers and mental health professionals combined (n = 1; van Doorn et al., 2022). Support personnel did not appear to influence stress outcomes, and there appeared to be no difference between synchronous and asynchronous guidance in pre- to post-intervention stress change.

Wellbeing

Five studies assessed wellbeing, as measured by the ORS (Ravaccia et al., 2022), WHO-5 (Küchler et al., 2023; Sit et al., 2022), SWLS (Celia et al., 2022), SWEMWBS (Rice et al., 2020), LSS (Rice et al., 2020), and ESS (Rice et al., 2020). The mean number of intervention sessions was 5.67 (range 5–7, n = 3; Celia et al., 2022; Küchler et al., 2023; Sit et al., 2022). Mean intervention duration was 9.50 weeks (range 8–12, n = 5). Three studies provided asynchronous guided support (Küchler et al., 2023; Ravaccia et al., 2022; Rice et al., 2020), one provided both synchronous and asynchronous guided support (Sit et al., 2022), and one provided synchronous guided support (Celia et al., 2022). Most studies used a single-arm pre-post design to examine treatment effects on wellbeing (Celia et al., 2022; Ravaccia et al., 2022; Rice et al., 2020; Sit et al., 2022). One study was an RCT that included an inactive control and an active control group (Küchler et al., 2023). Regarding the theoretical frameworks guiding these interventions, two studies used a CBT framework (Rice et al., 2020; Sit et al., 2022), Juniar et al. (2022) used a transitional method, and Celia et al. (2022) used an integrated mind–body approach.

Of the four single-arm pre-post studies, two found significant wellbeing improvements from pre- to post-intervention on various wellbeing measures (p = 0.001, Celia et al., 2022; SWEMWBS: p < 0.001, WVS: p < 0.001, Rice et al., 2020); however, no significant wellbeing change was found on the LSS measure (p = 0.580; Rice et al., 2020).

Regarding delivery method, results were inconsistent. There were significant differences between inactive controls and interventions that solely used asynchronous guided delivery at 4-week (p < 0.001), 8-week (p < 0.001), and 6-month (p = 0.016) follow-up (Küchler et al., 2023). Using combined asynchronous and synchronous intervention delivery, Sit et al. (2022) did not find a significant increase in subjective well-being (p = 0.208, d = 0.386). Ravaccia et al. (2022) used asynchronous delivery and found that improvements in wellbeing from pre- to post-intervention approached significance for girls (p = 0.05) but were non-significant for boys (p > 0.05). Overall effects were also non-significant (pre-intervention M(SD) = 5.07(2.58); post-intervention M(SD) = 4.44(2.23), p NR; Ravaccia et al., 2022). Differences between synchronous and asynchronous guidance did not appear to influence wellbeing outcomes (Celia et al., 2022; Küchler et al., 2023; Ravaccia et al., 2022; Rice et al., 2020; Sit et al., 2022).

Mindfulness

Three studies assessed mindfulness, as measured by the MAAS (Sun et al., 2022), FMI (Küchler et al., 2023), and FFMQ (Rodriguez et al., 2021). No studies targeted mindfulness in isolation. Intervention duration ranged from 4 to 8 weeks (M = 5.3). Mindfulness sessions lasted 15–20 min (Rodriguez et al., 2021), 5–40 min (Sun et al., 2022), and 45–60 min (Küchler et al., 2023). Intervention elements included mindfulness-based exercises, awareness of the mind, and working with challenges or difficulties. All studies included homework tasks and audio recordings.

All studies were RCTs. Küchler et al. (2023) employed a 3-arm RCT design (with active and inactive controls), while Rodriguez et al. (2021) and Sun et al. (2022) employed 2-arm RCTs (with active controls). Küchler et al. (2023) found that mindfulness significantly improved at 4 weeks (p < 0.001), 8 weeks (p < 0.001), and 6-month follow-up (p < 0.001) in the intervention group (guided) compared to the inactive control (waitlist). Similarly, significantly higher mindfulness was observed at 4 weeks (p < 0.001), 8 weeks (p < 0.001), and 6-month follow-up (p < 0.001) in the active control group (unguided) compared to the inactive control (waitlist), suggesting that both mindfulness interventions (guided and unguided) were more efficacious than the inactive control. However, when comparing the intervention group (guided) to the active control (unguided), mindfulness did not significantly differ at 4 weeks (p = 0.56), 8 weeks (p = 0.90), or 6-month follow-up (p = 0.08) (Küchler et al., 2023). Similarly, differences between the intervention (mindfulness program with guidance) and active control group (mindfulness program without guidance) in pre-post change in mindfulness were non-significant (Rodriguez et al., 2021; p = 0.53), suggesting that guidance did not significantly improve mental health outcomes. Both the active control (social support-based intervention) and intervention (mindfulness-based intervention) groups improved in mindfulness from pre- to post-intervention (Sun et al., 2022), with greater increases from pre-intervention to follow-up in the mindfulness-based intervention relative to the social support intervention; however, this difference was non-significant (p = 0.065; Sun et al., 2022).

Küchler et al. (2023) and Sun et al. (2022) used asynchronous guided support, while the remaining study (Rodriguez et al., 2021) used both asynchronous and synchronous guided support. The mode of guidance (synchronous versus asynchronous) did not appear to influence mindfulness outcomes. One study employed professional psychologist e-coaches to deliver the intervention (Küchler et al., 2023), Sun et al. (2022) used mindfulness-trained research assistants, and Rodriguez et al. (2021) used supervised and trained peer counselors. Support personnel did not appear to influence mindfulness outcomes.

Quality of Life

Three web-delivered studies assessed quality of life via the EQ-5D (Karyotaki et al., 2022), WHOQOL-BREF (Juniar et al., 2022), and YQOL-SF (O'Connor et al., 2022). Two of the included studies were two-arm RCTs (Karyotaki et al., 2022; O’Connor et al., 2022) and the remaining study (Juniar et al., 2022) was a single-arm feasibility pre-post design. Juniar et al. (2022) and Karyotaki et al. (2022) used psychologists to provide intervention guidance and O’Connor et al. (2022) employed research team members. Karyotaki et al. (2022) and Juniar et al. (2022) both provided asynchronous guided support, while O’Connor et al. (2022) used both asynchronous and synchronous guided support.

Both RCTs found no significant differences when comparing an intervention group to an inactive control. This was observed immediately post-intervention (p > 0.05, Karyotaki et al., 2022) and at 3- to 12-month follow-up (Karyotaki et al., 2022, p > 0.05; O’Connor et al., 2022, p = 0.23). Juniar et al. (2022), via a single-arm design, identified significant pre- to post-intervention improvements in quality of life across various areas, including overall quality of life (p = 0.01), overall health (p = 0.03), physical health (p < 0.001), and psychological health (p = 0.003), with the exception of quality of life regarding social relationships (p = 0.45) and environmental health (p = 0.13).

Reported Socioemotional Outcomes and Efficacy

Common DMHI elements associated with socioemotionally efficacious and non-efficacious interventions were explored. As above, results were separated by the following reported socioemotional outcomes: depression, anxiety, stress, wellbeing, mindfulness, and quality of life (see Table 5). A DMHI element was deemed efficacious if it was part of an intervention reporting a statistically significant effect on the socioemotional outcome under examination. Conversely, a DMHI element was deemed ineffective if no significant difference was found for that outcome, or if outcomes were no different from a control condition. DMHI elements were reported when observed in two or more studies (n = 21). This was a preliminary exploration of potential associations and trends between intervention elements and socioemotional outcomes. Thus, findings do not imply the efficacy of a particular element, and identified elements may still be effective even if they were associated with treatment failure in this review.

Table 5 Key socioemotional outcomes of included studies

Preventative interventions primarily focused on the immediate program period and did not provide ongoing support post-intervention (n = 27 studies). This has resulted in intervention effects that do not endure over the long term (Cook et al., 2019; Stapinski et al., 2021).

Overall Efficacy of Socioemotional Outcomes Examined

The efficacy of DMHIs on youth socioemotional outcomes shows notable inconsistencies across study designs. Positive impacts on depression, anxiety, and stress were observed in single-arm pre-post study designs. However, when compared against control groups in multi-arm studies, such as randomized controlled trials, results for these same outcomes were mixed. Additionally, there was evidence of poor or inconclusive improvement in certain areas: mindfulness outcomes in multi-arm studies and quality of life in both single-arm and multi-arm studies. Similarly, limited effectiveness was noted for wellbeing in single-arm studies.

Elements Common to DMHI with Established Efficacy

For DMHIs that were effective at enhancing socioemotional outcomes, common intervention elements included a combination of content delivery and activities (such as goal setting or emotion regulation) and program structure features, such as follow-up support and participant feedback (see Table 6). Elements such as refresher/follow-up content, goal setting, and relapse prevention were common features of DMHIs that were efficacious for depression and anxiety, while content personalization and personalized recommendations were features of DMHIs that were efficacious for depression and stress. For the socioemotional outcomes of mindfulness and quality of life, no associated common DMHI elements were identified.

Table 6 Common intervention elements and associated socioemotional outcomes

Elements Common to DMHI with Poor or Yet to be Established Efficacy

For DMHIs that did not report significant findings across various socioemotional outcomes, the most common elements were homework tasks, self-monitoring, and log-keeping activities. Specifically, homework tasks were associated with interventions reporting poor efficacy for depression, anxiety, stress, and mindfulness; self-monitoring activities were associated with interventions reporting poor efficacy for depression, anxiety, stress, wellbeing, and mindfulness (Grudin et al., 2022; Karyotaki et al., 2022; Küchler et al., 2023; Rodriguez et al., 2021; Sun et al., 2022); and log-keeping activities were associated with DMHIs reporting poor efficacy for depression, anxiety, stress, wellbeing, and mindfulness (Cook et al., 2019; Karyotaki et al., 2022; Klimczak et al., 2023; Koltz, 2022; Küchler et al., 2023; Schueller et al., 2019; Sit et al., 2022; Sun et al., 2022).

Elements Common to DMHI with Inconsistent Efficacy

Elements common to DMHI that yielded inconsistent socioemotional outcomes included psychoeducation, quizzes, and audio recordings. Psychoeducation was a common intervention element included in 13 studies (41.94%); however, psychoeducational contents and associated impacts on socioemotional outcomes were mixed. For instance, some studies found positive impacts on depression and anxiety levels (Juniar et al., 2022; Stapinski et al., 2021; Wahlund, 2022), while others reported negative impacts on depression and anxiety (Cook et al., 2019; Karyotaki et al., 2022; Pescatello et al., 2021). Similarly, interventions that contained quizzes reported positive impacts on depression and anxiety (Juniar et al., 2022; Stapinski et al., 2021), whereas others that contained quizzes demonstrated no evidence of efficacy on said outcomes (Karyotaki et al., 2022; Pescatello et al., 2021). Likewise, audio recordings were associated with both positive effects (Juniar et al., 2022; Sun et al., 2022; Wahlund, 2022) and non-significant effects on socioemotional outcomes (Küchler et al., 2023; Schueller et al., 2019).

Discussion

This systematic review aimed to appraise the available literature on the socioemotional effectiveness of guided and partially guided digital mental health interventions (DMHIs) for indicated youth populations. Thirty-one studies from published and unpublished sources were identified that utilized guided or partially guided DMHIs for youth.

Summary of Key Study Findings

A major and unique finding of this review was the identification of elements common to interventions demonstrating clinical efficacy and elements common to those demonstrating poor or yet to be established clinical efficacy. Within efficacious DMHIs for anxiety and depression, refresher/follow-up content, goal setting content, and relapse prevention content were common features. Additionally, content personalization and personalized recommendations were found in efficacious interventions for depression, stress, and wellbeing. Conversely, homework tasks, self-monitoring activities, and log-keeping activities were common to interventions reporting poor or yet to be established efficacy for the socioemotional outcomes of depression, anxiety, stress, wellbeing, and mindfulness. Across the socioemotional outcomes examined, the most common DMHI elements associated with the preservation of long-lasting impact were content personalization and self-reflective activities. This finding demonstrates the potential influence of intervention design decisions on clinical outcomes, and further analysis of the impacts of intervention elements is warranted to inform developments in this field. This is the first study to attempt to draw associations between socioemotional outcomes and specific DMHI elements. Given the clinical efficacy of personalized content and personalized recommendations, the digital health landscape appears to have evolved from a ‘one-size-fits-all’ approach to more personalized care. Additionally, this systematic review is the first to examine the outcomes of stress, well-being, quality of life, and mindfulness in youth-specific guided DMHIs, with prior reviews examining a narrower range of socioemotional outcomes (depression, Välimäki et al., 2017; depression and anxiety, Ebert et al., 2015; Garrido et al., 2019).

Comparing Results to Prior Research

We found that guided and partially guided DMHIs demonstrated consistent short-term improvements in several youth socioemotional outcomes, particularly depression, anxiety, and stress. This is similar to prior systematic reviews of general DMHIs in children and adolescents across various age groups (Grist et al., 2019; Hollis et al., 2017), and in youth more specifically (Clarke et al., 2015; Välimäki et al., 2017). For long-term outcomes, DMHI efficacy inconsistencies were particularly prevalent for depression, anxiety, and stress, findings that have been reported in the existing youth-specific literature, with some studies indicating DMHI superiority relative to control groups (Clarke et al., 2015; Välimäki et al., 2017) and others showing no evidence of DMHI superiority at follow-up (Bennett et al., 2019; Grist et al., 2017). However, a limitation of the existing body of research is that included DMHIs primarily focused on the immediate program period and did not provide ongoing support or assessment post-intervention. In the present study, we observed heterogeneity in DMHIs, which may have contributed to the observed inconsistencies, clouding true study findings. Factors contributing to these inconsistencies include content, intervention adherence, study design, content delivery method, and the presence of control groups. The dynamic and fast-evolving DMHI landscape has also contributed to this variability. However, as DMHI research amasses, a more granular systematic review of these programs will be able to take place, minimizing such variance.

Consistent with findings from adult reviews (Domhardt et al., 2019; Leung et al., 2022; Ma et al., 2021), we identified that the provider of guided DMHI human support (e.g., professional, peer, student), and their associated training or qualification level, did not appear to impact socioemotional outcomes. This appears to suggest the general value of human engagement and support within these digital interventions. These are promising findings as they may reduce the burden on mental health professionals while also offering less costly healthcare solutions.

Further, we identified that the delivery mode of human guidance, whether synchronous (e.g., videoconferencing) or asynchronous (e.g., text or email), did not appear to influence the mental health outcomes of depression, anxiety, stress, or well-being. This result aligns with adult-oriented reviews (Furness et al., 2020; Yellowlees et al., 2021). Asynchronous guidance has been associated with enhanced provider efficiency and participant flexibility (Lagera et al., 2023). However, it must be noted that we observed a lack of entirely synchronous guided DMHIs in the present review (n = 4, 13%), a finding reported in a related review (Zhou et al., 2021). Despite limited data, synchronous youth DMHIs show promise in improving socioemotional outcomes, which is consistent with the broader youth-specific literature (Lattie et al., 2022; Li, 2023). Moreover, since a further four studies (13%) reported on entirely guided interventions offering no self-directed component, a lack of data meant we were unable to draw conclusions about their effectiveness compared to partially guided DMHIs that offered a combination of human support and self-guided program elements.

While we sought to examine brief interventions, no interventions with fewer than three sessions were identified, highlighting the under-explored potential of very brief or single-session interventions. This is important, as adult research has identified that the modal number of therapy sessions attended is one, irrespective of client mental health diagnosis, severity, or complexity (Young et al., 2012). Further, research indicates that, on average, 75% of adults who ‘drop out’ of therapy after a single session are satisfied with that one session (Barbara-May et al., 2018; Josling & Cait, 2018; Söderquist, 2018).

Strengths and Limitations

A key strength of our review is its methodology, which includes a comprehensive search strategy, co-designed approach, diversity of included study designs, duplicate screening processes, appraisal of included studies, and inclusion of both published and unpublished literature from varied sources.

Despite these strengths, limitations of the present review’s methodology must be noted. The review was limited to internalizing socioemotional symptoms, owing to a lack of available data on externalizing symptom outcomes. Results are also skewed toward US-specific literature, which has notable cultural differences from other Anglophone countries, including Australia. In addition, 48.39% of the review’s sample was drawn from university students; as the review examined youth aged 12 to 25 years, there are generalizability concerns for those aged 12 to 18 and 22 to 25 years, who fall outside the usual university enrolment years (Auerbach et al., 2016; Mortier et al., 2018). Due to a lack of identified data, we were unable to report on minority populations. Finally, because of date restrictions, some pertinent studies published before 2018 may have been excluded; however, given the substantial changes in the DMHI space in recent years, older studies are expected to have diminishing relevance.

Recommendations for Improving Youth DMHIs

In light of the review’s findings, the following recommendations are proposed:

1. Integration of refresher and follow-up content: The short-term nature of the DMHIs’ socioemotional effects necessitates the incorporation of follow-up or refresher content. This could be in the form of periodic check-ins, booster sessions, or reminders that revisit key concepts.

2. Re-engagement opportunities

    a. ‘The door is always open’: Embrace Single Session Thinking principles to convey that users can always return for support (Rycroft & Young, 2021). This approach reframes disengagement positively: rather than viewing engagement lapses as ‘dropout’ or ‘failure to engage’, each interaction is valued as meaningful, regardless of frequency, recognizing that users may have received the help they needed at that time. This perspective encourages maximizing the benefit of each interaction, including digital ones, and reduces the stigma associated with re-engaging.

    b. Continuous access to content: Allow uninterrupted access to asynchronous DMHI content for users to re-engage at their convenience, acknowledging that mental health can fluctuate over time.

3. Emphasis on goal setting and relapse prevention strategies: Interventions that include goal setting and relapse prevention content have shown efficacy. Goal setting helps individuals stay focused and motivated, while relapse prevention strategies can aid in maintaining gains over time.

4. Re-evaluation of homework and monitoring elements: Given the suboptimal efficacy correlated with certain DMHI elements, including homework tasks, self-monitoring, and log-keeping activities, a reassessment and potential reconfiguration or reduction of these elements is warranted. Ensuring these elements are not overly burdensome and are clearly linked to therapeutic goals is necessary.

5. Enhancement of user engagement strategies: To counteract the identified fleeting nature of DMHI efficacy, innovative strategies to bolster user engagement are imperative. These may encompass interactive features, gamification elements, or personalized content.

6. Continuous evaluation and refinement of active components via longitudinal studies: To better understand the long-term effects of these interventions, longitudinal studies are needed. These can help identify which components have lasting impacts on socioemotional outcomes.

7. Focus on accessibility and user-friendliness: Ensuring that DMHIs are accessible and user-friendly appears crucial in reducing attrition rates and enhancing overall socioemotional effectiveness.

8. Optimizing DMHI delivery and intervention strategies

    a. Given that the mode of DMHI delivery (i.e., asynchronous, synchronous, combined) did not appear to notably impact socioemotional outcomes, focus on developing and implementing more novel, flexible, and cost-effective delivery methods. This approach should aim to maximize accessibility and convenience for users, while also considering operational efficiencies for providers; for example, placing greater emphasis on support delivered by those with lived experience rather than relying solely on therapists.

    b. Since the type of support personnel (e.g., therapist, researcher, peer, student) did not appear to influence outcomes for this population, concentrate on optimizing the duration and intensity of DMHIs for a balance between effectiveness and user engagement.

9. Pre-, post-, and follow-up DMHI evaluations: To understand and address user outcomes, it is essential to gather data both before and after users participate in the DMHI. This pre- and post-intervention data collection will likely provide valuable insights into user requirements, helping to tailor the DMHI more effectively to meet these needs. It is crucial to employ validated and reliable instruments for assessing client progress and reflection, not only during the intervention period but also for an extended duration of 1–2 years post intervention. Understanding the effectiveness of a DMHI over an extended period is also necessary, for example, to learn when re-engagement might be indicated as effects wane.

10. Content personalization and client-led design: DMHI content personalization involves designing and adapting the intervention content to align with the individual needs, preferences, and circumstances of each user. Personalized content can be achieved through initial evaluations of the user’s specific mental health challenges, preferences in learning and engagement, and unique life circumstances. This approach is expected to increase user engagement, satisfaction, and the overall effectiveness of the intervention.

11. Developmentally suitable content: Youth and young adulthood encompass a wide age range, necessitating the consideration of developmentally appropriate DMHI content through a life-course developmental lens.

12. Incorporation of feedback mechanisms: Embedding automated and human-led feedback channels to listen to the client creates a client-informed service and may enhance socioemotional efficacy. These systems, which can include options for anonymity, serve both to continuously improve the intervention and to tailor it to individual user needs.

13. Mobile app-based content: As digital interventions evolve from web-based to app-based formats, incorporating mobile app content in new DMHIs for youth becomes crucial. This aligns with young users’ expectations and boosts engagement. App-based platforms offer flexibility in synchronous and asynchronous support, catering to individual needs and schedules. They also provide opportunities for interactive features and gamification to further engage users.

14. Leveraging smartphone capabilities: Smartphones’ built-in features offer valuable opportunities for improving DMHIs. Examples include:

    a. Location services for resource connectivity: Leverage the smartphone’s location capabilities to connect users with local mental health services, youth facilities, and safe social venues, facilitating easy access to nearby support and resources.

    b. Gamification through a token economy: Integrate a token economy (Kazdin, 1977) within apps to make progress tracking more engaging. For example, youth could earn tokens for each day they avoid harmful behaviors (e.g., self-harm, binge-purging). This could be paired with easy re-engagement options and the normalization of re-engagement with a service (a minimal illustrative sketch of such a token system is provided after this list).

    c. Movement tracking to promote healthier lifestyles: Use the phone’s movement-monitoring capabilities to motivate users to increase their physical activity, which has been associated with mental health improvements in youth (Rodríguez-Romo et al., 2022).

15. Gaps in the research literature and existing brief guided DMHIs

    a. Trauma-informed DMHIs: No study reported explicitly on trauma-informed elements, yet the critical importance of this orientation is now undisputed in the mental health intervention literature (Sockolow et al., 2017; Ting & McLachlan, 2023). Research therefore needs to align more closely and overtly with clinical insights on trauma-informed practice, as well documented in the victimisation, trauma, and long-term treatment literature. For example, in a trauma-informed approach to single-session encounters, one key consideration could be to avoid requiring clients to repeatedly recount their mental health history if they engage with multiple different practitioners, as such a requirement can be retraumatising (Frueh et al., 2005).

    b. Co-designed DMHIs: These should be explored in more depth. This involves incorporating feedback from current and former clients, practitioners, and client support systems when developing and revising DMHIs. This process should also consider cultural safety by including diverse cultural and population consultations.

    c. Peer support and engagement: Research has yet to fully explore the benefits of peer support and engagement, especially as an initial engagement strategy before clinical contact. Potential benefits include normalising problems, reducing hierarchical dynamics, cost-effectiveness, and improving accessibility. Within an intervention, this could include online communities within a DMHI (e.g., live online chat groups, asynchronous moderated discussion forums) to support engagement and positive outcomes, which may also provide a mechanism for long-term support without adding to the burden on clinical teams.

16. Emphasising a strengths-based approach: This involves reminding clients of their personal resources and capabilities. Innovative methods such as automated games or digital interactive activities can be utilised to reinforce the client’s sense of self-efficacy and remind them that they possess the solutions to many of their challenges. This approach aims to boost client confidence and promote a self-reliant perspective in addressing their issues.

17. Implementing a multi-tiered support model: We recommend a multi-tiered DMHI support model, allowing intervention type and intensity to be tailored to user needs while utilizing the full spectrum of the digital ecosystem. Figure 3 illustrates a multi-tiered model of care wherein graded referral or progression pathways are made based on need. This approach would conserve professional and financial resources for those most in need. Clients may complete a pre-DMHI questionnaire to inform the optimal pathway through the tiered structure, with additional check-ins to monitor whether support needs to be increased or reduced (an illustrative sketch of such questionnaire-based tier assignment accompanies Fig. 3 below).

18. Bookending guided digital support with complementary non-guided digital resources: Bookending existing guided (a/synchronous) Single Session approaches with access to non-guided digital online resources could be beneficial. Following a trauma-informed stance, the non-guided digital resources could be client-selected. Figure 4 displays a basic example of this approach.

19. Systemic awareness and responsiveness: Service providers should have systems in place for effective and coordinated communication that facilitates the delivery of safe and high-quality care for service users and their support network. This allows for the provision of wraparound support without placing all the responsibility on the vulnerable young person.

20. When additional support is required

    a. Brief DMHIs as a gateway into longer-term support: Brief digital work can serve as an initial step, providing a gateway to longer-term treatment options or facilitating referrals to other appropriate services. This role positions brief interventions as a critical entry point in a broader therapeutic process.

    b. Referrals: Incorporate high-quality referral sources and systems. This ensures that clients are directed to the most appropriate resources or services, fostering a comprehensive care strategy that extends beyond the brief DMHI.
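To illustrate recommendation 14b, the sketch below shows one minimal way a token economy could be tracked within a DMHI app. It is an illustrative sketch only, not a feature of any reviewed intervention; the names (TokenLedger, award_daily_token, the example goal label) and the one-token-per-goal-per-day rule are hypothetical assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TokenLedger:
    """Hypothetical per-user token record for a DMHI app (illustrative only)."""
    user_id: str
    tokens: int = 0
    awarded: set = field(default_factory=set)  # (goal, date) pairs already rewarded

    def award_daily_token(self, goal: str, day: date) -> bool:
        """Award one token per goal per day (e.g., a daily check-in completed)."""
        key = (goal, day)
        if key in self.awarded:
            return False  # already rewarded for this goal today
        self.awarded.add(key)
        self.tokens += 1
        return True

    def redeem(self, cost: int) -> bool:
        """Spend tokens on an in-app reward if the balance allows it."""
        if self.tokens < cost:
            return False
        self.tokens -= cost
        return True

# Example: a young person meets the same goal on two consecutive days.
ledger = TokenLedger(user_id="anon-123")
ledger.award_daily_token("daily check-in completed", date(2024, 5, 1))
ledger.award_daily_token("daily check-in completed", date(2024, 5, 2))
print(ledger.tokens)  # -> 2
```

In practice, such a ledger could sit behind easy re-engagement prompts, so returning after a gap simply resumes token earning rather than being framed as a lapse.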

Fig. 3 Tiered intensity model of digital support
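As a companion to Fig. 3 and recommendation 17, the following sketch illustrates how a pre-DMHI questionnaire result might map onto graded tiers of support. The tier labels, cut-off scores, and function names are hypothetical placeholders for illustration, not validated clinical thresholds or an implemented triage tool.

```python
from enum import Enum

class SupportTier(Enum):
    SELF_GUIDED = "self-guided digital resources"
    PEER_SUPPORTED = "peer- or coach-supported digital program"
    CLINICIAN_GUIDED = "clinician-guided digital sessions"
    ESCALATED_CARE = "referral to in-person or crisis services"

def assign_tier(distress_score: int, risk_flagged: bool) -> SupportTier:
    """Map a hypothetical pre-DMHI questionnaire result onto a support tier.

    `distress_score` is an illustrative 0-30 symptom measure; `risk_flagged`
    marks responses indicating acute risk. Cut-offs are placeholders only.
    """
    if risk_flagged:
        return SupportTier.ESCALATED_CARE
    if distress_score < 10:
        return SupportTier.SELF_GUIDED
    if distress_score < 20:
        return SupportTier.PEER_SUPPORTED
    return SupportTier.CLINICIAN_GUIDED

# Periodic check-ins could re-run the same mapping to step support up or down.
print(assign_tier(distress_score=14, risk_flagged=False).value)
# -> "peer- or coach-supported digital program"
```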

Fig. 4 Bookending a guided single session DMHI with non-guided digital supports

Future Research

Future research will be strengthened and refined through the inclusion of externalizing socioemotional outcomes, permitting a more robust analysis of youth socioemotional outcomes. Future research may also consider exploring DMHI user experience elements (e.g., intervention feasibility, satisfaction, retention, engagement, credibility, motivation). These user experience outcomes are as critical as the socioemotional outcomes examined in the present review: both must be addressed for these programs to work successfully. Further research is also required to assess the utility of current DMHIs for diverse populations, including culturally and linguistically diverse communities, diverse socioeconomic groups, and those based in rural or regional locations. Modifications of existing interventions, or the development of specific DMHIs for diverse populations, are also required to enhance factors such as engagement, use, relevance, and trust; once developed, these will require assessments of efficacy. Further, as we did not identify any brief intervention with fewer than three sessions, the potential of evidence-based single-session or very brief digital mental health interventions for youth remains under-explored, a notable gap in the literature. Finally, more research on long-term follow-up (i.e., up to 12 months post intervention) is needed to track the enduring or decaying nature of intervention effects.

Results highlight important practice implications, including the value of program engagement with youth using these types of interventions, the need for individualized DMHI content for youth, and the need for ongoing follow-up or refresher program content to ensure sustained intervention effects. While findings were generally similar to other reviews (Clarke et al., 2015; Välimäki et al., 2017), the needs and context of youth are often unique, and this study offers a developmentally specific account of youth DMHIs. This review provides important implications for future investment in a new digital health model of care for youth that combines refresher/follow-up content, goal setting, and relapse prevention content with content personalization and personalized recommendations. Combining the successful elements of DMHIs has the potential to lead to useful interventions for this population.

Study findings are being utilized by our key stakeholder, Beyond Blue, to inform the continuous improvement of their Community Support Services model of care, including how it can meet the needs of younger people. Beyond Blue offers single-session, brief interventions via phone, email, and webchat, provided by an accredited counsellor workforce and trained coaches.

Conclusions

The findings from this systematic review serve as a promising evidence base from which further empirical studies can be conducted. While some results were varied, there was strong evidence that these programs are effective for depression, stress, and anxiety outcomes, although these effects were short-lived. We also provide an initial examination of the specific DMHI elements common to interventions that yielded positive or negative socioemotional outcomes. This represents an important move toward strengthening evidence-enriched, digitally focused mental health services for youth and young adults. Further quality research is necessary before firm conclusions can be drawn about the socioemotional outcomes associated with DMHIs.