Outcomes of Best-Practice Guided Digital Mental Health Interventions for Youth and Young Adults with Emerging Symptoms: Part II. A Systematic Review of User Experience Outcomes

Although many young people demonstrate resilience and strength, research and clinical evidence highlight an upward trend in mental health concerns among those aged 12 to 25 years. Youth-specific digital mental health interventions (DMHIs) aim to address this trend by providing timely access to mental health support for young people. However, there is a considerable gap in understanding young people's user experiences with digital interventions. This review, co-designed with Australia's leading mental health organization Beyond Blue, uses a systematic methodology to synthesize evidence on user experience in youth-oriented digital mental health interventions that are fully or partially guided. Five relevant online databases were searched for articles published from 2018 to 2023, yielding 22,482 articles for screening; 22 studies were included in the present analysis. User experience outcomes relating to satisfaction and engagement were assessed for each included intervention, with experience indicators relating to usefulness, usability, value, credibility, and desirability examined. Elements associated with positive and negative outcomes were extracted. Elements shown to positively influence user experience included peer engagement, modern app-based delivery, asynchronous support, and personalized content. In contrast, users disliked static content, homework and log-keeping, the requirement for multiple devices, and social media integration. Asynchronous interventions showed high satisfaction but faced engagement issues, with combined asynchronous/synchronous interventions reporting better completion rates. DMHIs offer a promising platform for youth mental health support and have the potential to dramatically increase the reach of interventions through the adoption of technological and user experience best practices.
While young people respond positively to many aspects of intervention modernization, such as interactive, app-based design, other concepts, such as social media integration, need to be adopted by the field more cautiously to ensure trust and engagement. Trial Registration: CRD42023405812. Supplementary Information: The online version contains supplementary material available at 10.1007/s10567-024-00468-5.


Introduction
Recent evidence highlights an upward trend in mental health concerns among those aged 12 to 25 years (Capon et al., 2023; Twenge et al., 2019). Among many contributing factors, the COVID-19 pandemic may have intensified these challenges, with most young adults (74–87%) experiencing mental health deteriorations during the pandemic (Headspace, 2020; Radomski et al., 2023). Despite the escalating number of young adults requiring greater levels of support, access to timely mental health care is currently insufficient (McGorry et al., 2022; Mei et al., 2019). Some of the common barriers to accessing support and services include perceived stigma, privacy concerns, and poor health literacy (Amone-P'Olak et al., 2023; Renwick et al., 2022). In light of these challenges, and in response to reported low levels of program and support engagement and high levels of attrition, researchers are focusing on youth-oriented digital mental health interventions (Dixon et al., 2016; Kim et al., 2019).
There is growing interest in youth-oriented digital mental health interventions (DMHIs) as a means of addressing some of the challenges associated with typical face-to-face healthcare (Babbage et al., 2022; Richardson et al., 2010; World Health Organisation, 2019). These DMHIs aim to promote engagement and adherence by providing convenient support and a positive user experience (Lattie et al., 2019; Liverpool et al., 2020). A primary benefit of these interventions is their enhanced accessibility, flexibility, and scalability (Marcu et al., 2022; Philippe et al., 2022). DMHIs also offer economic benefits, as online services are generally less costly for both client and health system alike, relative to conventional face-to-face treatments. This is attributed to, for example, an absence of overhead expenses, such as renting and cleaning a physical site, and the fewer staff resources required (Ben-Zeev et al., 2021; Howard & Kaufman, 2018). Importantly, DMHIs can reduce the burden on healthcare professionals, resulting in shorter waitlist times (Gagnon et al., 2016; Haleem et al., 2021). Moreover, accessing DMHIs can overcome perceived barriers such as privacy and anonymity which might otherwise deter patients from accessing face-to-face treatment (Khanna & Carper, 2022). DMHIs can also ensure treatment integrity, providing a consistent and standardized intervention in addition to the gathering of real-time participant data (Philippe et al., 2022). Integral to their success is a thoughtful user experience design that factors in the unique needs and preferences of young users, ensuring that interfaces are intuitive, content is relatable, and engagement metrics are prioritized.

Barriers to Online Interventions
Despite the recent growth and identified benefits of self-guided DMHIs, concerns regarding their sustained usage, appropriate utilization, and ongoing efficacy have been raised (Mehrotra et al., 2017; Opie et al., 2024a; Schueller et al., 2017). These issues of engagement may prevent users from fully benefiting from these interventions (Schueller et al., 2017). A further limitation of self-guided digital interventions is high attrition rates (Alqahtani & Orji, 2019; Karyotaki et al., 2015). There is currently a limited understanding of the factors contributing to such intervention attrition, and specifically of how retention rates can be improved (Alqahtani & Orji, 2019), though interface ease of use has been identified as a potential barrier (Andrews et al., 2018; Nielsen, 2012).
Individual factors, such as motivation and capability, can influence intervention engagement; however, this has not been extensively studied (Cross et al., 2022). Challenges such as low digital literacy, negative prior user experience, and costs associated with internet or program access can deter users. Other considerations include data security and privacy concerns associated with DMHIs, including the storage and sharing of personal data and risk management associated with distant, independent access (Galvin & DeMuro, 2020; Wykes et al., 2019).
Specific limitations for youth also exist, relating to intervention suitability, usability, and acceptability (Balcombe & De Leo, 2023; Bergin et al., 2020; Liverpool et al., 2020). For example, youth-specific DMHIs are recommended only if specific content and design requirements are met, such as the inclusion of videos, minimal text, and intervention personalization (Liverpool et al., 2020). Therefore, analysis of clinical or standardized outcomes alone may not be sufficient. Exploring users' experiences and perspectives may inform the re-design and improvement of an online intervention, with the purpose of improving clinical outcomes through sustained engagement.
User experience outcomes tell us about users' engagement with, and experience of, an intervention. They often include general feedback, satisfaction and acceptance ratings, and completion rates. To date, there are few standardized tools for measuring and evaluating a user's experience of a digital intervention, with reviews reporting heterogeneity in employed measures (Ng et al., 2019; Saleem et al., 2021; Shim et al., 2017; Welsh et al., 2023). When reported, studies tend to only provide summative evaluations of users' experiences with online interventions (Inal et al., 2020). Formative evaluations, in contrast, are conducted to develop a deep understanding of user perceptions, informing the redesign and improvement of an intervention. Formative evaluations are essential for understanding the reasons why people may be more or less likely to engage and for addressing barriers, both known and unknown. In addition to more open-ended qualitative feedback, formative evaluation seeks to collect user feedback on specific key indicators of the experience that can be used for comparing different interventions or iterations. These key indicators of user experience are the focus of the present study.

Intervention Guidance and Delivery
DMHIs can be delivered with varying levels of human interaction or support. Guided interventions involve interaction with a human supporter (e.g., clinician, peer) to boost engagement and offer both clinical and technical support (Heber et al., 2017; Werntz et al., 2023). The degree of guided support can vary, ranging from partially guided interventions, in which some elements are intended to be completed independently, to interventions that provide guidance for all elements. Guidance can be delivered synchronously (i.e., live human interaction; e.g., telehealth) or asynchronously (delayed human support; e.g., email, text message). Such supported interventions have been found to be more effective than non-supported, self-guided interventions (Garrido et al., 2019; Leung et al., 2022; Schueller et al., 2017). In one study, DMHI adherence was improved through regular interaction with a trained support facilitator (Garrido et al., 2019). Similarly, Wei et al. (2020) identified that self-guided DMHIs focusing on relaxation and self-care for COVID-19 patients were beneficial for those with mild to moderate symptoms of depression and anxiety. More research is needed, however, to fully understand the impact of, and most appropriate level of, human support.

Gaps in Available Research
To our knowledge, prior systematic and scoping reviews that examined DMHIs (both guided and unguided) and associated user experience outcomes such as satisfaction, usability, engagement, and acceptability have exclusively targeted adults, with no youth-specific reviews (Balcombe & De Leo, 2023; Gan et al., 2022; Saleem et al., 2021; Villarreal-Zegarra et al., 2022). Furthermore, prior reviews lack specific recommendations about the level and amount of human guidance that optimizes the young adult's user experience (Hollis et al., 2017; Lehtimaki et al., 2021). A recent systematic review identified that over 70% of preventative youth DMHIs failed to document user participation in their design and development process (Bergin et al., 2020). Overlooking youth end users' perspectives via co-design, co-development, and the embedding of their feedback may result in less efficacious and appealing DMHIs (Li et al., 2022). As Opie et al. (2024b) emphasized, DMHIs must both be effective and ensure a positive user experience, necessitating the examination of not only socioemotional outcomes but user experience outcomes also.

The Current Study
To address the aforementioned gaps and limitations and build on the promise of emerging findings, this systematic review aims to (1) identify and synthesize the literature on user experience in youth-specific, guided and partially guided DMHIs and (2) identify user experience elements within DMHIs that are associated with improved experiences and outcomes for young people. The specific user experience indicators under examination will include feasibility and fidelity; user satisfaction; completion and adherence; mode of delivery; session number; and intervention content.

Methods
We conducted a rapid systematic review to provide a timely evidence synthesis to our industry partner (Beyond Blue, Australia's most well-known and visited mental health organization) and to help them inform policy decision making. This review followed the Joanna Briggs Institute (JBI) methodology (Aromataris & Munn, 2020) and Cochrane Rapid Review methodological recommendations (Garritty et al., 2021). Our reporting of the review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA; Page et al., 2021). See Online Resource 1 for a complete PRISMA checklist. A protocol of the present review was prospectively registered in PROSPERO (registered: March 23, 2023; CRD42023405812).
Following good practice, the review methodology was co-designed and conducted alongside our key stakeholder Beyond Blue and several lived experience consumer and carer academics (Pollock et al., 2018). Collectively, the current review aimed to bring together academic, consumer, and mental health service skills, experiences, and voices.

Inclusion Criteria
The Population, Intervention, Comparator, Outcome, and Study design (PICOS) framework (McKenzie et al., 2019) guided inclusion criteria eligibility (see Table 1). Only literature written in English was included. If necessary information was not reported in-text, the study was excluded.

Types of Sources
The search was limited from 14 March 2018 to 14 February 2023 due to the rapid advancement of technological interventions. Date restrictions were also applied due to the dearth of available literature pre-2018.

Search Strategy
We followed a three-step search strategy. An initial limited search of PsycINFO was conducted, followed by analysis of the text contained in the title and abstract, and of the index terms used to describe the article. This identified the keywords and index terms used for a second search across all the databases covered by this study. The second search was a systematic search of five electronic databases: PsycINFO (Ovid), MEDLINE (Ovid), CINAHL (EBSCO), and the Cochrane Central Register of Controlled Trials (CENTRAL; via the Cochrane Library). See Online Resource 2 for a complete search strategy (concepts and terms) for all included databases. The third search step was an examination of additional search databases. This included searching grey literature, identifying dissertations and theses via ProQuest Dissertations and Theses Global. Trial registries were also searched to identify ongoing studies or completed but unpublished studies; these included the Australian New Zealand Clinical Trial Registry (www.anzctr.org.au) and www.ClinicalTrials.gov. The first 20 pages of Google were also searched. See Online Resource 3 for a complete grey literature search strategy. Finally, to ensure a comprehensive search was conducted, reference lists of all eligible studies and pertinent systematic reviews were manually searched to identify further studies that met inclusion criteria. Authors were not contacted for missing data. This is the same search strategy used for the first part of this study series, focusing on socioemotional outcomes of digital mental health interventions.

Study Screening and Selection
All records were imported into EndNote (2020), where duplicates were removed. Remaining studies were imported into Covidence (Veritas Health Innovation, 2020) and were screened at title and abstract level by three reviewers (JO, AV, HK). Studies were then screened at full-text level. At both title and abstract, and full-text, 75% of records were double screened.

Data Extraction
Data extraction was completed by three independent reviewers (JO, AV, HK), with disagreements resolved through conferencing. Data from each full-text article was charted by one reviewer and checked by a second independent reviewer. Data was extracted into a priori standardized data extraction forms, consistent with Tables 3 and 4.

Quality Assessment
All studies were appraised using the Quality Assessment Tool for Quantitative Studies (EPHPP, 2010). Quality appraisal checklist response options were 'yes,' 'no,' 'unclear,' or 'not applicable.' Grey literature was critically assessed using the Authority, Accuracy, Coverage, Objectivity, Date, and Significance (AACODS) checklist (Tyndall, 2010). Studies were subsequently grouped into low risk (> 75% of quality criteria met), moderate risk (> 50% of quality criteria met), or high risk of bias (< 50% of quality criteria met). An a priori decision was made not to exclude studies based on quality. One author assessed study quality for all the papers, and a second author independently assessed the study quality of 25% of the papers (inter-rater reliability = 75% agreement). All disagreements were resolved through conferencing.
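The grouping rule above is simple thresholding on the share of quality criteria met. A minimal sketch of that arithmetic, assuming the moderate band covers shares above 50% and up to 75% (the exact boundary handling is our assumption, as the review states only the approximate cut-offs):

```python
def risk_of_bias(criteria_met: int, total_criteria: int) -> str:
    """Classify a study's risk of bias from the share of quality
    criteria it met, following the review's stated cut-offs:
    low risk > 75%, moderate risk > 50%, high risk < 50%.
    Boundary cases (exactly 75% or 50%) are an assumed interpretation."""
    share = criteria_met / total_criteria
    if share > 0.75:
        return "low risk"
    elif share > 0.50:
        return "moderate risk"
    else:
        return "high risk"

# Hypothetical studies assessed against 10 quality criteria:
print(risk_of_bias(8, 10))   # → low risk
print(risk_of_bias(6, 10))   # → moderate risk
print(risk_of_bias(4, 10))   # → high risk
```

Keeping the thresholds in one place like this makes the (otherwise implicit) boundary decisions explicit and reproducible across raters.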

Synthesis
Data were extracted from each study relating to the included population, the intervention, and the intervention user experience elements reported on. To identify socioemotional outcome efficacy and user experience outcomes, we collated and categorized the extracted intervention characteristics and outcomes into a finite set of top-level elements to facilitate synthesis (Morville, 2014). Due to data heterogeneity, a meta-analysis was not feasible; results were instead collated and tabulated following categorization, and reported narratively.

Intervention User Experience Outcomes
As recommended by Morville (2014), we aimed to categorize the findings into seven user experience quality factors or measures: useful, usable, findable, credible, desirable, accessible, and valuable, as shown in Table 2. Considering the substantial heterogeneity in the reporting of user experiences across studies, mapping the results extracted from each study to this well-defined set of factors enabled synthesis. However, several of these user experience elements were excluded due to lack of data. Specifically, no study reported on the findable element, and very limited data were reported on the desirable and accessible elements. We also reported on user experience sub-elements of these factors. Table 3 provides population and intervention information for each included study, grouped by delivery method, and Table 4 provides a summary of extracted user experience assessments from each study. Each user experience element extracted from a study was identified as either positive or negative. This was achieved by using statistical data present in the study if its directionality was apparent (for example, "93% of participants indicated that the intervention was easy to use"). In other cases, the authors' interpretation of collected results and comparison to provided baselines was used (for example, "the measured rate of intervention acceptance was higher than reference interventions").

Study Selection
The systematic literature search yielded 22,482 records (after removal of duplicates), of which 22,450 records were excluded at title/abstract (n = 21,817) and full-text (n = 633) level. Double-screening at title and abstract resulted in inter-rater reliability (IRR) for published literature of 96% (κ = 0.43) and unpublished literature of 98% (κ = 0.45). At full-text screening, IRR was 98% (κ = 0.74) for published literature and 92.31% (κ = 0.75) for unpublished literature. A total of 31 quantitative primary studies were included in the present review (Part I and Part II). However, only 22 studies reported on user experience outcomes; hence, this review focuses only on those studies. A more detailed explanation of the results of these studies is provided in Opie et al. (2024a, 2024b; this Special Issue). Figure 1 details the results at each stage of study selection and reasons for exclusion.
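The screening statistics above illustrate a well-known property of Cohen's κ: raw percent agreement can be very high (96–98%) while κ remains modest (0.43–0.45), because most records are excluded at title/abstract and chance agreement is therefore high. A minimal sketch of the computation, using hypothetical screening decisions rather than the review's actual data:

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where
    p_o is observed agreement and p_e is chance agreement implied by
    each rater's marginal decision rates."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of records where both raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from the raters' marginal rates.
    labels = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include (1) / exclude (0) decisions on eight records:
a = [1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 0, 0, 0, 1, 0, 1, 1]
print(cohen_kappa(a, b))  # → 0.5 (while raw agreement is 75%)
```

When one decision dominates (as exclusion does in large title/abstract screens), p_e approaches 1 and κ is deflated even for raters who almost always agree, which is consistent with the pattern reported here.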

Study Quality Assessment
Overall, the quality of included published studies was moderate (n = 12, 57%), with some of high quality (n = 5, 24%) and the remainder of low quality (n = 4, 19%). The quality of the included grey literature (n = 1; Wahlund, 2022) was weak (i.e., high risk of bias). See Online Resource 4 for a visual and tabular representation of study quality.

Study Characteristics
Table 3 provides a detailed description of included studies. Most studies were published (n = 21), and one was an unpublished dissertation (Wahlund, 2022). Study year ranged from 2019 to 2023, with a steady increase in the number of studies published per year.
All included studies reported on pre-post intervention outcomes, with nine studies including additional follow-up assessments. Included studies predominantly followed an RCT study design (n = 12, 55%), with seven single pre-post experimental studies (32%). Ten (45%) of the studies included a single comparison group (active = 5; inactive = 5), while five studies (23%) included two or more comparison groups comprising inactive and active controls.
Two studies reported on diverse populations. Schueller et al. (2019) included a sample of young people experiencing homelessness who were gender diverse or questioning. The intervention sample in Radovic et al. (2021) unintentionally included approximately one third (n = 6/20) of individuals who did not identify as male or female. Of the 22 studies, most were from the United States (n = 5, 23%), Canada (n = 3, 14%), and the Netherlands (n = 3, 14%). Two studies each were from Australia, China, Germany, Sweden, and the United Kingdom (9%, respectively), while one study was from Indonesia (5%).
of the intervention was 1.60 (range: 1–6, n = 10). Of the 22 studies, one study (Harra & Vargas, 2023; 5%) reported on a guided intervention providing solely human support, while 21 (95%) reported on partially guided interventions that included a combination of human support and self-guided program elements. It was beyond the scope of this review to report on entirely self-guided digital programs. Technology delivery mode was mixed: 10 interventions were web-based, three were mobile app-based (Ravaccia et al., 2022; Schueller et al., 2019; Sit et al., 2022; Sun et al., 2022), one was via telehealth (i.e., Zoom/videoconferencing software; Harra & Vargas, 2023), and eight were delivered via a combination of delivery methods.
Human guidance was provided via asynchronous methods in 11 studies, and via synchronous contact only in one study. A further 10 studies provided human guidance via a combination of asynchronous and synchronous methods. Mental health professionals were the primary providers of guided intervention content (n = 8, 38%), followed by clinicians and psychology students together (n = 5, 24%), and researchers (n = 1, 5%; O'Connor et al., 2020). Peers were the sole human support for three interventions (Harra & Vargas, 2023; Klimczak et al., 2023; Rodriguez et al., 2021). Together, peers and clinicians delivered guidance on two interventions (Rice et al., 2020; van Doorn et al., 2022), while researchers and students together delivered one intervention (Karyotaki et al., 2022). Paraprofessionals provided guidance on one intervention (Radomski et al., 2020), while clinical psychology students provided guidance in another (Garnefski & Kraaij, 2023).
Eleven studies were delivered solely via asynchronous guidance, while 10 had both asynchronous and synchronous guidance. Only one intervention was delivered solely synchronously (Harra & Vargas, 2023). Due to limited data, we reported on effectiveness findings of solely and partially asynchronously guided interventions at the aggregate level.

Personalization
Ten interventions provided some degree of personalized messages or individually tailored content. Interventions were individually tailored according to users' responses to interactive activities (e.g., pre-intervention survey, multiple choice questions, short writing activities, sorting tasks; Klimczak et al., 2023; O'Connor et al., 2020; Peynenburg et al., 2022; Rice et al., 2020) or users' needs and goals (Karyotaki et al., 2022; Rice et al., 2020; van Doorn et al., 2022). Regarding timing and frequency, personalized written feedback was provided within 2 days after session completion (Juniar et al., 2022) or on a weekly basis (Stapinski et al., 2021). In one study (Cook et al., 2019), clinicians sent personalized reminder emails if there was inactivity for more than a week, while another app allowed users to adjust the frequency and the type of notifications received (van Doorn et al., 2022).

Validated user experience measures were employed in some studies (Hennemann et al., 2022; Küchler et al., 2023), including the Health Information Technology Usability Evaluation Model (Health-ITUES; van Doorn et al., 2022). Unvalidated measures were also employed in 12 studies. In order to better draw conclusions and synthesize the various heterogeneous measures reported across studies, the reported measures from each study were mapped to the standardized user experience elements presented in Table 2 (useful, useable, findable, credible, desirable, accessible, and valuable). As shown in Table 5, most user experience measures related to usability, satisfaction, acceptance, and helpfulness (Juniar et al., 2022; Radomski et al., 2020; Rodriguez et al., 2021; Wahlund, 2022). However, no study reported on intervention findability, and limited information was reported on desirability and accessibility-related user experience factors (Juniar et al., 2022; Küchler et al., 2023; O'Connor et al., 2020; Radomski et al., 2020; Rice et al., 2020).

Intervention User Experience Outcomes
Below, we report on intervention elements or identified factors that were common to interventions reporting positive user experience outcomes (e.g., statistically significant improvements).

Completion

Grudin et al. (2022); Eight adolescents (73%) and eight parents (73%) in therapist-guided I-BA, and three adolescents (30%) and four parents (40%) in self-guided I-BA, had completed all eight chapters by the end of treatment. Karyotaki et al. (2022); Participants completed approximately half (55%) of the main 7 sessions of the iCBT intervention.

Engagement
Web or app-based program. Ravaccia et al. (2022); At T1, 50% of youth had just started and 37% had been using MeToo for ≥ 1 month.
At T2, 54% had been using MeToo for ≥ 1 month and 31% had just started. Rice et al. (2020); There were 1583 total individual system logins from participants (M(sample) = 17.8; M(male) = 19.9). There was high participant usage of Steps modules, with 1534 completed in total (M(sample) = 17.2; M(male) = 14.4), and an average of 4.2 Actions completed per user (M(male) = 3.9). The Talking Point feature also received substantial engagement, with 80 contributions to these discussions from participants.
As shown in Table 5, DMHIs were generally found to be more useful and usable when they were app-based, included automated notifications, and incorporated interactive components, and less so when using static web-based content or social media components. Usability was also increased when programs included telehealth calls as part of a combination-delivered approach (asynchronous and synchronous), included short modules (30 min or less), and did not require the use of multiple devices. User impressions of program credibility were also shown to be improved by the inclusion of telehealth consultations and reduced by the inclusion of social media components. Finally, a strong negative signal was observed in user-reported desirability due to the inclusion of homework and log-keeping elements.

Delivery Method
A small number of studies delivered content via a mobile app (Ravaccia et al., 2022; Schueller et al., 2019; Sit et al., 2022), and others received feedback from participants that mobile app delivery would be favorable over web-based delivery (van Doorn et al., 2022). Static online content was associated with a negative user experience (O'Connor et al., 2020; Radomski et al., 2020) when compared with didactic online learning modules. Elements that allowed participants to engage with their peers or other intervention participants (peer counseling and prompted group discussions) were also associated with positive user experiences, with participants reporting a greater sense of engagement and social connectedness (Harra & Vargas, 2023; Rice et al., 2020). Finally, participants of interventions that involved homework components or log/diary-keeping components commonly reported these aspects as undesirable (Karyotaki et al., 2022; Klimczak et al., 2023; Küchler et al., 2023; Radomski et al., 2020; Schueller et al., 2019).

Asynchronous Guided and Partially Guided Interventions
See Table 6 for a breakdown of effective and poor or not-yet-established effectiveness data for asynchronously guided interventions. Table 7 details user experience outcomes reported for each study, aggregated by level of guidance and delivery method.
Among asynchronous interventions, all interventions associated with high user engagement provided their users with reminders via emails or text messages after a period of delayed engagement or inactivity (> 1 week: Cook et al., 2019; Küchler et al., 2023; Rodriguez et al., 2021; five, ten, or 20 days: Hennemann et al., 2022). Furthermore, participants reported that regular reminders (i.e., on a weekly basis) were helpful (Hennemann et al., 2022; Peynenburg et al., 2022; Radomski et al., 2020) and associated with significantly greater module completion than interventions that offered irregular reminders (Hennemann et al., 2022). Radovic et al. (2021) found that asynchronously delivered interventions without regular reminders resulted in attrition.
Positive user experience outcomes were associated with asynchronously delivered interventions that provided motivational and encouraging written feedback (Cook et al., 2019; Karyotaki et al., 2022; Küchler et al., 2023) and personalized or individually tailored messages of support from mental health professionals (Hennemann et al., 2022; Peynenburg et al., 2022; Rice et al., 2020). Similar positive user experiences were linked to receiving timely written feedback, within 24 to 48 hours after module completion (Cook et al., 2019; Juniar et al., 2022; Karyotaki et al., 2022; Küchler et al., 2023; Wahlund, 2022), and automated weekly emails or texts with personalized recommendations (Stapinski et al., 2021). Furthermore, participants reported positive experiences when coaches regularly called to monitor their progress and used motivational interviewing to promote continued participation (Garnefski & Kraaij, 2023). Positive user experiences were also tied to interventions where clinicians adhered to standardized manuals or templates for providing written feedback (Cook et al., 2019; Juniar et al., 2022; Karyotaki et al., 2022; Küchler et al., 2023).

Intervention Session Number and Associated Outcomes
See Table 8 for intervention session numbers and associated outcomes.

Discussion
This systematic review sought to identify and examine the available published and unpublished literature focusing on user experience of contemporary, youth-specific digital mental health interventions (DMHIs) targeting young people with emerging mental health symptoms (i.e., indicated prevention). Emphasis was placed on brief DMHIs that are guided, in full or in part, by human support personnel (e.g., peer, clinician). Findings from the present study indicate that contemporary, technology-aided content delivery methods intended for indicated youth, and that provide guided or partially guided support, are beneficial. Results highlighted that a positive user experience was associated with greater integration of these modern delivery methods. We also found that engagement with either peers or other intervention participants through peer counseling and prompted group discussions was associated with positive user experiences, with participants reporting a greater sense of engagement and social connectedness following DMHI participation. This is in contrast to social media integration, which was shown to negatively impact user experience. Homework or log/diary-keeping components were also often reported as undesirable by intervention participants and associated with negative experiences. Notably, homework or log/diary-keeping activities were similarly associated with negative socioemotional impacts of DMHIs (Opie et al., 2024b; this issue). It was additionally found that guided interventions showed high satisfaction rates, whether the guidance was synchronous, asynchronous, or a mixture. However, disliked elements or areas requiring improvement were typically not explicitly reported on in the examined studies. Combined synchronous and asynchronous interventions were found to have higher completion rates than solely asynchronously guided interventions, with adherence rates varying depending on the delivery method used.
Consistent with prior reviews (Garrido et al., 2019; Zhou et al., 2021), we identified that web-based interventions were the most frequent delivery method, with 48% of all interventions using this delivery mode. This suggests that diversified digital delivery methods could be drawn upon to a greater degree, which may serve to enhance user experience outcomes and broaden reach.
We found that peer engagement enhanced user experience, in line with prior research on older cohorts (Riadi et al., 2022; Saleem et al., 2021). A strong preference for peer interaction has similarly been observed in another systematic review of guided and unguided DMHIs in young people (Garrido et al., 2019). Despite this, peer engagement is currently an underutilized resource in DMHIs (Naslund et al., 2020; Suresh et al., 2021). Peer engagement could serve as a first point of engagement before clinical contact, with benefits including problem normalization, reduced power structures, cost-effectiveness, and accessibility (McGorry et al., 2022). Because those working in peer support roles have typically reached a degree of recovery and maintenance during life stages and experiences similar to those of potential participants (Suresh et al., 2021), peer support has been shown to enhance client motivation and empowerment (Fortuna et al., 2019). In the present study, we identified that positive user experience outcomes were associated with interventions that provided motivational and encouraging written feedback and personalized or individually tailored messages of support from a mental health professional, supporting the findings of a prior review (Liverpool et al., 2020). Similarly, positive user experiences were associated with timely written feedback, delivered within 24 to 48 h of module completion, and automated weekly emails providing personalized suggestions. Effective asynchronous interventions with high user engagement also provided users with email or text message reminders after a period of delayed engagement or inactivity. Furthermore, regular (i.e., weekly) reminders were found to be effective and associated with significantly greater module completion than irregular reminders (Hennemann et al., 2022). The importance of reminders was further implied in another study showing that interventions without regular reminders resulted in users simply forgetting to access the intervention (Radovic et al., 2021). Although not youth-specific, this is consistent with a prior systematic review in which guided DMHIs providing automated reminders were associated with enhanced user engagement (Borghouts et al., 2021).
In line with other DMHI reviews (Liverpool et al., 2020; Struthers et al., 2015), completion and adherence rates varied by delivery method. DMHIs had high attrition rates overall, and attrition varied according to digital delivery method: combination-delivered studies demonstrated the lowest attrition (26.83%), followed by web-based (28%), telehealth-based (29%, n = 1; Harra & Vargas, 2023), and app-based interventions (54.67%). Notably, app-based interventions had the highest attrition despite being viewed most positively by youth. These findings align with a previous meta-analysis by Garrido et al. (2019), which reported that drop-out rates exceeding 20% are frequently observed. Importantly, these rates should be considered together with program reach and accessibility: for interventions aiming to reach a large number of young people, app-based delivery may achieve greater uptake at the expense of greater attrition.
Study findings suggest that investment in contemporary modes of delivery is important for usability and acceptance among young people. This includes enabling participants to access and engage with content, support, and community through their mobile device, using social media accounts and comments sections as onboarding and engagement points, rather than solely through the web (45%, n = 10). This would also allow for additional interactive, rather than static, content, and for the personalization of delivered content and delivery mode based on user interactions. However, social media integration will need to be approached thoughtfully to overcome the challenges it presents for user acceptability and credibility, as shown in Table 5. The ever-increasing importance of social media in young people's lives makes the integration of mental health support into these forums, and the resolution of these challenges, a priority.
With online social networks commonly integrated into daily life, there are both opportunities and constraints in using familiar social media patterns within mental health interventions. Early feedback suggests that using existing social media platforms may not be desired by participants, owing to privacy concerns and the social stigma surrounding mental illness. However, establishing within-intervention online communities is likely to support engagement and positive outcomes, and also provides a mechanism for long-term support without clinical burden.
There are a number of trade-offs between improving user experience and optimizing the socioemotional outcomes of interventions. Of note, asynchronous guidance was associated with high user satisfaction despite commonly appearing in interventions demonstrating fewer positive outcomes for depression (Opie et al., 2024b, this issue). It will be important to strike the right balance in creating a DMHI that is effective, feasible, and palatable. Similarly, in the present study, app-based content delivery and communication were strongly preferred among the youth cohort despite attrition rates for app-based delivery being higher than the alternatives, at 54.67%. Given the importance of both socioemotional outcomes and user experience (including adherence and uptake), intervention designers will need to weigh such trade-offs carefully.

Strengths and Limitations
While the current review has multiple strengths, including a comprehensive search strategy, only articles published in English were included, which may have omitted some important studies. Moreover, half (n = 11) of the included studies recruited participants solely from university students with prodromal mental health concerns. This raises questions about generalizability, given the differing lived experiences of many youth sub-populations, who often experience mental illness at higher rates than the general aggregated youth population (Cook et al., 2019; Klimczak et al., 2023; Sit et al., 2022). Considering and validating the unique experiences of broader groups may yield better user experience outcomes, such as engagement, adherence, safety, and acceptability. A further limitation is that the current review may have missed some relevant DMHI research: the inclusion criteria required studies to report both a user experience outcome and a socioemotional outcome, so studies focusing solely on user experience without addressing socioemotional outcomes may have been excluded.

Future Research
In the future development of guided DMHIs, the principle of user-centered design is key. This requires the inclusion of input from consumers, carers, and/or intervention recommenders (e.g., mental health professionals) throughout all phases of DMHI development. Further research should focus on improving existing DMHIs by adding a peer engagement component, as peer support remains an underutilized resource that could serve as a first point of engagement before clinical contact, with benefits including problem normalization, reduced power structures, cost-effectiveness, and accessibility. Further research is also required to examine differences in user experience based on module number or DMHI length. Similarly, as there is minimal research on single-episode interventions, we recommend exploring single-session DMHIs given their low-cost and efficient nature. The present review identified web-based programs as the most common intervention platform; however, there was a preference for phone-based app programs (e.g., van Doorn et al., 2022). Accordingly, future research and development projects would ideally update the formatting of computer-only interventions to be smartphone-friendly, to better suit user lifestyles and reduce barriers to engagement. When moving from computer-based to phone-based apps, program construction should be informed by data on app usage, youth preferences and patterns, and the social media engagement of target populations.
Further DMHI research is also required to assess the utility of current interventions for diverse populations, including culturally and linguistically diverse communities, diverse socioeconomic groups, and those based in rural or regional locations. A lack of diversity in study populations limits the generalizability of interventions, highlighting the critical need to tailor programs to diverse populations to account for their unique experiences and meet their unique needs. There are clear constraints to methods developed and tested with predominantly white, female university students, particularly with respect to findability and engagement factors for the high-risk populations most in need of these interventions. Further, modification of existing interventions, or the development of dedicated digital mental health interventions for diverse populations, is required to enhance factors such as engagement, use, relevance, and trust. Once developed, these will require assessments of efficacy.

Implications and Translation
As for the number of sessions, it was difficult to draw conclusions regarding user experience and the number of sessions required for an efficacious intervention. While the most common number of therapy sessions a client will attend is one (Young et al., 2012), we did not identify a brief intervention with fewer than three sessions. This highlights the underexplored potential of evidence-based, single-session or very brief digital mental health interventions for youth, and represents a notable gap in the literature. Interventions should be data-driven and consumer-informed to enhance program uptake and engagement, which in turn will likely enhance clinical efficacy outcomes. Adjunctively, research tells us that, on average, 75% of those who end therapy after a single session are satisfied with that one session (Barbara-May et al., 2018; Josling & Cait, 2018; Söderquist, 2018), with these results observed internationally in Australia, Canada, and Sweden. Importantly, we must hold in mind that these findings do not pertain to an online therapy context; to date, we do not have comparable data for the online therapy setting.

Conclusion
This review highlighted several factors associated with positive user experiences of DMHIs, including engagement with peers; adoption of modern, technology-aided content delivery methods; and asynchronous modes of delivery. However, while many contemporary digital modes of delivery hold promise, they also present challenges that need to be thoughtfully addressed. The future of DMHIs lies in incorporating user-centered design, prioritizing the needs and preferences of the target audience, and ensuring wide-reaching applicability by catering to diverse populations.

Table 1
PICOS framework
Population (P): Youth (mean age 12–25 years, inclusive) experiencing non-acute, emerging, mild-to-moderate mental ill-health symptoms, with no existing psychiatric diagnosis (i.e., clinically diagnosed populations were excluded).
Intervention (I): Young adult-specific interventions. In scope were mental health interventions and combination interventions addressing both mental ill-health and alcohol and other drugs (AOD); entirely AOD-focused interventions were excluded. Interventions were required to be evidence-based or evidence-informed and developed by a mental health expert. Intervention duration was brief, defined as 1 to 12 sessions delivered over 0 to 12 months. Interventions were standardized and manualized (solely or partially); digitally delivered by any digital delivery method; and individually delivered. Intervention delivery could be: 1. combination delivery (partially guided and partially self-guided) or 2. entirely guided. Guided delivery could be synchronous or asynchronous, and guidance could include support from a clinician, researcher, expert by experience, or a mix of experts. There were no theoretical framework parameters around included interventions.
Comparison (C): Studies containing within-group data (i.e., examining differences among subjects in the same group) and between-group data (i.e., assessing differences between two or more groups) were included. For studies with between-group data, the comparison group could be any of the following: placebo, non-intervened control, a group receiving an equivalent in-person program, or any other varied intervention.
Outcome (O)

Table 3
Study characteristics

Table 3
(continued) a Unpublished thesis ACT Acceptance Commitment Therapy, Active control Alternative intervention received, App Application, AR Attrition Rate, Async Asynchronous, Auto Automated, Biofeed Biofeedback, Biwkly Biweekly, CBT Cognitive Behavioral Therapy, Exp Experimental, F Female, GoD Guidance on Demand, i-BA Internet-based Behavioral Activation, iCBT Internet-Based Cognitive Behavioral Therapy, Inactive control No intervention received, Incl Includes/Including, iRFCBT Internet-based Rumination-Focused Cognitive Behavioral Therapy, IU Intolerance of Uncertainty, M Mean, Min/s Minute/s, MI Motivational Interviewing, N Sample size, n subsample size, NR Not Reported, Pos Psych Positive Psychology, RCT Randomized Controlled Trial, RFCBT Rumination-Focused Cognitive Behavioral Therapy, Sync Synchronous, TAU Treatment As Usual, Wk week, Wkly Weekly, W/ With, W/n Within, Gray shading-Comparator not included in study

Table 4
When users reported the digital mental health intervention's ability to produce a desired or intended result, efficacy is marked yes. When users reported achieving maximum productivity with minimum wasted effort, efficient is marked yes. * Comparison data included. Of the studies included, only 23% (n = 5) reported on gender-diverse communities (e.g., non-binary) and/or sexual orientation. No study focused specifically on under-resourced communities or socioeconomic status.

Table 5
Common intervention elements and associated user experience outcomes

Table 6
Effectiveness based on number of sessions.