Introduction

Adolescents and young adults (AYA) remain a key priority population for the achievement of global HIV targets. Research over the past decade has highlighted significantly poorer clinical outcomes across HIV testing, linkage to care, treatment initiation, and viral suppression among AYA compared with adult populations [1,2,3]. In addition, HIV incidence among AYA remains high, especially among adolescent girls and young women (AGYW), with an estimated 5000 new infections occurring among AGYW each week [4]. Although interventions to improve HIV outcomes among AYA exist, the majority have yet to be scaled up and implemented programmatically. To reach global HIV targets for AYA, it is critical to identify and address unique gaps in the translation and scale-up of evidence-based interventions (EBIs) among this key population. Critical gap areas for this population include adherence and retention, transitional care from pediatric to adult services, integration of mental health and sexual and reproductive health services into HIV services, and prevention of new infections [5].

Implementation science (IS) uses systematic methods to close the know-do gap between research and clinical practice by identifying and addressing barriers to the implementation of EBIs. To accelerate progress towards the UNAIDS 95-95-95 goals, global focus has shifted to IS to reach the most vulnerable populations and to sustain changes made to optimize HIV clinical outcomes [6, 7]. IS methods can address critical gaps, particularly for children and adolescents, in whom evidence is largely lacking and predominantly extrapolated from adult studies [8]. While this approach has enabled faster implementation of EBIs for this marginalized population, it may result in less effective implementation if there is inflexibility to adapt to the unique needs of the population [8]. By identifying the processes used in implementation and measuring contextual factors influencing implementation, IS provides insight into the heterogeneity observed in implementation of EBIs across varied settings and helps identify how to optimize and adapt EBIs for maximum impact.

The emergent field of IS has wide variation in how measures are defined, applied, and studied [9]. Frameworks provide a way to harmonize the use of IS measures and compare IS outcomes across a wide range of settings and populations. Using consistent approaches to measure and evaluate implementation processes, and contextual influences on implementation of EBIs, could be especially valuable for AYA, where rapid translation of research to clinical practice has the potential to significantly improve health for a future generation. In addition, IS data collection tools have largely been qualitative, with only a few quantitative tools validated in resource-limited settings [10, 11]. Given the global distribution of the epidemic, understanding how IS concepts are applied in AYA HIV research, and how IS measures, outcomes, and determinants are adapted for low- and middle-income country (LMIC) settings, is a key strategy for understanding how to end the HIV epidemic. Harmonizing IS measures across studies and settings, developing reliable and valid ways of assessing IS measures, and identifying when and how specific measures are selected are critical to support innovation in the field of IS and to define areas of focus for future AYA research. In this paper, we review ongoing AYA implementation research in the Adolescent HIV Prevention and Treatment Implementation Science Alliance (AHISA) network to identify IS measures, frameworks, and outcomes used across the network and to determine gaps in methodology and rigor.

Methods

Study Context

In 2017, to catalyze IS research within the field of adolescent HIV, the NIH convened the Adolescent HIV Prevention and Treatment Implementation Science Alliance (AHISA), a collaboration in which researchers, program implementers, and policymakers share experiences and exchange ideas to facilitate effective implementation of EBIs in the sub-Saharan African context [12]. Principal and co-investigators of funded projects (study teams) were eligible to apply for AHISA membership if their research included evaluation of one or more domains within the HIV care continuum and focused on AYA in Africa. AHISA is currently composed of 26 study teams conducting one or more research studies in 11 countries in Africa, including 5 countries with the highest prevalence of adolescent HIV globally (South Africa, Nigeria, Kenya, Uganda, Tanzania) [13].

Study Design & Data Collection

This review aimed to summarize ongoing studies conducted by AHISA members; characterize the implementation and clinical outcomes measured and the EBIs and implementation strategies tested; and identify gaps in the scientific agenda of IS for AYA across the HIV prevention and care cascades. We presented the review’s aim and purpose to all AHISA member study teams during the 5th Annual AHISA Meeting (February 11–12, 2021). We requested study protocols and protocol manuscripts via email from the PIs of all 26 study teams. Each AHISA study team provided between 1 and 3 study protocols for review.

Analysis

ATLAS.ti version 9 (Scientific Software Development GmbH) supported coding and analysis of submitted study documents. Codes were developed by the authors to extract information related to study context (study design, population, geographical setting), EBIs and clinical or efficacy/effectiveness outcomes assessed, implementation strategies tested, and implementation outcomes and/or determinants measured. We utilized Proctor’s Implementation Outcome Framework (IOF) [14] and the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework [15] to define and classify outcomes (Table 1). IS outcomes were first identified if explicitly named in study documents. These outcomes were reviewed by manuscript co-authors for consistent interpretation between studies and re-categorized as needed to match the definitions in Table 1. Additional IS outcomes described in study documents, but not explicitly named, were also categorized by co-authors using the IOF and RE-AIM definitions in Table 1. All study populations that included any age bands between 10 and 24 years of age were grouped as AGYW if defined as female gender only, youth living with HIV (YLH) if living with HIV, or youth if they included populations both living with and without HIV. Those that included adolescents (ages 10–19) only were classified as adolescents living with HIV (ALH). Where possible, we mapped implementation strategies to the Expert Recommendations for Implementing Change (ERIC) taxonomy [16].
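As an illustrative sketch only, the population-grouping rule described above can be expressed as a simple decision function. The function name, parameters, and their types are hypothetical, not the authors' actual codebook; the categorization in the review was assigned manually by the coding team.

```python
def classify_population(max_age, female_only, living_with_hiv,
                        includes_hiv_negative):
    """Hypothetical sketch of the grouping rule from the Analysis section.

    Applies to study populations whose age bands fall within 10-24 years.
    """
    # Populations defined as female gender only are grouped as AGYW.
    if female_only:
        return "AGYW"
    # Adolescent-only populations (ages 10-19) living with HIV -> ALH.
    if max_age <= 19 and living_with_hiv:
        return "ALH"
    # Populations living with HIV, with no HIV-negative members -> YLH.
    if living_with_hiv and not includes_hiv_negative:
        return "YLH"
    # Mixed populations living with and without HIV -> youth.
    return "youth"
```

For example, a 14–25-year-old population living with HIV would be grouped as YLH, while the same population restricted to ages 10–19 would be classified as ALH.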

Table 1 Implementation science outcome definitions and examples from AHISA network studies

The coding team included four co-authors of this manuscript (KBS, SD, TC, IN) and two acknowledged researchers (SV, RS), who each participated in independent coding and code review. Each study document was independently coded by one author, and coded documents were reviewed by a second author. Disagreements were resolved through group discussion. Data were summarized using queries and code co-occurrence tables and presented in summary tables. Draft summary tables were reviewed by three manuscript authors (IN, ADW, KBS) to ensure internal consistency in categorization across studies. Extracted and summarized data were returned to individual AHISA teams for review and verification of accuracy and completeness. In cases where terminology differed between the study protocols and the review team’s conceptualization (e.g., defining an EBI versus a strategy), the review team maintained its classification for internal consistency.

Ethics

This study did not involve human subject data and was exempt from IRB research oversight.

Results

This review focused on implementation outcomes, frameworks, and strategies applied to AYA HIV prevention and care among AHISA-affiliated studies. All 26 AHISA member study teams submitted one or more study protocols or protocol manuscripts, representing a total of 36 research studies. Studies represented ten countries: South Africa (10 studies [28%]), Kenya (6 [17%]), Zambia (3 [8%]), Tanzania (3 [8%]), Zimbabwe (3 [8%]), Nigeria (3 [8%]), and Uganda, Ghana, Botswana, and Malawi (1 [3%] each). Four studies (11%) took place in multiple locations (Kenya and Canada, South Africa and Kenya, Malawi and South Africa, Kenya and Uganda). Fifteen studies (42%) focused on HIV prevention, 12 (33%) on HIV treatment, 3 (8%) on HIV testing, 3 (8%) on HIV treatment/testing/prevention, 2 (6%) on treatment/testing, and 1 (3%) on treatment/prevention. Of the 12 studies focused on HIV treatment, 4 addressed transition to adult care, 2 adherence alone, 3 adherence and retention, 2 mental health, and 1 morbidity. Supplementary Table 1 summarizes strategies and outcomes across the HIV continuum of care.

Study Designs

Randomized designs were most common: 12 (33%) cluster randomized clinical trials (RCTs), 9 (25%) individually randomized RCTs, 4 (11%) stepped-wedge RCTs, 1 (3%) 2 × 2 factorial RCT, 1 (3%) that included both stepped-wedge and individually randomized designs, and 1 (3%) described as a cluster RCT with individual randomization within clusters. There were also 6 (17%) cohort studies, 1 (3%) described as observational, and 1 (3%) exclusively qualitative research design. Overall, 8 (22%) were defined as pilot studies.

Aligned with the broad research emphasis of AHISA, studies focused on a range of AYA populations: 13 included AGYW, 17 included youth, and 12 included YLH, either alone or in combination with caregivers and health providers, and 1 each included antenatal mothers, HIV-negative male youth, and health care workers (Table 2). YLH-defined populations spanned a range of age groups; the most common (12 [33%]) were 14–25 years, while 8 studies (22%) included only youth ≤ 19 years of age.

Table 2 Study Descriptions of AHISA-affiliated studies

Evidence-Based Interventions

There was diversity in the types of EBIs delivered across the 36 studies. Broadly, these were classified into 14 (39%) studies delivering medications (PrEP and ART), 13 (36%) delivering behavioral or social interventions, 9 (25%) delivering clinical services beyond medication, 4 (11%) delivering health systems toolkits, and 2 (5%) providing economic support. A few studies used a combination of EBIs as their intervention. These studies combined EBIs across categories, including 3 that combined medication and clinical services and 1 that combined behavioral or social interventions with economic support. Other studies evaluating combined EBIs integrated multiple EBIs from the same category (e.g., behavioral/social EBIs) into a single multicomponent EBI approach.

These multicomponent EBI approaches are useful for strengthening the effect of a therapy on a single health outcome or for broadening the number of health outcomes targeted in the EBI package. For example, in the Sauti ya Vijana pilot [26, 27] and scale-up study [28], a multicomponent behavioral/social EBI included components of trauma-informed cognitive behavioral therapy, interpersonal psychotherapy, and motivational interviewing, each a distinct mental health therapy EBI focused on achieving specific mental health outcomes. In another multicomponent approach, the Thetha Nami study [18] delivered a combined clinical services and medication EBI, including universal test and treat, family planning, and pre-exposure prophylaxis, to reach a broader range of HIV and sexual and reproductive health outcomes.

Implementation Outcomes, Determinants, and Frameworks

Implementation outcomes were defined using Proctor’s IOF and RE-AIM. The definitions, and example quotes showing how each outcome was operationalized within study protocols, are summarized in Table 1. All 36 studies measured at least one implementation outcome. The most commonly measured outcomes were acceptability (n = 29), reach (n = 17), feasibility (n = 16), cost (n = 16), fidelity (n = 15), and implementation (n = 13) (Table 3). Less commonly measured outcomes included adoption (n = 9), appropriateness (n = 8), sustainability (n = 6), maintenance (n = 5), and penetration (n = 2) (Table 3). Earlier-phase implementation outcomes (e.g., acceptability, feasibility, appropriateness, adoption) were more common across the studies than later-phase outcomes (e.g., sustainability, maintenance) (Fig. 1). The operationalization of these outcomes was heterogeneous, and there were few instances in which a validated implementation outcome measure was used, or used consistently across studies. Studies that focused on the same aspects of the HIV care continuum assessed IS outcomes at different timepoints, among different stakeholder groups, and using different measurement tools. For example, the InTSHA and ATTACH studies both focused on transition to adult care and measured acceptability. However, the InTSHA study measured acceptability among those receiving the intervention using the Unified Theory of Acceptance and Use of Technology (UTAUT) [31, 32], while the ATTACH study measured acceptability among those delivering the intervention using the Acceptability of Intervention Measure [34].

Table 3 Study Implementation Characteristics of AHISA-affiliated studies
Fig. 1

Characterization of coverage in measurement of IS outcomes and strategies by EBI. AHISA study EBIs were classified into five representative categories. IS outcomes were listed by stage of implementation, ranging from early (acceptability, adoption, appropriateness, reach, feasibility), to mid (fidelity, implementation, penetration, cost), and late (maintenance, sustainability) stages. Presence or absence of specific implementation outcomes and strategies was assessed within each EBI category and organized into a heat map representing the overall evidence available for each implementation measure

All studies measured clinical outcomes or precursors to clinical outcomes alongside implementation outcomes, reflecting a reliance on hybrid effectiveness-implementation trial designs. The clinical outcomes measured aligned closely with the EBIs being tested. Many studies included precursor outcomes that were proximal to the clinical outcomes of interest. For example, the 3P study included PrEP interest and knowledge as precursors to PrEP uptake or adherence [35], and the ATTACH study measured transition readiness as a precursor to successful transition [25] (Table 3).

Fewer than half of the studies (n = 16) assessed determinants of implementation of EBIs, and only a few explored how specific strategies might overcome specific barriers. For example, the 3P study [35], the HIV prevention cascade study [20, 40], POWER PrEP [38], and Tu’Washindi [17] assessed barriers to PrEP at the individual, social, and cultural levels. The InTSHA study [23] focused on assessing how its social media implementation strategy overcame specific barriers and enhanced facilitators to transition care. While not specifically related to determinants, two studies described investigating mechanisms, mediators, and moderators of EBI implementation (Project YES! [22] and SA IMARA [19]) (Table 3).

Only about half (n = 19) of the studies specifically mentioned applying a framework, model, or theory to inform their studies. The most common were RE-AIM (n = 4) [15], Proctor’s IOF (n = 4) [14], the Consolidated Framework for Implementation Research (CFIR) (n = 3) [45], the FRAME (n = 2) [30], used to track adaptation, the Exploration, Preparation, Implementation, Sustainment (EPIS) framework (n = 1), and PRECEDE (n = 1) [42]. Seven studies employed frameworks or theories that were not explicitly implementation science frameworks, including behavioral theories such as the HIV Prevention Cascade framework [36]. Of note, many studies utilized outcomes language from either RE-AIM or Proctor’s IOF without specifically mentioning these frameworks in their protocols (Table 3).

Implementation Strategies

Across AHISA, 26 studies incorporated one or more implementation strategies, and 21 studies developed and tested a strategy. For example, the 3P study developed and tested a conditional financial incentive based on PrEP drug levels to motivate adherence [41], while the iCARE study developed and tested a combined demand-creation and service-provision implementation strategy that included personalized interactive SMS support and peer navigation [46]. Seven studies engaged in adapting an EBI: 3 only adapted an EBI, while 4 both adapted and tested an EBI. Adaptation was more common among the behavioral and social EBIs. For example, the MUHAS study did not test a strategy but did describe adapting the EBI to be delivered while observing COVID-19 prevention measures [47]. In contrast, the ATTACH study engaged in adapting an EBI disclosure toolkit, developing a transition toolkit, and testing the combined package with a strategy of tracking and training tools [25]. Testing strategies was most common among studies delivering medication EBIs (Fig. 1). When mapped to ERIC, implementation strategies predominantly targeted change at the interpersonal level, including provider changes in training (e.g., use of training manuals, tracking sheets, and patient actors for simulation-based training), task shifting (e.g., to peers or lay counselors), and supervision. For studies delivering PrEP, the strategies tested occurred at different levels, including incentives (individual level), video and brochure education (individual level), interactive counseling (interpersonal level), and mobilization and community engagement (community level).

Discussion

This review of AHISA protocols and studies revealed a rich body of implementation science focused on HIV prevention and care interventions for AYA populations in high HIV-burden African countries. Most studies focused on early implementation outcomes of delivering medication, clinical, and behavioral/social EBIs, and all used a hybrid trial approach that included measurement of clinical outcomes. The use of frameworks and assessment of determinants were reasonably common, but fewer studies utilized validated implementation outcome measures. Many studies delivered EBIs in parallel with an implementation strategy, and some experimentally tested strategies. Formal evaluation of mechanisms, moderators, and mediators of EBI implementation was uncommon.

Since the formation of AHISA in 2017, the use of frameworks, measurement of implementation outcomes, and testing of implementation strategies have expanded in NIH’s implementation science portfolio [12]. Facilitating this expansion, study teams in the AHISA collaboration received intensive implementation science training to strengthen current research designs and inform future IS grants. This expanded IS training was reflected in the shared research protocols, with increasing use of IS frameworks in the most recently developed protocols. For example, the Sauti ya Vijana scale-up protocol [28] included the CFIR to evaluate barriers and facilitators to implementation and the FRAME to evaluate intervention adaptations, expanding IS activities beyond those included in the earlier pilot [26, 27]. Additionally, almost all AHISA-related protocols dated 2020–2021 included a formal IS framework (CFIR, RE-AIM, FRAME, Proctor) [23, 28, 46, 48], whereas most protocols dated 2017–2019 did not. This extended use of IS frameworks among AHISA team research projects demonstrates progress towards achieving the AHISA goal of building implementation science capacity among adolescent HIV researchers in high HIV-burden African countries [12]. As implementation of HIV prevention and care interventions for AYA populations continues and moves from early- to mid- to late-stage implementation, we expect the AHISA portfolio to grow to include later-stage implementation outcomes (e.g., sustainability and penetration) in addition to the early implementation outcomes (e.g., acceptability and feasibility) that are common in the current portfolio. Similarly, we expect more studies to shift beyond identifying barriers to implementation and instead focus on testing implementation strategies.
A series of similarly structured reviews of interventions addressing stigma [49], non-communicable diseases [50], and depression [51] in resource-limited settings observed that few studies measured later implementation outcomes, that implementation strategies were seldom specified or tested, and that implementation frameworks were used suboptimally.

In this review, many studies included an implementation strategy, but the strategy was often not described using IS strategy terminology in the protocol. This represents an opportunity to strengthen future research in this area; operationalizing strategies using Proctor’s specification scheme [52] would contribute to the growing evidence linking specific IS strategies to particular outcomes. Additionally, many studies that utilized a strategy did not experimentally test its impact on implementation outcomes (a traditional implementation study) but rather conducted hybrid effectiveness-implementation type I designs, with clinical outcomes as the primary focus and implementation outcomes included secondarily [53]. As the field matures, we expect more research to employ hybrid type II designs (equal focus on clinical and implementation outcomes) and type III designs (primary focus on implementation outcomes with inclusion of clinical outcomes), as well as purely implementation-focused studies. Finally, most of the implementation strategies tested focused on interpersonal-level changes, with the exception of studies focused on PrEP delivery, which included strategies at the individual, interpersonal, and community levels. One gap that could be strategically addressed in future HIV prevention research is testing implementation strategies at higher levels for non-PrEP EBIs. These could include systems-level and community-level strategies, which are well suited to achieving later implementation outcomes such as sustainability and penetration. In a similar review of implementation science applied to PrEP delivery for pregnant and postpartum populations, the authors likewise found a focus on earlier implementation outcomes; they noted that few studies tested implementation strategies and that, among those tested, few were systems-level or higher-level strategies [54].

Adaptation of EBIs was common in the AHISA-affiliated studies. Many interventions required adaptation to a different cadre of provider (often shifting to peers), a new population (e.g., AYA instead of adults), or a new context (shifting from in-person to mobile delivery), and often to settings with fewer resources than those where the EBI was originally developed and tested. Despite adaptation being common, only two studies (Sauti ya Vijana [28] and ATTACH [25]) utilized a published framework, the FRAME [30], to structure documentation of the adaptation process. Most AHISA studies were affected by the COVID-19 pandemic during study implementation, which presented an opportunity to adapt intervention delivery rapidly and creatively to new platforms, such as mobile delivery of the ATTACH and MUHAS interventions [25, 47]. Given the dynamic nature of intervention implementation over time [55] and the need to be responsive to unanticipated circumstances, systematic evaluation of adaptations is critical to understanding intervention optimization within given contexts as AYA research places greater focus on sustainability and scale-up.

Within implementation science, timely methodologic challenges include the development and psychometric validation of implementation measures for contexts outside the US and Canada [10, 11, 56], as well as elucidating implementation strategy mechanisms and identifying moderators and mediators that activate or inhibit those mechanisms [57]. Future implementation science projects in resource-limited settings have an opportunity to advance these scientific and pragmatic areas. Only two studies in this review included mechanism, moderator, and mediator language. Similarly, few studies utilized validated implementation outcome measures, such as the acceptability, appropriateness, and feasibility measures by Weiner et al. [34]. This limited use may be warranted given the current dearth of context-validated measures. For example, one study that formally adapted and assessed the validity of an implementation determinant measure of organizational readiness found that several new domains were required to reflect structural context [33], while a review and application of the CFIR to LMICs revealed the need to add a new domain and new constructs to improve compatibility for use in LMICs [37].

This review is limited in several ways. We only included studies affiliated with AHISA study teams; we undertook neither a systematic review of all AYA HIV IS research nor a structured review of all NIH-funded studies in this area. The findings of this review are therefore not generalizable to the broader arena of AYA HIV IS research. Some of the AHISA studies were designed when there was less discussion about the importance of harmonization, the application of implementation frameworks, the selection and operationalization of implementation outcomes, and the selection and testing of implementation strategies. As a result, much of the categorization of these items was completed by our team and may differ from how study teams might characterize their work. However, we provided study teams the opportunity to check all categorization in this manuscript to ensure accuracy. Additionally, it is a testament to the capacity-building impact of the AHISA program that protocols developed by teams after AHISA-supported IS training incorporated many of these newer practices. Finally, because implementation strategies were not fully specified within protocols, it was not always possible to map strategies comprehensively to an orienting list, such as the ERIC [16].

Conclusion

Current AHISA-supported research delivers diverse EBIs and measures a range of clinical and implementation outcomes. Future studies that address the lack of measurement harmonization across studies and focus on developing and validating implementation measures in heterogeneous contexts could strengthen the foundation of implementation evidence and improve cross-study comparisons. Additional opportunities for advancing the agenda of AYA HIV IS research include expanding the selection, specification, and testing of implementation strategies beyond the individual and interpersonal levels; documenting the motivation for and results of adapting EBIs to new populations and contexts, especially resource-constrained settings; and expanding the scope of inquiry to include identification of mechanisms of action.