Background

Community-based interventions (CBIs) are interventions that may combine different strategies across multiple settings and are aimed at improving the well-being of the target population in a community [1]. These strategies may include education about HIV prevention, promotion of HIV awareness, counseling about risk-reducing behaviors, and promotion of HIV testing and counseling [2]. In HIV prevention, CBIs aim to increase access to medical care for populations identified as at risk of HIV infection, such as intravenous drug users, sex workers, men who have sex with men (MSM), or young people with multiple sexual partners [3,4,5,6,7]. They do so by reaching these individuals in homes, schools, or community centers [2]. For uninfected individuals in sub-Saharan Africa (SSA), testing offers a critical point of contact with healthcare providers for effective HIV prevention strategies; for people living with HIV, testing provides a gateway to diagnosis and treatment [8]. However, implementing these interventions comes at a cost, and SSA nations will need to optimize their limited resources to scale up HIV prevention interventions that are high quality and cost-effective [9, 10].

Although understanding the costs associated with program implementation is critical to the adoption, success, and sustainability of a program [11, 12], little is known about the costs required to implement these community-based interventions [13]. Furthermore, HIV intervention studies infrequently report the itemized costs of the resources used to accomplish the different components of their programs [14,15,16,17,18]. The implementation costs of interventions are contextual because they depend on the complexity of the intervention, the implementation strategy, and the intervention’s geographical and healthcare setting [12]. When cost analyses are reported as a “total cost,” as is common, without a breakdown of its individual components [18,19,20,21], they fail to provide crucial information on the individual factors driving implementation costs [11, 22, 23]. Therefore, such cost studies may have limited application in implementation science because they cannot present a realistic scenario of the programs’ implementation [24].

Micro-costing, or an “ingredients” approach to costing, provides a thorough understanding of the resources required for a project [23, 25]. It is a more transparent and precise approach to economic costing in healthcare because it involves identifying all the resources used in an intervention [23, 25]. This costing approach is recommended for studies focused on the implementation of HIV testing programs conducted in community settings [25]. When HIV prevention program reports include detailed information about costs and outcomes, they present a realistic scenario of how these programs can be implemented in a real-world setting [17, 26,27,28]. Data from detailed cost evaluation reports are critical and relevant to policymakers and other stakeholder groups [11, 12]. Cost information for interventions also facilitates their adaptation in other settings [18, 28]. It also minimizes biases that may lead decision-makers to underestimate the resources required to scale up, sustain, or reproduce successful interventions in other settings [29].

Two previous reviews have explored the implementation costs of HIV testing interventions in SSA: a 2002 systematic review by Creese et al. and a literature review by Hauck et al. [30, 31]. In both reviews, few of the included studies focused on HIV testing, and none used a micro-costing approach [30, 31]. As such, the reviews may have limited application since they did not present a realistic scenario of how these programs were implemented. To address this gap in the literature, this study presents evidence on the costs of implementing HIV testing services in SSA, as well as how the costs of implementing these interventions were analyzed and reported.

Methods

Search strategy

We conducted a systematic review of English-language publications that described the costs of community-based implementation of HIV testing and reported our findings in accordance with the PRISMA checklist [32]. There was no date restriction on the publications. Keyword searches were performed on 2 December 2019 and updated on 26 April 2020 in the following databases: SCOPUS, CINAHL, Web of Science, Global Health, PsycINFO, MEDLINE, and Google Scholar. Keyword selection for cost was guided by the taxonomy of implementation outcomes outlined by Proctor et al. [12]. The search strategy (see S1) was designed to capture studies that evaluated the implementation costs of behavioral interventions: randomized controlled trials and non-randomized controlled trials, pilot studies, or implementations of evidence-based interventions (EBIs) that have a quantitative economic element (i.e., costs and benefits). EBIs are peer-reviewed programs with outcomes that are supported by rigorous empirical evidence of effectiveness [33]. The search terms did not include individual SSA countries. The reference lists of the systematic reviews [15,16,17, 30, 31, 34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79] were checked for relevant studies that may not have been identified by our search. See Table 1 for the keyword search strategy.

Table 1 Search strategy

Screening strategy

The completed search results were downloaded into EndNote X9 for citation management, deduplication, and literature screening. Study titles and abstracts were initially screened by two independent reviewers using the following inclusion and exclusion criteria. Publications were excluded if they were systematic or scoping reviews, meta-analyses, briefings, debates and commentaries, study protocols, guidelines, meeting reports, conference abstracts, or poster presentations. Interventions that were related to pediatric HIV prevention, implemented outside of SSA, not HIV-related, or not primarily focused on HIV prevention were similarly excluded. Also excluded were interventions designed for people living with HIV/AIDS (PLWHA), pharmaceutical interventions, and interventions utilizing HIV prevention strategies other than testing, i.e., treatment as prevention (TasP), universal test and treat (UTT), prevention of mother-to-child transmission (PMTCT), prevention programs for serodiscordant couples, and voluntary medical male circumcision (VMMC). Interventions utilizing mathematical or simulation modeling for analysis were excluded as they do not fit the purpose of this study. Studies whose methodology did not meet the “detailed cost analysis” criterion for micro-costing or the “ingredients approach” were also excluded (i.e., studies that did not identify the cost of the individual components of the intervention’s resources). We included HIV testing studies that (1) were community-based interventions in SSA, (2) had an intervention and a control/comparison arm, and (3) reported disaggregated cost data, i.e., broke the total cost down into individual items (e.g., per diems, overhead, or transport).

Data extraction

Two reviewers (FU and UN) independently extracted data from each selected study. A third reviewer (CO) conducted an independent crosscheck to identify and resolve any disagreements. We extracted data on intervention description, geographical setting, HIV prevalence, population, sample size, time horizon, perspective, sensitivity analysis, cost measurement used, discount rate, costing instrument or toolkit used (where applicable), and data collection type. We categorized studies by testing strategy to compare intervention-specific results. The primary cost measurements of interest were total implementation cost and cost per unit of interest (e.g., cost per client tested, cost per HIV diagnosis). Study outcomes not related to cost analyses were not reported in this review. Given that the interventions were too different to allow for pooling [80,81,82] and that our aim was not to compare costs across the ten studies included in this review, we did not inflate the costs to a common year.

Risk of bias

To systematically compare the interventions, we evaluated the rigor of each intervention using the risk of bias tool developed by the Evidence Project for behavioral interventions for HIV in low- and middle-income countries [83]. The tool consists of eight items: cohort, control or comparison group, pre-post intervention data, random assignment of participants to the intervention, random assignment of participants to assessment, follow-up rate of 80% or more, comparison group equivalent on socio-demographics, and comparison group equivalent at baseline on outcome measures [84]. The risk of bias was independently rated by FU and UN using the guidelines outlined by Kennedy et al. [84].

Quality appraisal

One of the objectives of this review was to evaluate how the implementation costs of HIV testing interventions in SSA were analyzed and reported. We used two study quality appraisal frameworks: the Quality of Health Economic Studies (QHES) standardized framework [85], which assessed the quality of the cost analysis itself, and the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) [86], which assessed the reporting quality of the economic evaluation. The QHES and CHEERS frameworks are included in Appendices A and B, respectively (in Additional files A1 and A2) [87, 88].

Results

We identified 1533 citations: 1519 from the database search and 14 additional resources from previous studies on the cost of HIV interventions [30, 31]. Of the 1533 articles, 27 were identified for full-text review. Seventeen of the 27 papers were excluded for not meeting the inclusion criteria. Seven provided total cost or cost per intervention outcome but did not have sufficient disaggregated costing data available [19, 89,90,91,92,93,94]. Other reasons for exclusion included unavailable full text [105, 106], no control or comparison group [21, 107, 108], a Universal Test and Treat (UTT) intervention [109], and studies not primarily focused on HIV testing [110,111,112]. Although Chang et al. presented disaggregated data, the collection of cost data started 6 months after the start of the intervention, when the intervention was believed to have reached a stable operational state, consistent with the study’s goal of characterizing stable program functioning [113]. As such, the cost information provided by that study would not have fully reflected the implementation costs of the intervention. Thus, these 17 studies were excluded, leaving ten publications that met the full inclusion criteria [95,96,97,98,99,100,101,102,103,104]. Figure 1 shows the PRISMA flowchart, and Table 2 shows the PRISMA checklist.

Fig. 1 PRISMA flowchart

Table 2 PRISMA checklist

Risk of bias

A study had to meet at least one of three criteria (cohort; control or comparison group; pre-post intervention data) to be included in the review. We calculated the inter-rater reliability for each tool item. All items were treated as dichotomous: we collapsed “not applicable” and “not reported” responses with “no” to reflect whether the study did or did not receive credit for that item. We summed the number of items met to create a final summary score for each study and assessed inter-rater reliability between the raters using the weighted kappa. Agreement was substantial (κw = 0.73). No study was excluded from the review due to concerns about biases. A summary of the risk of bias rating is presented in Table 3.

Table 3 Risk of bias assessment
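
For illustration, the following minimal sketch shows how a weighted kappa for the per-study summary scores could be computed; the rater scores in the example are hypothetical and are not the actual ratings from this review.

```python
# Minimal sketch of the inter-rater reliability step described above.
# The per-study summary scores (number of tool items met, 0-8) are
# hypothetical, not the actual ratings from this review.
from sklearn.metrics import cohen_kappa_score

rater_1 = [5, 6, 4, 7, 5, 3, 6, 5, 4, 6]  # hypothetical scores from rater 1
rater_2 = [5, 6, 5, 7, 4, 3, 6, 5, 4, 7]  # hypothetical scores from rater 2

# A linear-weighted kappa penalizes disagreements by their distance,
# which suits ordinal summary scores such as these.
kappa_w = cohen_kappa_score(rater_1, rater_2, weights="linear")
print(f"Weighted kappa: {kappa_w:.2f}")
```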

Characteristics of studies

The ten studies included in the review provided an economic evaluation of HIV testing interventions in SSA, either as a component of a larger study design [95, 96, 103] or as a stand-alone cost analysis [97,98,99,100,101,102, 104]. Four studies collected cost data retrospectively [96, 100, 102, 104]. Data from five studies were collected prospectively [95, 97,98,99, 101]. One paper did not disclose the study’s data collection method [103]. The study participants in four studies were 18 years and older [95, 97, 99, 101]. Participants had to be 14 years or older in Tabana et al. and 13 years or older in Cham et al. [96, 104]. Overall, the interventions spanned the period 2008–2017.

Study settings

All ten interventions were implemented in East and Southern Africa. Two studies were conducted in Kenya [98, 102] and two in Swaziland [102, 103]. The study by Parker et al. and part of the Obure et al. study were both conducted in Swaziland, a landlocked lower-middle-income country in Southern Africa with a population of 1.2 million, but in different locations [102, 103]. The Parker et al. study was carried out in the relatively rural Shiselweni region, with an estimated 41,000 people living with HIV and 15,000 people unaware of their HIV status [114, 115]. There was no specific mention of where Obure et al. was conducted, only that it took place in 41 health facilities in Kenya and Swaziland chosen to represent urban and rural regions [116]. Aside from the information that the George et al. study was conducted in Kenya, no additional location-based information was provided in the article [98]. The article mentioned that The North Star Alliance, the organization George et al. partnered with, provided health services to hard-to-reach populations across Africa, and that in 2017 the organization operated 53 clinics located at major transit hubs in 13 countries in Southern and East Africa, including eight in Kenya. Services provided by The North Star Alliance included HIV self-testing (HIVST); screening and treatment of infectious diseases (e.g., STI, HIV, TB, malaria); diagnosis and treatment of mobility-related and other non-communicable diseases; health education; and laboratory services [98].

Two studies were carried out in Malawi: Choko et al. and Maheswaran et al. [97, 99]. In Choko et al. (2019), the specific location of the study was omitted. However, the article specified that a total of 3137 pregnant women (in 71 clusters with approximately 20–30 women per cluster) were initially screened for the study: 36 clusters in the first stage of the trial and 35 clusters in the second stage. Of the 3137 women screened, 2349 were included in the final study [97]. The Maheswaran et al. study was conducted in three high-density urban suburbs of Blantyre with an adult population of approximately 34,000 residents, of whom 1200 adults were enrolled in the study [99].

Two studies were set in South Africa: Meehan et al. and Tabana et al. [100, 104]. The Meehan et al. study took place in the Cape Metro district, Western Cape Province [100]. The study was carried out in partnership with the Desmond Tutu TB Centre (DTTC) at Stellenbosch University and five non-governmental organizations (NGOs) in five peri-urban communities in the district characterized by poverty, overcrowding, high unemployment rates, and high HIV prevalence [117]. Tabana et al. (2012) was a cross-sectional study conducted in a sub-district of KwaZulu-Natal province with a population of approximately 243,000 people, the highest HIV prevalence rate in South Africa (17%), and 70% of households living below the poverty line [104]. Only 16% of the adult population in the province had reportedly ever been tested for HIV [118].

Mulogo et al. and Bogart et al. are two studies carried out in Uganda [95, 101]. The study by Mulogo et al. was conducted in two sites, the Mbarara and Isingiro districts, with estimated populations of 418,300 and 385,500, respectively [119]. Facility-based VCT was offered at the Mbarara study site (Kabingo sub-county), while home-based VCT was offered at the Isingiro study site (Rugando sub-county) [101]. The 2017 Bogart et al. study was conducted in Wakiso District: event-based HIV testing on Zzinga Island and home-based HIV testing on Kavenyanja Island. Zzinga Island was estimated to have about 700 households, while Kavenyanja Island had about 1100 households [95].

The Cham et al. study was the only study included in this review that was conducted in Tanzania [96]. It took place in Bukoba Municipal Council (BMC), the capital of Kagera Region, located on the western shore of Lake Victoria, with an economy supported by fishing and agriculture. As such, BMC residents are primarily fishermen and the associated populations that support the fishing industry, including sex workers [96]. Fifty-two percent of men and 68% of women in BMC had reportedly received an HIV test in the past 2 years [120, 121].

Study design

Table 4 summarizes the methodological design of the studies evaluated and provides a descriptive overview of the interventions reported in the ten reviewed studies. Four categories of intervention types were identified: HIV self-testing (HIVST) [97,98,99], home-based testing and counseling [95, 96, 100, 101, 104], mobile-based testing and counseling [103], and provider-initiated testing and counseling (PITC) [102]. These interventions were commonly compared to facility-based testing [97,98,99, 101, 104], event-based testing [95], home-based testing [103], PITC [96], and voluntary testing [102]. Four studies were randomized controlled trials, three of which evaluated the cost of implementing HIVST interventions [97,98,99] and one of which evaluated home-based testing [104]. Mulogo et al. was a longitudinal study with a pre-post cross-sectional investigative phase [101]. The remaining five studies did not state the study design, but the description of the data collection process suggests a cross-sectional design [95, 96, 100, 102, 103], whereby Bogart et al., Meehan et al., Obure et al., and Parker et al. were comparison group studies, while Cham et al. was a cohort study. All ten studies were appraised for their QHES score.

Table 4 Descriptive overview of interventions

Types of interventions

HIV was the only diagnostic test reported in eight studies [95,96,97,98,99,100,101, 103]. In addition to HIV, participants in Tabana et al. were also tested for syphilis, gonorrhea, chlamydia, trichomonas, and candidiasis [104]. Furthermore, Tabana and colleagues did not disaggregate the costs specific to HIV testing from the overall cost of the intervention; although this is a study limitation, it was acknowledged in the paper. In Obure et al., participants in the PITC arm of the study received routine healthcare (e.g., general primary care, maternal and child healthcare, care for sexually transmitted infections, and inpatient services) [102]. In Bogart et al., Meehan et al., and Tabana et al., condoms were given to participants [95, 100, 104]. Participants in Bogart et al. also received de-worming tablets, bed nets, and water guard tablets [95]. Seven studies stated the cadre of healthcare workers involved in the intervention [95, 96, 99,100,101,102, 104]. Nurses were used in five studies [96, 100,101,102, 104], lay counselors in seven studies [95, 96, 99,100,101,102, 104], and lab assistants/technologists in two studies [101, 102]. In three studies, lay counselors served as both pre- and post-test counselors and also tested the participants [101, 102, 104].

Types of costing measures

Costs were predominantly evaluated from a healthcare perspective (n = 8) [95,96,97, 100,101,102,103,104]. All but one study used an empirical analytic approach [95,96,97,98,99,100, 102,103,104], with the exception using a model-based approach [101]. While Mulogo et al. mentioned the use of a decision model in their economic evaluation, the particular model used was not stated [101]. None of the studies mentioned the use of any economic evaluation guidelines to inform their costing approach. Tabana et al. was the only study that specified the costing instrument used [104].

Confirmatory testing in a healthcare facility was required in four studies [96, 97, 99, 103]. In Meehan et al., HIV-positive clients were given referral letters to a public health facility for care and treatment [100]. However, none of these five studies reported who bore the cost of confirmatory testing or HIV treatment for participants who tested positive for HIV [96, 97, 99, 100, 102]. Though Maheswaran et al.’s study took a societal perspective, the authors did not state whether any amount of money was paid out of pocket by the patient or was subsidized or paid for by the government or a donor as part of the intervention. The cost of test kits was included in the intervention costs in seven studies [95,96,97,98,99,100,101], with Bogart et al., Choko et al., George et al., and Maheswaran et al. providing the individual cost of the kits [95, 97,98,99]. Maheswaran et al. reported the unit cost of purchasing and shipping the HIVST kits, as well as the cost of the finger-prick rapid diagnostic test (RDT) kits used in the health facilities [99]. Obure et al. and Parker et al. did not state whether kits were free, subsidized, or purchased for the intervention [102, 103]. Tabana et al. costed testing equipment without specifying the particular equipment [104].

Although all ten studies reported using a micro-costing or ingredients approach in their cost evaluation, the individual costs of different implementation components were aggregated in many studies. In Meehan et al., the cost of all equipment and assets was aggregated as capital goods, while the cost of utilities, consumables, and services directly related to the testing service was aggregated as recurring goods [100]. While Maheswaran et al. provided the most detailed cost information compared to the other studies, capital/overhead was costed without the study stating what constituted capital/overhead in the program [99]. Notwithstanding, we identified 12 common resource types: personnel, start-up costs, materials and equipment, vehicles, fueling, stationery/supplies, office rental/building, utilities, furniture, maintenance, training, and transportation. Personnel, materials/equipment, stationery/supplies, and training were the cost items typically presented as stand-alone cost components. Personnel costs were reported in all ten studies. Materials and equipment were reported in seven studies [95,96,97,98,99, 103, 104], stationery/supplies in five studies [96,97,98, 101, 104], and training in four studies [96, 98, 99, 101]. Fueling or vehicle costs [96, 99, 100, 102], furniture or maintenance [96,97,98,99,100, 103, 104], and office rent/building or utilities [97,98,99,100, 103, 104] were commonly aggregated. Tabana et al. was the only study to report start-up costs. Tabana et al., Maheswaran et al., and Mulogo et al. had the most detailed cost information [99, 101, 104]. Conversely, Obure et al. and Meehan et al. had the least [100, 102].
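
As an illustration of the kind of disaggregation described above, the following minimal sketch (with entirely hypothetical amounts) shows ingredients-level cost items rolled up into a total and expressed as shares, the breakdown that reveals which resource types drive implementation costs.

```python
# Hypothetical ingredients-level ("micro-costing") breakdown; the line items
# mirror the common resource types identified in this review, but the amounts
# are illustrative only and do not come from any of the included studies.
itemized_costs_usd = {
    "personnel": 12_000,
    "materials_and_equipment": 3_500,
    "test_kits": 2_400,
    "start_up": 2_000,
    "training": 1_800,
    "vehicle_and_fuel": 1_200,
    "office_rent_and_utilities": 900,
    "stationery_supplies": 600,
}

total_cost = sum(itemized_costs_usd.values())
print(f"Total implementation cost: USD {total_cost:,}")

# Share of each resource type in the total cost.
for item, cost in sorted(itemized_costs_usd.items(), key=lambda kv: -kv[1]):
    print(f"{item:>26}: USD {cost:>6,} ({cost / total_cost:.0%})")
```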

Cost analysis

Six studies reported only the financial cost of implementing the interventions, focusing on the direct cost of the intervention [95,96,97, 101, 103, 104]. Four studies performed economic costing [98,99,100, 102]. In George et al., costs not specifically borne by the counseling and testing services were said to have been calculated but were not reported in the paper [98]. While Meehan et al. stated that economic costing was performed, the costs of free products were not accounted for in the paper [100]. Maheswaran et al. was the only study that reported costs for patient time off, patient direct non-medical costs, and caregiver time [99]. Two studies stated they were conducting a cost-effectiveness analysis (CEA) [101, 104]. We identified Maheswaran et al. as a cost-utility analysis (CUA) because the study measured the health-related quality of life (HRQoL) of the participants [99]. The remaining seven studies were identified as cost-effectiveness studies since they reported cost per unit outcome of interest [95,96,97,98, 100, 102, 103]. Eight studies reported the total cost per intervention [95,96,97,98,99,100,101, 104]. All ten studies reported the cost per unit of interest: cost per test [95, 96, 98,99,100,101,102, 104], cost per HIV diagnosis [96, 99,100,101,102,103], and cost per client linked to care [97, 99, 100, 103]. Table 5 contains detailed information about the cost outcomes of the interventions.

Table 5 Intervention cost per outcome
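
For clarity, the unit-cost measures listed above are simple ratios of total cost to outcome counts; a minimal sketch with hypothetical inputs follows.

```python
# Minimal sketch of the cost-per-unit-of-interest measures reported across
# the studies. All inputs are hypothetical.
total_cost_usd = 50_000         # hypothetical total implementation cost
clients_tested = 2_500          # hypothetical outcome counts
hiv_diagnoses = 125
clients_linked_to_care = 90

print(f"Cost per client tested:         USD {total_cost_usd / clients_tested:.2f}")
print(f"Cost per HIV diagnosis:         USD {total_cost_usd / hiv_diagnoses:.2f}")
print(f"Cost per client linked to care: USD {total_cost_usd / clients_linked_to_care:.2f}")
```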

Comparing the total implementation costs of the interventions to those of the controls, the controls cost less in most studies. While this pattern was noted in all four categories of testing interventions (HIVST, home-based testing [HBHTC], mobile testing [MHTC], and provider-initiated testing [PITC]), the margin was wider for HIVST interventions. For instance, in Choko et al., the total intervention cost (excluding ART/VMMC) for the five intervention strategies ranged from USD 1176 to USD 7470 [97]. The corresponding control (standard of care) cost was USD 557, less than half of what was spent implementing the least costly strategy. The margin was narrowest in George et al.: USD 544, compared with USD 285 and USD 336 for implementing the standard of care and enhanced standard of care, respectively [98]. Only Meehan et al. and Cham et al. reported implementing the intervention at a lower cost than the control [96, 100]. However, in Cham et al., the total cost of the intervention (USD 176,866) was lower only than that of the PITC arm (USD 404,365), not the venue-based testing service (VBHTC) arm (USD 139,377) [96]. Nevertheless, cost per test and cost per HIV diagnosis were lowest for PITC among the three testing modalities, and VBHTC cost the most of the three [96].

When assessing cost per outcome, the cost of implementing the interventions was lower than that of the control for some outcomes but higher for others. For instance, in George et al., the cost per client tested was lower for truck drivers: USD 20.92 for HIVST compared with USD 28.48 for the standard of care and USD 33.57 for the enhanced standard of care [98]. However, for female sex workers, the intervention cost USD 11.43 per client tested compared with USD 9.56 for the standard of care. In Bogart et al., there was minimal difference in the cost per test between intervention and control: USD 45.09 for home-based testing and USD 46.99 for event-based testing, its control [95]. In Obure et al., the intervention arm of the study (PITC) cost less than the control (voluntary counseling and testing) for both total cost and cost per the two outcome measures (per client tested and per HIV diagnosis) [102].

The heterogeneity in reporting how the different components of the interventions were costed made implementation costs incomparable across studies. For instance, the financial input was calculated as cost per capita in Bogart et al. [95], as a percentage of total cost in Parker et al. [103], and as cost per client tested in George et al. [98]. Eight studies adjusted for inflation and/or the dollar exchange rate relative to the currency used in implementing the interventions. Three studies adjusted for inflation [96, 99, 104]. Maheswaran et al. used World Bank data to adjust all costs to account for inflation and differences in purchasing power between countries [99]. Cham et al. inflated costs to 2017 price levels using the annual Tanzania consumer price index (CPI) ratios for 2014, 2015, and 2016 [96]. In Tabana et al., costs incurred prior to 2010 were adjusted using the CPI ratio, with 2010 as the base year [104]. Costs in six studies were collected in local currencies and converted to US dollars [95, 98, 100,101,102, 104]; five of them provided the exchange rate used in converting to dollars [98, 100,101,102, 104]. Cham et al. and Tabana et al. annuitized the cost of some items [96, 104]: Cham et al. annuitized vehicle costs at an annual rate of 3% [96], while Tabana et al. annualized the economic costs of capital items, using either the items’ purchase value or replacement value, at an interest rate of 9% [104]. Only George et al. and Meehan et al. reported marginal costs alongside the absolute intervention costs, mainly the cost of additional test kits [98, 100]. In George et al., the cost of an HIVST kit dropped from USD 9.22 to USD 2.00 after an agreement with the Gates Foundation [98].
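
The adjustment steps reported by these studies (CPI-based inflation adjustment, currency conversion, and annuitization of capital items) can be summarized in the following minimal sketch; only the 3% and 9% rates come from the reviewed studies, while the CPI values, exchange rate, and item costs are hypothetical.

```python
# Minimal sketch of the cost-adjustment steps described above. Only the 3%
# and 9% rates appear in the reviewed studies; all other figures are
# hypothetical.

def inflate_with_cpi(cost, cpi_base_year, cpi_cost_year):
    """Adjust a cost to base-year price levels using a CPI ratio."""
    return cost * (cpi_base_year / cpi_cost_year)

def convert_to_usd(cost_local, units_per_usd):
    """Convert a local-currency cost to US dollars at a given exchange rate."""
    return cost_local / units_per_usd

def annualize_capital_cost(value, useful_life_years, interest_rate):
    """Equivalent annual cost of a capital item (standard annuitization)."""
    annuity_factor = (1 - (1 + interest_rate) ** -useful_life_years) / interest_rate
    return value / annuity_factor

# Hypothetical example: a project vehicle purchased in local currency.
vehicle_usd = convert_to_usd(60_000_000, units_per_usd=2300)
vehicle_usd_base_year = inflate_with_cpi(vehicle_usd, cpi_base_year=112.0,
                                         cpi_cost_year=104.5)
annual_vehicle_cost = annualize_capital_cost(vehicle_usd_base_year,
                                             useful_life_years=5,
                                             interest_rate=0.03)
print(f"Annualized vehicle cost: USD {annual_vehicle_cost:,.0f}")
```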

Data quality appraisal

Quality of Health Economic Studies

Using the Quality of Health Economic Studies (QHES) checklist, five of the ten studies (50%) were of high quality [96, 98, 99, 101, 104]. With a QHES score of 86%, Maheswaran et al. was the study with the highest quality [99]. At 46%, Obure et al. scored the lowest and was the only study of poor quality [102]. The QHES dimensions with the highest scores were those asking (a) whether the study stated and justified the main assumptions and limitations of the study (90%); (b) whether the presentation of study methods and analysis was clear and transparent (90%); (c) whether the data extraction methodology was stated (90%); and (d) whether the study conclusions/recommendations were justified and based on the study results (100%). Although economic evaluations are susceptible to uncertainty, six studies failed to address how the researchers handled uncertainties [95,96,97, 100, 102, 103]. That is, the studies did not report performing statistical analyses to address random events or sensitivity analyses to cover a range of assumptions [95,96,97,98, 102, 103]. Three studies performed univariate sensitivity analysis [98, 101, 104]. Maheswaran et al. performed both sensitivity and statistical analyses for uncertainties [99]. Of the seven studies with a time horizon beyond 1 year, four discounted the effects and costs generated after the first year [98, 101, 102, 104]. Only George et al. reported CEA estimates from subgroup analyses: female sex workers and truck drivers [98]. Obure et al. was the only study that failed to disclose information on the data extraction method used [102]. Furthermore, the authors failed to state the perspective of their analysis and did not discuss the direction or magnitude of the potential biases of the study.
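
To make these uncertainty-handling practices concrete, the following minimal sketch illustrates discounting of costs generated after the first year and a simple one-way (univariate) sensitivity analysis on the discount rate; all figures, including the 3% base-case rate, are hypothetical.

```python
# Minimal sketch of discounting multi-year costs and of a one-way sensitivity
# analysis, the uncertainty-handling practices discussed above. All figures,
# including the 3% base-case discount rate, are hypothetical.

def discounted_total(annual_costs, discount_rate):
    """Present value of a stream of annual costs (year 0 is undiscounted)."""
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(annual_costs))

annual_costs_usd = [30_000, 25_000, 25_000]  # hypothetical 3-year cost stream
base_case = discounted_total(annual_costs_usd, discount_rate=0.03)
print(f"Base-case discounted total: USD {base_case:,.0f}")

# One-way sensitivity analysis: vary the discount rate alone and observe how
# the discounted total (and hence any cost-per-outcome ratio) shifts.
for rate in (0.00, 0.03, 0.05, 0.10):
    print(f"Discount rate {rate:.0%}: USD {discounted_total(annual_costs_usd, rate):,.0f}")
```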

Consolidated Health Economic Evaluation Reporting Standards

Overall, the reviewed studies performed worse on the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) assessment than on the QHES. No study reached the 75% threshold to be classified as high quality. Four studies scored lower than 50% and were therefore considered to be of poor quality [95, 97, 102, 103]; these studies also had the lowest QHES scores. Six studies were categorized as average quality, fulfilling between 50 and 63% of the criteria [96, 98,99,100,101, 104]. At 63% of criteria met, Maheswaran et al. had the highest CHEERS score [99]. Model choice and model assumptions were applicable only to Mulogo et al. [101]. However, the authors did not describe the assumptions underpinning the decision-analytic model and did not provide a figure showing the model structure, as strongly recommended by CHEERS. Nine studies stated the time horizon for the costs being evaluated, but none justified why the time horizon was appropriate [95, 97,98,99,100,101,102,103,104]. Cham et al. neither stated the study’s time horizon nor justified its appropriateness for the evaluation [96].

Six studies with a time horizon of more than a year failed to report the choice of discount rate used and why it was appropriate [95,96,97,98, 100, 101, 103]. Nine studies did not characterize participants’ heterogeneity in their results [95, 96, 98, 100,101,102,103,104, 122]. Eight studies did not declare conflicts of interest among study contributors [95,96,97,98, 101,102,103,104]. The items on the checklist most commonly reported were a structured abstract [95,96,97,98,99,100,101, 104], an explicit statement about the broader context of the study and its policy relevance in the introduction [95,96,97, 99,100,101, 103, 104], and a summary of population characteristics [95,96,97,98,99,100, 102, 103]. However, none of the studies that provided a structured abstract mentioned performing uncertainty analyses, as required by CHEERS. The overall quality of the included studies according to the QHES and CHEERS checklists is summarized in Table 6.

Table 6 Overall data quality score

Discussion

We identified four categories of HIV testing interventions in this review: HIVST, home-based testing, mobile-based testing, and PITC. Three categories of testing services commonly served as controls: facility-based testing (FBHTC), event-based testing, and PITC. In the two studies conducted in Malawi, the HIVST intervention cost twice as much as FBHTC, irrespective of the clinic site [97, 99]. Given that HIVST is a relatively new testing modality compared to the controls, the large difference in implementation costs is partly attributable to the latter requiring little or no additional cost-intensive resources, such as office rental, vehicles, or pre-implementation costs. Regardless, HIVST has the potential to increase the uptake of HIV testing among undiagnosed people living with HIV and individuals at high HIV risk [123,124,125,126]. HIVST also provides complementary coverage to standard HIV testing services [127]. With the release of the World Health Organization (WHO) guidelines encouraging HIVST [125, 128], our review findings make an important contribution to scaling up HIVST interventions in SSA.

Delivering PITC mostly cost the least, whether as the intervention arm or the control arm. This could be because PITC has been recommended by the WHO since 2007 [129]. Per that recommendation, all patients attending health facilities in countries with generalized HIV epidemics should be routinely offered HIV testing [129]. Correspondingly, some costs associated with implementing PITC had already been built into the healthcare system. Nevertheless, while PITC may be low-cost, it is an approach with limited impact in reaching the greatest number of people [130,131,132,133]. Specifically, PITC does not reach individuals who do not typically utilize facility-based health services, or other vulnerable or marginalized population groups with both high HIV incidence rates and low uptake of HIV testing due to fear of stigmatization (e.g., adolescents and men who have sex with men) [130,131,132,133]. Hence the push for HIVST to address these barriers [123,124,125,126]. An example of how HIVST addresses barriers associated with accessing clinic-based HIV testing services is men’s health: men’s reluctance to visit healthcare facilities [118] contributes to a high proportion of HIV-positive men remaining unaware of their HIV status [134]. HIVST is thought to improve men’s HIV testing rates by enabling them to conduct and interpret their own HIV tests at a convenient time and in a private space [135,136,137].

While all ten studies in the review presented disaggregated cost information, the level of detail varied across papers. Most studies provided few details about the individual costs of the resources involved in accomplishing the different components of their program. Furthermore, many aspects of program implementation were inadequately covered in the studies, such as start-up costs related to preparatory work and education, or costs related to ongoing monitoring. Other than Tabana et al. [104], none of the studies provided a clear picture of how much it cost to initiate the intervention or at what stage the cost analysis began. Nine of the ten studies did not provide an explicit assessment of the “hidden” costs of implementation, such as an estimate of the cost of human or material resources that may have been free to the intervention or whose costs were shared. Furthermore, the marginal costs of the interventions were reported by only two studies. While absolute costs are important for implementation planning because they present the resource demands of an intervention design [138], marginal costs should not be neglected. Marginal costs capture how costs change as service levels increase, making the reported cost information more amenable to analysis and comparison [139, 140].

The studies did not provide details on whether the funds used in their programs came from one source or from multiple sources. Regarding personnel costs, it was also not clear whether there had been a need to recruit new staff as the intervention advanced and, if additional staff had been recruited, at what stage this became necessary and what extra cost it added to the intervention. These omissions might be due to reporting bias, or such details may only be available in grey literature, thus limiting access to valuable information that decision-makers need. As a result, the findings do not realistically reflect the resources that may be required to scale up, sustain, or reproduce the interventions in other settings. Consequently, decision-makers may underestimate the cost of implementing the interventions and overestimate their benefits [18]. A detailed overview of the material and personnel resources necessary for implementation facilitates budgeting and enables implementers intending to adapt an intervention to anticipate costs they might not otherwise consider [18]. This is a gap that needs to be filled by future researchers and program implementers.

Another critical gap to be addressed is the quality of the economic evaluations, particularly their reporting. Presumably, this is due to the low capacity for health technology assessment (which has economic evaluation at its core) in SSA [141]. Therefore, deliberate efforts will need to be made in other interventions/studies to build this capacity [141]. Most of the reviewed studies were generally of good quality according to the QHES checklist, with half of them reaching the threshold for high quality (75%). Additionally, data were collected prospectively in half of the studies included in this review, which minimized the risk of bias to which retrospective analysis of programs’ financial records is subject [18]. However, there remains room for improvement. One way may be to calculate implementation costs in a manner consistent with existing guidelines, such as the Guideline for Economic Evaluations in Healthcare [142], the Costing Guidelines for HIV/AIDS Intervention Strategies [143], or the Reference Case for Estimating Cost of Global Health Services and Interventions [144]. These guidelines allow for more informative reports that aid decision-makers’ choices about the options available to them [145, 146]. Greater attention also needs to be paid to the reporting of cost evaluations, as evidenced by the low quality found in the CHEERS appraisal. This provides a premise for building capacity for economic evaluation in sustainable and institutionalized ways in SSA. Overall, implementation researchers should be mindful of the importance of reporting the cost of implementing their interventions [147]. Moreover, they need to go beyond reporting cost-effectiveness and cost-benefit analyses to demonstrate the long-term economic effects of their interventions [148]. The absence of implementation cost data constrains deliberations about the resources to allocate to community-based health programs [149]. It also constrains investment in program components such as personnel, equipment, and modalities that are critical to strengthening and developing community health systems [149].

Although this review offers a synthesis of cost analyses of HIV testing interventions in SSA, there are potential limitations worth mentioning. While the literature search was broad, the study had strict inclusion criteria, thus limiting generalizability [150, 151]. Nonetheless, the strictness of the criteria meant that the review was more concise and cohesive and faced fewer challenges potentially introduced by heterogeneity [152].

Conclusions

Our systematic review shows that more attention needs to be paid to increasing the quality of conducting and reporting economic evaluations of HIV prevention interventions in SSA. In particular, considerable effort needs to go into reporting them appropriately. To better inform policy, future evaluations of HIV prevention interventions will need to follow evidence-based guidelines and quality assurance frameworks so that the costs reported are extensive enough to address the many aspects of implementation that were not reported in previous evaluations. The interventions included in this review were disproportionately from East and Southern Africa. As noted, implementation costs are contextual; the costs of implementing HIV testing in West and Central Africa may or may not differ substantially from those in East and Southern Africa. Geographic diversification of implementation cost analysis studies to include West and Central Africa is therefore needed in future research. In an evolving field of implementation research, this review contributes to current resources on the quantitative evaluation of cost studies. It particularly advocates for an increased use of economic evaluation guidance to help implementation researchers report cost information better.