Background

One of the greatest global public health achievements has been the rapid scaling up of antiretroviral therapy (ART) in resource-limited settings over the past decade. This has largely been achieved through the “public health approach” promoted by the World Health Organization (WHO) [1–3]. This approach has involved training a range of different health-care personnel to support delivery and monitoring of ART treatment and care services, with the aim of shifting from a centralized, doctor-led model of HIV treatment and care to decentralized models, thus enabling a larger number of people to be initiated and retained in care [3, 4].

The WHO 2003 guidelines for the use of ART did not initially recommend viral load (VL) testing as a necessary component of treatment programs. However, the WHO 2013 guidelines now recommend VL testing as the preferred monitoring approach to diagnose and confirm ART treatment failure in both adults and children. Thus, many countries, such as Uganda [5], have revised their national guidelines for the provision of ART to recommend VL monitoring as the preferred standard. However, VL testing remains relatively costly and more technologically challenging than clinical or CD4 cell count monitoring in resource-limited settings. Moreover, the WHO scale-up strategy is based on decentralized, integrated delivery of HIV care, yet in rural areas, where most patients live, local health facilities generally do not have access to sophisticated laboratories or to referral networks for transporting samples to, and receiving results from, centralized laboratories [1, 6]. While providing access to VL testing has advantages, such as earlier detection of treatment failure and thus a reduced likelihood of developing ART drug resistance, this approach is still debated in resource-limited settings [7–11].

The Home-Based AIDS Care (HBAC) project was a three-arm clinical trial which found that clinical monitoring alone resulted in an increased risk of new opportunistic infections (OIs) or death, in comparison to the two other arms in which routine laboratory monitoring was available [8]. However, the study found no difference in clinical outcomes between participants who were randomized to VL and CD4 cell count monitoring and those randomized to CD4 cell count monitoring alone after 3 years of follow-up. The only other randomized trial that has directly compared clinical outcomes between patients monitored with VL and CD4 cell counts and those monitored with CD4 cell counts alone, conducted in Thailand, found similar results [12]. In 2007, following the end of the first phase of the HBAC trial, participants who were originally randomized to the clinical monitoring arm were re-randomized to either the VL or the CD4 cell count monitoring arm, and all participants were observed for an additional 2 years of follow-up. We now report the long-term clinical outcomes from this study with this additional follow-up time. The objective of this continuation of the HBAC trial was to determine whether any additional differences emerged, with additional follow-up, between individuals receiving both CD4 cell count monitoring and VL testing and those receiving CD4 cell count testing alone.

Methods

Study design

Beginning in May 2003, we assessed HIV-positive adults (≥18 years) who had registered with The AIDS Support Organization (TASO) - Tororo branch for eligibility for study enrolment. Enrolment was offered to patients with a CD4 cell count <250 cells/μL or severe HIV disease (defined as WHO stage 3 or 4 or a history of recurrent herpes zoster). Additional enrolment criteria are described elsewhere [8]. We obtained written informed consent from all enrolled participants. Participants initiated ART with lamivudine combined with either nevirapine or efavirenz, plus zidovudine or stavudine. In April 2007, following analysis of the first phase of the study, which demonstrated that participants receiving clinical monitoring alone were at increased risk for death and/or new OIs [8], these participants were re-randomized to either clinical monitoring with quarterly CD4 cell counts and VL (CD4-VL) or clinical monitoring with quarterly CD4 cell counts only (CD4-only), and all participants were followed until March 31, 2009. Trained lay field workers continued to provide ART to participants at home, including collecting data to monitor potential toxicity, morbidity and mortality. However, the frequency of home visits was reduced in the second phase of the study, over a 4-month period, from once per week to once every 2 months. Pre-packaged drugs were replaced by a storage container, and pill counts were conducted at the study clinic by a pharmacist. Participants were weighed during home visits, and these weights and body mass index (BMI) scores were provided to clinicians. After enrolment, no routine clinic visits were scheduled, but participants were encouraged to come to the clinic or hospital if they were ill, and were transported to the clinic for assessment if they had specifically defined symptoms or severe illness during a home visit.

Monitoring and diagnostic procedures for the occurrence of illness did not differ between study arms. Physicians responsible for patients in the two study arms received laboratory results on a quarterly basis. Participants received daily cotrimoxazole prophylaxis regardless of CD4 cell count, except during a five-month cotrimoxazole discontinuation sub-study [13]. Participants who had ART treatment failure, as per the arm-appropriate definitions below, were switched to didanosine, tenofovir, and lopinavir/ritonavir. In the CD4-VL arm, treatment failure was defined as two consecutive viral load measurements ≥500 copies/mL occurring more than 6 months after the start of ART. In the CD4-only arm, a persistent decline in CD4 cell count on two consecutive measurements was considered to indicate treatment failure. The first response to a worsening trend in CD4 cell count or VL was counselling about adherence to treatment. Study physicians, nurses, counsellors, and other staff met weekly in a case conference to discuss all deaths, opportunistic illnesses, and abnormal laboratory results, and approved all regimen changes. A data safety monitoring board reviewed data every 3 months and was asked to reject the null hypothesis of monitoring-arm equivalence if the rate of severe morbidity and mortality in any arm exceeded that in another by three standard errors of the difference (the “Haybittle-Peto” rule) [14, 15]. The study received ethics approval from the University of British Columbia, the Uganda Virus Research Institute, the Institutional Review Board of the United States Centers for Disease Control and Prevention, and the Uganda National Council for Science and Technology. The trial was registered at ClinicalTrials.gov (registration number NCT00119093).
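To make the stopping criterion concrete, the following is a minimal sketch assuming Poisson standard errors for the arm-specific event rates; the interim counts shown are hypothetical illustrations and the function is not the data safety monitoring board's actual procedure.

```python
import math

def haybittle_peto_flag(events_a, py_a, events_b, py_b, threshold=3.0):
    """Flag when the severe morbidity/mortality rate in one arm exceeds the
    other by more than `threshold` standard errors of the difference.

    Rates are per person-year; standard errors assume Poisson event counts.
    """
    rate_a = events_a / py_a
    rate_b = events_b / py_b
    se_diff = math.sqrt(events_a / py_a**2 + events_b / py_b**2)
    return abs(rate_a - rate_b) > threshold * se_diff

# Hypothetical interim numbers: 20 events over 600 person-years in one arm
# versus 45 events over 590 person-years in the other.
print(haybittle_peto_flag(20, 600, 45, 590))  # True -> rule triggered
```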

Laboratory procedures

HIV VL was measured with the Cobas Amplicor HIV-1 Monitor version 1.5 ultrasensitive assay (Roche, Branchburg, NJ) for baseline measurements, which had a lower limit of detection of 400 copies/mL. Follow-up VL measurements were conducted with the Cobas TaqMan (manual extraction) assay, with a lower limit of detection of 50 copies/mL. CD4 cell counts were done with TriTEST reagents following an in-house dual-platform protocol and MultiSET and Attractors software with a FACScan or FACSCalibur flow cytometer (Becton-Dickinson, Franklin Lakes, NJ). Complete blood counts were provided with CD4 cell counts [6].

Data analysis

We followed the study participants randomized or re-randomized to the remaining two arms for an additional 2 years, up to March 31, 2009. We conducted bivariate analyses of clinical and demographic characteristics of study participants in the remaining two arms. Data were analyzed with SAS 9.0 (SAS Institute, Cary, NC). We used Kaplan-Meier survival curves to graphically compare time to first opportunistic illness (OI) or death after 90 days following ART initiation (or after re-randomization for those who were re-randomized to the CD4-VL or CD4-only arms). Adherence to therapy was calculated using the medication possession ratio [16]. Cox proportional hazards regression models were used to adjust for possible confounding by age, sex, baseline CD4 cell count, VL, and BMI. Poisson regression analysis with a log link function was used to compare the rates of new opportunistic infections and/or deaths occurring after 90 days following ART initiation (or after re-randomization for those who were re-randomized). Logistic regression models were used to compare the proportions switched to second-line regimens and the proportions with elevated (≥500 copies/mL) viral loads after 6 months on ART (or after re-randomization for those who were re-randomized). Person-time for people lost to follow-up or transferred to a different provider was censored at the time of the last home visit at which they received ART.
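Because this analysis plan combines several standard survival and regression techniques, the sketch below re-expresses it in Python (lifelines and statsmodels) for illustration only; the study itself used SAS 9.0, and the input file and column names (e.g., time_years, arm_cd4_only) are hypothetical assumptions rather than actual study variables.

```python
# Illustrative re-expression of the analysis plan; not the study's SAS code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("hbac_phase2.csv")  # hypothetical analysis file
# Assumed columns: time_years, event (1 = first OI or death), arm_cd4_only
# (1 = CD4-only, 0 = CD4-VL), age, sex_female, cd4_baseline, log_vl_baseline,
# bmi, n_events, switched_second_line, days_supplied, days_in_interval.

# Adherence: medication possession ratio over each visit interval
df["mpr"] = df["days_supplied"] / df["days_in_interval"]

# Kaplan-Meier curves and log-rank test for time to first OI or death
km = KaplanMeierFitter()
for arm, grp in df.groupby("arm_cd4_only"):
    km.fit(grp["time_years"], grp["event"], label=f"arm_cd4_only={arm}")
    km.plot_survival_function()
cd4_vl = df[df["arm_cd4_only"] == 0]
cd4_only = df[df["arm_cd4_only"] == 1]
print(logrank_test(cd4_vl["time_years"], cd4_only["time_years"],
                   cd4_vl["event"], cd4_only["event"]).p_value)

# Cox proportional hazards model adjusted for baseline covariates
covariates = ["time_years", "event", "arm_cd4_only", "age", "sex_female",
              "cd4_baseline", "log_vl_baseline", "bmi"]
cph = CoxPHFitter()
cph.fit(df[covariates], duration_col="time_years", event_col="event")
cph.print_summary()

# Poisson regression (log link) for event counts, offset by person-time
pois = smf.glm("n_events ~ arm_cd4_only + age + sex_female + cd4_baseline"
               " + log_vl_baseline + bmi", data=df,
               family=sm.families.Poisson(),
               offset=np.log(df["time_years"])).fit()
print(pois.summary())

# Logistic regression for the proportion switched to a second-line regimen
logit = smf.logit("switched_second_line ~ arm_cd4_only", data=df).fit()
print(logit.summary())
```

With the arm indicator coded 1 for CD4-only and 0 for CD4-VL, the fitted hazard ratio and odds ratios correspond in direction to the comparisons reported in the Results.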

Results

A total of 1211 participants were randomized beginning in May 2004 and started on ART in the initial three study arms (413 in the VL arm, 411 in the CD4 cell count arm, and 387 in the clinical monitoring arm) [8]. Overall, 71.8 % of the participants were female, the median age was 38 years (IQR: 32–44), and the median baseline CD4 cell count was 134 cells/μL (IQR: 70–199). In April 2007, 331 surviving participants in the clinical monitoring arm were re-randomized to the VL (165) and CD4 cell count (166) arms (Fig. 1). Demographic and clinical parameters were similar across the two study arms (Table 1).

Fig. 1 Study profile

Table 1 Baseline characteristics of HBAC study participants, Tororo and Busia Districts, Uganda, 2003–2009, according to type of monitoring

As of April 30, 2009, the median follow-up time for all participants was 5.2 years from the original randomization date and 4.8 years after 90 days on ART (or re-randomization). During follow-up after 90 days on ART (or re-randomization), 37 deaths and 35 new OIs occurred in patients randomized or re-randomized to the CD4-VL arm, and 39 deaths and 42 new OIs occurred in patients in the CD4-only arm. The last median CD4 cell count was 560 cells/μL (IQR: 324–602) in the CD4-VL arm and 554 cells/μL (IQR: 331–595) in the CD4-only arm; we did not find any significant difference between the two arms (p = 0.986). The most common OIs diagnosed among participants were tuberculosis (49 % of OIs), followed by cryptococcosis (13 %) and Kaposi’s sarcoma (10 %).

In a Kaplan-Meier analysis, we found no difference in the time to first event of new OI or death between the two monitoring arms (Fig. 2): the event rate was 3.0 per 100 person-years in the CD4-VL arm compared with 3.2 per 100 person-years in the CD4-only arm (p = 0.605, log-rank test). Adherence was similar across the two study arms, with mean adherence over each visit interval of 99 % in each study arm (p = 0.123). In a Cox proportional hazards model with adjustment for baseline age, sex, CD4 cell count, viral load, and BMI, there was no statistically significant difference in the risk of first serious morbidity or death between the CD4-only arm and the CD4-VL arm (adjusted hazard ratio [AHR] 1.19, 95 % confidence interval [CI] 0.82–1.73 for the CD4 cell count arm in comparison to the CD4-VL arm). When analyzed separately, we did not find any statistically significant difference between the two arms in mortality (HR = 1.12, 95 % CI: 0.70–1.77) (Table 2) or in the number of severe morbidity events including death (RR = 1.23, 95 % CI: 0.88–1.71) after adjusting for baseline age, sex, CD4 cell count, viral load, and BMI (data not shown).

Fig. 2 Kaplan-Meier curves of time to first opportunistic illness or death. a–c Proportion of participants without opportunistic infection/illness or death

Table 2 Cox proportional hazards regression analysis for time to first morbidity (OI) or mortality event

During follow-up, 182 participants had at least one elevated VL measurement (≥500 copies/mL) after 6 months on ART (or after re-randomization for those who were re-randomized): 80 (14.6 %) in the CD4-VL arm and 102 (18.9 %) in the CD4-only arm (Table 3). This difference was not statistically significant (odds ratio = 1.31 for the CD4 cell count arm relative to the CD4-VL arm, 95 % CI: 0.95–1.83). A total of 54 participants were switched to a second-line regimen: 30 (5.3 %) in the CD4-VL arm and 24 (4.3 %) in the CD4-only arm (Table 4). Again, this difference was not statistically significant (OR = 0.76 for the CD4-only arm compared with the CD4-VL arm, 95 % CI: 0.44–1.33). Of the 24 individuals in the CD4-only arm who were switched to second-line therapy, 11 (46 %) were found to have had VLs >500 copies/mL after 6 months of ART. A smaller proportion of patients in the CD4-VL arm ever had two VL results ≥500 copies/mL compared with those in the CD4-only monitoring arm (4.6 % vs. 7.5 %); however, this difference was not statistically significant (p = 0.56). At the close of the study, 92 % of the participants in the CD4-only arm had undetectable viral loads.

Table 3 Proportion with at least one elevated viral load (≥500 copies/mL) after 6 months on ART (after re-randomization for those who were re-randomized)
Table 4 Proportion switched to second-line regimen (after re-randomization for those who were re-randomized)

Discussion

In this extension of the HBAC study as a two-arm trial, we found no statistically significant differences in clinical outcomes associated with the addition of quarterly VL monitoring to quarterly CD4 cell count monitoring after more than 5 years of follow-up. Furthermore, we did not find any differences in the proportion of participants with unsuppressed VL or the rate of switching to second-line therapy between these two strategies. Our analysis again suggests that the addition of VL monitoring to CD4 cell count monitoring may not result in improved clinical outcomes for HIV-positive patients receiving ART in resource-limited settings. This conclusion is the same as that of the original HBAC study and the only other direct comparison of VL and CD4 cell count monitoring, another RCT conducted in Thailand [12, 17]. The latter study reported that a CD4-based switching strategy was non-inferior in terms of clinical outcomes among HIV-positive adults 3 years after beginning ART, compared with a VL-based switching strategy [12]. The authors found that there was also no difference between the strategies in terms of virologic suppression and immune restoration. Importantly, however, even though patients in the CD4 arm spent longer with a high viral load than patients in the VL arm, the emergence of HIV mutants resistant to antiretroviral drugs was similar in the two arms [12]. Unfortunately, we do not have resistance data with which to make comparisons in this regard.

These findings differ somewhat from the results of an analysis of mortality of patients on ART in Southern Africa from the International epidemiologic Databases to Evaluate AIDS in Southern Africa (IeDEA-SA) [8]. Programs that did not have access to VL testing, namely those in Zambia and Malawi, reported higher rates of death and loss to follow-up than programs in South Africa, where VL measurement was accessible and readily available. However, it is unlikely that the only difference between these programs was the provision of VL testing; differences in the health-care systems and living environments of these patients likely also influenced the differences in outcomes observed. Studies that compared the effect of routine VL testing with a standard of care in which VL was used sparingly to adjudicate discrepancies between CD4 and clinical assessments found that VL monitoring did not reduce death over the first 36 months of ART but did result in earlier ART regimen change [8, 9, 18].

A similar exploratory study, AIDS Clinical Trials Group A5115, which followed participants for three years and compared a treatment-switching strategy based on CD4-only monitoring versus VL thresholds in 21 public hospitals throughout Thailand, reported no significant differences in activated or total CD4 cells at study end [19, 20]. Despite the lack of evidence of clinical benefit to support the use of routine VL testing, there may be other reasons to promote increased use of VL testing. Routine VL monitoring may reduce the time a patient spends on a failing regimen, potentially reducing the frequency of drug-resistance mutations [21]. However, to date, there is very little evidence that the drug-resistance mutations which develop while patients are failing their first-line regimens have much effect on the success of second-line therapy. A study from Malawi found that virologic responses to a second-line regimen among 109 participants with immunologically defined treatment failure and a measured VL ≥1000 copies/mL were quite good (85 % with VL <400 copies/mL among those with VL measurements at 12 months after switching), although mortality was quite high at 9 %. All patients in this study had viruses with at least one resistance mutation, and 56 % of patients had viruses with thymidine analogue mutations, but the authors did not find an association between these mutations and virologic suppression at one year after treatment switching [22]. Furthermore, the Thai RCT described above did not find differences in the accumulation of virologic resistance mutations. More evidence from larger studies is needed to determine whether virologic monitoring can improve outcomes for individuals diagnosed with treatment failure in resource-limited settings. In the interim, designing HIV programmes that maximize retention of patients in the continuum of care and support adherence to treatment should remain the focus of HIV treatment programmes [23–25]. Many programmes in Sub-Saharan Africa have reported loss to follow-up of 20 % or more among patients on ART, suggesting potential for improvement [26, 27].

This study has a number of limitations. First, the generalizability of our findings to routine care settings may be limited, as participants in this trial were seen and counseled more frequently than is routine in most settings. In the first phase of the HBAC study, participants received weekly home delivery of ART and clinical monitoring by field officers. However, in this phase of the study we extended the interval between home visits, over a 4-month period, to once every 2 months, in order to better reflect standard care models. The intensity of follow-up likely contributed to the low overall rates of virologic failure and loss to follow-up in comparison to those reported in most other settings. It is also important to note that laboratory evaluations were performed every 3 months, rather than every 6 months as recommended by WHO. Furthermore, the rates of virologic failure in our study were generally lower than those of most reported programmes from the region, as surveyed in a recent systematic review [27–30].

Conclusions

In conclusion, we found that clinical outcomes in the first 5 years after ART initiation did not differ between participants with access to CD4 cell count testing alone and those with routine VL and CD4 cell count testing. These data support the continued expansion of access to ART in resource-limited settings, irrespective of the availability of VL testing.