INTRODUCTION

There is a growing literature on the types of interventions required to improve healthcare quality1. To reap long-term benefits, the gains brought about by such programs must be sustained beyond the initial interventional period. However, achieving sustainability (i.e., the routinization and institutionalization of improved processes) is difficult and may depend on characteristics of the intervention that are not examined during the trial that demonstrates effectiveness. Sustainability is not often studied, and when it is, the results are often disappointing2–4.

Herein we report on the sustainability of a successful intervention to increase testing for human immunodeficiency virus (HIV) infection. The clinical benefits of identifying and treating asymptomatic HIV-infected individuals are firmly established, and such testing is more cost-effective than many other general-population preventive services5–14. However, 21% of the 1.1 million HIV-infected persons in the United States remain undiagnosed15. Similarly, only 30% to 50% of Veterans Affairs (VA) patients with known, documented risk factors for HIV infection have been tested16,17. Therefore, we previously implemented a multi-modal intervention based upon computerized decision support, provider education and feedback, and organizational changes that significantly increased HIV testing rates among at-risk individuals who receive care at VA medical facilities18. Over a one-year period, implementation of this program increased the cumulative rate of ever being tested for HIV from 20.1% to 53.7% (p < 0.001). In contrast, there was no change in three control facilities.

Once the interventional year was over, we turned project responsibility over to the preexisting primary care clinical leadership. This leadership chose to dramatically reduce the labor-intensive provider education campaign, merging what little remained into routine clinical management (e.g., weekly staff meetings). They did, however, continue the largely “fixed” changes in the systems infrastructure for HIV testing, which required substantially less support to maintain (i.e., the computerized decision support, feedback reports, and maintenance of organizational changes). We now report on the intervention’s sustainability in the second (sustainability) year of this project.

METHODS

As previously described18, the intervention program was put in place for one year in two of the five geographically separate VA regional healthcare systems (HCS) in southern Nevada and California. HCS A and B comprised 12 and five sub-facilities, respectively, in which primary care was provided by mixtures of academic and non-academic staff physicians, postgraduate medical trainees and mid-level providers. This study was approved by the appropriate institutional review boards.

In brief, the components of the intervention were:

  1. A continuously updated, electronic clinical reminder that identifies patients at increased risk for HIV infection and encourages providers to offer HIV testing to such individuals (a minimal logic sketch follows this list). The reminder is triggered by HIV risk factors available in the VA electronic medical record, including evidence of hepatitis B or C infection, illicit drug use, sexually transmitted diseases, homelessness, and hepatitis C risk factors18. Once triggered, the reminder is resolved by ordering an HIV test, recording the result of an HIV test performed elsewhere, or indicating that the patient refused HIV testing or was not competent to consent to testing. Once resolved, the reminder is no longer triggered.

  2. An audit-feedback system: providers were given quarterly reports of clinic-level HIV testing performance19.

  3. The reduction of organizational barriers: under federal laws specific to the VA, written informed consent and pre-test HIV counseling have been required for all HIV tests20. To expedite this process, we encouraged nurse-based rather than physician-based pre-test counseling, use of streamlined HIV counseling, and both telephone notification and brief post-test counseling after negative HIV test results18,21.

  4. A provider education (activation) program: this included academic detailing, social marketing, and educational materials22,23. The academic detailing component involved regular informal discussions in which project staff encouraged providers to prioritize HIV testing24,25. Social marketing involved having physician and nursing clinical opinion leaders encourage HIV testing by primary care providers26. Finally, we developed and distributed educational hand-outs, pocket cards and posters to promote HIV testing and to increase providers’ comfort and ability to provide pre- and post-test HIV counseling.
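To make the reminder’s trigger-and-resolution behavior concrete, the following is a minimal Python sketch. The risk-factor and resolution codes are illustrative assumptions, not the VA clinical reminder software’s actual data dictionary.

```python
# Minimal sketch of the HIV Testing Clinical Reminder's trigger/resolution
# logic. Factor names and resolution codes are hypothetical placeholders.

RISK_FACTORS = {"hepatitis_b", "hepatitis_c", "hepatitis_c_risk",
                "illicit_drug_use", "std", "homelessness"}

RESOLUTIONS = {"hiv_test_ordered", "outside_result_recorded",
               "testing_refused", "not_competent_to_consent"}

def reminder_fires(record: dict) -> bool:
    """Return True if the reminder should fire at this visit: the patient
    has at least one documented risk factor and no prior resolution."""
    at_risk = bool(RISK_FACTORS & set(record.get("factors", ())))
    resolved = bool(RESOLUTIONS & set(record.get("reminder_actions", ())))
    return at_risk and not resolved

# Example: an at-risk patient with no prior resolution triggers the reminder.
assert reminder_fires({"factors": ["hepatitis_c"], "reminder_actions": []})
```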

All aspects of the program were implemented in the first month of the intervention year at HCS A and HCS B and maintained during the subsequent 11 months. In support of the provider education program, members of the study team made frequent visits to the clinics to informally promote HIV testing in one-on-one ad hoc meetings with primary care providers. In addition, senior members of the study team regularly attended clinic and facility-wide meetings of primary care physicians, nurses and clinic leadership to promote HIV testing.

The study team did not participate in provider education activities during the second (sustainability) year of the study and instead fully transferred responsibility for this activity to clinic leadership. Qualitative evaluation indicated that provider education activities were much reduced and merged into routine clinical management activities such as staff meetings. Leadership did maintain other aspects of the intervention, including the quarterly feedback reports of HIV testing rates and the electronic clinical reminder. Organizational changes that had eased the documentation requirements for HIV testing and broadened the number of people authorized to initiate testing and counseling persisted. Distribution of educational materials, pocket cards and hand-outs continued at a reduced rate.

Our primary analytical goal was to assess the trajectory of the monthly rate of HIV testing during the intervention and sustainability years. In addition, we assessed changes in the proportion of patients who agreed to be tested.

Data sources

We obtained administrative and clinical data, including patient demographics, laboratory tests, diagnostic codes and health factors, for inpatient and outpatient encounters from August 2004 to July 2007 from a pre-existing regional VA database18. Medical records were linked across the data files by encrypted identifiers.

Study population

We evaluated outcomes during clinical visits of patients who were identified as being at-risk for HIV infection but had not yet been offered HIV testing (i.e., the HIV Testing Clinical Reminder had not previously been resolved). Visits by eligible patients were removed from the database subsequent to the month during which the reminder was resolved.
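As an illustration only, this cohort restriction could be expressed as the following pandas sketch; the table layout and column names (patient_id, visit_month, resolved_month, with months encoded as comparable integers) are assumptions, not the structure of the actual VA database.

```python
import pandas as pd

def restrict_cohort(visits: pd.DataFrame) -> pd.DataFrame:
    """Keep each patient's visits up to and including the month in which the
    HIV Testing Clinical Reminder was resolved; visits in later months are
    dropped. resolved_month is NaN when the reminder was never resolved."""
    never_resolved = visits["resolved_month"].isna()
    on_or_before = visits["visit_month"] <= visits["resolved_month"]
    return visits[never_resolved | on_or_before]
```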

Statistical methods

To assess the adjusted rates of HIV testing and refusal, we performed logistic regression analyses in which the unit of analysis was the patient who was seen at a VHA facility in a given month, had HIV risk factors, and whose HIV Testing Clinical Reminder had not previously been resolved. The dependent variables were performance of HIV testing and documentation of patient refusal to be tested. The independent variables included patient demographic and clinical factors such as age, race and ethnicity, marital status, lack of housing, co-payment status, being at-risk for hepatitis C, hepatitis C infection, hepatitis B infection, illicit substance use and sexually transmitted diseases18. The two VHA healthcare systems comprised 17 facilities where the patients were seen. To adjust for any systematic facility-level effects on the likelihood of accepting or refusing HIV testing, we included facility-level annual patient loads and baseline HIV testing rates in the pre-intervention period as independent variables. Finally, we adjusted the covariance of the regression model for patient clustering within facilities using the Generalized Estimating Equation (GEE) method. Analyses were performed with proc genmod in SAS version 9.1 (SAS Institute, Cary, NC, USA).
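For readers who do not use SAS, the model can be approximated with the following Python/statsmodels sketch of a GEE logistic regression with exchangeable within-facility correlation; the data frame and variable names are assumptions, and the actual analysis was performed in SAS proc genmod as stated above.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_testing_model(df):
    """GEE logistic regression for HIV testing, clustered by facility.
    Covariate names mirror the factors listed in the text but are assumed."""
    model = smf.gee(
        "tested ~ age + C(race) + C(marital_status) + homeless + copay"
        " + hcv_risk + hcv + hbv + drug_use + std"
        " + facility_load + baseline_rate",
        groups="facility",                        # patients clustered within facilities
        data=df,
        family=sm.families.Binomial(),            # logistic link for a binary outcome
        cov_struct=sm.cov_struct.Exchangeable(),  # exchangeable within-facility correlation
    )
    return model.fit()
```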

RESULTS

Table 1 compares the demographic and clinical characteristics of patients with known risk for HIV infection who received care in the intervention and sustainability years. In the sustainability year, at-risk patients were somewhat younger and less often married, largely reflecting an influx of veterans from recent military campaigns into VA care27–30. Otherwise there were no meaningful differences in demographic and clinical characteristics between the two years. The number of patients in the sustainability year was lower than in the intervention year because all patients in whom the HIV Testing Clinical Reminder was resolved in the intervention year were excluded from the analyses of the sustainability year.

Table 1 Patient Demographic and Clinical Characteristics

We previously reported that our multi-modal intervention more than doubled the rate of HIV testing among at-risk individuals18. The percentage of at-risk patients who received an HIV test was 11.1% in the intervention year versus 5.0% in the year prior to the intervention (p < 0.001). In the sustainability year, 11.6% of at-risk patients were tested. To better assess whether this result represented actual sustainability of the intervention, we assessed the trajectory of the monthly HIV testing rates31,32. This rate increased from 2% at baseline (prior to implementation of the program) to 6% in month 12 (Fig. 1). Although the monthly testing rate declined in the sustainability year, the rate in month 24 remained more than twice the baseline rate (4% versus 2%). These results were consistent across all patient subgroups (data not shown).

Figure 1

Adjusted HIV testing rates among all patients with identified risk factors for HIV infection. The active intervention period started in study month one and lasted through study month 12. The sustainability period started in study month 13.

As only patients in whom the HIV Testing Clinical Reminder remained unresolved were eligible for testing in the sustainability year, the previous analyses are susceptible to bias from differences in system-, provider- or patient-level characteristics between patients in whom the reminder was or was not resolved in the intervention year. To reduce this bias, we analyzed HIV testing rates by the order of visits since the start of the intervention period (i.e., first visit, second visit, etc.). This analysis was prompted by discussions with providers, which indicated that a more comprehensive approach to detecting undiagnosed disease is taken with new patients. As shown in Figure 2a, the HIV testing rate was consistently greatest at a patient’s first visit during the study period (i.e., at the first possible exposure to the intervention). For such patients, the testing rate increased from 2% at baseline (pre-intervention) to 6% in month 1; the rate continued to increase throughout the 24-month observation period. For each subsequent visit, the magnitude of the increase in the HIV testing rate was less than for patients having their first visit, but remained greater than during the pre-study period for patients having their second to fourth visits. Time series analyses demonstrated that the probability of being tested increased over time for patients having their second or third visits. Minimal increases were seen at the fourth visit, and the testing rate at the fifth and later visits did not increase. Over time, the proportion of patients being seen at their first to third visits decreased while the proportion being seen at visit number four and greater increased (Fig. 2b). This change in patient distribution explains the attenuation of the rate of HIV testing in the overall population. Further analyses did not identify any demographic, clinical or facility characteristics that differed between persons who were or were not tested for HIV by their fourth visit (data not shown).
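A minimal sketch of this stratification, assuming a visit-level pandas data frame with patient_id, visit_month and a binary tested column (all names assumed), is shown below; pooling the fifth and later visits is a simplification for illustration.

```python
import pandas as pd

def testing_rate_by_visit_order(visits: pd.DataFrame) -> pd.DataFrame:
    """Number each patient's visits from the start of the intervention and
    compute the monthly testing rate within each visit-number stratum."""
    visits = visits.sort_values(["patient_id", "visit_month"]).copy()
    visits["visit_number"] = visits.groupby("patient_id").cumcount() + 1
    visits["stratum"] = visits["visit_number"].clip(upper=5)  # pool visits 5+
    return (visits.groupby(["stratum", "visit_month"])["tested"]
                  .mean()                 # proportion tested = monthly rate
                  .unstack("visit_month"))
```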

Figure 2

(a) Adjusted HIV testing rates among patients, stratified by outpatient study visit number. The starting periods of the strata are offset at monthly intervals, as very few patients had more than one visit per month. (b) Proportion of outpatient visits, grouped by visit number.

As discussed in METHODS, the HIV Testing Clinical Reminder can be resolved by performing an HIV test or by documenting that the patient refused to be tested. While patient choice with respect to HIV testing must be respected, minimizing the refusal rate is an important goal; once “refused” was selected, the HIV Testing Clinical Reminder did not prompt providers to re-offer HIV testing during future visits. However, we hypothesized that some “refusals” might actually reflect provider discomfort in offering an HIV test33,34, and therefore that the refusal rate might decrease as providers gained more HIV testing experience.

There was a substantial, continuous decrease in the HIV test refusal rate (Fig. 3). The net result was that, among persons in whom the HIV Testing Clinical Reminder was resolved, the likelihood that reminder resolution resulted in HIV testing increased from 17% of all reminder responses in the first month of the intervention to 60% in the final month.
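As a simple illustration of how this resolution mix can be computed, the following sketch gives, for one month’s reminder resolutions, the share that were actual HIV tests; the resolution code is a hypothetical placeholder.

```python
def share_resolved_by_testing(resolutions: list) -> float:
    """Fraction of a month's reminder resolutions that were HIV tests;
    'hiv_test_ordered' is a hypothetical code, not the VA's actual value."""
    if not resolutions:
        return float("nan")
    return resolutions.count("hiv_test_ordered") / len(resolutions)

# Example: 17 tests among 100 resolutions -> 0.17, as in month 1.
print(share_resolved_by_testing(["hiv_test_ordered"] * 17 + ["testing_refused"] * 83))
```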

Figure 3

The vertical bars depict the adjusted rates at which patients with HIV risk factors underwent testing or were documented as refusing testing. The lines indicate the proportion of patients who were offered HIV testing and then underwent testing.

DISCUSSION

We previously demonstrated that implementation of an integrated package of quality improvement interventions utilizing decision support, a provider education (activation) campaign, feedback reports and organizational changes more than doubled HIV testing rates for at-risk individuals18. These results were robust, with dramatic increases in the likelihood of being tested for HIV observed across patient-level, provider-level and subfacility-level factors. Furthermore, the fraction of HIV test results that were positive remained constant (0.45%) and well within the range at which HIV testing costs less than $50,000 per quality-adjusted life year when the societal benefits of testing are considered6.

We now report on the sustainability of this program during the twelve-month period after overall responsibility for the interventional program was transferred to preexisting clinical management, who chose to greatly deintensify the provider education campaign and other labor- and time-intensive aspects of the intervention18,35. Remarkably, we found that the rate of HIV testing continued to increase for patients making their first, second or third visits during the sustainability period. These results indicate that, despite the de-emphasis of the provider education campaign, the program’s impact on HIV testing rates was fully sustained when the frequency of medical contact is considered. The decline in overall testing rates reflected the changing make-up of the study population: patients making their first through third visits accounted for 100% of the study population in month 1, 54% in month 12 and 41% in month 24.

We also found that the rate at which patients refused HIV testing decreased over time. Correspondingly, the likelihood that the HIV Testing Clinical Reminder was resolved by HIV testing increased. These results suggest that providers became more proficient at offering and discussing HIV tests and may have integrated HIV testing into their normal practice. Others have observed that normalization of HIV testing is associated with increased patient acceptance of testing36,37.

The importance of reporting the sustainability of healthcare interventions and of choosing appropriate measurement metrics is receiving increasing attention32. Our results indicate that assessments of the sustainability of the outcome of an intervention critically depend on the mode of analysis. We found that, when assessed in a homogeneous patient population (as defined by prior use of VA healthcare), increased HIV testing rates were sustained after de-emphasis of the provider education campaign and continued to increase among patients newly exposed to the intervention (Fig. 2a). This suggests that our intervention has become part of the institutional culture of our facility, does not overburden providers, and fits the implementing culture and the variations of the patient population32.

Stratified analysis by the number of visits during each year reveals that our intervention was least sustained among established patients who had not previously been offered testing. We conclude that interventions that aim to maximize sustainability should consider a “tail” of provider education or other components focused on patients who do not receive recommended services at their first exposure. Further work is also needed to identify the determinants of repeated non-performance; we believe that such failures are likely due to systemic barriers or a lack of provider agreement or knowledge. Notably, although theoretical38–40 and empirical22,23,41–43 observations demonstrate that the use of provider education (or activation) campaigns is necessary to transform group norms and maximize quality improvement, there is far less literature regarding the importance of maintaining these activities to sustain whatever gains are achieved during their use32.

The strengths of our sustainability analysis include, as recommended, use of a time-series analysis of monthly rates of HIV testing which allowed us to better assess the trajectory of HIV testing rates32,44. Furthermore, we examined the effectiveness of the intervention in an unselected population of at-risk veterans receiving care in a routine, real-world clinical setting.

Limitations include the fact that the sustainability analysis was done immediately after the withdrawal of study personnel from active maintenance of the intervention. It is therefore difficult to distinguish between lingering improvements from the implementation and true persistence of effects from institutionalization45. Moreover, this study was undertaken within the quality improvement infrastructure of the VA, which includes an electronic medical record, clinical reminder software and familiarity with performance measurement. Although such tools are increasingly common, this intervention might not be generalizable to other healthcare systems. In addition, while sustainability can be defined as continued use of the core elements of an intervention and persistence of improved performance32, we did not formally evaluate the continued use of the core elements of the intervention or their individual contributions to its successful sustenance. However, surveys of the two HCSs involved in this project indicate that the organizational changes that favor HIV testing and the HIV Testing Clinical Reminder software package have been maintained.

There was also still room for improvement, and it is unknown whether the rates of HIV testing would have increased further had the provider activation campaign been continued. Furthermore, while guidelines now recommend that all patients be offered HIV testing and that yearly testing be offered to persons who continue to engage in high-risk activities14,46–48, this intervention was targeted to ensure one-time testing in patients with known risk factors. This strategy was purposely undertaken to prioritize testing for patients at the highest known risk for HIV infection and in deference to concerns that a program to promote HIV testing in all patients would be impractical in the VA as long as written informed consent was required for testing. Finally, the achieved rate of HIV testing remained less than desired. It will be important to determine the effect of the removal of the written informed consent requirement for VA HIV testing in August 2009 on rates of HIV testing49.

In conclusion, we found that when assessed in homogeneous patient populations, the impact of the coordinated use of a computerized clinical reminder, feedback reports, provider education and organizational change is sustainable after cessation of external support for the provider education component. Maintenance of the gains after withdrawal of support by the research team suggests that the organizational and behavioral changes that led to the enhanced performance of HIV testing were successfully institutionalized. These findings have substantial implications for the assessment and sustenance of quality improvement programs for clinical preventive services and beyond.