INTRODUCTION

Influenza is a significant cause of illness, hospitalization, and mortality in the United States.1,2 Vaccination has been demonstrated to be an effective method for reducing the burden of disease.3,4 Despite these benefits, each year more than 50% of adults in the United States do not receive the influenza vaccine, and this level has not improved in nearly a decade.5,6 New strategies may therefore be needed to address this gap.

The growing adoption of the electronic health record (EHR) may provide new opportunities for developing scalable methods to increase the use of preventive services.7 While clinical decision support through the EHR has been shown to improve clinician performance on process measures, there is less evidence of its impact on patient outcomes.8–11

Insights from behavioral economics could offer new approaches to designing choices within the EHR to improve clinician and patient decisions.12–15 For example, our prior work found that “active choice,” a method that requires clinicians to explicitly accept or decline an order, significantly increased order rates for colonoscopy and mammography screening.16 In this study, we evaluated the impact of an active choice intervention in the EHR on influenza vaccination rates. Rather than the standard approach of relying on clinicians to recognize the need for vaccination and opt into placing an order, the EHR confirmed patient eligibility during the clinic visit and used an alert asking the physician and their medical assistant to actively choose to “accept” or “cancel” an order for influenza vaccination.

METHODS

This study was approved by the University of Pennsylvania institutional review board. Informed consent was waived because it was infeasible given the retrospective study design and the minimal risk posed to patients.

Study Sample

The sample comprised patients with a clinic visit during influenza season (September 1 to March 31) at one of three similar internal medicine practices in the University of Pennsylvania Health System between September 2010 and March 2013. All three sites were academic teaching practices located within the same area (0.3 miles apart) in Philadelphia, Pennsylvania. To ensure that we evaluated patients who were due for the influenza vaccine, we excluded patients who, during the current influenza season, either 1) had EHR documentation of having already received the influenza vaccine at the practice or elsewhere (based on health maintenance data from EPIC, the outpatient EHR), or 2) had a health insurance claim for the influenza vaccine.
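As an illustration only, this exclusion rule amounts to a simple filter over the visit and vaccination records; the sketch below is ours, with hypothetical file and column names rather than the actual Clarity extract.

```python
# Minimal sketch of the eligibility filter (hypothetical names):
# exclude patients with evidence of vaccination this season from
# either EHR health maintenance data or an insurance claim.
import pandas as pd

visits = pd.read_csv("clinic_visits.csv")            # one row per patient visit
ehr_vax = pd.read_csv("ehr_health_maintenance.csv")  # EHR-documented vaccinations
claims = pd.read_csv("vaccine_claims.csv")           # insurance claims for the vaccine

already_vaccinated = set(ehr_vax["patient_id"]) | set(claims["patient_id"])
eligible_visits = visits[~visits["patient_id"].isin(already_vaccinated)]
```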

Intervention

Prior to the intervention, providers at all three sites had to manually check whether a patient was due for the vaccine and then place an order for it. On February 15, 2012, one of the clinics implemented a change to the EHR settings using a best practice alert in EPIC. This intervention confirmed patient eligibility for the vaccine during the clinic visit and, upon signing into the EHR for that patient, prompted the provider to actively choose to “accept” or “cancel” an order for the influenza vaccine. The alert was delivered at the time of patient check-in to medical assistants, who could pend orders for the physician to review and potentially sign. Regardless of the medical assistant’s action, physicians also received the alert when first opening a patient’s chart. Physicians could place their own order or approve the order pended by the medical assistant. Because the alert was delivered independently to both the physician and the medical assistant, in some cases both entered an order; in this scenario, the physician would sign one of the orders and cancel the other.
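To make this workflow concrete, the sketch below expresses the alert’s decision logic in Python. EPIC’s actual best practice alert configuration is proprietary, so all names and structures here are hypothetical.

```python
# Sketch of the active choice alert logic (hypothetical, not EPIC code).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    patient_id: str
    item: str
    status: str  # "pended" awaits physician signature; "signed" is final

def active_choice_alert(patient_id: str, eligible: bool,
                        role: str, choice: str) -> Optional[Order]:
    """Fire at check-in (medical assistant) or chart open (physician).

    The user must actively choose "accept" or "cancel"; there is no
    default action. Medical assistants can only pend orders.
    """
    if not eligible or choice != "accept":
        return None
    status = "pended" if role == "medical_assistant" else "signed"
    return Order(patient_id, "influenza vaccine", status)

# Both roles receive the alert independently; if duplicate orders result,
# the physician signs one and cancels the other.
print(active_choice_alert("MRN001", True, "medical_assistant", "accept"))
```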

Main Outcome Measures

The primary outcome measure was the percentage of patients eligible for the influenza vaccine who had an order for it placed on the day of the clinic visit.

Data

Clarity, an EPIC reporting database, was used to obtain data on patient demographics and comorbidities; clinic visits, including visit type and whether the provider was the patient’s primary care physician; and influenza vaccine orders. Health insurance claims were obtained from the billing system at the University of Pennsylvania Health System. Data on Medicare or Medicaid insurance were missing during the pre-intervention year for some patients because the method by which the health system captured these data changed; these patients were coded as having other insurance.

Statistical Analysis

We used a multiple time series research design,17,18 also known as difference-in-differences, to compare pre- and post-intervention outcomes within the intervention practice against those within the two control practices. While some opportunity for residual confounding remains, this approach reduces potential bias from unmeasured variables from three sources.18–20 First, a between-group difference that is stable over time cannot be mistaken for an effect of the intervention, because practice site fixed effects compare each practice with itself before and after the intervention. Second, changes affecting both groups similarly over time, such as technological improvements or pay-for-performance initiatives, cannot be mistaken for an effect, because the regression models use monthly time fixed effects. Third, if the patient mix changes differently among practices, and if these changes are accurately reflected in the measured risk factors, the changes cannot be mistaken for an effect of the intervention, because the regression models adjust for these measured risk factors.
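In schematic form (our simplified notation, not taken from the analysis code), the adjusted model underlying this design can be written as:

```latex
% Schematic difference-in-differences logistic model.
% Y_i: vaccine order indicator for patient i
% alpha: practice site fixed effects; gamma: monthly time fixed effects
% X_i: measured risk factors; T: intervention-practice indicator
% delta_m: monthly intervention effects in the post period
\[
\operatorname{logit} \Pr(Y_i = 1)
  = \alpha_{p(i)} + \gamma_{t(i)} + \boldsymbol{\beta}^{\top}\mathbf{X}_i
  + \sum_{m \in \text{post}} \delta_m \, T_{p(i)} \, \mathbb{1}\{t(i) = m\}
\]
```

Here the practice fixed effects absorb any stable between-practice difference, the monthly time fixed effects absorb common temporal shocks, and the covariate term absorbs measured case-mix changes, mirroring the three sources of bias described above.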

Similar to prior work,13,16,21 a multivariable logistic regression model was fit to the binary outcome measure (vaccination ordered) using the patient as the unit of analysis and adjusting for demographics (age, gender, race/ethnicity), comorbidities (using the Charlson Comorbidity Index, which predicts 10-year mortality),22 insurance type, whether the visit was with the primary care provider, and visit type (new, return, reassign provider, other). The model compared the post-intervention influenza season (September 2012 to March 2013) to the influenza seasons in the prior 2 years, adjusting for calendar month, year, and practice site fixed effects. Standard errors were adjusted to account for clustering by patient.23,24 To assess the mean effect of the intervention in the post-intervention period, we exponentiated the mean of the monthly interaction-term log odds ratios for the outcome measure.13,16,21,25,26 To obtain the adjusted difference in the percentage of patients with an order placed, along with 95% confidence intervals, we used a bootstrap procedure resampling patients.27,28 For all measures, we conducted a test of controls to assess the null hypothesis of parallel trends between the intervention and control practices using monthly data from the pre-intervention period. Two-sided hypothesis tests used a significance level of 0.05; analyses were conducted using SAS version 9.4 (SAS Institute Inc., Cary, NC).
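As an illustration only, the following Python sketch mirrors the spirit of this analysis (the actual analysis used SAS); the data file and all column names are hypothetical.

```python
# Illustrative re-implementation (not the authors' SAS code) of the
# adjusted difference-in-differences logistic model. `post_month` is
# "pre" for all pre-intervention visits and a month label (e.g.,
# "2012-09") for post-intervention visits; `treated` flags the
# intervention practice.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analytic_file.csv")  # hypothetical patient-visit-level file

formula = (
    "vaccine_ordered ~ C(practice) + C(month) + C(year)"
    " + age + C(gender) + C(race) + charlson_index + C(insurance)"
    " + C(pcp_visit) + C(visit_type)"
    " + treated:C(post_month, Treatment('pre'))"  # monthly interaction terms
)
fit = smf.logit(formula, data=df).fit(
    cov_type="cluster",                     # standard errors clustered
    cov_kwds={"groups": df["patient_id"]},  # by patient, as in the text
    disp=False,
)

# Mean post-period effect: exponentiate the mean of the monthly
# interaction-term log odds ratios.
deltas = [v for k, v in fit.params.items() if k.startswith("treated:")]
print("adjusted OR:", np.exp(np.mean(deltas)))
# Percentage-point differences with 95% CIs would come from a bootstrap
# that resamples patients, as described above.
```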

RESULTS

The sample comprised 45,926 patients with a mean age of 50.2 years, of whom 62.9% were women, 35.9% were white, and 54.4% were black (Table 1). More than 99.9% (9938/9941) of vaccination orders placed during the study period resulted in the patient receiving the vaccination.

Table 1 Sample Characteristics

From pre-intervention year 2 to pre-intervention year 1, the vaccination order rate in the control practices declined from 19.8% to 14.9% (Fig. 1). During the same period, the vaccination order rate in the intervention practice also declined, from 21.2% to 14.5%. A test of controls for the pre-intervention period could not reject the null hypothesis of parallel trends between the intervention and control groups (odds ratio [OR]: 0.98, 95% confidence interval [CI]: 0.87–1.10, P = 0.70).

Figure 1

Influenza vaccination order rates for the intervention and control practices before and after active choice implementation. Percentage of patients with an order placed for the influenza vaccination at the intervention practice (gray dashed line) and the two control practices (black solid line) for the following influenza seasons: pre-intervention year 2 (Sept 2010 to March 2011), pre-intervention year 1 (Sept 2011 to March 2012), and post-intervention year (Sept 2012 to March 2013).

In the post-intervention year, the vaccination order rate increased to 28.2% in the control practices and 36.3% in the intervention practice. In the adjusted difference-in-differences model, the intervention practice had a significantly greater increase in vaccination order rates than the control practices over time (6.6 percentage points; 95% CI, 5.1–8.1; P < 0.001), representing a 37.3% relative increase compared to the pre-intervention period (Fig. 1).

DISCUSSION

Our findings demonstrate that the active choice intervention in this study was associated with a significant increase in influenza vaccination rates when compared to a control group over time. This suggests that choice architecture within the EHR could be used more broadly as a scalable approach for optimizing medical decision-making and improving influenza vaccination rates.

Seasonal trends in influenza vaccination rates may be related to the severity of influenza activity in the respective years. Based on data from the Centers for Disease Control and Prevention, influenza activity was low in the pre-intervention years of our study, with the 2011–2012 season having lower activity than the 2010–2011 season, potentially explaining the decline in vaccination rates.29 Compared to the pre-intervention years, we observed increased vaccination rates in both the control and intervention practices during the 2012–2013 influenza season, which may reflect a response to increased influenza activity that season.29 A strength of our study is the difference-in-differences design, which prevents external factors that affect all clinics similarly from biasing the interpretation of the intervention’s impact.17–20

Our findings expand the understanding of how choice architecture within the EHR can increase influenza vaccination rates, and they suggest several ways such interventions may influence physician and patient behavior. First, in prior work, we found that an active choice intervention led to higher rates of ordering colonoscopy and mammography for eligible patients (∼12 percentage-point increase).16 However, while colonoscopy completion rates increased by 3.5 percentage points, mammography completion rates were unchanged. In the present study, nearly all orders resulted in actual vaccination. This may be because the ordering clinician has more control over delivery of the service: the vaccine can be administered during the same office visit. Interventions that occur at the time of order entry may therefore be more impactful for services that can be completed or coordinated within the same clinic visit.

Second, a recent systematic review of 57 clinical trials on influenza vaccination found that many interventions focused on reminding either patients or physicians about the importance of vaccination.30 However, many of the patient-focused interventions were labor-intensive, such as mailing letters to patients or having a pharmacist or nurse call them. Provider-focused interventions included financial incentives, physical reminders through posters in the clinic or postcards in the mail, and regular education sessions. The EHR is potentially a more scalable and automated approach that reduces complexity and administrative burden. For example, Fiks and colleagues conducted a randomized trial of EHR-based reminders in 20 pediatric practices and found that the intervention led to higher rates of influenza vaccination among patients with asthma.31 However, there is growing evidence that too many EHR-based reminders can cause alert fatigue, potentially reducing the impact of these interventions over time.32–35 Best practice alerts such as those used in this study not only act as reminders but also allow the provider to quickly place an order. A study by Ledwich and colleagues found that best practice alerts in two rheumatology clinics led to higher rates of influenza and pneumococcal vaccination,36 but that study lacked a control group for comparison. Our study leveraged best practice alerts and compared changes against two control practices over time.

Third, while our study used active choice, there is evidence that other forms of choice architecture, such as changing default options, can achieve increases in influenza vaccination rates of similar magnitude. Dexter and colleagues randomly assigned inpatient physician teams to receive a reminder about patient eligibility for influenza vaccination or to have a standing order placed (an opt-out process).37 Physician reminders led to a 30% vaccination rate, while the opt-out process led to a higher vaccination rate of 42%. In another study, Chapman and colleagues randomly assigned university employees to either an opt-in process, in which vaccination appointments needed to be scheduled, or an opt-out process, in which appointments for vaccination were already scheduled but could be changed.38 The authors found a similar difference in vaccination rates: 33% in the opt-in group compared to 45% in the opt-out group. We have also found that changing from an opt-in to an opt-out process for generic substitution can increase generic prescription rates.13,14 It may therefore be important to consider various approaches to changing choice architecture and how they affect behavior. In some settings, changing default options is not possible, and other forms of intervention may be more appropriate. For example, two randomized controlled trials have demonstrated the use of social comparison feedback to physicians to reduce unnecessary antibiotic prescribing.15,39,40

This study is subject to limitations. First, any observational study is susceptible to unmeasured confounders; however, comparing outcomes over time between the intervention and control practices reduces potential bias from unmeasured confounding. Second, these findings are from a small number of practices within a single health system, which may limit generalizability to other settings. Third, we were unable to assess relative differences in effects between physicians (who could place and sign orders) and medical assistants (who could pend orders for the physician to review and sign). Fourth, the intervention began shortly before the end of pre-intervention year 1, which may conservatively bias our results towards the null. Fifth, we did not evaluate alert fatigue, and longer-term studies are needed to assess the sustainability of the intervention’s effect over time.

In conclusion, compared to a control group over time, the active choice intervention in the EHR was associated with a significant increase in influenza vaccination rates. Changing the manner in which choices are offered and displayed in the EHR may be an effective, scalable approach that could be used to increase the rates of influenza vaccination and other preventive services more broadly.