INTRODUCTION

Routine population-based alcohol screening is recommended in primary care settings to identify patients with unhealthy alcohol use who may benefit from brief alcohol intervention.1–3 While screening and brief intervention together are considered a top prevention priority for U.S. adults,4,5 they have proven challenging to implement in routine care.5–8

The U.S. Veterans Health Administration (VA) has achieved high rates of documented alcohol screening, both overall9,10 and relative to other healthcare systems.6 As a result, the VA has been highlighted as a leader in implementation of alcohol screening.11 Consistent with implementation of other clinical services,12,13 the VA used a combination of a national performance measure and an electronic clinical reminder to implement alcohol screening. Specifically, the VA implemented a national performance measure incentivizing annual screening with the validated Alcohol Use Disorders Identification Test – Consumption (AUDIT-C) questionnaire14–18 and disseminated an associated clinical reminder (Fig. 1) to be embedded in the electronic medical record.13 While there is no “gold standard” for optimal use of the AUDIT-C clinical reminder, it was designed to: 1) prompt annual clinical screening; 2) guide clinical staff to perform screening in a validated, standardized, reproducible way; 3) automatically score and document results of screening; and 4) when positive, trigger a subsequent clinical reminder for brief intervention and other appropriate follow-up. The AUDIT-C clinical reminder was used 1.5 million times in its first year (2004), and over 24 million screens have been documented with the clinical reminder in the last five years.9,10
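The automatic scoring described in design goal 3 is simple enough to sketch. The following is an illustrative Python sketch, not the VA reminder's actual code: each of the three AUDIT-C items is answered on a 0–4 scale, giving a total of 0–12, and the sex-specific cut-points shown (≥4 for men, ≥3 for women) are commonly cited thresholds rather than a detail drawn from this article.

```python
# Illustrative sketch of AUDIT-C scoring; NOT the VA clinical reminder's code.
# Each of the three AUDIT-C items is scored 0-4, so totals range 0-12.
# The sex-specific cut-points below are commonly cited thresholds and are
# an assumption, not sourced from this article.

def audit_c_score(q1: int, q2: int, q3: int) -> int:
    """Sum the three AUDIT-C item scores (each 0-4)."""
    for item in (q1, q2, q3):
        if not 0 <= item <= 4:
            raise ValueError("each AUDIT-C item is scored 0-4")
    return q1 + q2 + q3

def screens_positive(total: int, sex: str) -> bool:
    """Apply a commonly used sex-specific cut-point (>=3 women, >=4 men)."""
    return total >= (3 if sex == "female" else 4)
```

A reminder built on logic like this can both document the score and, when the result is positive, trigger the follow-up reminder for brief intervention described above.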

Figure 1.

Screen capture of the U.S. Veterans Health Administration’s (VA’s) electronic clinical reminder for alcohol screening with the Alcohol Use Disorders Identification Test – Consumption (AUDIT-C) questionnaire.

However, while the AUDIT-C has high sensitivity for identifying unhealthy alcohol use based on validation studies,14,16,18 the sensitivity of alcohol screening performed in VA clinics appears to be lower than expected.19,20 Specifically, a previous study found that 61% of patients who screened positive for unhealthy alcohol use on confidential mailed surveys had screened negative when screened clinically.20 Yet little is known about how screening is conducted in practice and what produced this discrepancy.

New policy initiatives in the U.S. are paving the way for widespread implementation of alcohol screening. The Centers for Medicare & Medicaid Services (CMS) now reimburse for annual alcohol screening,21 and the Affordable Care Act (ACA) established both alcohol screening and brief intervention as standard preventive benefits.21–23 Other healthcare systems are beginning to use strategies similar to those used by the VA to implement alcohol screening,24 and more are likely to do so in response to these policy initiatives. Understanding factors underlying the low sensitivity of clinical alcohol screening in VA may help optimize the quality of screening being implemented in VA and other systems. The objective of this study was to observe clinical staff as they conducted alcohol screening in order to understand factors that might contribute to low sensitivity of clinical screening.

METHODS

Setting

This observational qualitative study was conducted at nine primary care clinics located at seven geographically distinct sites within a single VA Healthcare System in the Northwestern United States. These included two large medical centers, each with a large primary care clinic and a women’s clinic that provides primary care; two VA-managed community-based outpatient clinics (CBOCs); and three contract CBOCs (i.e., local clinics that offer VA care to Veterans). Similar to other VA primary care settings, clinical staff, including Registered Nurses (RNs), Licensed Practical Nurses (LPNs), and Health Technicians (Health Techs), are responsible for conducting alcohol screening during patient intake at these sites. The AUDIT-C electronic clinical reminder is “due” annually and “triggered” 9 months after the previous screen.25 No preferred approach to screening (e.g., paper questionnaire or in-person verbal interview) has been specified by national VA policy, but it is common for care incentivized by performance measures to be documented using electronic clinical reminders.25
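The due/triggered timing described above can be expressed as a small piece of date logic. This is a hypothetical sketch of the behavior as described in the text (annual due date, reminder re-appearing 9 months after the last documented screen), not VA's actual implementation; the function name and the day-count approximations of 9 and 12 months are assumptions.

```python
from datetime import date, timedelta

# Hypothetical sketch of the reminder timing described in the text: the
# screen is "due" annually, and the reminder is "triggered" (re-appears
# for clinical staff) 9 months after the previous documented screen,
# leaving a ~3-month window to complete it before it is overdue.

def reminder_status(last_screen: date, today: date) -> str:
    triggered_at = last_screen + timedelta(days=274)  # ~9 months
    due_at = last_screen + timedelta(days=365)        # annual due date
    if today < triggered_at:
        return "not due"
    if today < due_at:
        return "triggered"
    return "overdue"
```

For a patient last screened on January 1, 2010, the reminder would stay silent until early October 2010, appear as triggered through the end of the year, and count as overdue thereafter.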

Participants

Clinical staff responsible for alcohol screening were recruited opportunistically at each site and asked for permission to be observed during their usual course of clinical care. Either before or on the first day of observation at each clinic, clinical staff were given an overview of the study, and those who considered participating were given a one-page information statement describing the study. Staff participants consented verbally; written consent was not obtained, to ensure that no identifying information about participants was collected. Patients were not considered participants of the study and were not recruited. However, because patients were present during observations of participating clinical staff, enrolled staff asked their patients whether they were comfortable being observed, and only encounters in which the patient also agreed were observed. Clinical staff participants were told that observers were interested in understanding the ways in which clinicians interact with clinical reminders, but were blinded to the study’s specific focus on alcohol-related care. The study, including a waiver of written informed consent, was reviewed and approved by the Institutional Review Board at VA Puget Sound.

Observational Data Collection

Between July 2010 and January 2011, four master’s-level researchers trained in public health (RT, GL, LC), social work (GL), and/or nursing (CA) observed clinical staff as they interacted with clinical reminders during intake, in order to specifically observe alcohol screening. Observers were not blinded to the study purpose. Based on methods previously used in VA,26,27 observers spent 1–2 days at each of the nine clinics and took handwritten ethnographic notes of their observations. Notes were observational, not interpretive, and included short descriptions of what the observer saw that was pertinent to the use of clinical reminders and/or alcohol screening. Though their opinions were not solicited, participating clinical staff occasionally commented on clinical reminders and/or alcohol screening, and these comments were also documented in observers’ notes. Observations continued until saturation was reached (i.e., until no new information was being obtained).

Qualitative Analyses

Handwritten observational notes were transcribed into Microsoft Word® by each individual observer. Data were analyzed in an iterative fashion using template analysis (also called codebook analysis or thematic coding).28 While some codes are identified a priori, others emerge from the data. In this way, template analysis is midway between content analysis, where all codes are rigidly defined a priori, and analytic approaches based on grounded theory, where all codes emerge from the data.29 The coding template was initially based on both Greenhalgh et al.’s conceptual model of diffusion of innovations in service settings30 and a summary of previous literature describing barriers to and facilitators of the use of clinical reminders for implementation of evidence-based care.9,10,26,27,31–37 Greenhalgh’s model identified broad domains of implementation (e.g., the implementation process), while previous literature regarding the use of clinical reminders for implementation identified specific factors (e.g., workflow) that might contribute to low sensitivity of clinical screening within those broad domains. The template was iteratively revised based on initial and ongoing review of field notes by all investigators, to identify observations consistent with both a priori and emergent themes.30 Once consensus among all investigators was achieved, two investigators (EW and JG) independently coded all transcribed notes according to the finalized template. Coders resolved all discrepancies via discussion. All investigators reviewed iterative presentations of final coded data to finalize themes and select prototypic examples.

RESULTS

Among 50 clinical staff working in the nine clinics, 49 agreed to be observed. Among the 49 participants, 40 (17 RNs, 18 LPNs, and five Health Techs) were observed, and 31 unique staff (15 LPNs, 11 RNs, and five Health Techs) were observed conducting alcohol screening with 72 unique patients (site-specific numbers reported in Table 1). Qualitative analysis revealed two themes anticipated a priori, three emergent themes specific to conducting alcohol screening with a clinical reminder, and two additional hypothesis-generating themes regarding alcohol screening.

Table 1. Number and Type of Observations, by Clinic

Themes Anticipated a Priori

Two themes anticipated a priori confirmed known challenges to implementation of care using clinical reminders.9,10,26,27,31–37 These included issues related to: 1) workflow, and 2) usability/flexibility of the reminder. Similar to previous studies, existing workflow often prevented completion of the screening reminder before patient appointments. One nurse participant commented to an observer: “if a provider is trying to stay on time with his/her appointments, they will encourage the nurse to skip the reminders.” Another commented, “it’s good the patient came early so I have time to do the clinical reminders.”

Further, we found that alcohol screening guided by the clinical reminder was inflexible and did not always optimally address user needs. The clinical reminder often offered clinical staff “no right answers”26 to click based on the conversations occurring. For instance, patients sometimes volunteered important qualitative information, such as recent radical changes in drinking, but the clinical reminder did not include a response option or text box to capture it. In one observation, the Health Tech conducting screening asked “How much do you drink?” and the patient responded: “I slowed down since I came up here. Maybe two 24 oz cans each day. . .my mother be getting on me about that.”

Emergent Themes Regarding Alcohol Screening with a Clinical Reminder

Three themes specific to conducting alcohol screening with a clinical reminder emerged from the data and reached saturation. Emergent themes and related sub-themes are described in detail below.

Predominantly Verbal Screening Observed Despite Variability in Approaches Across and Within Clinics

Most staff performed alcohol screening verbally by interview, guided by the clinical reminder. However, other methods were observed with variability both within and between clinics. Sometimes a paper form was used (generally only for new patients) that included the AUDIT-C and other behavioral health and preventive screenings; it was either mailed or self-administered in the waiting room prior to the appointment. When paper-based screening was conducted, the clinical reminder was used to document patients’ reported responses. On occasion, staff used laminated paper-based screens to administer the AUDIT-C, whereby clinical staff would hand patients the laminated screen during intake and ask for verbal responses to each question while the staff either input the information directly into the clinical reminder or wrote it down on paper to input later. Regarding this laminate method, one nurse participant commented to an observer: “I think it is more accurate because they can see the answers.”

Specific Screening Practices that May Contribute to Low Sensitivity of Alcohol Screening in VA Clinics

We observed four different verbal screening practices, which often occurred in conjunction with one another, that may contribute to low sensitivity of alcohol screening (prototypical examples provided in Table 2). First, the questions were often not asked verbatim. This happened in several different ways. Staff often preceded verbal screening with questions that were not part of the AUDIT-C (Table 2; examples 1a-1c, 2a, 3a, 3b), and they frequently did not provide patients with the response options (Table 2; examples 1a, 2a, 2c, 3a, 3b, 4a, 4b). Second, staff often made inferences and/or assumptions in selection of AUDIT-C response options when screening was conducted verbally. For instance, some staff documented responses that were not reported into the clinical reminder (Table 2; examples 1a, 2a, 2b), and/or they interpreted general patient responses as fitting into specific AUDIT-C response options (Table 2; examples 2a, 4a). Third, answers to alcohol screening questions were often suggested by clinical staff based on available response options, before patients had a chance to respond (Table 2; examples 3a, 3b). Fourth, staff often omitted the third question of the AUDIT-C (Table 2; examples 4a, 4b), which asks about frequency of drinking six or more drinks on an occasion.

Table 2. Specific Alcohol Screening Practices that Might Contribute to Low Sensitivity of Clinical Screening

Staff Introduced and Adapted Screening Questions to Enhance Patient Comfort

When screening verbally, staff used diverse introductory statements to initiate screening. Some introductions were general, including indicating that the staff in charge of conducting screening was not responsible for deciding who receives screening and clarifying that alcohol screening is routine for all patients. Other introductions appeared specifically focused on enhancing patient comfort (prototypical examples displayed in Table 3). In addition, some staff spontaneously described their perceptions that patients are uncomfortable with alcohol screening and reported adapting the questions in order to make screening more comfortable and acceptable to patients (examples presented in Table 3).

Table 3. Examples of Introductory Statements and Reports of Adapting Screening to Enhance Patient Comfort

Emergent Themes Without Saturation

Two additional themes emerged that did not reach saturation, but may affect screening sensitivity. First, staff appeared not to have been trained to conduct alcohol screening in a validated way. Two staff members explicitly expressed that they were not trained. One said, “We all ask the questions in a different way, we have never been taught how to do it.” Another said, “Everything is just thrown at us without any training.” Second, some staff we observed may have thought screening was targeting identification of patients with the most severe conditions—alcohol use disorders—as opposed to the spectrum of unhealthy alcohol use that also includes risky drinking.38 For instance, we observed one interaction in which a nurse commented on his/her feeling that the threshold for positive screening was low: “The VA is very tough on alcohol… if you don’t drink much, they say ‘don’t drink too much.’”

DISCUSSION

Observations of clinical staff conducting alcohol screening at nine independently managed primary care clinics showed that staff most often conducted alcohol screening verbally. Verbal screening included practices that may result in under-identification of unhealthy alcohol use, including asking the questions non-verbatim; making inferences, assumptions, and/or suggestions of responses to the questions; omitting the third AUDIT-C question regarding frequency of binge drinking; and otherwise adapting the questions. Previous studies have found high rates of documented screening with VA’s electronic clinical reminder,9,20 but found that clinical screening missed many patients with unhealthy alcohol use.20 Findings from the present qualitative study regarding specific verbal screening practices help explain those findings.

Findings from this study also suggest several possible reasons that screening questions were modified when screening was conducted verbally. Specifically, consistent with previous research,26,27 this study identified issues with existing workflow, such that completion of the alcohol screening clinical reminder prior to patients’ appointments was sometimes impractical. Though it is unknown why clinical staff modify questions, they may be attempting to make them briefer in order to fit into existing workflow. In addition, this study’s findings suggested that staff introduced and adapted questions in order to address their perceptions that patients were not comfortable with alcohol screening. It is unknown whether these practices reflected actual patient discomfort, staff’s own discomfort, and/or general social stigma related to alcohol use.39 Regardless of the underlying cause, and even though these efforts were likely aimed at making patients more comfortable and care more patient-centered, the resulting adaptations to screening questions may diminish the sensitivity of screening.

Findings from this study may have implications for other healthcare systems implementing alcohol screening, as well as screening for other important substance use and mental health conditions.21–24,40 Although not without limitations,41,42 clinical decision support systems have been used to implement evidence-based practices across multiple conditions,43–49 and many of these systems are being developed to be shareable across multiple healthcare systems,50 especially in the U.S. in response to healthcare reform. Findings from the present and prior studies in VA suggest that while VA’s alcohol screening clinical reminder has effectively prompted clinical staff to conduct screening and facilitated documentation that screening occurred,9 it has not facilitated screening in a validated, standardized, reproducible way.

If healthcare systems use clinical reminders to implement screening for unhealthy alcohol use and other mental health conditions, additional implementation strategies may be needed to optimize quality (i.e., to successfully identify the patients who might benefit from indicated interventions). Specifically, healthcare systems may need to specify a preferred approach to screening. If verbal screening is recommended, successful implementation30,51 may rely on strategies that actively engage clinicians.48,52–54 Our hypothesis-generating findings suggested that clinical staff may not have been systematically trained to conduct alcohol screening, and may not be aware that screening should identify patients with risky drinking in addition to those with alcohol use disorders.55–57 User-level training strategies, such as didactic training, academic detailing, clinical champions, or practice facilitation,58,59 may be necessary in order to convey the risks associated with the entire spectrum of unhealthy alcohol use (including risky drinking), and the reason for identifying patients across that spectrum (i.e., the efficacy of brief intervention).

However, while application of additional strategies, including user-level training, may contribute to a clinical culture with greater aptitude for offering high-quality alcohol-related care, these strategies may be resource-intensive and cost-prohibitive. Further, findings from the present study call into question whether brief mental health screens, and particularly the AUDIT-C or other screens with multiple specified response options, should be administered verbally by a clinical interviewer, or whether they should instead be patient-administered. Patient self-administered screening, such as laminate, paper-based, or web-based screening, may address underlying reasons for non-standard screening (i.e., workflow issues and responses to perceptions of patient discomfort), avoid the need for continual training of new staff, and increase the sensitivity of clinical screening above that achieved with interviewer-administered screening.60

This study has several limitations. First, practices observed at the study clinics may not be generalizable to those at other VA healthcare systems (of which there are approximately 150 nationally). Future research is needed to assess whether results are similar in other VA and non-VA healthcare settings/clinics. Generalizability may also have been limited by requiring agreement to be observed from clinical staff participants and their patients. While all but one staff member agreed to be observed, not all who agreed were observed, owing to limited research staff and time in clinics, and not all who were observed conducted alcohol screening because, by design, patients were not considered subjects of the research, and thus we were unable to determine a priori which patients were “due” for alcohol screening. Second, observations may not be entirely objective or reflective of existing clinical care: research staff did not audio-record or video-record clinical screening and were not blinded to the study purpose, which may have influenced their notes. In addition, staff were aware they were being observed, which may have altered their practices. Third, although our observations suggested reasons underlying the low sensitivity of clinical alcohol screening in VA, this study did not specifically test whether and how staff delivery of the questions affects patient responses or screening validity. Finally, although some staff offered us their opinions or experiences of screening, we did not systematically elicit their experiences. Therefore, future research is needed to understand the perspectives and experiences of clinical staff who conduct screening.

Despite these limitations, this large qualitative study found that implementation of alcohol screening facilitated by a clinical reminder resulted primarily in verbal screening, which was often not conducted in a standardized, validated fashion. Issues related to workflow, efforts to make patients comfortable, and lack of training may have resulted in observed screening practices. As healthcare systems move forward with implementation of alcohol screening, as well as screening for other mental health and substance use conditions, use of a clinical reminder alone may be limited as a method of facilitating valid, standardized screening. Systems may need to specify a preferred approach to screening. Patient self-administration of recommended mental health and substance use screens (e.g., laminate, paper-based, or web-based patient self-screening) may address underlying reasons for adapting screening questions, and thus, offer a strong alternative to verbal approaches to screening.