Background

The American Recovery and Reinvestment Act of 2009 (ARRA) awards up to $63,750 in incentive payments to health care providers who demonstrate "meaningful use" of a "qualified electronic health record" (EHR), and will eventually penalize providers who do not demonstrate such meaningful use by 2015[1]. One important aspect of the meaningful use concept is the application of clinical decision support (CDS) tools to improve the coordination and quality of health care[2]. For instance, real-time electronic notification of abnormal test results via the EHR may facilitate timely follow-up, particularly in outpatient settings, where many results are not immediately life-threatening and are not verbally reported to ordering clinicians[3, 4]. Outpatient test results are especially vulnerable to "falling through the cracks"[5, 6] and are often cited as reasons for delays in diagnosis and treatment, patient harm, and malpractice claims[7-14].

To achieve meaningful use of EHRs as envisioned by the federal government, providers need to be proficient in the use of the decision support features available in their EHR and understand how those features fit into the clinical workflow. However, provider needs and the current workflow are not always considered when designing EHR systems[15]. In many EHR systems, an electronic notification feature (e.g., the "View Alert" window used by the U.S. Department of Veterans Affairs' (VA) Computerized Patient Record System (CPRS), or the "In-Basket" feature available in Epic's EpicCare EHR) delivers test results to a message inbox that providers can access after they log in to the EHR (see Additional file 1). Alerting through this asynchronous mechanism is quite different from "synchronous" CDS alerts, such as drug-drug interaction (DDI) alerts, which interrupt users as they enter medication orders. While synchronous alerting has been studied quite extensively[16-20], alerting through asynchronous channels has received little attention. Unlike DDI alerts, test result alerts do not necessarily demand follow-up action immediately after delivery. How these asynchronous alerts integrate into a provider's workflow is largely unknown.
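For illustration, the following minimal Python sketch contrasts the two delivery models (the class and function names are invented for this example and do not reflect CPRS or EpicCare internals): a synchronous DDI check interrupts order entry immediately, whereas an asynchronous alert is simply queued to an inbox that the provider reviews on his or her own schedule.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Alert:
    patient_id: str
    message: str
    created: datetime = field(default_factory=datetime.now)
    acknowledged: bool = False

class AsynchronousInbox:
    """Illustrative 'View Alert'-style inbox: alerts accumulate until the
    provider logs in and reviews them, possibly hours or days later."""
    def __init__(self) -> None:
        self.alerts: list[Alert] = []

    def deliver(self, alert: Alert) -> None:
        # Delivery does not interrupt whatever the provider is doing now.
        self.alerts.append(alert)

    def review(self) -> list[Alert]:
        # Review happens on the provider's schedule, not the system's;
        # acknowledging an alert is not the same as acting on it.
        pending = [a for a in self.alerts if not a.acknowledged]
        for alert in pending:
            alert.acknowledged = True
        return pending

def check_drug_interaction(order: str, known_interactions: set[str]) -> None:
    """Illustrative synchronous (interruptive) check: raised at the moment
    of order entry and must be resolved before the workflow continues."""
    if order in known_interactions:
        raise RuntimeError(f"DDI alert: review interaction before ordering {order}")
```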

We recently examined providers' responses and follow-up actions on over 2,500 alerts for abnormal test results in CPRS[5, 6]. We found that providers did not acknowledge (i.e., did not read) 18.1% of alerts for abnormal imaging results and 10.2% of alerts for abnormal laboratory results. Furthermore, approximately 8% of abnormal imaging and 7% of abnormal laboratory results lacked timely follow-up at 30 days. We also found no significant relationship between acknowledging an alert and timely follow-up. Thus, despite delivery of test results directly to a clinician's View Alert window, abnormal results did not always receive timely follow-up.

Clinicians do not optimally use all of the functions in the EHR; for instance, we found that about half (46%) of clinicians did not use the specific features of the View Alert window that facilitate better processing of electronic alerts[21]. Instead, providers often used handwritten notes or external systems (e.g., tickler reminders on their calendars) to help process their alerts[21]. Thus, many factors beyond the technology itself will likely determine how "meaningfully" providers use CDS tools for test result reporting in the future[22, 23].

To obtain a comprehensive understanding of the management of test result alerts in EHRs and to explain why abnormal results might be missed, we used a qualitative, sociotechnical, systems-based approach. Our objective was to conduct a qualitative study at two large VA facilities to understand barriers, facilitators, and potential interventions for effective and safe management of abnormal test results delivered through the EHR. We relied on human factors engineering principles to frame our research questions and explore issues beyond the confines of the computer.

Methods

Human Subjects

This study was approved by the Baylor College of Medicine Institutional Review Board for compliance with accepted human subject research practices consistent with the Helsinki Declaration (Protocol # H-21817, Improving Outpatient Safety Through Effective Electronic Communication).

Design and Setting

We conducted three focus groups at each of two large, geographically dispersed VA medical centers between January and May 2009. Focus groups are ideally suited for this type of research because they allow live interaction among participants and elicit richer data than survey methods[24]. Table 1 presents basic characteristics of the two sites.

Table 1 Basic characteristics of participating sites and focus group composition.

For almost a decade, both study sites have used CPRS, the EHR system in use at VA facilities nationwide. Within CPRS, providers are notified of test results in a "View Alert" window that is displayed when a provider logs in. Although some functionality is configurable at the facility and user levels, most software changes to CPRS are made at the national level and disseminated simultaneously to all VA facilities. Consequently, CPRS configurations are far more standardized across facilities than those of most commercially available EHR systems.

Participants and Sampling Frame

Forty-four full-time personnel representing primary care, radiology, information technology (IT), and laboratory services participated in the focus groups. We purposively selected participants from these fields because their job responsibilities involved considerable interaction with test result notifications in the View Alert window at different points in the workflow. Additionally, the primary care providers (PCPs, consisting of physicians and physician assistants) were purposively sampled, based on previous analyses[5], as having either a high or a low number of alerts lost to follow-up within a 30-day period. Groups were limited to the recommended size of six to eight participants[24]. Table 1 presents the composition of each focus group by staff specialty.

Procedure

Details of our data collection and analysis procedures are published elsewhere[25] and are summarized here. Participants were recruited via phone or email and, after having the opportunity to ask questions about the study (including issues of anonymity and confidentiality), signed an informed consent form before participating in the focus groups.

During the first two focus groups, we asked participants to discuss barriers and facilitators to successful management and follow-up of abnormal test result alerts and to provide suggestions for improvement. We encouraged participants to think beyond the CPRS user interface, software, and hardware, and to consider broader human factors engineering issues such as organizational, personnel, workflow, and work environment concerns. Participants in the third focus group at each site expressed agreement or disagreement with the most frequently raised themes from the first two focus groups and discussed additional barriers and suggestions for improvement.

Data Analysis

We used thematic analysis [26] to analyze our focus group transcripts, with the goal of identifying common alert management barriers and facilitators and suggestions for improving the alert system. Analysis tasks included a) the development of an initial coding taxonomy; b) open coding, in which text passages were examined for recurring themes and ideas; and c) axial coding, in which themes were organized into meaningful relationships. Figure 1 depicts the flow of analysis tasks.

Figure 1 Summary of coding process and analysis flow.

Taxonomy Development and Open Coding

Two coders with qualitative research experience independently coded the focus group transcripts for content pertaining to barriers, facilitators, and suggestions for improvement. The coded data sets were then merged and reviewed by a third coder (the validator) to reconcile nearly identical quotations, codes carrying different labels yet referring to the same phenomenon, and codes needing further discussion to reach consensus. The coding team then met to review and reach agreement on discrepant codes and quotations. After a one-week waiting period to reduce priming effects, each coder independently coded the finalized quotation list using the newly developed taxonomy. The validator again identified inter-coder discrepancies, which were resolved by consensus.

Axial Coding

We first organized coded passages according to groundedness (i.e., the number of quotations to which a code was assigned) to determine the most commonly cited barriers, facilitators, and suggestions for improvement. We then compared the patterns of coded passages by site and used these comparisons to identify larger, recurring themes.
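To make the groundedness metric concrete, the short Python sketch below counts how many quotations each code was assigned to and ranks codes accordingly (the example data are invented; our actual analysis used qualitative coding software):

```python
from collections import Counter

# Hypothetical coded data: each quotation ID maps to the set of codes
# assigned to it during open coding.
coded_passages = {
    "quote-01": {"too many alerts", "heavy workload"},
    "quote-02": {"unnecessary alerts"},
    "quote-03": {"too many alerts"},
}

def groundedness(passages: dict[str, set[str]]) -> Counter:
    """Groundedness of a code = number of quotations it was assigned to."""
    counts: Counter = Counter()
    for codes in passages.values():
        counts.update(codes)
    return counts

# Most commonly cited themes first.
for code, n in groundedness(coded_passages).most_common():
    print(f"{code}: {n}")
```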

Results

Figure 2 presents the barriers, facilitators, and suggestions for improvement that were most frequently raised by participants according to their groundedness; all themes presented were mentioned by multiple participants. The most commonly cited barriers overlapped considerably across sites and focus groups, despite differences in site characteristics and focus group composition. Furthermore, these themes were raised by multiple participants across the focus groups, suggesting they were not simply an artifact of a single, dominant participant.

Figure 2 Most commonly cited barriers, facilitators, and suggestions for improvement, by site.

Number of Alerts Received

The most frequently raised barrier was the number of alerts received by providers. In addition to test result alerts, providers received many other types of notifications, which complicated the task of reviewing test results and providing timely follow-up care. Participants expressed concern about both the total number of alerts and the proportion of notifications perceived as unnecessary. Providers in all focus groups reported that their already heavy clinical workloads left very little time for alert management:

"On an average it takes about two to three minutes per alert. And we get sixty to seventy alerts per day. So, there's no time allowed for alerts... I've just finished seeing patients. I have to go back and handle all the alerts. Some people actually come in on weekends. So, yeah, time is definitely a factor." -PCP, Site A

"One of the issues is just the sheer volume of alerts, and there are a number of alerts that in all honesty [you] really don't have any business seeing." -PCP, site B

"I counted 150 alerts one day just to see how many were coming in that normal day, and this is a fairly regular day, 150 alerts. That's a lot of time spent trying to go through that while you're seeing patients, while there's no in between time to get caught up." -PCP, Site B

Types of alerts perceived as unnecessary varied across the sites, but at both sites providers described situations in which they were needlessly notified of events they deemed strictly "for your information." These included, for example, overly detailed status updates on services performed outside primary care:

"The surgeon needs to take care of his own alerts. I don't need to be a backup for him. I mean, you know, he's licensed, right? He holds a license. He needs to worry about his license. He needs to take care of his stuff. And if every department did that, I mean, that would cut down our workload by fifty percent. That's where the problem is, everybody expects us to be the backup, and there's really no need." PCP, Site A

"You could have half a dozen notifications on a given consult which really are unimportant. The only thing I really need to know about is if it was actually scheduled and what the date was in case it's something I want to have done soon, and this is way too far away, or if it's canceled altogether. I really don't care about any of the other notifications, all these notes that they pass back and forth about whether or not they contacted the patient, whether or not he had transportation." PCP, Site B

To further explore the problem of large numbers of alerts, we conducted a co-occurrence analysis to examine common themes in participants' proposed solutions. We identified all passages in which suggestions for improvement co-occurred with any of the three quantity-related barriers most heavily discussed in the focus groups: too many alerts, unnecessary alerts, and an overly heavy patient-related workload created by the alerts. As seen in Figure 3, the three barriers co-occurred with a total of 17 suggestions for improvement. Fourteen were associated with overcoming the workload barrier; these suggestions involved both changes to CPRS (e.g., the ability to categorize alerts or bundle alerts together) and changes to workflow (e.g., allocating protected time to manage alerts). Ten of the 17 suggestions applied to multiple barriers, suggesting that these barriers are interrelated. Interestingly, only three suggestions were uniquely associated with the two barriers concerning the number of alerts. This analysis suggests that the workload created by alerts is a complex barrier needing multidimensional solutions.
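A minimal Python sketch of this kind of co-occurrence analysis, assuming a simple list of coded passages (the data structure and example codes are invented for illustration; our analysis used qualitative coding software):

```python
# Hypothetical coded passages: each passage carries the barrier codes and
# suggestion codes assigned to it during open coding.
passages = [
    {"barriers": {"too many alerts"}, "suggestions": {"bundle alerts"}},
    {"barriers": {"heavy workload"}, "suggestions": {"protected time", "bundle alerts"}},
    {"barriers": {"unnecessary alerts", "heavy workload"}, "suggestions": {"categorize alerts"}},
]

QUANTITY_BARRIERS = {"too many alerts", "unnecessary alerts", "heavy workload"}

def co_occurring_suggestions(passages: list[dict], barriers: set[str]) -> dict[str, set[str]]:
    """For each barrier of interest, collect every suggestion raised in a
    passage that was also coded with that barrier."""
    result: dict[str, set[str]] = {b: set() for b in barriers}
    for passage in passages:
        for barrier in passage["barriers"] & barriers:
            result[barrier] |= passage["suggestions"]
    return result

for barrier, suggestions in co_occurring_suggestions(passages, QUANTITY_BARRIERS).items():
    print(f"{barrier} -> {sorted(suggestions)}")
```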

Figure 3 Co-occurrence analysis of participants' proposed solutions to volume-related barriers.

Tracking and Categorizing Relevant Clinical Information

Another salient theme was providers' desire for a mechanism within CPRS to organize, track, and retrieve alerts so that providers remember to follow up on needed care. Because the CPRS View Alert system was designed to prompt providers to take action at the time of the alert, no functions for longitudinal tracking currently exist in CPRS. Therefore, at both sites, better EHR capabilities to help visualize, organize, and track alerts ranked among the most frequently cited suggestions:

"I always wished to see a way to see or a way to know because then I can know how they're performing. There is no way you can tell me how many consults you've placed to GI in the month of November. There's no place I can click..." -PCP, Site A

Provider Knowledge of EHR Features

Notably, providers often requested the addition of functionality to CPRS that already existed (e.g., sorting, the ability to delete multiple alerts as a batch), suggesting they lacked knowledge of features within CPRS for managing the large number of incoming alerts:

"Oh, another thing, I learned yesterday that you can do it [sort] by patient also, so mine were all mixed up. So, I just learned that today before coming to this meeting that you can [sort], did you know that? ... That's something which I learned today after eight years of being at the VA."

-PCP, Site A

Along similar lines, participants strongly advocated improvements in CPRS training to highlight existing EHR features specific to alerts, citing the training sessions they had received as "pretty lackluster" (PCP, Site B).

Improvements to Alert Content

Finally, providers at both sites also suggested improvements to alert content, such as limiting inpatient alerts to the discharge summary and bundling related alerts to avoid overlap and redundancy:

"I just want to be alerted, if there's any abnormal I don't care if it's CBC or Chem 7 just say 'abnormal lab' and this is the name of the patient, you know, abnormal lab. If the CBC came in later, click me again: abnormal lab. But if they come in all at the same time don't give me three alerts for the same patient! It clutters my view alert. It's so difficult for me to get into what is important and what is not." -PCP, Site B

Discussion

We employed a human factors approach to identify barriers, facilitators, and potential interventions for safe and effective management of EHR-based abnormal test result alerts at two large VA facilities. Providers reported that the biggest barrier was the large number of alerts they receive, including many non-test result notifications delivered alongside test result alerts; these additional notifications were not always high priority, did not always contain urgent information, and were often believed to be unnecessary. The total number of alerts was also perceived as too high. These barriers were compounded by a lack of proficiency with the alert management features of the EHR. Providers' suggestions for improving follow-up of test results included redesigning the computer interface to improve alert display and allow better tracking of critical alerts, redesigning clinical workflow, and streamlining policies and procedures related to test result reporting.

A human factors approach has several advantages for understanding and improving test result reporting in the EHR[22]. Given significant previous patient safety concerns regarding follow-up of test results in both paper-based and EHR-based systems[10, 11, 27-29], this approach has a higher likelihood of achieving better outcomes as EHRs are implemented widely. Other studies have found similar barriers to the use of computerized clinical reminders (such as those for preventive health care), which resemble "non-interruptive" alert notifications in some respects. For example, Patterson and colleagues[30] reported that factors such as workload, the inapplicability of reminders to certain situations, and limited knowledge and training constrained providers' use of reminders. Saleem and colleagues[31] similarly observed that workload and poor user interface usability were barriers to effective clinical reminder use. Finally, Campbell and colleagues identified additional new work for clinicians as an unintended consequence of computerized provider order entry (CPOE) implementation[32]. Our study adds to this body of work by demonstrating the need for a multidimensional approach to address test reporting challenges within the EHR.

Implications

EHR-based test reporting needs improvement to prevent missed results

Currently, EHR-based test result notification systems do not offer an effective way to present critical information, such as abnormal test results, safely and reliably. Reducing the total number of alerts and removing unnecessary alert types would help alleviate this problem. Our findings suggest the need for better interface design to visualize certain types of alerts more effectively, such as those with a higher priority (e.g., critical alerts). Previous work shows that warnings (alerts) are useful only when they communicate information of high importance[33] and that information overload can lead to alert fatigue[34]. Our findings also suggest the need for careful consideration by key decision makers of what should constitute an "alertable" event. Populating the list of alert categories in EHRs judiciously, while not as simple as it seems, is perhaps one of the first steps toward addressing this issue.
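One way to operationalize a judicious "alertable event" list is an explicit, centrally maintained table of event categories and priorities. The sketch below is purely illustrative (the category names, priority levels, and suppression rule are invented, not CPRS functionality):

```python
from enum import Enum

class Priority(Enum):
    CRITICAL = 1   # e.g., markedly abnormal result needing prompt action
    ROUTINE = 2    # needs follow-up, but not immediately
    FYI = 3        # informational only; a candidate for suppression

# Hypothetical, centrally maintained list of alertable event categories.
ALERTABLE_EVENTS = {
    "abnormal imaging result": Priority.CRITICAL,
    "abnormal lab result": Priority.CRITICAL,
    "consult scheduled": Priority.ROUTINE,
    "consult administrative note": Priority.FYI,
}

def should_alert(event_type: str, suppress_fyi: bool = True) -> bool:
    """Deliver only events that are on the list and above the FYI cutoff."""
    priority = ALERTABLE_EVENTS.get(event_type)
    if priority is None:
        return False  # not an alertable event at all
    return not (suppress_fyi and priority is Priority.FYI)
```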

EHR systems need better tracking capabilities to prevent missed results

Providers voiced several concerns about the current inability to save, track, and retrieve alerts. Information contained in alerts cannot be retrieved easily once they are processed. Longitudinal tracking capabilities, such as those that would ensure fail-safe follow-up of alert notifications, are also lacking. Tracking was raised as a particular concern because this EHR currently offers no means to assist providers in monitoring a patient's progress; for instance, there is no mechanism to remind providers to check whether a patient has completed a follow-up evaluation in a timely manner. In the absence of such tools, providers used workarounds such as paper-based reminders or leaving progress notes unsigned so as to generate alerts to themselves in the View Alert window. Our findings thus suggest the need for tracking functionality integrated with the alert system, akin to that currently available in most email tools (e.g., delivery/read receipts, a hierarchical folder structure to store, categorize, and retrieve alerts for future reference, or to-do lists with past-due reminders). Better tracking and retrieval features in future EHRs may significantly improve test result management.
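As a sketch of what such tracking might look like, the illustrative Python model below borrows the email metaphors participants invoked; all field names are hypothetical, and this is not existing CPRS functionality:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TrackedAlert:
    patient_id: str
    description: str
    folder: str                       # hierarchical storage, as in email clients
    read_on: Optional[date] = None    # analogous to a read receipt
    follow_up_due: Optional[date] = None
    follow_up_done: bool = False

def overdue(alerts: list[TrackedAlert], today: date) -> list[TrackedAlert]:
    """Past-due to-do list: an acknowledged (read) alert is not necessarily
    a completed follow-up, so this keys on follow_up_done, not read_on."""
    return [a for a in alerts
            if a.follow_up_due is not None
            and not a.follow_up_done
            and a.follow_up_due < today]
```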

Reducing missed test results in EHRs requires a multidisciplinary approach

We recently proposed eight dimensions of safe and effective use of EHRs, many of which go beyond the technology itself: content, software and hardware, user interface, personnel, communication and workflow, organizational policies and procedures, state and federal regulations, and monitoring[35]. Examples in the context of electronic alerts include improving provider training (personnel), changing and improving awareness of policies regarding responsibility for follow-up (organizational policy), reducing redundant alerts (content), adding functionality for saving, tracking, and retrieving previous alerts (software), and improving existing functionality to categorize, organize, and process alerts (interface). Most interventions will involve more than one of these dimensions. As illustrated in Figure 4, a seemingly straightforward interface change, such as adding a separate window for visualizing critical alerts, requires consideration of all eight dimensions.

Figure 4 Example of implementing an intervention using a multidimensional approach: an additional window to better visualize "critical" notifications.

Current legislative efforts have largely focused on the technological aspects of the EHR to improve the quality of health care. However, adopting and achieving meaningful use of an EHR requires more than designing software with the good information exchange capabilities required by the ARRA. Providers will not benefit from technological advances unless a multidisciplinary approach is used to design and implement these systems. For providers to make more meaningful, safe, and effective use of EHR capabilities such as test reporting features, all stakeholders (designers, vendors, organizations, and users) must address a range of sociotechnical issues.

Limitations

Our study has several limitations. First, our results may currently be applicable only to the VA's EHR. However, this EHR is used widely, assisting over 150,000 clinicians in delivering health care to over eight million veterans[36], and many of its features are present in other commercially available EHRs. Additionally, our multidisciplinary approach is applicable to test result reporting processes that are likely to be shared among EHRs. Therefore, our findings should be broadly applicable to the many organizations currently deciding how best to design and implement technology-based tools to facilitate timely follow-up of abnormal test results.

Second, though we spoke with 44 different participants, the focus groups occurred at only two sites. Future research may need to include a larger number and variety of facilities to rule out the possibility of facility idiosyncrasies as potential confounders.

Conclusion

We found that providers perceive several challenges to fail-safe electronic communication and tracking of abnormal test results in a state-of-the-art EHR. A multidimensional sociotechnical approach, one that addresses organizational, personnel, and workflow-related factors in addition to improving the technology, is essential to design interventions that help reduce missed test results in EHRs and increase their meaningful use.