It was clear from the beginning of my investigation into the application of systems theory to error prevention in healthcare that however strong the theory and the evidence—and to me they were compelling—the idea of a systems approach to preventing errors would get little acceptance from physicians unless we could demonstrate that it actually worked in healthcare.

Doctors are the ultimate “NIH” (not invented here) thinkers; they have trouble imagining that something that works in another industry would be relevant to healthcare. “Healthcare is different.” “Healthcare is special.” And, of course, it is, but couldn’t we learn from others? Not easily, I knew. It was clear to me that if I wanted to get acceptance of systems theory and motivate doctors—as well as everyone in healthcare—to change, we would have to demonstrate that systems theory could be successfully applied to real-world medical problems.

But it was even more complicated. Any demonstration in healthcare would have to resonate—be applicable—for all kinds of physicians. Making a systems change in the operating room, for example, would be of little interest to internists. And a change eliminating errors in the diagnosis of diabetes would not carry much weight with surgeons. To prove the point, we needed to address a systems failure that affected all physicians.

The obvious choice was medication errors. All doctors write prescriptions. Moreover, we knew from the Medical Practice Study that misuse of medications was a serious problem, indeed the most serious problem we found, accounting for a fifth of all serious adverse events discovered in the study. Medication errors it would be.

Who knew anything about medication errors? More to the point, who might be interested in collaborating on this type of project? I spoke with Tony Komaroff, professor of medicine at Harvard and editor in chief of the Harvard Health Letter and the Harvard Medical School Family Health Guide. He knew just the person: David Bates, a young internist-investigator at Brigham and Women’s Hospital (BWH).

I first met David on April 12, 1990. We immediately hit it off. He had become interested in medication errors when he learned that adverse drug events (ADEs) were the leading type of harm found by the Medical Practice Study. David was also the key person at the Brigham evaluating a computerized physician order entry (CPOE) system, being developed by a team led by Jonathan Teich, in which physicians would enter orders on the computer instead of writing them by hand. It seemed obvious that this could be a powerful systems change for reducing errors. Could we demonstrate that it did in fact do that?

Figure 1: David Bates. (All rights reserved)

We agreed on a strategy: first, we would do a study to get an accurate measure of the extent of medication errors and the harm they caused. We would categorize them by type and by when in the medication process they occurred. And we would see if we could identify the systems failures causing them. Most previous studies of medication errors had relied on self-reporting, which was known to be unreliable, and none were comprehensive in the sense of considering all of the stages in the medication process. Most significantly, none had linked medication errors to harm, and none had inquired into underlying causes.

After getting this information, we would implement a systems change, such as CPOE, to see if it reduced the harm. None of this was assured. How to find the errors? How to find the underlying systems failures? All new territory, but very exciting.

Fortunately, the Risk Management Foundation (RMF) of CRICO (the Controlled Risk Insurance Company that provides liability coverage to all the Harvard hospitals and doctors) was intrigued by the Medical Practice Study and was interested in exploring the possibility of preventing medical injury, not just paying for its consequences through malpractice suit settlements. They gave us a small grant for a pilot study at BWH. Thus began a long and fruitful relationship with this incredibly enlightened insurer.

We were aware that many complications of the use of medications—such as unpredictable allergic reactions—are not caused by errors, so we decided to focus on drug-related harm, not just errors. Referring to the Medical Practice Study definition of adverse event, we defined “adverse drug event” (ADE) as “an unintended injury caused by use of a medication.” To determine which ADEs were caused by errors, we used the MPS definition of error: “The failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim.”

Our objective of finding every episode of harm caused by a medication led us to develop a totally new approach to data collection. Rather than rely on reporting of events by the unit nurse, we would have a specially trained nurse visit the study care units in the hospital several times a day to review each patient’s record, follow up on laboratory test results, and interview the unit nurses, searching for evidence that the patient had experienced an ADE. She would also count the medication errors and find out as much as she could about what caused them. In short, we did everything we could think of to try to find every ADE and every medication error.

The results of the pilot study were encouraging. The intensive data collection enabled us to identify many more ADEs than had been reported in other studies, which largely depended on review of medical records [1]. We were also able to determine how many medication errors resulted in harm. We drew up a proposal for a large multi-institutional study with a sample size big enough to yield statistically significant results. We would also find out whether we could identify underlying systems failures. We sought funding from the Agency for Health Care Policy and Research (AHCPR).

Meanwhile, acutely aware of our lack of knowledge and experience in how to train people to find causes of errors, we sought help from a psychologist and were finally steered to Richard Hackman, professor of social and organizational psychology at Harvard. Hackman was an expert on teamwork, having studied airplane crews, sports teams, corporate boards, and even symphony orchestras. He was intrigued by our project, and we enlisted him in our study.

We also recruited David Cullen, a senior anesthesiologist from the Massachusetts General Hospital (MGH) who had research experience and a long-standing interest in patient safety. He had done medication safety research in anesthesia and was very enthusiastic about joining the team. As is often the case in a strong collaboration, we each brought different things to the table. I had clinical experience from my long surgical career to draw on, as well as experience in finding and classifying adverse events from the Medical Practice Study. David Bates was an internist at the Brigham with epidemiology training and informatics skills. And David Cullen brought his anesthesia experience and was well positioned to recruit a team at MGH.

Figure 2: David Cullen. (All rights reserved)

In 1992 our proposal was funded by AHCPR. We called our coalition the ADE Prevention Study Group and conceived of the project in two phases. In Phase 1 we had two objectives: to identify every ADE and potential ADE (an error that could have, but did not, result in an ADE) and to identify the systems failure(s) underlying each one. In Phase 2 we would introduce one or more specific systems changes to correct an identified failure and find out whether it prevented ADEs.

The Agency funded Phase 1 but, to our disappointment, declined to fund Phase 2 until we showed that we had succeeded with Phase 1. Based on our pilot study results, we were confident we would succeed, although we were worried about making the timing work out. We established an investigative team at each hospital and selected 11 nursing units for the study at the two hospitals: 5 intensive care units and 6 general, non-obstetric care units. David Bates was the leader of the Brigham team, and David Cullen led the MGH team.

As in the pilot study, we identified adverse drug events by having a trained nurse investigator review the charts and laboratory test results of every patient and talk with the nurses in each of the study units daily.

To identify underlying systems failures, something that had not been done before, we developed data forms with questions regarding the what, where, when, and how of each incident. We trained our nurse investigators to use them to assess each ADE they found. We also gathered data about within-team and between-team communication, as well as environmental conditions.

The nurse investigators also inquired about the circumstances surrounding the event, such as the involved person’s health, job stressors, sleep the previous night, education about the drug, and experience with the drug. In other words, we were looking for all possible explanations for why errors might be made.

To develop and refine our data collection methodology, David Bates and I had multiple meetings with Richard Hackman and his graduate student, Amy Edmondson. Despite this, things almost came unglued at our first training session for the nurse investigators. Through what in retrospect was a major miscommunication, David and I thought Amy was going to do the training. However, when we met with the nurses, it was immediately obvious that she had no idea that was to be her role. Without revealing our problem to the trainees, I took over and spontaneously ran the program. David and I had thought a lot about our objectives and measures and had spelled them out, so it wasn’t starting from scratch, but more planning would have been better. In any case, it would have to do. In the end it worked out all right, and after a few weeks the process ran fairly smoothly.

The study team at each hospital conferenced every other week to review every adverse drug event and potential adverse drug event that had been identified and to classify the errors by type. Using the data the nurses had collected about the circumstances surrounding each ADE, we then systematically identified the causes of errors at two levels: the proximal (obvious) causes and the underlying causes, or systems failures. For example, a doctor prescribed the wrong medication (error) because of insufficient knowledge about the drug (proximal cause), which in turn reflected a failure in the drug knowledge dissemination system (systems failure). Although we were all new at this type of analysis, it proved surprisingly easy to do, which gave us confidence that our findings were valid.
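To make the two-level analysis concrete, here is a minimal sketch, in Python, of how such a record might be structured; the field names and example text are mine, for illustration, and are not the study’s actual data forms.

```python
from dataclasses import dataclass

@dataclass
class ADECauseAnalysis:
    """Illustrative record of the two-level causal analysis (hypothetical)."""
    error: str            # what went wrong
    proximal_cause: str   # the obvious, immediate cause
    systems_failure: str  # the underlying system that allowed it

example = ADECauseAnalysis(
    error="wrong medication prescribed",
    proximal_cause="insufficient knowledge about the drug",
    systems_failure="drug knowledge dissemination system",
)
print(f"{example.error} -> {example.proximal_cause} -> {example.systems_failure}")
```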

In 1994 we completed the fieldwork and analyzed our data. There had been some pitfalls; partway through the study, for example, a medical unit was switched to a surgical unit. But overall, data collection went well. We found 247 adverse drug events in the study population of 4031 admissions, a rate of 6.5 per 100 admissions. Of these, 70 (28%) were preventable [2]. We also found 194 potential ADEs, errors that did not result in harm or were intercepted before the medication was given (Table 3.1).

Table 3.1 Adverse drug event rates

Errors occurred at every stage of the process. Nearly half (49%) occurred during physician ordering, followed by nurse administration (26%), pharmacist dispensing (14%), and transcription (11%). Dosing errors were the most common type of medication error, and more than half of these occurred at the physician ordering stage. Fortunately, nearly half of physician errors were intercepted (largely by nurses), but no one backstopped the nurses; only 2% of nursing administration errors were intercepted (Table 3.2).

Table 3.2 Types of medication errors

This rate of ADEs, 6.5 for every 100 patients, was astounding! It was almost ten times higher than had ever been reported. And this was at the two flagship teaching hospitals of Harvard, institutions that considered themselves the best in the country! [2]

To my delight, we were also able to identify the systems failures underlying the errors and to sort them into operationally useful categories. The leading failures were in systems for (1) drug knowledge dissemination (example error: failure to reduce the dose for an elderly patient), (2) dose and identity checking (error: mix-up of two “look-alike” packaged drugs), (3) patient information availability (error: lack of information about reduced kidney function), (4) order transcription (error: handwriting errors), and (5) allergy defense (error: giving a medication to a patient known to be allergic to it) [3] (Table 3.3).

Table 3.3 Major systems failures

We now had evidence that systems failures could be identified in a healthcare environment, the first step in my quest to develop data to convince doctors and hospitals that changing systems would be more effective in reducing harm than punishing people who made mistakes. We still had a long way to go: we needed to show that we could change the systems and that changing them would reduce the harm. But we were on our way. Needless to say, this was pretty exciting.

While we were analyzing our results for publication, we completed planning for Phase 2, the implementation of systems changes in a controlled study to determine if the changes would, in fact, reduce harm. Suddenly the roof fell in! Despite our brilliant results in Phase 1, the rating of our grant proposal to AHCPR fell a fraction of a point below their funding level. We had no funding for the next phase.

We were in deep trouble. We had the team assembled, we had the instruments, and we had the plan all worked out to move ahead. Most importantly, we had a potentially powerful systems change to test—but no money! Into the breach came the Risk Management Foundation, which had funded our original pilot study. They agreed to pay for the project, one more example of their generosity at crucial times that was so helpful to our team. We were profoundly grateful.

The systems change to be tested in Phase 2 at BWH was computerized physician order entry (CPOE), in which all physician orders are entered on the computer. This enabled medication orders to be checked automatically for errors such as a wrong dose, an overlooked drug allergy, or two incompatible drugs given together, thus preventing the physician from completing the erroneous order.
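Conceptually, the checking amounts to a handful of rules applied at the moment of ordering. The sketch below is a hypothetical illustration of that idea, not the actual BWH system; the drug names, dose limit, and interaction pair are invented for the example.

```python
# Hypothetical sketch of the kind of checks a CPOE system applies at order
# entry; the drug names, dose limit, and interaction pair are invented for
# illustration and are not the actual BWH rules.

MAX_DAILY_DOSE_MG = {"digoxin": 0.5}            # example dose ceiling
INTERACTING_PAIRS = {("warfarin", "aspirin")}   # example incompatible pair

def check_order(drug, daily_dose_mg, patient_allergies, active_drugs):
    """Return a list of warnings; an empty list means the order passes."""
    warnings = []
    limit = MAX_DAILY_DOSE_MG.get(drug)
    if limit is not None and daily_dose_mg > limit:
        warnings.append(f"{drug}: {daily_dose_mg} mg/day exceeds limit of {limit} mg")
    if drug in patient_allergies:
        warnings.append(f"{drug}: patient has a documented allergy")
    for other in active_drugs:
        if (drug, other) in INTERACTING_PAIRS or (other, drug) in INTERACTING_PAIRS:
            warnings.append(f"{drug}: interacts with {other}")
    return warnings

# This order trips both the allergy check and the interaction check.
print(check_order("aspirin", 100,
                  patient_allergies={"aspirin"},
                  active_drugs={"warfarin"}))
```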

This had been our plan from the start. David Bates and colleagues at BWH had been working to get the system designed and tested, and it was ready to go. The timing was perfect. This would be a powerful systems change; we anticipated it would have a significant effect in reducing prescribing errors, the most common type of error found in Phase 1.

But what systems change would the MGH implement? They were far from having a computerized order entry system, so we needed something else. Fortunately, we were aware of some evidence that having a pharmacist present on rounds with clinicians reduced prescribing errors. This made sense, but the practice had not been tested in a controlled trial. We decided to see if implementing this systems change of having a pharmacist make rounds every morning with the physician care team would significantly decrease ADEs.

Morning rounds are when care decisions are made, including what medications to prescribe, so having the pharmacist’s input at the time of decision-making might reduce prescribing errors. We would try it in an intensive care unit (ICU), where patients are cared for by a true “team” that meets for rounds at a predictable time. Another ICU would serve as the control unit. We began the study.

The big event of 1995 for our research team was the publication of our first two papers from the drug prevention study: the incidence study and the systems analysis study [2, 3]. Before submitting the papers to a journal, we ran them by the CEOs of the Mass General and the Brigham, as well as the chair of medicine at the Brigham, Dr. Eugene Braunwald, so they would not be blindsided by what we anticipated might be extensive publicity when the papers were published.

Despite the fact that the high rates of ADEs could potentially make their hospitals look bad, to their credit, neither CEO suggested that we not publish nor, for that matter, that we change a single word in the papers. They did, however, arrange and pay for media training for both of us! It proved very helpful. I learned for the first time that when being interviewed, you don’t simply answer the reporter’s question but use it to make your points. We were taught techniques for turning the conversation around to what we wanted to talk about.

George Lundberg welcomed our first two papers, and they were published fairly soon in JAMA, in July 1995—just 7 months after my Error in Medicine paper and 5 months after the news of Betsy Lehman’s tragic death from an overdose of chemotherapy. The papers got a lot of publicity: all three major television networks covered them on the nightly news, and both David and I gave live interviews. Ted Koppel even did a special about them on Nightline. Our media training paid off.

But Nancy Dickey, the president of the American Medical Association, was not pleased. In a television interview, she criticized us and said the numbers were exaggerated. In fact, the reverse was true—we knew we had missed some, and indeed, later, more sophisticated studies showed even higher rates. David Bates was shocked by this. I was not surprised, having previously had a similar experience with her over the Medical Practice Study. To her credit, Dr. Dickey later came around, became an important advocate for patient safety, and led the establishment of the National Patient Safety Foundation.

On the other hand, both of us received favorable letters from other physicians, as well as a number of letters to the editor, most of which were positive. The papers were also well received by the general healthcare community. They have since been cited over 2500 times and remain the most-cited studies of the frequency of harm related to hospital use of medications.

An interesting sidebar was an episode in the review process after we submitted the papers. As is typically the case, acceptance was tentative, conditioned on our revising the papers in response to reviewers’ comments. One reviewer wrote a five-page, single-spaced review of the systems paper that raised multiple important points, all of which I would have to respond to!

I knew as soon as I read it that it was written by Don Berwick. Don was the founder and CEO of the Institute for Healthcare Improvement (IHI), the pathbreaking organization teaching quality improvement (QI) to healthcare professionals. QI, of course, was about process improvement, or systems change. IHI had applied QI techniques to issues such as overuse and underuse of services, but not to medical errors.

I had met Don some years earlier when I was exploring options for my new career. From talking with him and reading his papers, I immediately recognized that he was the author of the critique. I was reminded of the old saw, “With friends like that, who needs enemies?” But, of course, revising the paper to address his points made it much stronger. The paper was also Don’s introduction to my work (the review came before my Error in Medicine paper was published) and led him later to involve me in the IHI Breakthrough Collaborative work on adverse drug events, the beginning of our long-term collaboration.

Another interesting wrinkle related to our psychology colleagues. Amy Edmondson became curious about why two of the four seemingly identical nursing units at the MGH had substantially lower rates of ADEs than the other two. Were they better managed? Were the nurses there more careful? If so, why?

Using the data from our study and further interviews with nurses, she was able to show that the units with the higher rates of reported ADEs were those with more supportive nurse managers. In the units with lower rates, nurses were less likely to report errors because they feared being punished or reprimanded. In the high-rate units, that wouldn’t happen. The high-rate units did not have more ADEs; they just knew about more of them because the environment made it possible for errors to be brought to the surface. Edmondson developed this finding into her PhD thesis, and it became the stimulus for her later work. She is now a full professor at Harvard and an internationally recognized expert on teamwork.

Phase 2, studying the effect of our two systems changes—computerized physician order entry (CPOE) at the BWH and pharmacist presence on rounds in the ICU at the MGH—was well underway before the results of the first study came out. Our methods had been worked out, and our teams were experienced at finding ADEs. BWH had previously committed to implementing CPOE. At the MGH, the extra cost of including a designated pharmacist as part of the care team for daily rounds in the ICU was funded by the nursing department and the pharmacy, both of which were keenly interested.

When the results came back from the studies, we were ecstatic. Both systems changes had a significant impact. The before-and-after study at BWH showed that CPOE reduced all medication errors by 83% and ADEs by 17% [4]. The estimated cost saving, if the system were implemented hospital-wide, was $480,000 per year. The controlled study of pharmacist participation on rounds at the MGH showed a 66% reduction in ADEs caused by errors in prescribing [5]. Finally, we had evidence that systems change worked in healthcare.

Not surprisingly, our systems change papers received less coverage in the popular media than the studies that had demonstrated the extent of the problem. The media prefer bad news to good. Sadly, evidence of a problem is much more newsworthy than the demonstration of its solution.

However, our colleagues in safety took notice. Here was the proof needed that systems change worked in medicine. The papers were discussed in trade journals, and both were among the evidence cited in support of the recommendations of the Institute of Medicine’s famous report, To Err Is Human, which came out 2 years later. The groundwork was laid. Now began the hard job of getting doctors, nurses, and hospitals to incorporate systems thinking into their work.

BWH Center for Patient Safety Research and Practice

As we concluded our research, I turned my attention to promoting systems change and influencing policy. David kept his focus on research. He wanted to establish a “Center of Excellence,” a new vehicle that AHRQ had just announced and was generously funding. Our studies had shown how broken the medication delivery system was. Basic research was needed on the epidemiology of medication errors, not just in the hospital but in all venues. Safe practices needed to be developed for all stages (ordering, dispensing, and administration) and for the communication and interactions between them. More needed to be known about costs and barriers to improvement.

From his work developing an electronic medical record, David could see the technological explosion that was coming, and he was eager to apply the new technology to medication error problems. There was much to do. AHRQ funded the proposal, and the BWH Center for Patient Safety Research and Practice was born. I was honored to chair its advisory board.

The scope of the Center’s work under David’s leadership more than lived up to the prospectus. Early on, the group demonstrated the effectiveness of real-time decision support during computerized prescribing, using alerts to adjust doses for renal impairment and age. Some elderly patients were receiving 10 times the recommended dose of psychoactive drugs! But alerts were not universally regarded as a benefit: if the system produced too many, as it often did, physicians ignored all of them. Center researchers found that if an alert was accompanied by specific advice, e.g., the correct dose, it was readily accepted.
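The finding about specific advice can be pictured with a small sketch: an alert that proposes a concrete corrected dose rather than a bare warning. The thresholds and adjustment factors here are invented for illustration, not the Center’s actual rules.

```python
# Hypothetical sketch of a dose-adjustment alert that offers specific,
# actionable advice; thresholds, factors, and the drug name are invented
# for illustration, not the Center's actual rules.

def dose_alert(drug, ordered_dose_mg, age, creatinine_clearance_ml_min):
    """Return a concrete suggested correction, or None if no alert is needed."""
    factor = 1.0
    if creatinine_clearance_ml_min < 30:   # impaired renal function
        factor *= 0.5
    if age >= 75:                          # advanced age
        factor *= 0.5
    suggested = ordered_dose_mg * factor
    if suggested < ordered_dose_mg:
        return (f"Reduce {drug} from {ordered_dose_mg} mg to {suggested:g} mg "
                f"(adjusted for renal function and age)")
    return None

# An 80-year-old with poor renal clearance: the alert proposes 2.5 mg
# rather than merely warning that 10 mg is too high.
print(dose_alert("examplazepam", 10, age=80, creatinine_clearance_ml_min=25))
```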

The Center sponsored Rainu Kaushal’s first study of medication errors in a pediatric hospital. It found an ADE rate similar to that in adult patients, except among newborns, where the rate was 10 times higher. Potential ADEs—the near-misses—were 3 times as common, testimony to an alert staff and a poor system [6].

The first study of ADEs in office patients, led by Tejal Gandhi, showed that they were even more common than in the hospital. The ADE rate was 21%, and 36% of those events were preventable [7]. The study was unique and pathbreaking in another way: it demonstrated the value of asking patients about their experiences when assessing harm. We were stunned to find that patients reported 8 times as many ADEs as were noted in the physicians’ charts.

From the beginning, a major focus of the Center was the use and effectiveness of technology to reduce ADEs. The early work with computerized ordering helped increase the national will to spread the use of computers into office practice. A pioneering study of bar coding of drugs showed that it dramatically reduced errors both in pharmacists’ dispensing and in nurses’ administration of medications to patients [8]. Based on this evidence, the NQF endorsed bar coding, and it has since been adopted as standard practice in hospitals nationwide.
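The logic behind bar coding is simple to sketch: the scanned patient and drug codes must match an active order before the dose is given. The identifiers and order table below are invented for illustration, not any hospital’s actual system.

```python
# Hypothetical sketch of bar-code medication verification at the bedside:
# the scanned patient and drug codes must match an active order before the
# dose is given. Identifiers and the order table are invented for illustration.

ACTIVE_ORDERS = {
    ("patient-001", "NDC-1234"): "drug X, 10 mg, due 08:00",
}

def verify_scan(patient_code, drug_code):
    """Check a bedside scan against the active orders; refuse any mismatch."""
    order = ACTIVE_ORDERS.get((patient_code, drug_code))
    if order is None:
        return "STOP: no active order matches this patient and drug"
    return f"OK: administer per order ({order})"

print(verify_scan("patient-001", "NDC-1234"))  # matches -> proceed
print(verify_scan("patient-001", "NDC-9999"))  # wrong drug -> stopped
```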

Over time, David expanded the Center’s agenda well beyond ADEs to patient safety in general. Center researchers studied the costs of adverse events and of adopting information technology in healthcare. They demonstrated that using sensors under the mattress to monitor hospitalized patients’ vital signs and activity led to improved responses by nurses and a 50% reduction in ICU days. Dr. Patti Dykes, a nurse, developed a fall prevention protocol that decreased the risk of falls by a third. It is now used at more than 100 hospitals around the country.

David Bates proved to be an incredibly effective leader who, over the years, created a leading center—probably the leading center—of innovation, research, and development in patient safety. He inspired and mentored a new generation of researchers, attracting postdocs and others from around the world. He has trained more than 100 researchers in patient safety who have published over 1000 papers. His Center exemplifies patient safety research at its best.