“Don’t go there.” Howard Hiatt and Troy Brennan were emphatic: investigating medical error and writing about it would bring the wrath of the medical profession down on my head. But how could we not go there? How could we not go there, now that we knew from the Medical Practice Study (MPS) that 120,000 people were dying from medical errors every year? How could we not act?

The Harvard Medical Practice Study confirmed what smaller studies had shown earlier—that nearly 4% of patients in acute care hospitals suffered a significant injury from their medical treatment. What was shocking, and previously totally unrecognized, was that two-thirds of those injuries resulted from errors. Surely we should be able to do something about that.

What was known about how to prevent errors? I knew very little. So, in typical academic fashion, I started my education with a literature search. I clearly remember the day in September 1989 when I went to the Countway Library at Harvard Medical School to carry out a search of the medical literature to find out what was known about preventing medical errors. I came up empty-handed! There were case reports and a few commentaries, reports of surgical complications, and the like, but little about errors or how to prevent them other than to try harder and be more careful. But healthcare professionals—especially doctors, nurses, and pharmacists—are some of the best trained and most conscientious workers there are. They were already trying hard and being careful. So why were there no descriptions of error prevention?

I took my search strategy to the librarian and asked for help. She thought the strategy was fine but asked if I had looked in the social sciences or engineering literature. It hadn’t occurred to me. When she ran the search in those fields, hundreds of references came up. It turned out that many people, in several disciplines—particularly cognitive psychology and human factors engineering—knew a great deal about why people make mistakes as well as how to prevent them. Thus began my education on the mental processes that lead to errors and on the methods of preventing them. I had a lot of reading to do. I dug in.

By early 1990, I had decided to work on a paper to bring these lessons from human factors engineering and cognitive psychology to my profession of medicine. My reading had introduced me to the insights of a host of experts, but three had the greatest influence: James Reason (Human Error) [1], the true father of error research, later to become a good friend; Don Norman (The Design of Everyday Things) [2]; and Charles Perrow (Normal Accidents) [3].

The Causes of Errors

James Reason, of the University of Manchester, UK, is without doubt the person who has contributed the most to the understanding of the causes and prevention of errors. His book, Human Error (1990), is the “Bible” of error theory. While Reason had many insights, his most original and useful contribution was to differentiate between active and latent failures. Active failures (or errors) are the individual unsafe acts that cause an injury (such as a nurse’s miscalculation of a drug dose). Latent failures (or latent errors) are contributory factors that are “built in” to the system—defects in design—that lie dormant and “set up” the individual to make a mistake. One reason a nurse may make an error in calculating the dose of a medication, for example, is the latent failure of a work environment full of interruptions and distractions. Latent errors create “accidents waiting to happen”; they are the product of poor system design [1, 4].

Figure 1. Jim Reason. (All rights reserved)

From this distinction between active and latent errors came the fundamental principle that underlies essentially all safety efforts: errors are not fundamentally due to faulty people but to faulty systems. To prevent errors, you have to fix the systems. As Reason put it so pungently: “Rather than being the main instigators of an accident, operators tend to be the inheritors of system defects. . . . Their part is that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking” [1].

After studying many industrial accidents, Reason further developed a general theory that accidents result from failures in one or more of four domains: organization, supervision, preconditions (such as fatigue from long hours of work), and specific acts. He is best known for his “Swiss cheese” model that depicts the organizational defenses (systems) as a series of slices of cheese. Each defense has defects, represented by the holes, which vary in size, timing, and position. Normally the multiple layers of defenses work, but when the defects temporally coincide—when the holes in the slices align—the potential for an “accident trajectory” is created, leading to the failure.
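One simple way to see the force of the model (my own illustration; Reason’s account is qualitative) is to treat each slice as an independent barrier that fails with some small probability. If defense $i$ fails at the critical moment with probability $p_i$, an accident trajectory penetrates all $n$ layers with probability

$$P(\text{accident}) \approx \prod_{i=1}^{n} p_i,$$

so three layers that each fail 10% of the time let an accident through only about one time in a thousand ($0.1^3 = 0.001$). The caveat, and the point of the latent-failure idea, is that the layers are rarely independent: a single latent condition, such as chronic understaffing, can enlarge the holes in every slice at once.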

Charles Perrow, professor of sociology at Yale, studied risks and accidents in large organizations. His book, Normal Accidents: Living with High-Risk Technologies, advanced the theory that in tightly coupled, complex systems accidents are inevitable and hard to predict. From analysis of a number of famous accidents, he described the latent failures and what could have been done to prevent the catastrophes. He has been a consistent and effective promoter of systems theory.

Donald Norman, director of The Design Lab at the University of California, San Diego, is the author of the delightful book, The Design of Everyday Things, in which he laments the everyday annoyances—and the error potential—posed by poor design, such as door openers for which it is not obvious whether to push or pull. Though that design failure results only in trivial annoyance, others, such as confusing instructions for programming navigation systems in aircraft, can result in crashes. Norman introduced me to “affordances”—designs that make function obvious, such as door handles that show by their position or design which way to push or pull, and “forcing functions”—designing a process to make it impossible to do it wrong, such as the design of a car’s ignition switch so that the engine cannot be started unless the gear is set in “park.”

This was fascinating stuff. It was all new to me despite my excellent undergraduate and medical education at fine universities. I knew nothing about the extensive knowledge that psychologists and human factors engineers had developed about why we make mistakes, nor about the ideas they had for preventing them. The light bulbs went off: this is what we need! This is something we can use. It was clearly applicable to healthcare: we had to redesign our systems.

The more I read, the more excited I got about the relevance of this knowledge to what we needed to do to reduce iatrogenic harm. I assumed that, like me, very few doctors, nurses, or other healthcare workers had any knowledge of this body of thought. It seemed inescapably clear that healthcare needed to take a systems approach to medical errors. We needed to stop punishing individuals for their errors since almost all of them were beyond their control, and we had to begin to change the faulty systems that “set them up” to make mistakes. We needed to design errors out of the system. I had no doubt we could do that.

Application of Systems Thinking to Healthcare

Healthcare lacked effective systems at many levels. At the most obvious level, there was no fail-safe system for identifying the patient to make sure that a test or medication was being given to the right person. We lacked a system for guaranteeing that a medication dose was correctly calculated, measured out, and given to the right patient. The only system for preventing a patient from getting the wrong dose or a substantial overdose of any drug was double-checking by another nurse, but this was required only for certain medications such as narcotics. Nothing prevented a nurse from inadvertently confusing two vials with similar labels, such as solutions of sodium chloride and concentrated potassium chloride, and accidentally giving the patient a lethal infusion of potassium—which, in fact, was not all that rare.

Although the evidence was clear that disinfecting your hands reduced hospital-acquired infections, there was no system to ensure that doctors and nurses did it for every encounter. And, of course, the hospital environment was notorious for distractions and interruptions of nurses and resident physicians, who were also overworked and sleep-deprived—all “preconditions” that are well known to cause errors.

As noted, my colleagues—particularly Howard Hiatt and Troy Brennan from the MPS—tried to dissuade me from writing about this. They said that “error” was a “third rail” issue: doctors—including my friends and associates—would be very upset if I brought it to public attention, since it would make them look bad, and the medical establishment, i.e., the AMA, would line up against me.

I understood that risk but saw no choice. Here was an answer to the problem of medical errors. How could we not pursue it? We needed to make a fundamental change in how we practiced. There was no way to make that happen unless we talked about error. We needed to change physicians’ (and nurses’ and everyone’s) mindset away from thinking of an error as a moral failing to recognizing that it resulted from a systems failure. I was very excited about the possibility of doing this.

Error in Medicine

By mid-1992, I had finally finished the paper. I decided to call it “Error in Medicine” [5]. It was a comprehensive look at the problem. It began by referencing the findings of the MPS, which found that nearly 4% of hospitalized patients suffered a serious injury, of which 14% were fatal and 69% were due to errors and were thus preventable.

From these findings we had estimated that nationwide more than a million patients were harmed annually, and 180,000 died from these injuries. I noted that this was the equivalent of three jumbo-jet crashes every 2 days, an analogy that was later picked up by others and became popular after the IOM report came out in 1999. Two-thirds of the deaths, or about 120,000, were due to errors. What could be done about the high rate of preventable injury?
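The arithmetic behind those estimates can be reconstructed roughly as follows (my back-of-the-envelope sketch; the figure of about 33 million hospital admissions per year and a jumbo-jet capacity of roughly 330 passengers are assumptions added for illustration, not numbers from the paper):

$$
\begin{aligned}
33{,}000{,}000 \times 0.04 &\approx 1{,}300{,}000 \text{ patients injured per year} \\
1{,}300{,}000 \times 0.14 &\approx 180{,}000 \text{ deaths per year} \\
180{,}000 \times \tfrac{2}{3} &\approx 120{,}000 \text{ deaths due to errors} \\
180{,}000 \div 365 &\approx 490 \text{ deaths per day} \approx 1.5 \text{ jumbo jets, or three crashes every 2 days}
\end{aligned}
$$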

The paper set out to do four things. It first explored why the error rate in medicine is so high. It noted that some of the lack of response comes from lack of awareness—errors are not part of a doctor’s everyday experience—and the fact that most errors are, fortunately, not harmful. But a more important factor is that doctors and nurses have a great deal of difficulty dealing with errors. They are taught to believe that they should make no mistakes; when they inevitably do make a mistake, they view it as a character failing.

The second section explored the institutional approach to errors in medicine, which is based on this “perfectibility” model: the expectation of faultless performance. This leads to blame when individuals fail, followed by punishment or more training. Since all humans err, any system that relies on error-free performance is destined to fail. I called for a fundamental change in the way we think about errors.

The third section summarized the lessons from cognitive psychology’s extensive research on the cognitive mechanisms of error and from human factors engineering’s research on latent error (poor system design) and on the effectiveness of system design in reducing errors.

The aviation experience provided a useful model: physicians and pilots are highly trained professionals committed to maintaining high standards while performing complex tasks in challenging environments. But aircraft designers assume that errors are inevitable and design systems to prevent them or, if that fails, “absorb” them with buffers, automation, and redundancy to prevent accidents. Procedures are standardized and pilots use checklists. Training is extensive, both in technical aspects and communication; pilots must take proficiency examinations every 6 months. The other major difference from healthcare is that adherence to standards is monitored and enforced by the Federal Aviation Administration, and accidents are investigated by the National Transportation Safety Board.

The impressive improvement in safety from application of this system-design approach in aviation (where there has been no fatality in the USA from a commercial air flight in more than 10 years!) contrasts dramatically with the medical model that focuses on the individual. There was one exception: the specialty of anesthesia, where application of systems changes had already resulted in dramatic improvements in mortality.

Starting in 1978, Jeff Cooper and his colleagues published a series of pioneering studies of critical incidents in anesthesia [6, 7] in which they identified specific systems failures and recommended system solutions (such as alarms for airway disconnections, procedures and practices for handovers, and greater preparation of residents before their care of patients). That led Ellison Pierce, the president of the American Society of Anesthesiologists, to partner with Cooper and others in 1984 to found the Anesthesia Patient Safety Foundation (APSF), with the mission “To ensure that no patient is harmed by anesthesia.” Under Cooper’s direction, APSF distributed a newsletter to all anesthesia providers highlighting patient safety issues and established a program to fund grants for research in anesthesia safety.

These efforts were dramatically successful: within a decade they reduced anesthesia mortality by 90%, from 1 in 20,000 to 1 in 200,000 [8]. In the first years of the APSF grant program, David Gaba’s group at Stanford was funded to develop and study the use of simulation to train anesthesia providers to work effectively in teams to manage critical events. The use of simulators was later expanded throughout healthcare and into medical schools.

Figure 2. (a) Jeff Cooper, (b) Jeep Pierce, (c) David Gaba. (All rights reserved)

In the final section, I urged hospitals to implement a systems approach by creating systems for error reporting, changing processes to reduce reliance on memory, standardizing routine procedures, and reducing error-inducing conditions such as long hours and high workloads.

I ended the paper with a summary that was more prophetic than I realized at the time: “But it is apparent that the most fundamental change that will be needed if hospitals are to make meaningful progress in error reduction is a cultural one. Physicians and nurses need to accept the notion that error is an inevitable accompaniment of the human condition, even among conscientious professionals with high standards. Errors must be accepted as evidence of systems flaws not character flaws. Until and unless that happens, it is unlikely that any substantial progress will be made in reducing medical errors” [5].

I knew this was important stuff. I thought it would be a paradigm-shifting paper—as in fact it turned out to be. So I was stunned when The New England Journal of Medicine rejected it without even sending it out for review! I knew the editor, Jerry Kassirer, from our days together at Tufts, so I called him and asked him to tell me why they had rejected it so I could revise it. I will never forget his answer: “It just didn’t meet our standards.” I was so stunned that I didn’t know what to say, so I said nothing, thanked him, and said goodbye.

Not long after, I happened to see George Lundberg, editor in chief of JAMA, in the hallway at HSPH. He was there teaching that day. I asked him if he would take an informal look at my paper. He did, immediately recognized its “huge importance” (his words), and asked me to submit it to JAMA. I was delighted and greatly relieved. George handled it himself at JAMA and accepted it shortly after. It would be months before it was published, however. Such delays—sometimes a year—between acceptance and publication are not unusual with high-impact medical journals, but there was something else going on here.

Figure 3. George Lundberg. (All rights reserved)

Response to Error in Medicine

George realized that my paper would be a red flag for many doctors, who were very sensitive to anything that might make them look bad. Their institutional arm was the AMA, which saw its primary responsibility as the defense of physicians’ pride and privilege. Naively, I thought the paper offered so much in the way of opportunity to reduce harm to patients that it would be rapidly embraced by doctors. Here was the way they could reduce harm to their patients and decrease the risk of malpractice suits. Why wouldn’t doctors be excited about that?

George had the better political sense. He deliberately published the paper just before Christmas, on December 21, 1994, knowing that holiday issues are the least read by the press; he hoped it would not attract a lot of media attention. It almost worked. Only NPR picked it up: David Baron (later of “Spotlight” fame) recognized its importance and gave it public notice. A month later The Washington Post wrote about it, and then the reaction began. Lundberg began to receive hate mail, and a lobbying campaign to get rid of him got underway. James Todd, the executive vice president of the AMA, stood by him, however, and the furor subsided.

Curiously, I don’t recall receiving any “hate” mail—although I may have just put it out of my mind. I certainly did not get a lot. But he did, and this proved to be an early episode in a series of courageous publishing decisions that ultimately cost him the JAMA editorship. I am forever indebted to George Lundberg, who had the courage to do the right thing.

On the other hand, within days of the publication of Error in Medicine, I received letters from friends and others congratulating me and thanking me for the paper. I even received a speaking engagement request. JAMA received a deluge of letters to the editor disagreeing with one or another of the points I had made. It ignored most of them but asked me to respond to nine—a huge number for a single paper. I did so, and the letters and my responses were subsequently published in JAMA [9, 10].

Amazingly, almost as if on cue, a series of highly publicized events occurred in early 1995 that drew public and professional attention to the paper. In January, The Boston Globe reported that Betsy Lehman, a beloved health reporter for the paper, had died from a massive overdose of chemotherapy at the prestigious Dana-Farber Cancer Institute (DFCI). The community was shocked; Globe reporters relentlessly pursued the story, with a litany of front-page articles week after week castigating the Institute for its mistakes and poor systems.

Figure 4. Betsy Lehman. (All rights reserved)

As a leading cancer research organization, DFCI always had a number of new drug trials going on simultaneously. Sometimes these included tests of high doses of toxic chemotherapeutic drugs, and treatment protocols varied substantially by dose, time of dosage, etc. Study protocols were complicated and many pages long. It was difficult for nurses and doctors to keep it all straight. So, when the physician mistakenly wrote an order for Lehman for a dose that was four times the usual amount, neither the nurses nor the pharmacy questioned it. The system failed.

In April, I was asked to meet with the DFCI staff to talk about our new thinking on the systems causes of errors, in an effort to help the devastated staff deal with the crisis. They were visibly shaken. Years later people commented to me about our session, so I think it helped. The Lehman case was a life-changing event for DFCI, which underwent a major reorganization under the leadership of Jim Conway to dramatically improve its safety and ultimately achieve the lowest medication error rate in the nation.

The Massachusetts Board of Registration in Nursing was not so moved. Four years later (!) it censured 18 nurses for their role in the Betsy Lehman case. I wrote a scathing op-ed for the Globe [11].

The Betsy Lehman tragedy, plus several other egregious errors that got national coverage that spring (the amputation of the wrong leg of a patient in Florida, removal of the wrong breast of a patient in Michigan, death from accidental disconnection of a ventilator in Florida, and an operation on the wrong side of the brain of a patient in Chicago), stimulated reporters and others to inquire more deeply into why these things happen. They discovered my recently published Error in Medicine paper. It undoubtedly got much more early attention because of the coincidence of these tragic accidents.

The combination of the paper and these highly visible preventable deaths also created the climate for a favorable reception of the results of our adverse drug event (ADE) study at the Massachusetts General Hospital and the Brigham and Women’s Hospital that David Bates and I published just a few months later in JAMA in July 1995 [12, 13]. Not only did we find high rates of ADEs, further evidence of the seriousness of the error problem, but we were also able to show that underlying systems failures could be identified. (See next chapter.)

It is hazardous to ascribe causation, but it is not unreasonable to conclude that the “one-two-three punch”—the error paper, which raised the issue and recommended a system solution; the serious cases that got public attention; and the evidence from the ADE study that we could identify systems causes underlying medical errors—was instrumental in beginning to get patient safety and systems change on the national agenda.

The paper also influenced the thinking of future leaders in patient safety. Within a year, Jerod Loeb, from the Joint Commission, and Mark Eppinger of the Annenberg Center decided to convene a conference on medical error. Despite the displeasure with Lundberg at the AMA, its legal counsel, Marty Hatlie, convinced the leadership to shift its efforts from tort reform to error prevention. That ultimately led the AMA to found the National Patient Safety Foundation. (See Chap. 5.)

Most importantly, however, the paper influenced Ken Shine, president of the Institute of Medicine (IOM) and its Quality of Care Committee, to make safety a focus of its work in quality of care. (See Chap. 9.) The Committee’s later report To Err is Human [14] was in many ways a detailed explication of the information in Error in Medicine, amplified with patient examples and specific recommendations for policy changes. It brought to public attention what the paper brought to the profession.

Error in Medicine called for a paradigm shift. It challenged everyone in healthcare to change their approach to its most sensitive and most taboo failing: medical errors. It called for replacing a stale, failed policy of blame and retribution after a mistake with a radically new approach to prevent future mistakes. It looked forward, not backward; it replaced fear with hope. It gave medicine a way to deal with our national shame of preventable deaths. “It’s not bad people, it’s bad systems” would be the guiding principle for the work to follow. Things would never be the same.