On March 30, 1981, Ronald Reagan, president of the United States, was shot in an assassination attempt. During his lifesaving surgery at the George Washington University Hospital, the nation was riveted by the clear and calm accounts of its progress given by the hospital's physician spokesman, Dennis O'Leary. Five years later, O'Leary became the head of the Joint Commission on Accreditation of Hospitals.

History of the Joint Commission [1]

Most of what follows on the early history of The Joint Commission comes from its excellent 50th-anniversary report, Champions of Quality in Health Care.

The Joint Commission has been for many years the principal driver of healthcare quality in hospitals and, in more recent decades, in other types of healthcare organizations. Its roots go back to the early years of the twentieth century, when medicine was undergoing rapid change as a result of scientific advances in understanding the causes of disease, the use of antisepsis, and the development of x-rays. Over a short period, hospitals moved from "pest houses" to essential resources and proliferated. But quality of care varied widely. A few people began to be concerned.

One was Ernest Amory Codman, a surgeon at the MGH in Boston who developed the End Result Idea, a system for following up patients after surgery to determine their outcomes. The Idea was sufficiently unpopular with his colleagues at the MGH that he found it necessary to leave and found his own hospital in 1911. He kept meticulous records and later published his results in A Study in Hospital Efficiency [2]. Along the way, however, he went out of his way to publicly accuse surgeons and hospitals of being more interested in making money than in their patients' outcomes, for which he was widely ostracized.

At about the same time, in 1912, a prominent Chicago surgeon, Franklin H. Martin, and several others founded the American College of Surgeons (ACS). The purpose was to distinguish those who were trained in surgery from others and so establish the specialty. Their other concern was the lamentable state of hospitals, which led them to call for "a system of standardization of hospital equipment and hospital work." Ernest Codman was tapped to chair a Hospital Standardization Committee.

In 1918 the ACS initiated the standardization program. In its initial survey of hospitals of 100 beds or more, only 89 of 692 hospitals met the minimum standards! Although that number was reported to the College, the list of the hospitals' names was burned to keep it from being obtained by the press. The hospital standards approval process was here to stay, however, and by 1950, half of all hospitals, 3290 of them, were on the ACS approval list.

But the ACS was in financial trouble and unable to sustain the program. It approached the American Hospital Association (AHA) about taking it over. The AHA was interested and willing to provide financial support. The American Medical Association (AMA) got wind of this and made a counteroffer. The AMA had an accreditation program for internships and residencies and disliked the idea of a program run by administrators, not doctors. After months of wrangling, the three parties, along with the American College of Physicians (ACP), worked out their differences and in 1951 founded the Joint Commission on Accreditation of Hospitals (JCAH) .

In the early years, the JCAH carried on the ACS standards. Surveys focused only on the hospital environment, what Donabedian would later call "structure." They examined physical aspects, such as proper use of autoclaves, the functioning of clinical laboratories, medical staff organization, and patient records. An early effort to include evaluation of clinical care was shot down by the AMA, which consistently claimed this to be the prerogative of physicians. The AMA even threatened to absorb the JCAH if it did not get its way.

The enactment of Medicare in 1965 was a watershed moment for the JCAH. The legislation gave it "deemed" status, meaning that hospitals accredited by the JCAH were deemed to have met the Medicare Conditions of Participation. This may have been of necessity—the federal government had no capacity to inspect hospitals and the states were notoriously poor at it—but it had immense impact and dramatically enhanced the prestige and power of the JCAH as a "quasi-public" licensing body. The flip side was that it exposed the JCAH to much greater public accountability.

The JCAH board saw the new status as an opportunity to elevate the standards and use the surveys to improve quality of care. By using lessons learned from the surveys, they could move standards from “minimum essential” to “optimum achievable.” They undertook a major project to elevate the standards over the next few years.

These were also the years of rising concerns about civil rights, however, and the JCAH soon found itself in the crosshairs of public interest groups that were concerned—justifiably—about poor conditions in municipal hospitals in major cities. The Commission was pushed to deny accreditation to these hospitals in order to stimulate increased funding.

The JCAH resisted. Denying accreditation would be counterproductive: closing these hospitals would deny care to indigent people, and most of them could obtain certification from their state or the Department of Health, Education and Welfare (HEW) anyway. The JCAH stuck to provisional accreditation and to consulting with hospitals on how to improve.

However, in 1971 the preamble to the new Accreditation Manual for Hospitals declared that patients were entitled to "equitable and humane treatment" and that "no person should be denied impartial access to treatment ... on the basis of such considerations as race, color, creed, national origin or the nature of the source of payment for his care." The first patients' bill of rights. Nor was it just a statement: observance of patient rights would be a factor in the determination of accreditation.

The tension between the JCAH's view of accreditation as informing and stimulating hospitals to improve quality of care and the public's desire for accountability came to a head a few years later, when Congress passed legislation giving HEW (now the Department of Health and Human Services (HHS)) authority to conduct "validation" surveys of accredited hospitals in response to complaints alleging noncompliance with Medicare standards and to establish standards that exceeded those of the JCAH. The Commission was being pushed to be not just an accreditor but a certifier, i.e., a regulator. It continued to resist, but in some states it did begin to survey hospitals jointly with state licensing teams.

It seemed to work. In 1979, a General Accounting Office (GAO) report endorsed The Joint Commission's process as superior to that of HHS and later commended its cooperative relationships with states on licensure and accreditation. The Commission focused on improving its surveys.

The Agenda for Change

In 1986, the course of The Joint Commission changed dramatically when Dennis O'Leary was appointed president. O'Leary was a board-certified internist and hematologist. He grew up in Kansas City, graduated from Harvard and Cornell Medical School, and trained in internal medicine at the University of Minnesota and Strong Memorial Hospital in Rochester, NY. He had been at George Washington University Medical Center since 1971, where he had taken on progressively more administrative duties, including chairing the hospital's medical staff executive committee and serving as dean for clinical affairs at the medical center.

O’Leary had a clear idea of what he wanted to accomplish. He wanted The Joint Commission to change the focus of accreditation to performance improvement .

The time was ripe. In the 1960s, Avedis Donabedian at the University of Michigan had developed his classic definition of quality of care as encompassing three components: structure, process, and outcome. But his teachings had little impact outside academe. The Joint Commission focused on structure, and practicing physicians were too busy in the 1960s and 1970s keeping up with the fast pace of scientific advances that were transforming medicine to give much thought to how their systems worked or to analyzing their results. (Codman redux!)

But costs were increasing almost exponentially. When Medicare was introduced in 1965, it was estimated that it would cost $275–325 million a year [3]. The actual expenditure in 1966 was $1.8 billion, and by 1985 the expenditure was $71 billion and seemingly out of control [4, 5]. Questions began to arise for the first time about the effectiveness of care. What were we getting for our money?

In 1973, John Wennberg, a researcher from the Harvard Center for Community Health and Medical Care, added fuel to the fire when he published the results of his studies of geographic variation, which showed two- to fourfold variations in the provision of common surgical procedures, such as tonsillectomy, hemorrhoidectomy, and prostatectomy [6]. Charles Lewis had previously shown three- to fourfold variation in the rates of performance of six surgical procedures in Kansas, including tonsillectomy, appendectomy, and herniorrhaphy [7]. These differences were not trivial, and the conclusion was inescapable: either too many patients were getting the service in one area or too few in the other. Did the doctors know what they were doing? Over the next few years, Wennberg expanded his studies to other regions, and the results were similar.

It took a while for people to take notice, but by the early 1990s, a movement was afoot in medicine to take a different approach to quality of care. Paul Batalden and Don Berwick had studied under Deming and began work on applying industrial continuous quality improvement (CQI) concepts to healthcare [8]. Berwick had founded the Institute for Healthcare Improvement (IHI) (see Chap. 6).

O'Leary and his senior colleagues Jim Roberts and Paul Schyve could see that the application of CQI to healthcare was the future. It was time to get The Joint Commission on board. Schyve was the director of standards at The Joint Commission from 1986 to 1989 and then vice president for research and standards. He would function as The Joint Commission's quality and safety guru for the next two decades, representing it to the NQF and participating in NPSF and LLI initiatives.

Figure 1. (a) Dennis O'Leary and (b) Paul Schyve. (All rights reserved)

There were other pressures as well: by this time The Joint Commission had 2600 standards! Hospitals and doctors wanted relief, and they wanted accreditation to be more relevant to their work. Why not focus on quality of care?

Changing Accreditation

O'Leary and his staff proposed that the Commission change the accreditation survey focus from standards for organizational structure to standards important to the provision of quality care. A steering committee of six board members plus Paul Griner, John Wennberg, Steven Shortell, and Lincoln Moses was formed to guide them.

Steven Shortell ended up leading the reorganization of the accreditation manual into a series of chapters on clinical care functions and a series on management support functions. The hospital standards manual shrank from 2600 standards to fewer than 500. The Joint Commission was now talking about performance, not structure. The new standards had clearly moved the bar to a higher level.

Meanwhile, work began on developing clinical indicators—discrete measures of outcomes and related processes—in selected areas such as the management of heart attack, heart failure, and pneumonia. Indicator data were then gathered from accredited hospitals and analyzed through an Indicator Measurement System. Integrating these data into the accreditation process was a challenge, however, since they were not standards against which to measure compliance but more like rifle shots, each aimed at a narrow but significant goal.

The Commission also changed its name to the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) to reflect its increasing scope. It was already accrediting long-term care organizations, mental health institutions, and ambulatory surgery centers, among others; it now added home care. Five public members were added to the board. In 1990, JCAHO changed its mission statement to "improve the quality of healthcare provided to the public."

Hospitals resisted the changes. External factors, such as the failed Clinton health plan and continuing increases in healthcare and survey costs, compounded the problem. The combination of anxiety over the new standards and potential public disclosure of performance led the AHA at one point to seek O'Leary's ouster as The Joint Commission president and replace him with a senior AHA staff member who had previously been a Joint Commission surveyor. However, board leaders, headed by incoming board chair William Kridelbaugh of the American College of Surgeons, successfully deflected the assault.

In 1994 and 1995, the Commission piloted a new approach to accreditation: the Orion Project. It tested several innovations in the survey process: changing surveys from announced to unannounced, so hospitals had to be continuously ready; using an integrated team of surveyors who were all on-site at the same time; having surveyors use laptop technology; focusing on the effectiveness of staffing; and including performance measures in the accreditation process.

After testing in several states and refinement, some of the innovations were rolled out nationwide. These were major changes in the accreditation process, but the new approach was well received. A survey of hospitals showed that 80% found the new process "more interactive, consultative, and valuable" than previous accreditation surveys. By 1995, accreditation included six new functional areas: patient rights and organizational ethics, care of patients, continuum of care, management of the environment of care, management of human resources, and surveillance and prevention of infection.

Focus on Patient Safety: Sentinel Events

Something else was happening in 1995. Starting with the death of Betsy Lehman in Boston from an overdose of chemotherapy, a series of major medical mishaps received national press coverage. These led the public and advocacy groups to question the effectiveness of Joint Commission accreditation. If it was doing its job, how could these things happen?

Some years earlier, in 1990, I had visited The Joint Commission to suggest that, based on our experience with the Medical Practice Study in New York, it should pay more attention to patient safety issues in accrediting hospitals. The discussion was cordial, but The Joint Commission had a substantial development agenda on its plate and was not eager to add more to it. However, the seed had been planted.

The patient safety shocker for The Joint Commission came that spring, when it learned that a surgeon in an accredited hospital had amputated the wrong leg of a diabetic patient with severe peripheral vascular disease. Wrong-site surgery? Who had ever heard of that? Nor were there any identifiable case reports in the medical literature. (It was not so rare, it turned out. In the ensuing years, The Joint Commission would typically learn of 50–75 new cases of wrong-site surgery each year.)

Reacting to this and the other major mishaps, The Joint Commission moved into action. Rick Croteau, a senior surveyor and former aerospace engineer and surgeon, was tasked with creating an entirely new policy framework for addressing what came to be known as sentinel events. His knowledge of systems thinking, analysis, and applications would stand him in good stead.

A sentinel event was defined as "a serious undesirable occurrence that results in the loss of patient life, limb, or function." A new performance improvement standard was added that required "intensive assessment of undesirable variation in performance," i.e., a root cause analysis (RCA) for each sentinel event, as well as the creation of a corrective action plan. The Joint Commission also prepared and released a monograph describing a thorough RCA and how to perform one. Hospitals experiencing a sentinel event were placed in Conditional Accreditation until they had submitted a thorough RCA and had it approved.

In early 1996, vice president for performance measurement Jerod Loeb conceived of holding a national conference on medical error. He persuaded the American Association for the Advancement of Science (AAAS), the Annenberg Center, and the AMA to be co-conveners (Chap. 4). At the conference that fall, the Commission announced that, in an effort to be less punitive, it was changing its sentinel event designation from "Conditional Accreditation" to "Accreditation Watch."

It didn’t work. Although The Joint Commission leaders thought this was making their response to sentinel events less punitive, hospitals saw it as a stigma—less threatening, perhaps, but punitive, nonetheless.

By 1997, antagonism between The Joint Commission and hospitals was again in full flower. Some of this related to the formal launch of ORYX, the next iteration of efforts to integrate clinical indicators into the accreditation process. ORYX set requirements for the number and types of standardized measures each organization had to collect and report. To ease the burden on hospitals, The Joint Commission approved the use of commercial measurement systems to collect the data, audit their reliability, and transmit them to the Commission. But the process was expensive and labor-intensive.

Adding to the friction was the Commission's decision to improve public transparency by introducing Quality Check on its website, a directory of accredited organizations and their performance reports. The combination of these initiatives with the perceived threat of exposing their patient safety performance was almost too much for the hospital field to tolerate.

Hospitals agreed in principle with doing RCAs (how could they not?), but in practice they regarded the expectation as intrusive. A more collaborative approach was needed. Reporting of sentinel events was made voluntary. Hospitals were "encouraged" to report sentinel events and to do RCAs. "Accreditation Watch" would be assigned only if a hospital failed to do an RCA for review by The Joint Commission or did not report a sentinel event that The Joint Commission learned of by other means, such as from a patient complaint or a story in the press.

Beyond their resistance to performing RCAs, hospitals had many reasons for not wanting to report sentinel events to The Joint Commission. Their lawyers were concerned that, despite all assurances of confidentiality, information might be revealed that would lead to a lawsuit. CEOs worried about damage to their hospitals' reputations if events became known. In addition, if the internal environment was punitive, as most still were, the CEO might not even know an event had occurred. And, despite all the efforts of patient safety experts, many people clung to the idea that some injuries are not preventable and therefore did not need to be reported.

Sentinel Event Alerts

Since 1995, The Joint Commission had been steadily amassing details of sentinel events and their related RCAs and corrective action plans in a Sentinel Event Database. The database now contained a great deal of useful information about the prevalence of serious hazards. Why not disseminate these "lessons learned" widely, to every hospital? The idea of the Sentinel Event Alert was born.

In 1998, the first Sentinel Event Alert was issued, describing patient deaths resulting from accidental infusion of concentrated potassium chloride (KCl). Nurses confused its vial with that of the dilute sodium chloride used to "flush" intravenous lines. The error was rare, but when it occurred, it was deadly.

The Alert recommended that concentrated KCl be removed from nursing units and that its use be restricted to the pharmacy, where it could be better controlled. This was a classic example of a "forcing function," a powerful human factors technique for preventing error. It worked. Hospitals complied, and within a few years, deaths from accidental infusion of KCl virtually disappeared. The Sentinel Event Alert became one of The Joint Commission's most successful initiatives.

In 1999 O'Leary invited me to speak to the board at their annual planning retreat. I urged them to take the lead in patient safety and to do three specific things: (1) define known safe practices and require their implementation (i.e., inspect for them), (2) replace scheduled triennial inspections with unannounced inspections, and (3) move their reporting system to a separate entity with confidentiality protection. I have no idea whether my entreaties helped, but unannounced surveys were later implemented, and in 2001 The Joint Commission defined 11 new safe practices that it began inspecting for in January 2003.

The Commission continued to push hospitals to focus on quality improvement, but in 1999 the IOM report To Err Is Human put the spotlight on safety. In that year, The Joint Commission rolled out a major new chapter on patient safety for its accreditation standards manual and changed its mission statement to "... continuously improve the safety and quality of care provided to the public." However, it needed to do more. It needed to motivate hospitals to implement safe practices.

Patient Safety Goals

Sentinel Event Alerts gave hospitals an incentive to deal with patient safety issues, but many did not know how. Removing concentrated KCl from nursing units was simple: issue a decree (although, interestingly, some hospitals didn't). But few hazards could be dealt with that easily; most required changing a process. Thanks to IHI and others, some hospitals were beginning to learn how to do this, but it was difficult work, and most were still not engaged. By this time, however, the National Quality Forum (NQF) was identifying evidence-based safe practices that hospitals could adopt. Hospitals wouldn't have to reinvent the wheel; they would just have to put it on.

The Joint Commission's Sentinel Event Database was the treasure trove of information that made the Alerts possible. Why not take the next step and recommend safe practices to address the high-priority problems? In addition to The Joint Commission, the NQF, IHI, and others were developing safe practices.

In 2002, the Commission established a Sentinel Event Alert Advisory Group of nurses, physicians, pharmacists, risk managers, and engineers to formalize the work already being done to identify patient safety issues based on lessons learned from sentinel events—the Alerts—as well as from accreditation surveys and recommendations from the NQF and other safety organizations. The Advisory Group analyzed potential remedies for practicality, cost-effectiveness, and evidence.

From this analysis, Patient Safety Goals were developed. Each goal had one or more specific recommendations, the practice changes to be implemented. Box 12.1 shows one of the early goals, reporting critical test results, which was informed by our work at the Massachusetts Coalition for the Prevention of Medical Errors (Chap. 8).

Box 12.1 Patient Safety Goal 2

Goal 2

Improve the effectiveness of communication among caregivers.


Report critical results of tests and diagnostic procedures on a timely basis.

Rationale for NPSG.02.03.01

Critical results of tests and diagnostic procedures fall significantly outside the normal range and may indicate a life-threatening situation. The objective is to provide the responsible licensed caregiver these results within an established time frame so that the patient can be promptly treated.

Elements of Performance for NPSG.02.03.01

1. Develop written procedures for managing the critical results of tests and diagnostic procedures that address the following:

 The definition of critical results of tests and diagnostic procedures

 By whom and to whom critical results of tests and diagnostic procedures are reported

 The acceptable length of time between the availability and reporting of critical results of tests and diagnostic procedures

2. Implement the procedures for managing the critical results of tests and diagnostic procedures.

3. Evaluate the timeliness of reporting the critical results of tests and diagnostic procedures.

Reprinted by permission of the Joint Commission Resources. All rights reserved

Note that these were goals, not required practices. In line with the Commission's commitment to encouraging voluntary improvement, the goals were aspirational statements of intent for organizations to pursue. Recommendations, not requirements. A hospital's progress in implementing them would, however, be evaluated during accreditation surveys, and the performance expectations that accompanied each goal were surveyed and scored. Compliance data were aggregated and periodically published in Joint Commission Perspectives.

The first set of six Patient Safety Goals was published in January 2003:

  1. Improve the accuracy of patient identification.

  2. Improve the effectiveness of communication among caregivers.

  3. Improve the safety of high-alert medications.

  4. Eliminate wrong-site, wrong-patient, and wrong-procedure surgery.

  5. Improve the safety of infusion pumps.

  6. Improve clinical alarm systems.

Each goal had two or more specific recommendations. More detailed processes for achieving the Goals were available for each safe practice from other sources, such as IHI and the Massachusetts Coalition. At the time they were released, Rick Croteau noted: "These six Joint Commission National Patient Safety Goals and recommendations provide a clearly defined, practical, and achievable approach to addressing…the most critical threats to patient safety" [9].

The Advisory Group continually reviews the goals, adding new ones annually and retiring old ones as high compliance rates are achieved. In 2004, Goal 1b (time-outs) and Goal 4 were consolidated into a new "Universal Protocol" for the prevention of wrong-site, wrong-person, wrong-procedure surgery. A seventh goal was added to address healthcare-acquired infections, which included complying with CDC hand hygiene guidelines. By 2008, 16 goals had been issued, including the first call to involve patients in their care: Goal 13, Encourage Patients' Active Involvement in Their Own Care as a Patient Safety Strategy.

The Goals were well accepted by hospitals. Follow-up data from surveys in 2005 showed high rates of implementation for a number of specific recommendations: 95% use of two patient identifiers, 82% use of time-outs, 90% implementation of critical test result procedures, 99% removal of KCl, and 96% use of a wrong-site checklist. The Goals have been an effective mechanism for motivating hospitals to improve patient safety.

Core Measures

By 2000, The Joint Commission, working with expert panels, had expanded its original core measure sets for acute myocardial infarction (AMI), heart failure (HF), and pneumonia to a total of 14 individual measures. Box 12.2 shows the core measures for AMI. Hospitals began collecting the measures on July 1, 2002.

Box 12.2 AMI Core Measures

Aspirin within 24 hours of arrival

Aspirin prescribed at discharge

Beta-blocker within 24 hours of arrival

Beta-blocker prescribed at discharge

ACEI for LVSD prescribed at discharge

Smoking cessation counseling/advice

Thrombolysis within 30 minutes

PCI within 120 minutes

Adapted from Ref. [9]

Responding to the pleas from hospitals to reduce duplication of effort, the Commission worked with the Centers for Medicare and Medicaid Services (CMS) to create one common set known as the Specifications Manual for National Hospital Inpatient Quality Measures to be used by both organizations.

The measures worked. From 2002 to 2005, hospitals' adherence improved: from 87% to 90% for AMI, from 72% to 81% for pneumonia, and from 60% to 76% for heart failure. Some improvements were dramatic: for example, hospitals provided smoking cessation advice to 92.1% of patients in 2005, compared with 66.6% in 2002. More importantly, outcomes improved: the inpatient mortality rate for heart attack patients declined from 9.2% in 2002 to 8.5% in 2005, representing thousands of lives saved [10].

Currently, The Joint Commission's ORYX initiative integrates performance measurement data into the accreditation process. The measures are aligned as closely as possible with those of CMS, and chart-abstracted data are publicly reported on The Joint Commission's Quality Check® website.

Public Policy Initiative

In 2001, The Joint Commission launched a set of public policy initiatives to amplify its patient safety and quality improvement messaging. It convened a series of topic-oriented roundtables to frame relevant discussions and develop recommendations. Each roundtable had 30–45 participants and met two or three times. The Joint Commission staff used the input to draft a white paper, each eventually 35–50 pages long and containing findings, recommendations, and accountabilities for seeing each recommendation through to fruition.

The first paper on the nurse staffing crisis struck a chord in the hospital, nursing, and patient safety communities and was an immediate success, eventually being downloaded from The Joint Commission website almost two million times. The report of a roundtable on patient safety and tort reform was also a hit and was downloaded over 300,000 times. A number of other roundtables were convened in the next few years.

Accreditation Process Improvement

In 2000, The Joint Commission decided it had to do more to improve its accreditation process. The triennial inspections had been a tremendous burden for hospitals. For several weeks beforehand, all work other than patient care stopped as hospital departments got their records in shape for the survey. Everyone dreaded them. Worse, they often failed to identify some serious performance problems.

O'Leary asked Russ Massaro, a seasoned surveyor who knew how the minds of healthcare organization leaders worked, to rethink the process. The result was the Shared Visions–New Pathways initiative, launched in 2004. It completely changed the accreditation process in three fundamental ways.

First, organizations were asked to periodically perform their own in-depth self-assessments and share the results, along with their plans for improvement, with The Joint Commission. From then on, surveys would concentrate on the findings from the self-assessments and on hospital-reported data, such as from ORYX and sentinel events.

Second, and this was a biggie, on-site surveys would henceforth be unannounced. Gone was the dreaded triennial ritual of stopping work to get ready for The Joint Commission. You had to be at the top of your game at all times. Unannounced meant finding out that the survey would be tomorrow.

Finally, instead of focusing on records, the on-site reviews would focus on the care actually provided to patients. Surveyors used a tracer methodology in which the care of individual patients then in the hospital was evaluated by observing and interviewing the patients and hospital staff in real time. Patient care was reviewed for compliance with relevant standards, such as medication management and nurse staffing.

It was a huge change. And it was very successful. The reviews engaged doctors, nurses, and other frontline staff. Suddenly, surveys made sense to them, while also giving them an opportunity to demonstrate their skills. Hospitals welcomed the new accreditation process. This was now continuous engagement with The Joint Commission toward the mutual goal of continuous improvement.


What do we make of all this? First and foremost, The Joint Commission has clearly been a leader in reducing harm in healthcare. Without its influence—and persistence—we would not have made the progress we have made in many areas and likely not made any at all in others. Deciding what to do for patient safety and how to do it has been an incredibly complicated business, and the Commission has navigated that thicket well.

From the beginning, the patient safety movement has had to confront the tension between those who call for the greater accountability and regulation that have worked so well in other hazardous industries, such as transportation and nuclear power, and those who believe that real change must be voluntary and that our job is to motivate people and provide them with the tools to make it.

Your author has found himself squarely in the middle of this debate, embracing the need to change the culture and helping to teach professionals to change systems, but also of the mind that we need to do much more to hold the leaders of healthcare organizations publicly accountable for failure to prevent harm, particularly serious harm where the methods of prevention are known, as with serious reportable events.

The Commission has also found itself in the middle, and over the years it has experimented with one approach after another regarding the reporting of sentinel events, the collection and reporting of data, and how to respond to failures revealed by accreditation visits. All of this has been compounded by the ambiguities and vicissitudes that result from the fact that hospital participation is voluntary and from changing views about whether accreditation suffices for deemed status or state licensing.

Compounded also by pushback on all sides: consumer groups that want tougher oversight, hospitals and doctors that want less, and a Congress that doesn't know what it wants or changes what it wants according to external pressures. To say "you can't please everyone" is an understatement at best.

Despite all this, The Joint Commission has been a wellspring of innovations, a great many of which have measurably reduced harm and improved quality of care. More than any other organization, public or private, it has consistently pursued a data-driven, analytic approach to helping hospitals improve care. We can all sleep better knowing that it will continue to be a major force for improving patient safety in the future.