1 Introduction

Making a serious error is one of the most stressful professional experiences for a doctor or for anyone in clinical practice. In other professions, such as architecture or the law, serious mistakes can generally be remedied with an apology and compensation for losses sustained. But in medicine, mistakes can have serious and lifelong consequences for patients and families.

Medical schools rightly encourage the highest standards of professional practice. Doctors are expected to work hard and do their best for their patients and, ideally, not make errors. It is tempting to think that only ‘bad’ or ‘lazy’ people make mistakes and that making a serious error implies a flaw in character unworthy of a serious professional. The reality, however, is that all doctors, indeed all clinicians, will make errors during their careers and that some of these errors will have serious consequences.

We cannot completely avoid errors but we can do much to reduce them, to spot them more quickly and to protect patients from the worst of the consequences. However, in order to do this, we need to understand the nature of error and, in particular, how working conditions strongly influence our behaviour and the likelihood of error. We also need to understand that while we can make personal efforts to avoid errors, the greatest protection will come from working in a team of people who are willing to recognise errors, speak up, support each other, and protect both patients and colleagues from the consequences of errors.

2 What Is an Error?

In everyday life, recognising error seems quite straightforward though admitting it may be harder. Immediate slips, such as making tea when you meant to make coffee, are quickly recognised. Other errors may only be recognised long after they occur. You may only realise you prescribed a drug incorrectly when the patient returns to follow-up clinic a few weeks later with problematic side effects from an overdose. Some errors, such as missing a lung tumour on an X-ray taken to investigate a potential shoulder injury, may only become apparent years later.

An important common theme running through all these examples is that an action is only recognised as an error after the event. Human error is a judgement made in hindsight [1]. There is no special class of things we do or don’t do that we can designate as errors; it is just that some of the things we do turn out to have undesirable or unwanted consequences. This does not mean that we cannot study error or examine how our otherwise efficient brains lead us astray in some circumstances, but it does suggest that there will not be specific cognitive mechanisms to explain error that are different from those that explain other human thinking and behaviour.

Eric Hollnagel [2] points out that the term error has historically been used in three different senses: as the cause of something (wrong-site surgery due to human error), as the action or event itself (removing the incorrect kidney), or as the outcome of an action (the death of a patient from renal failure). The distinctions are not absolute, in that many uses of the term involve both cause and consequence to different degrees, but the three senses have very different emphases.

The most precise definition of error, and most in accord with everyday usage, is one that ties it to observable behaviours and actions. As a working definition, John Senders [3] proposed that an error means that something has been done which:

  • Was not desired by a set of rules or an external observer

  • Led the task or system outside acceptable limits

  • Was not intended by the actor

This definition of error, and other similar ones [2], imply a set of criteria for defining an error:

  • First, there must be a set of rules or standards, either explicitly defined or at least implied and accepted in that environment

  • Second, there must be some kind of failure or ‘performance shortfall’

  • Third, the person involved did not intend this and must, at least potentially, have been able to act in a different way

All three of these criteria can be challenged, or at least prove difficult to pin down in practice. Much clinical medicine is inherently uncertain and there are frequently no easily applicable protocols to guide treatment. In addition, the failure is not necessarily easy to identify; it is certainly not always clear, at least at the time, when a diagnosis is wrong or at what point blood levels of a drug become dangerously high. Finally, the notion of intention, and of being able, at least in theory, to act differently, is challenged by the fact that people’s behaviour is often influenced by factors, such as fatigue or peer pressure, of which they may not be aware and over which they have little control. So, while the working definition is reasonable, we should be aware of the difficulties of applying it in practice.

3 Understanding Error

In his analysis of error, James Reason [4] divides errors into two broad types: slips and lapses, which are errors of action, and mistakes, which are, broadly speaking, errors of knowledge or planning. Reason also discusses violations which, as distinct from errors, are intentional acts that, for one reason or another, deviate from the usual or expected course of action. These psychological analyses are mainly concerned with failures at a particular moment in time and probe the underlying mechanisms of error. There is therefore not necessarily a simple correspondence with medical errors which, as discussed above, may refer to events happening over a period of time. However, we will see that this conceptual scheme is very helpful in understanding errors in clinical practice and how they sometimes combine to cause harm to patients.

3.1 Slips and Lapses

Slips and lapses occur when a person knows what they want to do, but the action does not turn out as they intended. Slips relate to observable actions and are associated with attentional failures, whereas lapses are internal events and associated with failures of memory. Slips and lapses occur during the largely automatic performance of some routine task, usually in familiar surroundings. They are almost invariably associated with some form of distraction, either from the person’s surroundings or from their own preoccupation with something else.

A trainee doctor working on a surgical ward is prescribing an antibiotic for a patient after a ward round. Just as she opens the patient’s drug chart on the computer, a nurse interrupts because he is concerned about a patient with very low blood pressure. The doctor goes with the nurse, forgetting to complete the prescription. Other tasks follow, there is a substantial delay in the delivery of the antibiotic, and the patient becomes profoundly septic.

3.2 Mistakes

Slips and lapses are errors of action; you intend to do something, but it does not go according to plan. With mistakes, the actions may go entirely as planned but the plan itself deviates from some adequate path towards its intended goal. Here the failure lies at a higher level: with the mental processes involved in planning, formulating intentions, judging, and problem solving [4]. If a doctor treats someone with chest pain as if they have a myocardial infarction, when in fact they have a perforated gastric ulcer, then this is a mistake. The intention is clear, the action corresponds with the intention, but the plan was wrong.

Rule-based mistakes occur when the person already knows some rule or procedure, acquired as the result of training or experience. Rule-based mistakes may occur through applying the wrong rule, such as treating someone for influenza when you should follow the guidelines for meningococcal sepsis. Alternatively, the mistake may occur because the procedure itself is faulty (deficient clinical guidelines for instance).

A swab is inadvertently left in a wound after surgery because the standard operating procedure for counting swabs is not followed properly. (Misapplication of a good rule)

A patient is transferred from one site to another with inadequate medical assistance and monitoring. (Application of a bad rule: the standard operating procedure for the safe transfer of patients is poorly designed and difficult to understand, the patient is inappropriately deemed fit for low dependency transport)

Knowledge-based mistakes occur in novel situations where the solution to a problem has to be worked out on the spot. For instance, a doctor may simply be unfamiliar with the clinical presentation of a particular disease, or there may be multiple diagnostic possibilities and no clear way of choosing between them; a surgeon may have to guess at the source of the bleeding and make an understandable mistake in their assessment in the face of considerable stress and uncertainty. In none of these cases does the clinician have a good ‘mental model’ of what is happening on which to base their decisions, still less a specific rule or procedure to follow.

In knowledge-based mistakes, the situations encountered have not been recognised or planned for, and the response relies on the cognitively effortful and error-prone processes of reasoning:

A patient deteriorates rapidly after extubation on intensive care and the endotracheal tube cannot be repositioned in the usual way (via the mouth or nose). The team involved has not faced such a challenging situation before and the opportunity to site a surgical airway (tracheostomy) at an early stage is missed. The challenges of making decisions about the choice of airway are compounded by the high levels of stress in this situation.

3.3 Violations

Errors are, by definition, unintended in the sense that we do not want to make errors. Violations, in contrast, are deliberate deviations from safe operating practices, procedures, standards, or rules. This is not to say that people intend that there should be a bad outcome, as when someone deliberately sabotages a piece of equipment; usually, people hope that the violation of procedures won’t matter on this occasion or will actually help get the job done. Violations differ from errors in several important ways. Whereas errors are primarily due to our human limitations in thinking and remembering, violations are more closely linked with attitudes, motivation, and the work environment. The social context of violations is very important and understanding them, and if necessary curbing them, requires attention to the culture of the wider organisation as well as the attitudes of the people concerned.

Reason distinguishes three types of violations:

  • A routine violation is basically cutting corners for one reason or another, perhaps to save time or simply to get on to another more urgent task.

  • A necessary violation occurs when a person flouts a rule because it seems the only way to get the job done. For example, a nurse may need to give a drug that should be double-checked by another nurse when no one else is available. The nurse will probably give the drug, knowingly violating procedure, but hoping that this is in the patient’s interest.

  • An optimising violation is carried out for personal gain, sometimes just to get off work early or, more sinisterly, to alleviate boredom, ‘for kicks’. Think of a trainee surgeon carrying out a difficult operation in the middle of the night, without supervision, when the case could easily wait until morning. The motivation is partly to gain experience and to test oneself, but there may be a strong element of the excitement of sailing close to the wind in defiance of the senior surgeon’s instructions.

In practice, the distinction between slips, mistakes, and violations is not always clear, either to an observer or to the person concerned. The relationship between the observed behaviour, which can be easily described, and the underlying psychological mechanism is often hard to discern. Giving the wrong drug might be a slip (attention wandered and the doctor picked up the wrong syringe), a mistake (a misunderstanding about the drug to be given), or even a violation (deliberate over-sedation of a difficult patient). The concepts are not easy to put into practice, except in circumstances where the action, context, and personal characteristics of those involved can be quite carefully explored.

4 Understanding the Influence of the Wider System

Human beings have the opportunity to contribute to accidents and clinical incidents at many different points in the process of production and operation. Problems and failures may occur in the design, testing, and implementation of a new system, as well as in its maintenance and operation. The most obvious errors and failures are usually those that are the immediate causes of an accident, such as a train driver going through a red light or a doctor picking up the wrong syringe and injecting a fatal drug.

The immediate causes described above are the result of actions, or omissions, by people at the scene. However, other factors further back in the causal chain can also play a part in the genesis of an accident or a serious clinical incident. These ‘latent conditions’ lay the foundations for accidents in the sense that they create the conditions in which errors and failures can occur [5]. This places the operators at the sharp end in an invidious position as James Reason eloquently explains:

Rather than being the instigators of an accident, operators tend to be the inheritors of system defects… their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking [4].

The organisational accident model applies this perspective to the study and analysis of accidents in many complex industries [5]. The accident sequence begins (reading Fig. 3.1 from the left) with the negative consequences of organisational processes, such as planning, scheduling, forecasting, design, maintenance, strategy, and policy. The latent conditions so created are transmitted along various organisational and departmental pathways to the workplace (the operating theatre, the ward, etc.), where they create the local conditions that promote the commission of errors and violations (e.g. high workload or poor human–equipment interfaces). Many unsafe acts are likely to be committed, but very few of them will penetrate the defences to produce damaging outcomes. The fact that engineered safety features, such as alarms or standard procedures, can be deficient due to latent conditions as well as active failures is shown in Fig. 3.1 by the arrow connecting organisational processes directly to defences.

Fig. 3.1
figure 1

Organisational accident model from Vincent [6]

The model presents the people at the sharp end as the inheritors rather than the instigators of an accident sequence. Reason points out that this may seem simply to shift the ‘blame’ for accidents from the sharp end to the system managers. However, managers too are operating in a complex environment and the effects of their actions are not always apparent; they are no more, and no less, to blame than those at the sharp end of the clinical environment [7]. Reason also describes the human as the hero in complex work environments, where errors are noticed and corrected, and accidents prevented, far more frequently than they are missed [8].

We should emphasise that not every slip, lapse, or mistake needs to be understood in terms of the full organisational framework; some errors are confined to the local context and can be largely explained by individual factors and the characteristics of the particular task at hand. However, major incidents almost always evolve over time, involve a number of people and a considerable number of contributory factors; in these circumstances the organisational model proves very illuminating.

5 Contributory Factors: Seven Levels of Safety

Reason’s model has been extended and adapted for use in healthcare settings, classifying the error-producing conditions and organisational factors in a single broad framework of factors affecting clinical practice (see Table 3.1).

Table 3.1 Framework of contributory factors influencing clinical practice (from Vincent et al. [9])

At the top of the framework are patient factors. In any clinical situation, the patient’s condition will have the most direct influence on practice and outcome. Other patient factors such as personality, language, and psychological problems may also be important as they can influence communication with staff. Task factors, such as the design of the task and the availability and clarity of protocols and guidelines, may influence the care process and affect the quality of care. Individual factors include the knowledge, skills, and experience of each member of staff, which will obviously affect their clinical practice. Each staff member is part of a team within the inpatient or community unit, and part of the wider organisation of the hospital, primary care, or mental health service. The way an individual practises, and their impact on the patient, is constrained and influenced by other members of the team and by the way they communicate, support, and supervise each other. The team is influenced in turn by management actions and by decisions made at a higher level in the organisation. These include policies for the use of locum or agency staff, continuing education, training and supervision, and the availability of equipment and supplies. The organisation itself is affected by the institutional context, including financial constraints, external regulation, and the broader economic and political climate.

6 Putting It All Together: Illustration of Two Cases from an Acute Care Setting

Cases and clinical stories have always been used in medical schools and clinical practice as a means of education and reflection on the nature of disease. The presentation of a case of diabetes, for instance, will illuminate understanding of the evolution of the disease, potential complications, and impact on the patient and their family. Cases can also be used to illustrate the process of clinical decision-making, the weighing of treatment options and sometimes, particularly when errors are discussed, the personal impact of incidents and mishaps. Incident analysis, for the purposes of improving the safety of healthcare, may encompass all of these perspectives but critically also includes reflection on the broader healthcare system.

We now take the concepts described above and apply them to clinical practice to show how chains of errors can combine to cause harm to patients. We also examine the role of the wider organisation by considering the various factors that contribute to the likelihood of an error and harm to a patient. We consider two illustrative cases of common presentations in acute hospital settings. The first evolved over several days and the second over a much shorter time frame (hours). In each case, we see a chain of errors and other problems in the process of care which combine to cause harm to the patient. We also, importantly, see how working conditions and wider organisational issues impact on clinical work and how vulnerabilities in the healthcare system pose major risks to patients.

Box 3.1: An Avoidable Patient Fall

Day 1

An 88-year-old man was brought to the emergency department (ED) in the early afternoon by his wife and daughter. He had been becoming increasingly confused at home and was not taking care of himself as he normally would. His past medical history included chronic obstructive pulmonary disease, aortic valve replacement for stenosis, a laminectomy for sciatic nerve decompression, and benign prostatic hypertrophy. His presenting complaint was worsening confusion and hallucinations, disturbed sleep, poor appetite, and increased shortness of breath.

He was clerked in by a trainee doctor at 16:20 and seen by a consultant physician at 17:15, when a provisional diagnosis of sepsis of unknown origin was made. A bed was found on a medical ward (MW) and he was transferred from ED at 21:00.

A falls risk assessment was undertaken in ED and he was found to be at high risk; unfortunately, no falls action plan was made and the level of risk was not adequately handed over to the staff on MW. The family spoke to members of staff in ED and on MW about their concerns that the patient might fall and injure himself, particularly as the bed on MW was in a bay at the end of the ward where the patient would not be easy to observe.

The ward was busy; although it was staffed to agreed levels, the dependency of the patients was high. The nurse looking after this patient decided that he was settled and did not need 1:1 care but asked the care support worker (CSW) to review him regularly. The patient was being cared for on a bed with side rails (not recommended for high-risk patients, as confused patients can become entangled in the rails) and not on a low-level bed with “crash mattresses” on either side, as recommended for patients at risk of falling.

At approximately 21:45, the patient was found on the floor beside the bed, having fallen. He was confused and complaining of pain in the right hip and thigh. He was reviewed by the trainee doctor on call, whose note read (sic):

Asked to see patient as unwitnessed fall, found by nursing staff alert but very confused, admitted with confusion and urinary tract infection. Plan for ECG, review of right hip in the morning for development of swelling/bruising, close observation to prevent further falls, day team to consider if further imaging is required.

The patient was moved to a bay where he could be closely observed, the ECG was reviewed (nothing acute was seen), and the nursing notes recorded an otherwise uneventful night with no obvious pain.

Day 2

The morning ward round was conducted by a different trainee doctor. The speech and language therapists came to review the patient and decided that he was too drowsy and confused to take fluid safely by mouth, so the intravenous infusion should continue. The trainee doctor decided that an X-ray of the right hip should be done but requested it as a routine investigation and it was not, therefore, prioritised. The handover to the trainee doctor on call that night mentioned that the X-ray had not been done and that it needed ‘chasing’.

Day 3

A different trainee doctor undertook the ward round and noted the concerns raised in the nursing notes about bruising around the right knee, but the patient also had low blood pressure requiring closer monitoring and a fluid challenge. By 13:15, the X-ray had still not been done and the trainee doctor called the radiology department. At 16:00, the trainee doctor was called by the radiologist to report a hip fracture and to suggest an urgent referral to the trauma surgeons.

While this patient was successfully treated for his hip fracture and returned home, the fall he sustained led to unnecessary pain and a protracted recovery, and added to the concern felt by his family.

6.1 Case 1: An Avoidable Patient Fall

Box 3.1 provides an overview of the events leading up to an avoidable fall on a medical ward. This 88-year-old man had multiple health problems and was admitted in a confused and distressed state. He fell while in hospital with long-term consequences for his mobility and quality of life. We could easily see his fall as simply being the consequence of his frail condition and not the fault of healthcare staff. However, whether or not we regard anyone as being at fault, this story exposes some vulnerabilities in the healthcare system.

Following the outline of events above, we can identify a series of problems in the care provided and a number of wider contributory factors. Figure 3.2 provides a summary of the key error points during this patient’s admission to hospital and includes error types and contributory factors. The contributory factors in the evolution of this incident were a mixture of system, organisational, work environment, and team factors—the kind of issues seen in most healthcare adverse events (these are categorised according to the London Protocol in Table 3.1).

Fig. 3.2
figure 2

Error chain describing key error points leading to an avoidable fall and a delay in diagnosis of hip fracture. Contributory factors (from the London Protocol) are highlighted and colour coded according to type

An elderly patient with sepsis is difficult to assess because of multiple comorbidities and the difficulties of communicating with someone who is confused. The emergency department and the ward were also very busy, reducing the time available for assessment. Nevertheless, we can identify the following problems or ‘error points’ in the sequence of care:

  • Every adult over 65 years admitted to an acute hospital in the NHS should receive a falls risk assessment, but in this case the process was not completed properly. This patient was assessed for falls risk and was categorised (appropriately) as ‘high risk’, but no plan to reduce the risk was put in place and the information was not clearly handed over by the ED nurse to the nurse on MW.

  • Although at high risk of falling, the patient was placed in a bay which was difficult to observe and was not kept under close observation. The care support worker allocated to the bay was busy with someone else when this patient attempted to get out of bed and fell.

  • The trainee doctor on call on the night of the fall made an appropriate assessment of the patient but did not hand over his concerns about the risk of fracture adequately.

  • On Day 3, the patient had an additional problem (low blood pressure). Yet another trainee doctor (without senior assistance) reviewed the patient but was distracted by the low blood pressure and did not prioritise the investigation of the hip.

These are the principal error points (active failures, in Reason’s terms) in the care of this man that contributed both to the fall and to the delayed diagnosis of the fracture. We can also look at the wide range of factors that contributed to these problems occurring (Table 3.2). These included: the frailty and confusion of the patient, which made assessment difficult; the inconsistent methods for monitoring and recording falls; the inexperience of the junior doctors; the lack of systematic handover; the lower nurse-to-patient ratio at night; and the fact that other elderly patients required a high level of support from the nurses on duty.

Table 3.2 Contributory factors in a case of avoidable fall (from the London Protocol)

Box 3.2: An Avoidable Emergency Laparotomy in a Case of Ectopic Pregnancy

A 28-year-old woman with abdominal pain and lethargy arrived in the busy emergency department (ED) at 16:19 and was seen by a triage nurse, who recorded some baseline observations and referred the patient to the ED trainee doctor, stating that she was “not worried” about the patient. The protocol for the investigation and management of early pregnancy in ED was inadequate, and there was a delay in sending the necessary blood samples for diagnosis. The track and trigger score was incorrectly calculated and follow-up observations (of heart rate and blood pressure) were, therefore, not increased in frequency, resulting in a delay in calling for an expert opinion from a gynaecologist. The ED trainee doctor did not recognise the urgency of the situation and, when the referral was made to gynaecology, the handover did not adequately emphasise the seriousness of the situation. The trainee gynaecologist, therefore, advised that the patient be sent to the gynaecology ward for further assessment without coming to ED to see the patient.

When the patient arrived on the ward, the senior trainee gynaecologist diagnosed an ectopic pregnancy and recognised that the patient’s condition was deteriorating (her haemoglobin had dropped significantly to 99 g/L, her blood pressure was falling, and she was now complaining of shoulder tip pain). The decision was made to take the patient to theatre for emergency laparoscopic surgery and because it was now after 18:00, theatres in the main hospital were informed and the case was booked with the on-call anaesthetist. Audits had revealed that very few gynaecological emergencies came to theatre after normal working hours and consequently gynaecological patients were transferred to main theatres out of hours.

When the consultant surgeon was called (there was a 30-min delay in locating him), he agreed to come in and assist with the procedure. The patient arrived in theatre 5 h after the initial presentation with a very low blood pressure and a haemoglobin of 67 g/L. The WHO pre-list briefing was completed without the consultant gynaecologist, who did not arrive until after the ‘time out’ section of the WHO checklist, when the patient was already anaesthetised and being prepared for surgery by the senior trainee gynaecologist.

At this time, the patient was extremely unwell and there was significantly heightened pressure to get on with the procedure. Tensions were high and, when problems arose with the laparoscopy equipment (an accidentally de-sterilised light source and diathermy forceps which were incompatible with the electrical lead), behaviour deteriorated and exacerbated the stress felt by staff in theatre. The delays caused by the equipment problems necessitated a decision to convert to an open procedure, which the consultant made promptly in order to gain control of the bleeding. Once the haemorrhage was controlled and additional blood products were given, the operation to remove the fallopian tube was completed uneventfully and the patient was stabilised and transferred to recovery with no further complications.

This case is similar to the one described above in that it contains the same types of contributory factors and errors leading to the eventual adverse event. The patient recovered well but had a longer stay in hospital because the procedure was converted to a more invasive surgical approach.

6.2 Case 2: An Avoidable Emergency Laparotomy in a Case of Ectopic Pregnancy

Box 3.2 provides an overview of events leading up to conversion to emergency laparotomy in a young woman with an ectopic pregnancy. The case resonates with the fall described above in the sense that it would be easy to attribute the delayed diagnosis and treatment to the patient’s youth: her cardiovascular system was able to mask the signs of shock and so medical staff did not suspect haemorrhage. It is only when we take a more holistic view of the incident that we see the latent system and organisational issues, which are summarised in Fig. 3.3 along with the error types.

Fig. 3.3
figure 3

Error chain describing key error points in a case of emergency laparotomy for ectopic pregnancy. Contributory factors (from the London Protocol) are highlighted and colour coded according to type

Diagnostic challenges are a part of every medical student’s training and this case illustrates a well-recognised situation where haemorrhage is masked by the robust response of a healthy cardiovascular system. However, what is not commonly taught in medical school curricula is the risk of missing diagnoses due to distraction and system failures. This young woman’s case illustrates those problems very well:

  • The nurse in ED was using a poorly designed protocol for early pregnancy which did not stress the importance of urgent blood samples.

  • The trainee doctor had limited experience, was busy with other cases, and was influenced by the nurse’s lack of concern. He therefore did not request an urgent review of the patient.

  • Staffing problems in the hospital meant that emergency gynaecology cases after 18:00 had to be taken to the main theatres, and the transfer time from the gynaecology ward was 20 min. Furthermore, no training was offered to support staff in acclimatising to the different work environment they would be working in after hours.

  • The WHO checklist was not used adequately, which led to a lack of understanding of what type of equipment would be available and no opportunity for a discussion of potential problems and their mitigations.

  • The gynaecologists were not used to the scrub staff, the theatre environment, or the equipment, and when the situation became stressful the team did not function effectively and had to perform a more invasive operation to control the bleeding.

These are the principal error points leading to the emergency conversion to laparotomy in what could have been a more straightforward laparoscopic procedure. The heightened stress in this situation further impaired team function, but the ‘upstream’ delays in diagnosis, the staff shortages, the physical location of the ward and theatres, and the organisation of the gynaecology service out of hours all contributed to the ultimate crisis (see Table 3.3 for a detailed categorisation of contributory factors).

Table 3.3 Contributory factors to a gynaecological emergency

7 Conducting Your Own Incident Investigation

There are a number of methods of investigation and analysis available in healthcare, though these tend to be under-developed in comparison with methods available in industry [10]. In the USA, the most familiar is the root cause analysis approach of the Joint Commission, an intensive process with its origins in Total Quality Management approaches to healthcare improvement [11]. The Veterans Health Administration has developed a highly structured system of triage questions which is being disseminated throughout its system. We do not have space to examine all potential methods, which vary in their orientation, theoretical basis, and basic approach. All, however, to a greater or lesser extent, uncover factors contributing to the final incident. We will summarise an approach developed at University College London by the Clinical Safety Research Unit, known, imaginatively, as the London Protocol [12].

Most other approaches to analysing incidents in healthcare are termed ‘root cause analysis’; in contrast, we have described our own approach as a systems analysis, as we believe this is a more accurate and more fruitful description. The term root cause analysis, while widespread, is misleading in a number of respects [13, 14]. Most importantly, it implies that the purpose of an investigation is to identify a single ‘root cause’, or a small number of them. If you look back at the two case examples, however, you will see that there is no single ‘root cause’. Our analyses have shown a much more fluid and complex picture. Usually, there is a chain of events and a wide variety of contributory factors leading up to the eventual incident. Incident analysis, properly understood, is not a retrospective search for root causes but an attempt to use the incident as a ‘window on the system’ to reveal the vulnerabilities and hazards that are constant threats to patient care.

Too often the questions asked about an incident focus on “who?” rather than “how?” with the result that individuals rather than systems are targeted and blamed. High reliability organisations have recognised the need to move away from a culture of blame, which leads to reluctance to report incidents, and have developed a just culture where learning from incidents (including near misses) is encouraged and expected. The paradigm shift in these organisations is outlined in Table 3.4 but, unfortunately, is not yet well developed in healthcare [15].

Table 3.4 Critical incident paradigms (adapted from Woods et al. [15])

8 Systems Analysis of Clinical Incidents

During an investigation, information is gleaned from a variety of sources. Case records, statements, and any other relevant documentation are reviewed. Structured interviews with key members of staff are then undertaken to establish the chronology of events, the main care delivery problems, and their respective contributory factors, as perceived by each member of staff. Ideally, the patient, or a member of their family, should also be interviewed, though this is not yet common practice in these analyses. The key questions are: ‘What happened?’ (the outcome and chronology); ‘How did it happen?’ (the errors and care delivery problems); and ‘Why did it happen?’ (the contributory factors).

Once the chronology of events is clear there are three main considerations: the errors and other care delivery problems identified within the chronology, the clinical context for each of them, and the factors contributing to the occurrence of the care delivery problems. Any combination of contributory factors might contribute to the occurrence of a single care delivery problem. The investigator needs to differentiate between those contributory factors that are only relevant on that particular occasion and those which are longstanding or permanent features of the unit. For instance, there may be a failure of communication between two midwives which might be an isolated occurrence or might reflect a more general pattern of poor communication on the unit.

While a considerable amount of information can be gleaned from written records, interviews with those involved are the most important method of identifying the contributory factors. This is especially so if the interview systematically explores these factors and so allows the member of staff to collaborate in the investigation. In the interview, the story and ‘the facts’ are just the first stage. The staff member is also encouraged to identify both the successful aspects of the care provided and the errors and care delivery problems. The staff member and the interviewer can then reflect together on the contributory factors, which greatly enriches both the interview and the investigation.

Analyses using this method have been conducted in hospitals, primary care settings, and mental health units. The protocol may be used in a variety of formats by individual clinicians, researchers, risk managers, and clinical teams. A clinical team may use the method to guide and structure reflection on an incident, to ensure that the analysis is full and comprehensive. For serious incidents, a team of individuals with different skills and backgrounds would be assembled, though often only a risk manager or an individual clinician will be needed. The contributory factors that reflect more general problems in a unit are the targets for change and systems improvement. When obvious problems are identified, action may be taken after a single incident, but when more substantial changes are being considered, other incident analyses and sources of data (routine audits and outcome data) should also be taken into account.

8.1 From Analysis to Meaningful Action

When considering the error type in the context of the contributory factors at the time of the error, it becomes clearer how meaningful interventions might be made to prevent similar incidents in future. Sometimes incident investigations point to immediate changes that need to be made, such as replacement of faulty equipment or updating of misleading or inconsistent guidelines. Generally, however, we should not generate plans for major interventions on the basis of a single incident but should draw on a wider range of information and check that the findings of the incident are really indicative of more widespread problems. We can nevertheless consider the kinds of intervention that might be made on the basis of our analyses of the two cases.

For example, in the first case there were several rule-based mistakes. The protocol for falls assessment and prevention was not used adequately by the nurses. Some important contributory factors were the inconsistencies in falls risk assessment and recording, and also the staffing shortages at critical times. These suggest potential interventions:

  • A review of staffing levels and consideration of different working patterns to cover busy times more effectively could help

  • Standardising the way falls risk assessments are recorded across all clinical areas (the use of electronic patient records can help here)

The second analysis reveals a rather different range of problems and contributory factors and, correspondingly, different types of potential interventions. Undertaking an emergency laparoscopy is not an unusual occurrence in gynaecology, but the knowledge-based mistake leading to conversion to an open procedure can be better understood when we realise that the staff were unfamiliar with each other, the equipment, and the environment; that the WHO checklist was completed in a hurry and without the consultant surgeon present; and that the staff had not previously trained as a team to deal with crisis situations. Potential interventions, therefore, might be:

  • Scrub staff from gynaecology theatres could work on a rotational basis in the main theatres to ensure they are familiar with the environment and equipment, and equipment could be standardised across sites

  • Training to embed good practice in the use of the WHO checklist for theatre teams

  • Regular simulation training to support staff in the management of emergencies

The design and implementation of realistic and sustainable interventions to prevent incidents recurring is a topic outside the scope of this chapter. Suffice it to say that, where possible, a physical rather than a procedural intervention is more likely to succeed (e.g. the design of a device to prevent retention of guidewires after the insertion of a central venous line rather than a change to the procedure requiring additional checks to be made). However, in a financially constrained health service, physical interventions may sometimes be prohibitively expensive, and well-designed checklists, with training to support embedding them in practice, may be the best compromise [16].

9 Supporting Patients, Families, and Staff

In this chapter, we have focussed on understanding how error and harm occur and offered models of understanding and practical approaches to investigation. We have hopefully persuaded you that understanding the wider psychological and organisational influences on clinical practice will enrich your approach to medicine and provide a foundation for improving the care provided to patients. The chapter would be incomplete however if we did not mention, if only briefly, the need to also consider the aftermath of serious errors and the needs of those affected [17].

The impact of a medical injury differs from most other accidents in two important respects. First, patients have been harmed, unintentionally, by people in whom they placed considerable trust, so their reaction may be especially powerful and hard to cope with. Second, and even more importantly, they are often cared for by the same professions, and perhaps the same people, as those involved in the original injury. They may have been very frightened by what has happened to them and have a range of conflicting feelings about those involved; this too can be very difficult, even when staff are sympathetic and supportive. Many people harmed by their treatment suffer further trauma through the incident being insensitively and inadequately handled. Conversely, when staff come forward, acknowledge the damage, and take the necessary action, the overall impact can be greatly reduced.

In our two examples, the patients eventually recovered, although both experienced much unnecessary anxiety and suffering in the process. However, the long-term consequences of some serious incidents can be life changing in terms of pain, disability, and the effect on family relationships and the ability to work. Patients and families need support immediately after a serious incident and sometimes over long periods afterwards. The healthcare organisation concerned has a responsibility to provide or arrange for this care. Injured patients need an explanation, an apology, to know that changes have been made to prevent future incidents, and often also practical and financial help. The absence of any of these factors can be a powerful stimulus to complaint or litigation.

Staff also suffer a variety of consequences when involved in serious incidents. Albert Wu captured the experience of making a serious error in his paper ‘The second victim’, without implying that the experiences of staff are necessarily comparable to those of injured patients [18]. Surgeons, for instance, can be seriously affected by serious complications that they perceive to have been their fault. Emotional reactions range from guilt and a crisis of confidence to anger and worry about one’s career. Even though the intense emotional impact progressively fades, there are certain cases that surgeons recollect many years later. Serious complications often make surgeons more conservative or risk-averse in the management of patients, which can be detrimental to patient care [19].

10 Conclusions and Recommendations

It is an unfortunate truth that the prevailing culture around serious incidents in healthcare remains one of blame. When a serious incident occurs, the first priority is obviously the care of the patient and family. The second priority however should be supporting colleagues and not rushing to blame or condemn people who make serious mistakes. Some types of behaviour deserve blame and sanctions, but even the best people make honest mistakes. When this happens, they need support from both colleagues and their organisation both for their own well-being and for the sake of all the patients they will be looking after in the future.

High reliability organisations have spent decades developing robust, standardised systems of investigating incidents, including the establishment of truly independent expert investigative bodies (such as the UK’s Air Accidents Investigation Branch, https://www.gov.uk/government/organisations/air-accidents-investigation-branch). Healthcare has learnt some of these lessons, and in April 2017 the Healthcare Safety Investigation Branch was established in the NHS (https://www.hsib.org.uk) with the stated purpose of ‘improving patient safety through effective and independent investigations that don’t apportion blame or liability’. Its work has only just begun but will draw on existing expertise in the NHS to capture the widely shared ambition of learning from the past to improve the future.

Some branches of medicine, most notably anaesthesia, have been at the forefront of developments in patient safety [20, 21]. Human factors is a core theme throughout the postgraduate curricula for anaesthesia training and quick reference handbooks (much like those in the military or civil aviation) have been developed as cognitive aids for diagnostic challenges particularly in crises (https://anaesthetists.org/Home/Resources-publications/Safety-alerts/Anaesthesia-emergencies/Quick-Reference-Handbook). These developments in postgraduate specialty curricula must be extended to undergraduate teaching in medical and nursing schools. It is only by ensuring that young professionals in healthcare are equipped with the necessary tools to understand the complex, rapidly evolving systems in which they will be working, that they will be able to improve them [22].