1 Introduction

Antibiotic resistance (ABR) has been described as a “slowly emerging disaster” (Viens and Littmann 2015). The risk of acquired ABR was recognised before the first antibiotics were widely used and its dire consequences have been understood by experts for many years, but the magnitude of the threat and the urgent need for radical solutions to limit its impact have been widely acknowledged only recently. In this chapter, we argue that it is not too late to mitigate the disaster and check its progress, at least in hospital settings, if certain contributory factors are acknowledged and addressed without delay. These factors, we suggest, include naïve optimism, ignorance, hubris and nihilism on the part of pharmaceutical companies, healthcare professionals (mainly prescribing doctors) and health administrators, among others.

By the first half of the twentieth century, improvements in living conditions in industrialised countries had contributed to rapidly falling infectious disease mortality (Armstrong et al. 1999). At the same time, surgical antisepsis, clean wards, hand washing by clinicians and skilled nursing care (Larson 1989; Gill and Gill 2005) had diminished the risk of death in hospitals from puerperal or postoperative sepsis (Gawande 2012). But the remedies then used for serious infections, such as bleeding, purging or toxic infusions of arsenic, mercury or opiates, probably hastened death more often than they postponed it (Funk et al. 2009).

Antibiotics changed all that. From the beginning they were hailed as miracle drugs. Doctors embraced their use, not only to cure once life-threatening diseases, but also, because they seemed so free of adverse effects, to treat minor infections or even a perceived risk of infection. But, as early as 1945, Alexander Fleming, who shared a Nobel Prize for the discovery of penicillin, warned: “…the public will demand [penicillin]…then will begin an era…of abuses. The microbes are educated to resist penicillin and a host of penicillin-fast organisms is bred out which can be passed to other individuals… In such a case the thoughtless person playing with penicillin treatment is morally responsible for the death of the man who finally succumbs to infection with the penicillin-resistant organism.” (A. Fleming, 1945, quoted in Bartlett et al. 2013). And, indeed, within a few years most hospital isolates of previously susceptible Staphylococcus aureus were penicillin resistant (Barber and Rozwadowska-Dowzenko 1948).

Although they recognised that overuse would promote resistance, pharmaceutical companies aggressively promoted the benefits and safety of antibiotics to doctors and directly to the public. And, in the 1950s, they recognised an even larger market when Thomas Jukes, at Lederle Laboratories, demonstrated the growth-promoting effect of antibiotics in food-producing animals: “Animals receiving 10 ppm of chlortetracycline in the diet developed resistance in their intestinal bacteria in less than two days…[but] their growth rate increased…we concluded that if [resistant pathogens] appeared…usefulness of the antibiotic supplement would vanish and farmers would stop feeding it…The [company] decision was strongly opposed by the veterinarians at Lederle, but their wishes were abruptly denied by Dr. Malcolm [Lederle President], who made an individual decision to go ahead. Competition was right on our heels…” (Jukes 1985). And it soon caught up: “The power of the calliope in the antibiotic bandwagon increased spectacularly during the [1950s] while poultry, pigs, and patients danced to its tune.” (T. D. Luckey, 1959, quoted in Podolsky 2017).

By the mid-twentieth century, it was predicted that antibiotics (and vaccines) would put an end to infectious diseases. “[T]he belief that infectious diseases had been successfully overcome was pervasive in biomedical circles - including … a Nobel Laureate, medical Dean, and other thought leaders - from as early as 1948…” (Spellberg and Taylor-Blake 2013). It was widely assumed that if bacteria developed resistance to one drug, as they often did, there would soon be better ones to replace it; and, for many years, there were. Indeed, there was such confidence that infections could be easily cured that preventing them became a lower priority. But by the 1970s there was mounting disquiet about the emergence of ABR. Methicillin-resistant S. aureus (MRSA) (Jevons et al. 1963) and transmissible resistance in Gram-negative bacteria (Datta and Pridie 1960) had emerged in the 1960s and their prevalence was increasing, especially in hospitals (Chambers 2001; Aminov 2010; Ventola 2015). The first International Conference on Nosocomial Infections, in 1970, was followed by the Study on the Efficacy of Nosocomial Infection Control (SENIC) in the USA (Forder 2007); in Australia, the handbook of “Antibiotic Guidelines” was published for the first time in 1978 (Harvey et al. 2003). Anxiety that antibiotics would progressively lose efficacy became more acute in the 1980s, when the seemingly unlimited flow of new antibiotics slowed to a trickle and pharmaceutical companies turned their attention to more profitable projects.

Antibiotic use is no longer regarded as an unquestioned good. Antimicrobial resistance (AMR) is now seen as a threat to global health security and the “end of the antibiotic era” predicted; it is broadly accepted that urgent measures are needed to salvage what we can of the “antibiotic miracle”: more prudent use of existing antimicrobial agents; novel strategies to promote development of new ones; and greater attention to preventing the spread of drug-resistant organisms (WHO 2012; CDC 2013).

2 Hospital Infection in the “Pre-Antibiotic Era”

In nineteenth century Europe, medical science advanced rapidly; there was increasing demand for new hospitals, where university-trained doctors could develop and experiment with new remedies and advance their knowledge. Anaesthetics increased the scope of surgery (Gawande 2012) and pregnant women were more likely to be admitted to ‘lying-in’ hospitals, where advances in obstetrics offered relief from excessively long, difficult labour (Loudon 1986). But hospitals were overcrowded, dirty and poorly ventilated; infectious disease outbreaks were common and maternal mortality from childbed fever much higher in hospitals than in the community.

Alexander Gordon, an Aberdeen physician, had recognised puerperal fever, in 1795, as a “specific contagion or infection” that could be carried between parturient women on the hands of their attendants (Gordon 1795), but his work was ignored until the 1840s, when James Young Simpson, in Edinburgh (Selwyn 1965), Oliver Wendell Holmes, in Boston (Holmes 2001), and Ignaz Semmelweis, in Vienna (Nuland 1979; Carter 1981), independently came to similar conclusions. Simpson also recognised that puerperal and surgical fevers were “intercommunicable” and coined the term hospitalism to describe outbreaks of surgical infection, which he believed were so serious that “…every patient placed upon an operating table … is in … greater danger than a soldier entering one of the bloodiest and most fatal battlefields” (J. Y. Simpson, 1859, quoted in Selwyn 1965).

In Vienna, Semmelweis was troubled by the much higher maternal mortality in a clinic staffed by doctors and medical students than in an otherwise similar clinic staffed by midwives. After months of investigation, he realised that the only significant difference between the clinics was that, unlike the midwives, the students and doctors frequented the mortuary, dissecting the bodies of women who had died from childbed fever. When they returned to the clinic, they carried on their hands “cadaver particles which are not entirely removed by the ordinary method of washing the hands with soap....[Therefore] the hands of the examiner must be cleansed with chlorine, not only after handling cadavers, but likewise after examining patients” (Semmelweis 1983). After he enforced this strict hand washing regime, the maternal mortality in the doctors’ clinic rapidly fell to a level similar to that in the midwives’ clinic (Nuland 1979). Despite the evidence, Semmelweis’ findings, which came 20 years before Pasteur’s germ theory of disease, lacked a conceptual basis and were largely rejected by his peers. His accusation that anyone who did not follow his recommendations was a ‘murderer’ undoubtedly contributed to their antagonism (Pittet 2004; Saint et al. 2010a).

During the Crimean War, in 1854, the British public was scandalised by a newspaper report of deplorable conditions in the British Army hospital at Scutari. When Florence Nightingale was sent to investigate, she found vermin- and lice-infested wards, excreta on walls and floors, injured soldiers in dirty, bloodstained clothes and frequent infectious disease outbreaks. Nightingale believed that disease was caused by filth and foul air (miasmas); she overcame bitter opposition from the military surgeons and formidable logistic barriers to implement a strict regime - immediate wound care; clean dressings, clothes and bedding; nutritious food; and regular cleaning and ventilation of wards. Her meticulous records showed that soldiers were far more likely to die from preventable infection than from war wounds. In the January–March quarter of 1855, the mortality at Scutari was 33%; by the July–September quarter it was 2%. While critics have belittled her achievements, her methods remain the basis of good nursing care and hospital infection control (Larson 1989; Gill and Gill 2005).

In the 1860s, Joseph Lister’s knowledge of Pasteur’s germ theory informed his belief that the almost inevitable (and often fatal) suppuration that complicated compound fractures and amputations was due to “minute organisms suspended in [the atmosphere], which owed their energy to their vitality” (Lister 1867). By liberal use of carbolic acid to soak wound dressings and disinfect his hands, instruments and the operative site, he was able to manage most compound fractures without amputation and the post-amputation mortality fell from 46% (16/35) in 1864–6 to 15% (6/40) in 1867–9 (Newsom 2003). Many of his compatriots ridiculed his methods, but they were gradually accepted, particularly in Europe. His acknowledged place as the “father of antiseptic surgery” owes much to its basis in the germ theory, which gave it an authority that Semmelweis’ earlier work lacked.

As antisepsis and later asepsis, environmental hygiene and skilled nursing care were gradually incorporated into hospital practice, hospital infection rates fell and hospitals became places of healing rather than dying.

3 The Antibiotic Era

Acquired antibiotic resistance (ABR) in bacterial pathogens that affect humans is mainly due to nearly 75 years of both appropriate and inappropriate antibiotic use in human medicine and to dissemination of resistant organisms. Antibiotic use in agriculture and veterinary practice and environmental contamination are also implicated, but their contributions are contested and vary from region to region (Landers et al. 2012; Marshall and Levy 2011; Chang et al. 2015). The dynamics are complex and incompletely understood (Turnidge and Christiansen 2005) but, in general, exposure of bacteria to antibiotics exerts powerful selection pressure; the greater the total amount and the broader the antibacterial spectra of antibiotics used in any setting, such as a hospital (Willemsen et al. 2009; Xu et al. 2013) or community (Goossens et al. 2005; Bell et al. 2014), the higher the prevalence of ABR. More antibiotics (by weight) are prescribed for non-human animals than for people, and environmental contamination is a major contributor to ABR. Nevertheless, although inappropriate prescribing is probably the main contributor to drug-resistant infections in humans, it is now accepted that control of AMR/ABR requires a One Health approach (Robinson et al. 2016). However, the focus of this chapter is on ABR in hospitals, where multidrug-resistant organisms (MDROs) are most obvious and prevalent and preventive measures most likely to be effective.
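
To make the idea of selection pressure concrete, the following is a minimal, purely illustrative sketch in Python; the starting resistant fraction, growth factor and kill rate are arbitrary assumptions, not estimates drawn from the studies cited above, and the model is deliberately far simpler than the real dynamics described here.

```python
# Purely illustrative toy model of antibiotic selection pressure; the
# parameters below are assumptions, not measured values.

def resistant_fraction(initial_fraction, exposure_days,
                       susceptible_survival=0.5, growth=1.2):
    """Fraction of a bacterial population that is resistant after a given
    number of days of antibiotic exposure.

    susceptible_survival: assumed per-day survival of susceptible cells
                          while the antibiotic is present
    growth:               assumed shared per-day growth factor
    """
    resistant = initial_fraction
    susceptible = 1.0 - initial_fraction
    for _ in range(exposure_days):
        susceptible *= growth * susceptible_survival  # antibiotic suppresses these
        resistant *= growth                           # resistant cells are unaffected
    return resistant / (resistant + susceptible)

for days in (0, 2, 5, 10):
    print(f"{days:2d} days of exposure -> {resistant_fraction(0.01, days):.1%} resistant")
```

Under these assumed parameters, a resistant subpopulation that starts at 1% exceeds 90% after ten days of continuous exposure; real hospital and community dynamics are, of course, far more complex, as noted above.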

Most life-threatening infections are treated in hospitals, where the greatest variety of antibiotics is used, in repeated courses or for prolonged periods. In hospitals, busy healthcare professionals often carry MDROs on their hands, exposing patients to the risk of healthcare-associated infection (HAI) or colonisation with an MDRO. Hospital laboratory reports increase prescribers’ awareness of ABR and, perhaps, promote a (mistaken) perception that it is ubiquitous; this may encourage defensive prescribing – e.g. of multiple or broad-spectrum agents - to avert treatment failure. Paradoxically, increased awareness of ABR is not necessarily reflected in increased adherence to measures designed to prevent it. Now that it is recognised that profligate antibiotic use promotes ABR and inadequate infection prevention and control (IPC) facilitates transmission, the challenge is to break these intersecting vicious cycles, particularly in hospitals, where they are most apparent.

4 Antibiotic Use and Stewardship in Hospitals

Antibiotics are prescribed very frequently in hospitals; studies in the USA and Australia have shown that more than 50% of hospital patients receive at least one antibiotic and that up to 50% of prescriptions are unnecessary or inappropriate, according to prescribing guidelines (Turnidge et al. 2016; Baggs et al. 2016; Reddy et al. 2015). Making the right antibiotic prescribing decision is difficult, even for an experienced practitioner, particularly when the diagnosis is uncertain. For patients with sepsis - especially those most at risk of life-threatening infections, e.g. those who are neutropenic or immune-compromised - delay can lead to serious complications or death from septic shock (Kumar et al. 2006). But antibiotics prescribed empirically are often continued, even after an alternative diagnosis is made, or not reviewed, despite laboratory results that indicate the empirical choice was ineffective or unnecessarily broad-spectrum (Braykov et al. 2014). Antibiotics are also often given in combination, in the wrong dose, by the wrong route or for too short or too long a period (Dryden et al. 2011; Gilbert 2015). Any of these errors can contribute to ABR, without concomitant benefit to the patient herself and, potentially, with significant harm, including an increased risk of Clostridium difficile infection, MDRO acquisition or prolonged disruption of the gut microbiome, with potentially serious adverse effects (Dingle et al. 2017; Becattini et al. 2016).

Diagnostic uncertainty is not the only, or even the most common, reason for inappropriate prescribing. The prescriber’s rational concern for her patient can transform into excessive risk aversion, without regard for antibiotic conservation or potential adverse effects on the patient. A junior hospital doctor may consider ABR “morally and professionally important...” but “of limited concern at the bedside” (Broom et al. 2014). Fear of missing something, or of being criticised by peers or superiors, outweighs consideration of long-term risks to future patients or the environment. She may prescribe an antibiotic, even if she thinks it unnecessary or futile, because of inexperience, her consultant’s routine practice or a duty of benevolence towards her patient that makes her want to do something. Junior doctors are required to make complicated prescribing decisions, often in the face of conflicting, inconsistent (or no) advice or feedback (Mattick et al. 2014). Broom et al. (2014) concluded that “…social risks, including the peer-based and hierarchical reputational consequences associated with ‘not doing enough’” are far more potent than the risk of ABR.

Over the past 10 years, programs have been introduced into hospital practice in many countries with the aim of minimising inappropriate antibiotic therapy. Antimicrobial stewardship (AMS) programs aim to ensure that patients are given antibiotics when they need them – “the right drug, at the right time, in the right dose and for the right duration” (Dryden et al. 2011; Doernberg and Chambers 2017) – with the least possible selection pressure. They consist of ‘bundles’ of interventions, including: restrictions on the use of certain key antibiotics, except with specific authorisation; prescriber education and academic detailing; audit of prescribing patterns, with feedback to prescribers; optimisation of laboratory testing, including rapid diagnostics; and technological support such as electronic access to microbiology results and computerised decision support systems (Davey et al. 2017).

AMS programs depend on multidisciplinary teams - including infectious disease physicians, clinical microbiologists, specialist antimicrobial pharmacists and/or IPC professionals - whose complementary expertise and spheres of influence provide mutual support and greater authority than each has, individually. The specialist pharmacist’s expertise in drug dosing, interactions and administration and her role in implementing regulations, such as automatic stop orders or restricted drug authorisation, and auditing prescribing records, are critical to an AMS program. Nevertheless, even the most experienced pharmacist or AMS team can encounter resistance from a senior specialist who may regard their advice as a threat to her autonomy and clinical experience (Broom et al. 2016).

How effective an AMS program will be depends on what it includes and how it is implemented. A recent systematic review (Davey et al. 2017) showed that providing advice and feedback to clinicians improved prescribing and reduced overall antibiotic use more than simply imposing rules and restrictions, suggesting that AMS programs that support prescribers help to mitigate the fear of censure or litigation that often drives inappropriate prescribing. Overall, studies of AMS show that it can reduce inappropriate prescribing, pharmacy costs and avoidable drug reactions; improve therapeutic drug monitoring; shorten hospital length of stay by an average of 2 days; and may reduce rates of C. difficile, fungal and MDRO infections (Davey et al. 2013; Baur et al. 2017). Although some studies have confirmed the cost-effectiveness of AMS, more high-quality research is needed (Coulter et al. 2015).

Clearly, eliminating inappropriate antibiotic use is necessary, but not sufficient, to reduce the impact of ABR, which is exacerbated by hospital spread of MDROs.

5 Hospital Infection Prevention and Control (IPC) and ABR

5.1 Healthcare-Associated Infections and Their Consequences

According to WHO, “…HAI is the most frequent adverse event in health care [but] its true global burden remains unknown because of the difficulty in gathering reliable data” (WHO 2011). A systematic review of HAIs in high- and middle/low-income countries showed that 3.5%–12% of hospitalised patients develop at least one HAI (WHO 2011), of which 50% or more are potentially preventable (Harbarth et al. 2003; Umscheid et al. 2011). The estimated number of people, globally, who die from drug-resistant infections each year – currently at least 700,000 – is predicted to increase to ten million by 2050 (O’Neill 2016). HAIs caused by MDROs are more difficult to treat, more likely to be fatal and more costly than comparable HAIs due to antibiotic-susceptible pathogens (Cosgrove 2006; de Kraker et al. 2011). HAI risks are associated with patient factors: severity of illness and co-morbidities such as chronic organ failure, malnutrition, immune-suppression, serious trauma or contaminated surgery; and with organisational factors: bed occupancy rate; staff workload; hospital environment and infrastructure; prevalence of endemic or introduced MDRO pathogens; and adherence of healthcare workers to basic hygiene principles (Clements et al. 2008; Daud-Gallotti et al. 2012; Scheithauer et al. 2017). Hospital IPC policies are designed to minimise these risks and the burden of HAIs and ABR. Unlike other adverse hospital events, MDRO colonisation and HAIs are not confined to individuals; they are communal threats that affect other patients, hospital staff and the wider community and add to the burden of AMR.

Even without clinically significant infection, MDRO colonisation has significant impacts. Patients colonised with certain high-impact MDROs are identified by active admission screening and isolated in single rooms, with contact precautions. These are expensive measures and they can adversely affect patient care and wellbeing. Patients in contact isolation are, on average, visited by healthcare workers less often and for shorter periods; less likely to be examined by a doctor; more likely to suffer non-infectious adverse effects, such as falls, pressure sores and fluid and electrolyte imbalance; and more likely to express dissatisfaction with their hospital care, than other patients (Saint et al. 2003; Stelfox et al. 2003; Morgan et al. 2014). They may feel stigmatised and anxious about risks to family members; they and their visitors are often inadequately informed or given conflicting information about the implications of MDRO colonisation and how to protect themselves and others (Wyer et al. 2015; Seibert et al. 2017). Contact isolation is also burdensome to healthcare workers. Compliance with hand hygiene and use of personal protective equipment (and, thus, the effectiveness of contact isolation) is often relatively poor and likely to deteriorate as the number of isolated patients increases (Clock et al. 2010; Dhar et al. 2014).

Moreover, there is conflicting evidence (Cohen et al. 2015; Morgan et al. 2017) as to whether active screening and contact isolation prevent MDRO transmission more effectively than less expensive and burdensome measures such as strict adherence to standard precautions (Huskins et al. 2011) and/or targeted contact isolation of patients with other risk factors (e.g. diarrhoea, open wounds) (Djibre et al. 2017). This raises the question of whether these practices are ethically justified, on the basis of the precautionary principle - i.e. that they might prevent harm to others - or cost-effective. Patients are rarely asked for their consent to be screened or informed of the consequences of a positive result, although contact isolation will restrict their liberty for others’, but not their own, benefit. It is arguable that these measures are ethically justifiable if: the specific MDRO for which they are implemented is particularly dangerous; patients are fully informed, before they are screened, of the reasons for, and the implications and benefits of, screening; and there is adequate staffing to ensure the measures are implemented with optimal effectiveness and minimal risk. An alternative approach would be to promote strict adherence to standard precautions, by all staff, behind a “veil of ignorance”, by assuming that any patient might be MDRO-colonised and engaging patients - when their condition permits - and their visitors as active participants in IPC (Ahmad et al. 2016). If given an opportunity and adequate information, patients can make positive contributions to IPC, including how to minimise MDRO transmission and the adverse effects of contact isolation (Wyer et al. 2015).

5.2 Hospital IPC Programs

Given the adverse individual and communal effects and excess costs of HAIs and MDRO colonisation, healthcare organisations and professionals have an unequivocal duty-of-care and ethical responsibility to take appropriate measures to prevent them. Hospital IPC programs include, inter alia: hand hygiene; appropriate use of personal protective equipment; aseptic technique for invasive procedures; environmental hygiene; air flow; bundles of measures to prevent certain HAI syndromes, such as device-related blood stream infections; surveillance of selected HAIs and feed-back of data to clinicians; and isolation of infected and MDRO-colonised patients, with the caveats outlined above (Sydnor and Perl 2011).

It is usually impossible to trace an HAI or MDRO transmission event to a specific action, omission or individual healthcare worker, because there are inevitable time gaps and multiple factors and people involved (McLaws 2015). HAIs “do not carry fingerprints … to identify the offending healers who failed the patient.” (Palmore and Henderson 2012).

The effectiveness of an IPC program depends on organisational commitment, adequate resources and participation of everyone in the hospital community. Despite the compelling ethical imperative to “do no harm”, the economic burden of HAIs (Stone 2009) and proven cost-effectiveness (Dick et al. 2015) of IPC programs, they often struggle to attract the necessary support and resources. Their beneficiaries, like those of any preventive program, are unknown “statistical lives” whose demands are far less compelling than those of known “identified lives” who need immediate, often expensive, interventions (Cookson et al. 2008; Beauchamp and Childress 2009). The typically low priority of IPC is reflected in a vicious cycle of inadequate resources, poor compliance - and, hence, limited effectiveness - which can then seem to justify further cost cutting.

The basic principles of hospital IPC were recognised in the nineteenth century and incorporated into routine hospital policy in the latter part of the twentieth, when it became clear that antibiotics, alone, could not prevent morbidity and mortality from HAIs. Nevertheless, implementation and maintenance of IPC programs remain a major challenge. Variation in HAI rates, between otherwise comparable hospitals, presumably reflects differences in organizational cultures, policies, working conditions (Daud-Gallotti et al. 2012; Krein et al. 2010) and professional attitudes, behaviours and leadership (Saint et al. 2010a), suggesting that improvement is possible.

5.3 The Central Role of Hand Hygiene in IPC

“In the absence of the possibility to directly link individual infectious outcomes to individual hand hygiene failures… hand hygiene performance remains the only measure to judge the degree of system safety….” (Stewardson et al. 2011). Despite the proven effectiveness of hand hygiene in preventing MDRO transmission (Pittet et al. 2000; Johnson et al. 2005), compliance is often poor. The discovery, about 20 years ago, that it could be improved by the use of alcohol-based hand rub (ABHR) was a major breakthrough. ABHR has significant advantages over traditional hand washing with soap and water: it takes less time, is less irritant to hands, is accessible at the point-of-care and in settings without access to clean water, and has more rapid and potent antibacterial action (Pittet et al. 2000; Widmer 2000). In 2007, “My Five Moments for Hand Hygiene” was introduced to improve healthcare worker training and standardise monitoring and reporting of hand hygiene compliance (Sax et al. 2007); in 2009, the “Five Moments” were incorporated into WHO hand hygiene guidelines. Since then, there have been numerous studies and reviews of factors affecting compliance and interventions to improve it (Erasmus et al. 2009; Huis et al. 2012; Neo et al. 2016; Kingston et al. 2016).

One review reported an overall average compliance of 40%; compliance was lower in ICUs (30–40%) than in other settings (50–60%), among doctors (32%) than among nurses (48%), and for moment one (before patient contact; 21%) than for moment four (after patient contact; 47%). Performing dirty tasks, availability of ABHR and performance feedback were associated with better compliance (Erasmus et al. 2009). A review of interventions reported an average pre-intervention compliance of 34%, with variable but modest improvements (8–31%) and a mean post-intervention compliance of 57%. Multimodal interventions included various combinations of staff education, facility design and planning, HAI surveillance and/or compliance monitoring with feedback, financial incentives and active support by clinical champions and hospital administrations (Kingston et al. 2016).

These studies illustrate the enormous effort entailed in achieving even modest, often short-term, improvements. They contrast with the apparently much better compliance achieved by the Australian National Hand Hygiene Initiative, which was established in 2009 as a “standardised hand hygiene culture-change program throughout all Australian hospitals to improve … compliance among Australian health care workers; establish a validated system of …auditing to allow local, national and international benchmarking…” (Grayson et al. 2011). Between 2009 and June 2017, overall compliance increased steadily from 63% to 85%, well above the national benchmark (70%), which clearly represents a major improvement (http://www.hha.org.au/). But the aggregate figure obscures significant variation (e.g. between hospitals, professional groups and moments) and sampling biases, suggesting that aggregated national data can be misleading. Moreover, the estimated auditing cost is AU$2.2 million per annum for an annual improvement (adjusted for sampling) of 1% in compliance (Azim and McLaws 2014).
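
The point about aggregated data can be illustrated with a simple hypothetical calculation; the strata and counts in the Python sketch below are invented for illustration and are not data from the National Hand Hygiene Initiative.

```python
# Hypothetical illustration of how an aggregated compliance figure can mask
# variation between strata; these numbers are invented, not audit data.

# (stratum, observed hand hygiene moments, compliant moments)
audit_strata = [
    ("Hospital A - nurses",  4000, 3560),  # 89%
    ("Hospital A - doctors", 1000,  620),  # 62%
    ("Hospital B - nurses",  2500, 2150),  # 86%
    ("Hospital B - doctors",  500,  270),  # 54%
]

total_moments = sum(moments for _, moments, _ in audit_strata)
total_compliant = sum(compliant for _, _, compliant in audit_strata)

print(f"Aggregate compliance: {100 * total_compliant / total_moments:.0f}%")
for name, moments, compliant in audit_strata:
    print(f"  {name}: {100 * compliant / moments:.0f}%")
```

In this invented example the pooled figure (about 82%) comfortably exceeds a 70% benchmark even though doctors at both hypothetical hospitals fall well below it.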

How should these data be interpreted? Routine audits, by direct observation, necessarily involve short periods of observation (20–30 minutes) at times of convenience and so are not representative of 24-hour, whole-of-hospital activity; it was estimated that <2% of total daily hand hygiene opportunities are sampled during a 60-minute audit (Fries et al. 2012). Auditing by direct observation is subject to the Hawthorne effect (Srigley et al. 2014) and observer bias; it does not assess whether hand hygiene is performed correctly and, when compared with continuous automated monitoring, overestimates compliance (Kwok et al. 2016). Furthermore, there is no consensus as to what target compliance rate is necessary or realistic (Mahida 2016). Video or electronic monitoring systems would reduce workload, measure compliance more consistently and could improve it, by providing rapid feedback and prompts (Srigley et al. 2015), but experience with their use is limited and there are many logistic, industrial and ethical challenges (Palmore and Henderson 2012; Conway 2016). There is clearly a need to establish realistic compliance targets, more accurate and less labour-intensive auditing methods and a more holistic approach to IPC monitoring.
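
The scale of the sampling problem can be shown with a back-of-envelope calculation. The figures in the sketch below are assumptions chosen only for illustration (they are not the parameters used by Fries et al. 2012), but they show how a single 60-minute audit can cover well under 2% of a ward’s daily hand hygiene opportunities.

```python
# Back-of-envelope estimate of how little of a ward's daily hand hygiene
# activity a single direct-observation audit can see; all figures are
# assumptions for illustration, not the parameters used by Fries et al. (2012).

beds = 30                        # occupied beds on the ward (assumed)
opportunities_per_bed_day = 60   # hand hygiene opportunities per bed per 24 h (assumed)
audit_minutes = 60               # duration of one direct-observation audit
beds_in_view = 4                 # beds one auditor can watch at a time (assumed)

daily_opportunities = beds * opportunities_per_bed_day
observed = beds_in_view * opportunities_per_bed_day * audit_minutes / (24 * 60)

print(f"Daily opportunities on the ward: {daily_opportunities}")
print(f"Opportunities observable in one audit: {observed:.0f}")
print(f"Fraction of daily opportunities sampled: {observed / daily_opportunities:.1%}")
```

Even with these generous assumptions, the audit window captures well under 2% of the day’s opportunities, consistent with the estimate cited above.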

5.4 Doctors and IPC

There is extensive evidence that doctors’ hand hygiene compliance is consistently lower than that of nurses, overall, albeit highly variable (Pittet et al. 2004; Cantrell et al. 2009). In one hospital it was more than 80% among physicians and paediatricians but only 30% among surgeons and anaesthetists (Pittet et al. 2004). Compliance has been associated with an emotional, outcome-oriented, rather than a ‘rational’, thinking style – typically associated with nurses and doctors, respectively (Sladek et al. 2008) - and inversely correlated with professional education level, i.e. senior doctors were less compliant than junior doctors or nurses (Duggan et al. 2008). These differences matter: senior doctors have status and power within hospital communities and their attitudes and behaviours disproportionately influence those of other staff (Lankford et al. 2003). Poorly compliant, peripatetic junior doctors can act as “super-spreaders”, with many opportunities to transmit pathogens between the numerous individual patients they encounter each day (Temime et al. 2009; Hornbeck et al. 2012). Doctors are more likely to perform hand hygiene after patient contact, to avoid a perceived personal risk, than before contact, to protect patients (Scheithauer et al. 2011). Many believe these are equivalent, but overlook the many opportunities for contamination of their hands - from bed curtains, patient notes, door handles, computer keyboards etc. - between patients. In a focus-group study of hospital staff, most non-physician participants said they noticed the hand hygiene practices of other staff and rated doctors least compliant. Doctors were confident of their hand hygiene knowledge but discounted its importance before patient contact. They rarely noticed the practices of others, apart from their senior colleagues; medical students said that senior doctors’ hand hygiene practices influenced their own (Jang et al. 2010a, 2010b).

The reasons for some doctors’ apparent lack of commitment to IPC may lie in the medical practice model, which focuses on individual patients’ clinical problems requiring investigation, decision-making and intervention, often with tangible results. IPC policies fit poorly with this model; they lack obvious utility, since they do not meaningfully influence clinical practice. The common perception that HAIs and MDRO acquisition are rare but unavoidable can promote a sense of nihilism – that they are inevitable features of contemporary healthcare. Often this is attributable to ignorance of the impacts of HAIs and the benefits of IPC, which is partly because of inadequate surveillance and feedback to clinicians.

Some doctors’ apparent indifference or even hostility towards IPC may also reflect aspects of professional and organisational cultures. Traditionally, IPC has been a nursing responsibility; the role of infection control practitioners (usually nurses) is to implement IPC policies on behalf of hospital management, who have to report, against mandatory IPC performance indicators, to a central authority (e.g. Ministry of Health). Responsibility for monitoring these indicators, such as hand hygiene compliance, is often devolved to nurse managers, who are held to account if targets are not met. But they have little influence over doctors, who choose to ignore rules they see as unnecessary or excessive, or who object, on principle, to any regulation imposed by nurses or managers. “Senior doctors consider themselves exempt from following policy and practice within a culture of perceived autonomous decision-making that relies more on personal knowledge and experience than formal policy” (Charani et al. 2013). This professional antipathy to IPC may also reflect the historical - but gradually changing - gender distribution between nursing, which has been a largely female profession, and medicine, traditionally dominated by men, particularly in senior positions. Doctors’ attitudes to IPC are consistent with a more general failure - there are many exceptions - to engage in quality improvement activities or comply with organisational policies, which has been linked to an entrenched medical culture (Jorm and Kam 2004) and/or to the perceived loss of professional autonomy and dominance associated with managerialism (Davies and Harrison 2003).

How widespread these attitudes are, and how hospital management handles them, will determine the success or failure of hospital-wide quality improvement programs such as IPC or AMS. If they are tolerated, or seen as “too hard” to address, the morale of other staff and the success of the program will be compromised and recalcitrant doctors’ scepticism about its relevance reinforced. On the other hand, some IPC rules are unnecessarily rigid and officiously enforced. They may seem straightforward to their authors, but poor compliance is sometimes due to clinicians finding them confusing, incompatible with the realities of frontline practice or inappropriate in some settings (Hor et al. 2017). There are faults on both sides when, ideally, all “sides” should be working collaboratively to promote patient safety. These issues need to be addressed if healthcare management and staff are to fulfil their moral responsibility to support and participate in programs that promote patient safety. Individuals “are not somehow ‘outside’ and separate from ‘systems’: they create, modify and are subject to the social forces that are an inescapable feature of any organizational system; each element acts on the other” (Aveling et al. 2016). The success of any program is likely to hinge on the extent to which the values and goals of all of its members – particularly the most influential - align with, and contribute to, those of the organisation.

5.5 The Organization’s Role in IPC/AMS Programs

Government and private healthcare funding bodies generally mandate that each hospital establish IPC and AMS programs and report regularly, sometimes publicly, against mandatory performance targets. This does not necessarily guarantee the programs’ success; there are wide variations in practices and outcomes between apparently similar hospitals. Most studies of successful IPC/AMS interventions have focused on “what” works, rather than “why” it works. The components of organisational culture likely to determine the success or failure of program implementation are leadership, shared vision and values, inter-professional relationships, resources and service priorities (Krein et al. 2010). Successful implementation of IPC/AMS requires commitment by hospital management, strong clinical leadership (Saint et al. 2010b), highly motivated champions (Damschroder et al. 2009) and interdisciplinary departmental teams. The goals, benefits and measures of success of the programs need to be clearly defined, but flexible enough to allow local modification based on the knowledge and experience of frontline staff. Imposing cultural change from without is less likely to be sustainable than allowing frontline staff to discover how to change it from within (Iedema et al. 2015; Zimmerman et al. 2013). Measures of success should be defined, monitored and rewarded, at least by timely feedback, if not more tangible recognition.

6 Conclusions

AMR is an acknowledged threat to global health security and will not be adequately addressed by the development of new antibiotics alone. The most urgent priority is to curtail the inappropriate use of antibiotics and the spread of MDROs, which are most prevalent in hospitals, where they are also most amenable to control. Despite evidence that properly implemented hospital AMS/IPC programs can reduce the burden of ABR, the increasing prevalence of preventable HAIs shows that many healthcare organisations have either not recognised, or failed to meet, the challenge. In this chapter, we have identified some of the barriers to successful implementation of AMS/IPC programs; although they are usually mandatory in hospitals, their quality and outcomes vary. The organisational characteristics most likely to assure successful implementation include: commitment to a shared vision and values; strong leadership, governance and systems; respectful, collaborative inter-professional relationships; and fair, cost-effective resource allocation.

Healthcare organisations and hospital managers have ethical and legal responsibilities to protect existing patients, employees and the public – and future patients whose treatment will be compromised by a lack of effective antimicrobial therapy – from preventable harm originating in hospital facilities or activities. Unfortunately, preventive programs are often a low priority because their beneficiaries are unknown future persons, whose claims are eclipsed by those of known, present persons and by powerful professional or commercial interests. Preventive programs also generally lack the solid cost-effectiveness data that administrators demand before committing resources, especially at the expense of therapies. A case for adequate resources to sustain AMS/IPC programs would, ideally, include local, as well as published, statistics on current rates, costs and adverse consequences of HAIs and ABR, and personal histories of known patients who have suffered adverse effects from an HAI, contact isolation or inappropriate antibiotic administration.

Successful AMS/IPC policies will be adaptable to unit- or department-specific requirements, rather than rigidly imposed rules and restrictions that fail to account for variable, unpredictable clinical exigencies and so are liable to be ignored or circumvented. Effective policy implementation needs frontline ownership of “practice-based guidelines”, based on local knowledge, including potential patient participation.

Policies and implementation plans often fail to clearly define their goals or how success will be measured. Evidence of success that can be rapidly fed back to staff is a strong motivator of adherence, but many hospital managers fail to invest in HAI surveillance and feedback to clinicians that is relevant, accessible and timely enough to motivate improvement. Despite the importance of hand hygiene compliance, its prominence as a single (often inaccurate) measure of IPC practice risks neglecting other important cultural and behavioural factors – teamwork, interdisciplinary co-operation and motivation – that determine the effectiveness of a hospital’s AMS/IPC programs.

Securing the commitment of an often small but powerful minority of senior medical staff, who regard AMS/IPC programs as a threat to professional autonomy and status, remains a challenge for many hospitals. It requires, as a minimum, respectful consultation during program planning, recruitment of clinical leaders and champions and, once a decision is made to adopt a program, clarity that all staff are expected to support and participate in programs to which the organisation is committed.