Starvation occupies a prominent place in the cultural memory of the 1980s: a decade that saw international concern about the 1980–1981 Maze Prison hunger strikes in Northern Ireland, charity initiatives such as Live Aid and a heightened visibility of images of starving children in impoverished areas of Africa. Against this backdrop, a pronounced public and expert debate also took place on the issue of using, withholding or withdrawing nutritional support, and letting certain patients starve to death. This discussion tied into broader debates on the technologisation of death, patient rights and the spread of sympathetic attitudes towards suicide and euthanasia. Force-feeding is normally associated with hunger strikers. It involves inserting a stomach tube into the mouth of a prisoner/patient, which then passes down through the throat and oesophagus before arriving in the stomach. The passing of the tube causes most patients to gag, choke and vomit over themselves. Liquid food is then poured into the top of the tube and descends into the stomach, where digestion resumes. The procedure shares similarities with artificial feeding, which is used to keep alive mentally ill patients who refuse to eat, as well as coma patients (Anon, 2000a). But subtle differences exist. Unlike artificial feeding, force-feeding tends to be performed against the will of patients who have decided not to eat; moreover, most hunger strikers are not mentally ill. To make matters worse, hunger strikers have been subjected to techniques that are ethically questionable, including nutrient enemas and rectal feeding.Footnote 1

Force-feeding policies were first introduced by the British government in 1909 to tackle suffragette hunger strikes, causing public unease. During the 1910s, doctors force-fed republican prisoners in Ireland, a practice that controversially caused the death of leading republican Thomas Ashe (Murphy, 2014). Although force-feeding was permanently abandoned in Ireland, convict prisoners continued to be fed against their will in English prisons until as late as the 1970s (Anon, 1913–1940; Miller, 2016). During the Northern Irish Troubles (c. 1968–1998), the force-feeding of two young sisters – Marian and Dolours Price – and the death of IRA prisoner Michael Gaughan (following force-feeding) led the World Medical Association to formally condemn the practice as unethical in 1975 (World Medical Association, 2015).Footnote 2 It seemed that an ethical problem stretching back to the Edwardian period had finally been resolved. Western countries gradually abandoned force-feeding policies.Footnote 3 However, as this article argues, contemporaneous developments in medicine – the development of new life-sustaining technologies and the rise of bioethics – in fact broadened and diversified ongoing debates on using artificial feeding to sustain life. From the 1970s, practitioners adopted new feeding methods and used them on a far broader range of patients than ever before. Instead of being resolved, debates on force-feeding and artificial feeding proliferated during the 1980s and 1990s as deeply complex ethical questions arose about who should and should not be fed. The result was a set of inconsistent policies which remain in place and present a barrier to patient autonomy.

Despite the prominence of debates about nutritional support, the subject has received limited historiographical attention, even in otherwise thorough accounts of bioethics (Ferber, 2013; Wilson, 2014). Nutritional support has been discussed in works such as Ian Dowbiggin’s A Concise History of Euthanasia and Peter Singer’s provocative Rethinking Life and Death (Singer, 1994; Dowbiggin, 2005). However, it has not been subject to focused analysis, despite its ethical intricacy. Food is a complex phenomenon, replete with particular meanings (Coveney, 1999). The idea of a doctor letting a patient starve carries deep emotional resonance, as it clashes with expectations of medical care. It also raises important issues about patient autonomy. In the post-war period, traditional medical paternalism was dislodged as patients secured the right to make decisions about their own bodies. The force-feeding of prisoners, for example, was inherently paternalistic. It entailed an appropriation of medical power and nullified the capacity of patients to determine their own fate. In contrast, granting hunger strikers such as Bobby Sands the right to starve, even to death, indicated a burgeoning acceptance of personal autonomy, despite its potential consequences.Footnote 4 Linked to this were changing ideas on the sanctity of life. Traditional Judaeo-Christian perspectives saw every human life as being of equal value. The idea that individuals had a right to choose when and how to end their lives challenged these traditional views and, in many ways, reflected the secularisation of society and the burgeoning consumerist culture of the 1980s and 1990s.

But how easily did the issue of withholding or withdrawing nutritional support, in line with a patient’s wishes, fit into this transition from paternalism to autonomy, given the emotional constraints attached to the subject? Did doctors always respect the wishes of patients, or was the situation more intricate? The current public interest in the force-feeding of Californian prisoners and Guantánamo Bay detainees has renewed debates on the ethical appropriateness of feeding patients against their will (Miller, 2013). Critics condemn the force-feeding of hunger strikers as violent and brutal, an act that clashes profoundly with medical ethical norms.Footnote 5 Some suggest that military doctors are caught in a ‘dual loyalty’, uncertain whether to prioritise the needs of the state or the ethical standards of their profession (Clark, 2006). Others simply denounce Guantánamo as a “medical ethics free zone” (Annas et al, 2013). What seems clear is that critics see force-feeding as a breach of a fundamental right: the right of a patient to refuse medical treatment (or food) if he or she wishes. The force-feeding of prisoners emerges from such critiques as an act set apart from ‘normal’ medical practice. In 2015, for example, Physicians for Human Rights argued that “forcing treatment on mentally competent persons constitutes ill treatment and possible torture and is contrary to professional ethics”.Footnote 6 But is prisoner force-feeding really so out of line with ethics and standards in clinical practice? This article uses contemporary history to suggest not. Since the late 1970s, medical personnel have grappled with the problems posed by feeding numerous types of patient – not just hunger strikers – without express permission, including the comatose, infants, anorexics and elderly patients with dementia. Inconsistencies emerged which need to be addressed if feeding policies are truly to be guided by principles of autonomy.

Examining the broader historical meanings embedded in the act of feeding (or letting starve) individuals who could not – or would not – eat ultimately complicates models of a transition towards patient autonomy. It also provides an opportunity to reflect upon the development of current approaches and policies. Although present-day ethical norms generally support patient autonomy (incorporating the right to stop eating), in reality doctors and family members have struggled emotionally with the prospect of letting someone starve. The idea of overseeing self-induced starvation produced emotional responses that repeatedly negated ideals of patient autonomy. Ultimately, this article supports Peter Singer’s contention that “when it comes to questions about prolonging life or ending it, our ethics are in a confused, contradictory mess”.Footnote 7 It maintains that highly inconsistent approaches to feeding (or not feeding) particular patient groups developed from around the 1970s and became firmly established in medical practice. Whether or not patient autonomy was respected depended upon a confluence of factors other than clinical need, including age, gender and perceptions of vulnerability (O’Mahony, 2014). If patient autonomy is to guide decisions to use, withdraw or withhold nutritional support, it would benefit from being consistently applied. Emotional detachment is essential to achieving this, but emotions, at the same time, present barriers to the implementation of ethical standards in clinical practice.

The Technologisation of Feeding

In the late twentieth century, new life-support technologies emerged that could prolong life indefinitely, keeping alive severely debilitated patients (particularly the comatose) who would otherwise have died naturally. Death itself became technologised; defeating it became a prime goal of doctors. Yet difficult decisions had to be made about when to end the lives of technologically supported patients. The image of doctors ‘pulling the plug’ on vegetative, comatose patients captured the public imagination. However, equally provocative debates were waged on the seemingly more mundane matter of withdrawing (or withholding) feeding technologies: tubes and intravenous lines. ‘Pulling the plug’ is normally imagined as a straightforward, painless flick of a switch that simply removes the capacity to breathe, allowing the comatose to escape a meaningless, non-functional existence. On a more metaphorical level, it symbolises the turning off of modern medical technologies that prevent a dignified death by aimlessly preserving functionless life.Footnote 8 Yet the idea of killing a patient by not feeding him or her bears particular emotional resonance. Food, after all, is a natural means of subsistence, unlike the complex electrical machinery attached to the vegetative bodies of the dying or comatose. As such, the idea of a doctor letting a patient starve raised a markedly different set of emotional responses than switching off a life-support machine.

In the post-war period, advances were made that allowed doctors to artificially feed patients with minimal intrusion and discomfort. In the 1960s, Pennsylvania surgeon Stanley J. Dudrick began to nourish new-born babies with an intravenous feed (InVF), developing what became known as total parenteral nutrition (Dudrick et al, 1969). In 1981, American surgeons Jeffrey L. Ponsky and Michael W.L. Gauderer published an article in the journal Gastrointestinal Endoscopy describing their invention of percutaneous endoscopic gastrostomy (PEG): a procedure in which a PEG tube was passed into the stomach through the abdominal wall to provide enteral nutrition. PEG was a significant improvement on earlier feeding gastrostomies, which had required more intrusive surgery (Ponsky and Gauderer, 1981). General anaesthesia was no longer needed, a relaxed abdomen was not essential, the procedure could be performed on patients with severe musculoskeletal deformities and patients suffered minimal discomfort in the post-operative period. Although initially developed for children, PEG was quickly adopted by gastroenterologists and surgeons (Gauderer, 1999, 2001). The impact of these new technologies on hospital practice was profound. Physicians and surgeons began to devise suitable feeding techniques for patients suffering from conditions including cancer, gastritis and liver disease (Rombeau and Caldwell, 1984). Surgeons penned practical textbooks on how to insert gastrostomies (Grant and Todd, 1982; Philips and Odgers, 1986). Physiologists developed complex models relating to issues such as amino acid requirements, glucose levels and hormone patterns among patients receiving total parenteral nutrition (Lebenthal, 1986). Nonetheless, the rapid spread of these technologies raised an intricate set of ethical problems. For instance, if a patient was to be kept alive with artificial nutrition, at what point, if any, should a tube be removed and the patient left to starve? Well into the early 1980s, ethical guidelines on discontinuing life focused on life-support systems (particularly mechanical respirators and ventilators) but not on the more basic matter of feeding (Towers, 1982).

The emotional aspects of starvation ultimately produced inconsistent clinical practices (not always based on clinical need) which remain in place. Historian James Vernon suggests that western societies generally see hunger as unacceptable. Whereas hunger and famine were relatively commonplace until around the nineteenth century, modern western sensibilities now mean that starvation arouses sympathy for the hungry (as confirmed by western ‘wars’ on global hunger and newspaper coverage of impoverished children going without food).Footnote 9 Moreover, food and water are so central to human emotions that it has proven impossible to consider nutritional support with the same emotional detachment sometimes felt towards a respirator or dialysis machine. Psychological barriers relating to hunger are too intense (Lynn and Childress, 1983). In light of this, late-twentieth-century decisions to administer or withhold food and water proved to be just as complex, perplexing and emotionally challenging as contemporaneous debates about turning off respirators (Dresser and Boisaubin, 1985). Notably, as discussed in more depth below, the issue of withdrawing or withholding food also encompassed a far broader spectrum of patients than ‘pulling the plug’: prisoners, anorexics, infants and the elderly, as well as the comatose. Emotionally driven attitudes towards these different patients emerged that were separate from clinical need, based instead on considerations such as age and perceptions of vulnerability.

The problem of nutritional support was very much one of medical modernity. Underlying public discussion from the 1970s onward lay a disenchantment with biomedical technologies that endlessly prolonged life. Indeed, the question of nutritional support first arose in a socio-cultural climate of changing views on the nature of medical authority and the rights of the dying. During the 1970s and 1980s, a diffuse rebellion was taking place against the nature and excesses of medical power. Since the previous century, the western medical profession had acquired remarkable authority in its institutions and in society at large. At worst, this encouraged medical experimentation on the institutionalised, questionable research ethics and a relative lack of external regulation. Largely in response to a number of controversies, bioethics emerged from the 1970s: an interdisciplinary project that promoted greater regulation of medical practice, often with the participation of individuals from diverse fields such as philosophy and theology (Rothman, 1991; Wilson, 2014). Tied to this was the emergence of the ‘death with dignity’ movement. In earlier centuries, the endurance of suffering was often viewed as a defining feature of a ‘good death’, a test of fitness for heaven (Jalland, 1996). But in twentieth-century consumer capitalist culture, the ideal death typically involved painlessness and gratification: a quick fix. For some, the right to choose how and when to die was an important part of individual liberation, perhaps the last human right. Patients sought to regain control of their own bodies from medical professionals (Filene, 1998).

Patient Autonomy and the Comatose

From the mid-1980s, efforts were made to establish firmer ethical principles on artificial feeding following a number of dramatic court cases that raised the question of whether the comatose had a right to die. In many ways, these were historic. Internationally, courts began to openly support principles of patient autonomy. As historian Peter G. Filene suggests, this formed part of a broader process in which traditional medical paternalism gave way to a more democratic relationship, in this instance allowing family members to speak for incompetent patients.Footnote 10 In 1981, Robert Nejdl and Neil Barber, two Los Angeles physicians, were charged with murder after removing an InVF drip from a severely brain-damaged comatose patient named Clarence Herbert. Herbert had been admitted to hospital for routine surgery to close a colostomy. During his first hour in the recovery room, Herbert suffered a massive loss of oxygen to the brain. He became comatose and was placed on a respirator. Herbert’s wife consented to having the respirator removed. However, Herbert did not die; he unexpectedly began to breathe on his own. Stopping InVF seemed to be the only option left to end Herbert’s functionless life, allowing him to die naturally from dehydration and pneumonia. However, a nursing supervisor viewed the cessation of InVF as morally different from removing a permanently comatose patient from a respirator. She went to the authorities. Whatever their intentions, the doctors had knowingly caused death. As the District Attorney commented, “the patient did not die naturally of a terminal illness; he was dehydrated to death intentionally with death being a certain consequence as if he was shot at point blank range”. Yet things were not quite so clear cut. InVF was neither curing nor ameliorating Herbert’s condition. Nor was it maintaining his comfort. It was merely prolonging biological existence; nourishing liquids were being dripped through a permanently non-functioning body. Ultimately, a municipal court judge dismissed the charges due to a lack of evidence of malicious intent (Steinbock, 1983).

However, it was the problems posed by Elizabeth Bouvia that brought nutritional support technologies to the forefront of international debate. While at Riverside General Hospital, California, in 1983, Bouvia, who had cerebral palsy, began to refuse all nourishment, stating that the only Christmas present she wanted was to be allowed to die. She requested that doctors provide painkillers while she starved herself. Three days later, doctors won a court order permitting them to force-feed Bouvia should her life become endangered. Force-feeding commenced on the same day.Footnote 11 The image of a bed-ridden, paralysed and suffering twenty-eight-year-old woman being fed against her will aroused compassion and sympathy internationally. Bouvia had asserted her right to autonomy and privacy; the hospital (and state) had declared its public interest in preventing suicide.

Importantly, Bouvia was mentally competent, capable of deciding for herself whether or not to eat. Leading bioethicist George J. Annas denounced her force-feeding as “brutality” and warned that hospitals were at risk of becoming “the most hideous of torture chambers”. “Where are nursing and medical students schooled in the martial arts of restraint, forced treatment, intimidation and violence?”, asked Annas, adding that “we should refrain from that action [force-feeding] because it perverts the very meaning of care and treatment. Medical care must be consensual or it loses its legitimacy” (Annas, 1984). Yet others argued that Bouvia’s case had in fact exposed inherent problems with the concept of patient autonomy. American philosophy professor Francis I. Kane argued that the case revealed the unsuitability of autonomy as a guiding bioethical principle; in his view, considerations of autonomy were less important than broader notions of the public good (Kane, 1985). Nonetheless, in 1986 a state appeals court panel declared that the “right to refuse medical treatment is basic and fundamental” and ruled that Bouvia had a right to refuse force-feeding.Footnote 12 This was an important recognition that feeding technologies constituted medical therapy rather than basic care. According to the New York Times, the court concluded:

Her [Bouvia’s] mind and spirit may be free to take great flights, but she herself is imprisoned and must lie physically helpless, subject to the ignominy, embarrassment, humiliation and dehumanising aspects created by her helplessness. We do not believe it is the policy of this state that all and every life must be preserved against the will of a sufferer. It is incongruous, if not monstrous, for medical practitioners to assert their right to preserve a life that someone else must live or, more accurately, endure for fifteen or twenty years. The right to die is an integral part of our right to control our own destinies (Anon, 17 April 1986).

The Bouvia case was ultimately decided on the issue of suicide, not feeding. Nonetheless, public and medical responses mostly focused on the emotive issue of feeding. Was InVF a form of basic, humane care that must always be given in hospitals? Or was it a complex form of medical treatment? If it was the latter, then surely patients had a right to refuse therapeutic intervention? (Steinbrook and Lo, 1986)

The problems posed by Bouvia invoked far broader questions that struck at the heart of modern western bioculture. Should preserving life really be the main goal of modern biomedicine? Does a commitment to preserving life preclude the freedom to commit suicide? Should bodily autonomy be prioritised over life? And does life really need to be preserved if it involves only pain and unhappiness? Whatever the answers, it seemed apparent that various values were now in conflict (Bleich, 1986). Medical technology had prompted a rethinking of how artificial feeding technologies fitted with broader western values relating to the meaning of life itself. It seems clear that American courts developed strong tendencies to support the right of patients to stop receiving food. In 1990, in the case of comatose patient Nancy Beth Cruzan, the Supreme Court endorsed the view that the 14th Amendment guaranteed a right to avoid unwanted medical treatment, and Cruzan’s family was subsequently permitted to have her feeding tubes withdrawn.Footnote 13 Debate reignited during the legal struggle over Terri Schiavo, who had entered a vegetative state in 1990; the dispute ended in 2005 when her feeding tube was removed. Schiavo’s husband had supported the withdrawal of feeding equipment, although her parents opposed the action, believing their daughter to be conscious (Monturo, 2009).

Although America was the primary focus of debate, other countries also ruled in favour of removing nutritional support from the comatose. Britain’s first right-to-die case involved Tony Bland, a 21-year-old victim of the 1989 Hillsborough Stadium disaster. In 1993, a committee of peers examined the area of medical ethics and defined nutritional support as a form of medical treatment: one that involved manipulating Bland’s body without his consent and conferred no benefit (Anon, 1993a). Peter Singer subsequently maintained that the British courts had (in his view, positively) broken with the traditional principle of the sanctity of life, having weighed it against other considerations.Footnote 14 Two years later, Ireland dealt with the provocative Ward case, which centred on a near-comatose woman who had sustained debilitating injuries during a routine gynaecological operation twenty years earlier. In the years leading up to her death, the woman found being fed with a nasogastric tube increasingly distressing. As her teeth were permanently clenched together, she could not swallow or receive nourishment in the normal way. She had no capacity for speech or communication and could barely recognise nursing staff. Ultimately, the Supreme Court upheld the view that the Irish Constitution protected a number of rights, including autonomy, self-determination, privacy, dignity, bodily integrity and the right to die (O’Carroll, 1995).

Of course, the privileged space occupied by patient autonomy did not remain uncontested. Most notably, theological perspectives that privileged the sanctity of life dissented from the flowing tide of opinion. In 1986, academic theologian Fr. Robert Barry argued that “definitively and absolutely removing food and fluids is morally identical to placing a plastic bag over a person’s head”. Barry saw the removal of feeding technologies as equivalent to “killing by benign neglect or omission” rather than “allowing to die” (Barry, 1986). Similarly, theologian Gilbert Meilaender argued in the same year that doctors and bioethicists were failing to distinguish cure from care. He maintained that doctors were intent on playing God by deciding when life must end, even the lives of those who were not actually dying (Meilaender, 1986). Between 1988 and 1992, the U.S. Catholic Conference Bishops’ Committee for Pro-Life Activities formally rejected the practice of withdrawing nutrition and hydration to bring about a patient’s death (Anon, 1993b).

Outside of theology, the act of allowing a patient to starve could provoke strong emotional responses. It clashed profoundly with normal expectations of medical care. In 1983, bioethicist Daniel Callahan criticised a “stubborn emotional repugnance against a discontinuance of nutrition” and argued that “a cluster of sentiment and emotions that is repelled by the idea of starving someone to death” was impeding sensible hospital policies. In his scathing criticism, Callahan called for a re-education of public emotions (Callahan, 1983). However, Callahan underestimated the affective nature of letting someone starve. In the 1980s, American bioethicists Mark Siegler and Alan J. Weisbard warned that the withdrawal of nutritional support was an unwelcome ‘new frontier’ of the ‘death with dignity’ movement and a threat to traditional patient–physician relationships and social values. Where, they asked, was all this leading? The authors resorted to the slippery slope argument by presenting the termination of nutritional support as a step towards the widespread euthanasia of ‘undesirable persons’ such as the ‘senile’, ‘retarded’, ‘incurably ill’ and the ‘aged’. “The angel of mercy”, they warned, “can become the fanatic, bringing the comfort of death to some who do not clearly want it”. According to the authors, a ‘right to die’ could soon become a “duty to die” (Siegler and Weisbard, 1985). In reality, as historian Sarah Ferber maintains, ‘slippery slope’ arguments bore limited relevance to late-twentieth-century ethical contexts.Footnote 15 Yet they did raise pertinent issues relating to perceptions of certain patient groups and society’s willingness to let them die if they wished.

Nonetheless, those opposed to allowing the comatose to starve found themselves swimming against a general tide of opinion that favoured a right to die. Most bioethicists rejected suggestions that full-scale euthanasia and infanticide were imminent.Footnote 16 In 1983, the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research recommended that healthcare professionals should always work towards sustaining life but stipulated that competent, informed patients (or their representatives) had a right to decide on withholding or withdrawing nutritional support (President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, 1983). In 1986, the American Medical Association’s Council on Ethical and Judicial Affairs formally defined nutrition and hydration as medical treatment (American Medical Association, 1986–1989). The ideal of patient autonomy as a guiding ethical principle for the comatose still comes under attack in countries including Britain (Foster, 2009). Nonetheless, the framing of nutritional support as medical treatment, not care, in the 1980s differentiated its withdrawal from the act of killing. Withdrawal came to be construed as a professional decision. A general trend emerged in western countries for judges to support the withdrawal of artificial feeding. Many doctors no doubt continued to feel emotionally troubled at the prospect of letting a patient starve to death. However, as one bioethicist noted in 1989, “as there is absolutely no legal obligation to eat, there cannot be an absolute legal obligation for healthcare workers to force-feed all people who do not eat” (Yarborough, 1989). In relation to the comatose, health and quality of life came to be ranked as higher values than the sanctity of life, particularly when life consisted only of poor health and suffering.

Enforced Feeding in Clinical Practice

The late-twentieth-century comatose were mostly granted a right (through their representatives) to have nutritional support removed and be left to starve naturally. In most respects, this fits with broader models of a transition from medical paternalism to autonomy. But what happens when other patient groups are taken into account? Can a relatively smooth trend towards granting autonomy be discerned? Or was the idea of allowing a conscious patient to starve too inflected by emotional considerations? And what does this tell us about how present-day approaches to artificial and force-feeding developed? From around the mid-1980s, doctors and bioethicists negotiated various conditions in which it seemed permissible to withhold or withdraw tube feeding. But while the medical community increasingly acknowledged that critically ill patients possessed certain rights – even a right to die – many doctors were troubled by the idea of letting certain patients starve (Robertson, 1983). Emotional considerations based on factors such as age and vulnerability undermined adherence to ideals of patient autonomy. Varied approaches to the same problem emerged, resulting in a set of inherent inconsistencies which remains in place today.

(i) The elderly

The ageing of western populations in the twentieth century (partly a result of declining mortality rates from infectious disease) brought an increase in old-age-related conditions such as Alzheimer’s disease and dementia. In the final stages of these conditions, many patients were unable to feed themselves. Their quality of life had clearly declined, but their lives were not in imminent danger. Moreover, unlike the comatose, their lives were far from functionless. Since the 1980s, doctors and family members have made difficult decisions about limiting care. Despite a more general permissiveness towards removing feeding tubes, doctors and families felt challenged by the prospect of letting an elderly individual slowly starve, however dismal that person’s quality of life. But was tube feeding humane for these patients if their quality of life – the loss of distinctive human capacities such as self-direction and self-care – had deteriorated to an unacceptable extent? (Ackerman, 1996)

Throughout the 1980s, textbooks and care guides for the elderly increasingly supported the use of feeding tubes where deemed necessary.Footnote 17 By 1995, gastrostomy tubes had been inserted in 121,000 elderly patients in America, approximately thirty per cent of the patient population with dementia, indicating a considerable preference for maintaining life (Gillick, 2000). Evidently, a strong trend evolved in general practice during the 1980s and 1990s, supported by families, towards sustaining nutritional support. Emotionally driven attitudes towards hunger, starvation and old age conflicted with the rights of patients to starve. Those with advanced dementia differed from the comatose in many respects: they were awake and could feel pain. Expressing discontent with philosophical debates on paternalism and autonomy, some critics pointed out that the tube feeding of conscious patients was obtrusive, uncomfortable and potentially dangerous. In their view, clinical realities were absent from abstract debates on autonomy and the sanctity of life.Footnote 18 But perhaps the most curious problem was the lack of research on the benefits of nutritional support for elderly patients, despite its widespread use. It was only in the late 1990s that researchers began to realise that artificially fed elderly patients did not live for much longer than those who ate normally and passed away naturally.Footnote 19 It became increasingly apparent that gastrostomy tubes were failing to prolong life, ensure adequate nutrition or prevent complications such as aspiration pneumonia. To complicate matters further, many elderly patients undergoing feeding had to be restrained; they lacked the cognitive capacity to understand why a tube was protruding from their abdominal wall and simply pulled it out.Footnote 20

Could it be that, in this instance, physicians and families failed to view advanced dementia as a terminal illness partly because of the emotional weight of the idea of starving an elderly person to death? Ironically, comfort – perhaps the main aim of end-of-life care – was more likely to have been provided if intrusive feeding technologies had not been used. When viewed in this light, it could reasonably be argued that artificial feeding was neither medically nor ethically justifiable in terminally ill patients. Nonetheless, doctors and families persistently refused to grant these patients their right to die with dignity, seeing it as morally incompatible with overriding imperatives to provide care and preserve life. They struggled with the emotional responsibility of authorising starvation, even in a biomedical culture that often sanctioned such acts. These inconsistencies remain in place today and have not gone unnoticed. In November 2015, the New York Times ran an opinion piece entitled “Force Feeding: Cruel at Guantánamo but OK for Our Parents”, which lambasted families and doctors for supporting inhumane practices in hospitals and nursing homes while criticising the use of the very same procedure on prison hunger strikers.Footnote 21 In this instance, it seems, general principles on artificial feeding, applied evenly to all patients, failed to develop on the basis of clinical need alone.

(ii) Infants

If many families found the idea of letting an elderly relative slowly starve unacceptable, what of their infants? If choosing to end the life of an aged person with a limited remaining life-span proved so emotive, what of a new-born or child with a potentially rich life ahead of them? The first prominent case relating to the nutritional support of infants famously involved ‘Baby Doe’, a Bloomington, Indiana, baby born in 1982 with Down’s syndrome, whose parents declined oesophageal surgery, leading to the baby’s death. Debate ensued about whether Baby Doe had been denied treatment (and food and water) not because the treatment was risky but because he was intellectually disabled. In 1983, a second case – involving a ‘Baby Jane Doe’ – arose in New York. ‘Baby Jane Doe’ was born with an open spinal column, meaning that she would remain bedridden and severely brain damaged throughout her life. Her parents refused to authorise surgery. But did doctors and parents really possess the right to make decisions based on their own perceptions of quality of life?

In 1984, American child abuse laws were amended with the so-called Baby Doe Amendment, enforced by the Department of Health and Human Services (DHHS). This sought to prevent the discriminatory denial of medical treatment to ‘handicapped infants’. The provision of food and water, the DHHS insisted, was a fundamental matter of human dignity, not an option for medical judgement (John et al, 1983). DHHS rules effectively promoted the feeding of hopelessly ill infants at the same time as courts were granting autonomy to comatose patients. The amendments were made in a socio-cultural context in which the lives of the young were considered sacred and in which children had a basic right to be shielded from threats and danger (Cunningham, 2005[1995]). At worst, the introduction of initiatives such as the so-called ‘Baby Doe hotlines’ – which encouraged hospital staff to report incidences of disabled babies being starved – risked casting doctors as potential child abusers or murderers, offering little in the way of sensible, reasoned guidelines (Annas, 1983). Essentially, discussions of the ethics of neonatal care became polarised between treating all infants without consideration of quality of life and performing infanticide on the basis of it. Whereas previously practitioners and family members might have privately decided to allow an individual baby to die, medical personnel now worked under stricter guidelines and public accountability (although debates on ‘after-birth abortions’ still rage) (Giubilini and Minerva, 2013).

Intellectually disabled infants differed drastically from both the comatose and the elderly. The infants in question had every chance of living a long life. The quality of that life was debatable, but it was certainly not functionless. In turn, these debates tapped into broader discussions of disability rights and discrimination. At worst, cases such as Baby Doe seemed to support ‘slippery slope’ perspectives that foresaw the eradication of the vulnerable and ‘worthless’. The idea of starving an infant to death, regardless of his or her condition, struck an emotional chord. The image of a parent feeding a child bore symbolic importance to family members; ending a life by withholding food disrupted traditional notions of nurturing. This symbolism applied to all infants, not just those deemed worthy of life. ‘Starving’ was generally equated with ‘suffering’, something doctors were meant to avoid or, at least, alleviate. Moreover, society itself shared a strong belief that children were not supposed to die and that clinicians should never give up on an infant. Questions of allowing infants to starve were thus met with various psychological stumbling blocks (Carter and Leuthner, 2003).

The fact that relatively few court decisions relating to the withholding of nutritional support from paediatric patients came to light in itself suggests that doctors were reluctant to discontinue providing nutrition, even if juries might have supported such decisions. In a sense, children were not granted the same rights to die as adults (Levi, 2003). Doctors and nurses felt emotionally uncomfortable with starving a child to death. In most other circumstances, intentionally depriving an infant of food and water would be denounced as a monstrous act of cruelty. But as bioethicist Lawrence J. Nelson has observed, the fact that a paediatric patient was dependent and vulnerable did not make stopping feeding him or her unethical. Decisions to feed were being made in light of a general societal commitment to protecting the young from harm (rather than being based entirely on clinical need), producing sensibilities that disrupted adherence to autonomy. Moreover, as Nelson added, children possessed the same rights as adults to have medical decisions made on their behalf in their best interests (Nelson et al, 1995). The discussion tapped into broader ideas about vulnerability, in which special duties towards particular patients might overrule general rules.Footnote 22

In 2016, a study published in the New England Journal of Medicine suggested that most hospitals adhere to policies (established in the 1980s) of providing early parenteral nutrition to critically ill infants despite limited available research on the benefits. Challenging current hospital policies, the study suggested that withholding nutrition for around a week actually had long-term health benefits in many instances. Children who had built up a nutritional deficiency seemed to suffer fewer infections and less organ failure, and to recover more quickly, than children fed through an InVF drip (Fivez et al, 2016). While this discussion did not directly address the issue of whether an infant patient should be allowed to starve to death, it once again pointed to clinical inconsistencies in withholding nutrition from certain patients and to the power of emotional considerations in guiding decisions about nutritional support. As with many demented elderly patients, it seems that quality of life might in fact be improved by removing feeding tubes.

(iii) Anorexic patients

Since the nineteenth century, numerous anorexics or ‘fasting girls’ have been force-fed while in institutional care. Their carers no doubt feared an institutional death but also, in many instances, recognised the importance of intimidation in tackling recalcitrant patients (Brumberg, 1988). From the 1980s, some bioethicists began to argue that it was only permissible to force-feed an anorexic patient if his or her physical condition posed an imminent threat to life (Dresser, 1984). In Britain, the Mental Health Act 1983 supported compulsory treatment when the physical health or survival of an anorexic appeared seriously threatened by food refusal. In 1993, one sixteen-year-old girl unsuccessfully applied to the Court of Appeal in Britain to claim her right to refuse food. As one psychiatrist noted at the time, the idea that anorexics had to be fed was bound up with images of anorexia as a conscious choice rather than a serious illness (Tiller et al, 1993). Ultimately, in the same year, three judges ruled that the patient did in fact have the right to refuse force-feeding. A new ruling stated that doctors needed to obtain a court order (with the patient’s views well represented) before resorting to feeding (Anon, 26 October 1993).

The death of anorexic patient Nikki Hughes in 1996 in many ways further reinforced inclinations to artificially feed such patients. Hughes had sought a European Court of Human Rights ruling to stop her doctors from feeding her (Anon, 4 August 1997). Critics debated whether anorexics had truly lost their mental competence and right to self-determination. Were judgements of ‘incompetence’ based on an assessment of the capacity of an anorexic person to make decisions, or on a description of that individual as a whole? After all, many anorexics passed examinations and worked in demanding jobs. Very few seemed to be at death’s door (Draper, 2000). One critic commented that force-feeding crushed the patient’s will, destroying who the patient was – the antithesis of what therapeutic treatment was meant to be (Lewis, 1999). But others, including Simona Giordano, suggested that compassion should overrule autonomy, maintaining that anorexics should be fed for their own sake (Giordano, 2003).

In America, the situation was far from clear-cut. Individual states developed different criteria for deciding whether anorexic patients should be fed against their will. The American Psychiatric Association developed a model in 1983, but this was not completely adopted by any state (Griffiths and Russell, 1998). Those opposed to the excesses of medical paternalism insisted that the practice of force-feeding anorexics was outdated and inappropriate (Rathner, 1998). But the fact remained that anorexia could be seen as a form of (predominantly female) deviance, and the use of feeding tubes as a way to regain control over the body of a recalcitrant patient (Orbach, 1986). It seems apparent that the policies and clinical approaches that emerged towards anorexics from around the 1980s differed from the treatment of groups such as the comatose. Autonomy tended not to be granted to anorexic patients, resulting in a number of court cases in which patients strove (often unsuccessfully) to assert their bodily rights. It seems likely that attitudes towards the feeding of anorexics were bound up with broader opinion on anorexia itself as somehow deviant, as well as attitudes towards youthfulness and femininity.

Denying the Right to Starve in Prisons

Although the force-feeding of hunger-striking prisoners is often viewed as somehow inconsistent with standard clinical norms, the imposition of feeding in fact appears fairly consistent with the management of many elderly, infant and anorexic patients. Policies of enforced treatment emerged even within a broader framework that privileged autonomy.Footnote 23 Undoubtedly, the means by which hunger strikers are force-fed – with a paraphernalia of tubes and restraints, accompanied by verbal intimidation – adds an element of discipline absent from clinical encounters. The disciplinary tendencies of the modern prison system and, in particular, of prison medicine itself promote far more hostile encounters between doctor and patient than would occur in the clinic (Foucault, 1977[1975]; Sim, 1990). Nonetheless, prisoner force-feeding needs to be conceptualised in terms of broader social and clinical aversions to overseeing starvation, rooted in particular emotional attitudes towards certain groups (rather than entirely in clinical need).

Although the World Medical Association declared prisoner force-feeding to be unethical in 1975, force-feeding policies were not entirely banished from prisons. In many ways, the 1980–1981 Maze Prison hunger strikes in Northern Ireland stand apart as an instance in which patient autonomy was granted. The British government stood by its conviction, announced in 1974 by then Home Secretary Roy Jenkins, that it would no longer force-feed IRA prisoners.Footnote 24 However, the decision to let ten prisoners starve to death was predicated more upon Prime Minister Margaret Thatcher’s stubborn determination not to yield to political protest than upon any serious consideration of medical ethics.Footnote 25 Internationally, convict prisoners who went on hunger strike continued to be force-fed. Justification was typically sought in the idea that state bodies have a vested interest in preventing prison suicides and deaths. Nonetheless, if we consider that compassionate attitudes towards the terminally ill, mentally disabled or comatose were inflected by factors such as age and vulnerability, then it seems equally likely that public opinion on hunger-striking criminals could be marked by disdain. Most debates on nutritional support were also concerned with quality of life should a patient remain alive. However, prisoners are not meant to dictate their living conditions or take steps to improve their lives; they are supposed to serve prison sentences as part of a punishment involving social exclusion.

Unlike Britain and Ireland, America did not have to deal with large groups of politicised hunger strikers until the twenty-first century (with the exception of a relatively small number of suffragette prisoners). The World Medical Association’s 1975 declaration was primarily intended to safeguard politicised prisoners in conflict zones. Convict prisoners remained in a more precarious position. Given the relative absence of political considerations, American debates on hunger strikes in the 1980s played out in light of ethical ideas about the right to autonomy and self-determination. In the early 1980s, incidents recurrently made the American headlines. In 1982, a state supreme court ordered prison doctors to intravenously force-feed hunger-striking prisoner Thomas Clauso; in 1984, another court ordered the feeding of convicted rapist Joel Caulk (Anon, 22 February 1982, 26 May 1984). Whether or not prisoners had a basic right to privacy was highly contested. Ideas that prisoners had basic human rights had certainly gained currency throughout the 1970s and 1980s due to the campaigns of prison welfare activists. However, American courts reached contradictory conclusions on the matter of force-feeding: sometimes supporting bodily autonomy, sometimes supporting the state’s interest in preserving life. To complicate matters further, disagreement existed about whether or not hunger strikers were suicidal. Most claimed to be willing to end their lives if necessary, but few showed a clear indication that they definitely wanted to die (Ansbacher, 1983).

One of the most high-profile cases involved Mark Chapman, convicted of murdering ex-Beatle John Lennon. In 1981, Chapman announced that he intended to stop eating to draw attention to all the starving children in the world. After Chapman had refused food for seven days, prison medical staff declared him mentally ill (as he seemed determined to kill himself) and transferred him to a psychiatric unit. A New York appellate court similarly classified Chapman’s behaviour as suicidal and ultimately prioritised the state’s interest in preventing suicide over the prisoner’s right to privacy. Force-feeding was authorised. Nonetheless, the court failed to explain why Chapman’s behaviour could be considered suicidal, or even whether his primary intention was death. Chapman’s assertion that he was willing to starve to death did not necessarily imply an intent to die (Jamieson, 1985).

Intriguingly, in 1985, a group of American medical professionals decided that providing food and water constituted basic care in prisons, not medical treatment. This went against trends in discussion of the comatose, which (as highlighted above) had formally defined nutritional support as medical treatment. In turn, it implied that prisoners did not possess the same basic right as the comatose to refuse medical treatment, pointing to further inconsistencies in feeding policies. In April 1978, a prisoner had been committed to the Washington State Department of Corrections for assault. In December 1983, he went on hunger strike to protest against what he claimed to be a lack of response by the prison administration to an assault committed upon him by other inmates. The prisoner was transferred to a nearby community hospital where he initially accepted intravenous fluids but later refrained from taking even water. While being fed, he would pull out the feeding tubes. Medical staff decided to restrain him. The prisoner filed for a permanent injunction forbidding force-feeding, but the County Superior Court rejected it on the basis that the state had a duty to provide all reasonable life-saving medical treatment. In February 1985, the prison administration appointed a special commission to examine the problem and recommend a policy. Four medical professors from the University of Washington reported that “in our view, food and water should not be conceptualised as medicines, nourishment should not be reckoned discipline, continued life should not be viewed as a form of punishment. At bottom, the issues are not ones of a medical nature but rather are administrative fulfilments of the State’s clear duty in preserving the life of this prisoner”. The policy decided upon involved giving the inmate two choices: being force-fed (under restraint) or resuming eating. The inmate chose the latter (Miller, 1986–1987).

Perhaps one of the most complex and well-publicised incidences of prisoner force-feeding arose in Britain. Although the Home Office formally announced the end of prison force-feeding policies in 1974, Moors murderer Ian Brady was force-fed throughout 1975, raising questions in the House of Commons.Footnote 26 Throughout his imprisonment, Brady continued to hunger strike. It was only in 1995 that Judge Thorpe formally overruled the long-standing precedent of Leigh v. Gladstone (1909), which had supported prison medical staff in providing medical treatment (specifically force-feeding) to hunger strikers. With reference to the recent Tony Bland decision, Thorpe maintained that prisoners, like the comatose, had a human right to decide whether or not to be fed: a ruling that dismissed ideas of the ‘sanctity of life’ and the state’s interest in preserving the lives of prisoners (Dolan, 1998). To complicate matters further, a renewed hunger strike by Brady – during which he was fed four times daily by nasogastric tube over a 150-day period between 1999 and 2000 – introduced a new factor into the debate: the Mental Health Act 1983 (Williams, 2001). In a court hearing, Brady argued that force-feeding could legally be used upon him only to treat the mental disorder from which he was suffering, not simply for going on hunger strike. Ultimately, the decision to continue force-feeding was deemed lawful, rational and fair given that, according to the court, Brady’s hunger strike was a “florid example of his psychopathology in action”: a manifestation of his narcissism, self-importance, desire to control and tendency towards confrontation (Anon, 2000b). Public attitudes were unsympathetic. Like many hunger strikers internationally, Brady was denied a right to self-determination and autonomy. While the mental health of convicted murderers and rapists was taken into account when discussing their ability to make rational decisions, other factors, including the state’s interest in preserving life, also weighed against prisoners. Undoubtedly, public and judicial perspectives on the rights of prisoners such as Brady to die were inflected by negative attitudes towards the murderous acts which had resulted in their imprisonment.

Conclusion

Since the 1970s, complex and contradictory policies have emerged relating to the rights of patients to be allowed to die naturally from the effects of withdrawing or withholding nutritional support. Discussion of the matter arose in the context of broader debates on the technologisation of death, patient and prisoner rights, euthanasia and patient autonomy. It is often assumed that a steady transition took place in the post-war period from medical paternalism to patient autonomy: patients gradually asserted control over their own bodies and, in some instances, secured a right to decide when and how to die. The case study of nutritional support demonstrates that this transition could be complex and piecemeal. Decisions to feed or to let starve were often rooted in emotional considerations based on perceptions of, and attitudes towards, particular patients. Court decisions often supported the rights of the functionless comatose to be allowed to pass away peacefully following the withdrawal of feeding tubes. Indeed, these have proven to be among the most cited examples of patients declaring their autonomy. Yet many physicians and family members felt uncomfortable with letting infant or elderly patients starve. They resorted to using nutritional support despite little evidence that such intervention brought significant benefit or likelihood of improvement.

Many anorexics and prisoners have been entirely denied a right to claim bodily autonomy, in part due to negative attitudes towards them. Force-feeding remained relatively commonplace in prisons, although its use was restrained by the need to secure court orders. Whereas the fates of infants and the elderly were more likely to be considered compassionately, the same could not always be said of prisoners, who were denied the basic right to refuse medical treatment. Indeed, as critics have recently suggested, whether or not a hunger striker’s decision to refuse food is respected has often depended upon a confluence of factors, most notably the state’s perception of particular prisoners, based on class, ethnicity and other determinants (Garasic and Foster, 2012). It seems that the same holds true for patients more generally. Issues such as the force-feeding of prisoners need to be contextualised within a broader biomedical aversion to letting patients starve, even if the re-assertion of control over a starving patient’s body clashes with a broader imperative to grant importance to patient autonomy. It now seems clear that feeding technologies and nutritional support are useful for many, but not all, patients. If ideals of patient autonomy are to remain privileged, bioethicists and clinicians would benefit from co-ordinating attitudes towards nutritional support and artificial feeding across all patient groups rather than relying upon policies that emerged in a piecemeal and inconsistent fashion. However, the emotional barriers which might impede this process need to be acknowledged and considered.