1 Introduction

This paper provides a systematic account and a ‘harms framework’ for understanding how artificial intelligence (AI) technologies could damage the interests of nonhuman animals (hereafter ‘animals’). Technology has sometimes greatly benefitted animals, such as via modern veterinary medicine or agricultural machines that relieved ‘beasts of burden’ (Linzey & Linzey, 2016). Yet, technology has also profoundly harmed nonhumans. Construction of the Chicago stockyards and their assembly-line systems in the 1800s, for example, enabled the mass slaughter and processing of animals (Blanchette, 2020; Sinclair, 2002). Around the 1950s, specialised factory-farming technologies like sow stalls, battery cages, and automated sheds further amplified intentional harm to farmed individuals. The Chicago stockyards also soon inspired Henry Ford’s assembly-line automobiles, the modern descendants of which unintentionally kill and injure millions of animals annually (Ree et al., 2015).

Today, in the twenty-first century, AI has significant potential to harm animals. AI refers to digital technologies that perform tasks associated with intelligent beings like classifying, predicting, and inferring (Copeland, 2022). AI’s growing power owes much to increasing data from, for example, the digital economy, online life, and manifold and integrated sensors in the environment and on or in human and animal bodies (e.g. as wearables)—the so-called Internet of Things or IoT. Its power also stems from modern machine learning (ML), including machine vision, natural language processing, and speech recognition.

In ML, a system is trained on data from which it learns to make new classifications and inferences beyond its explicit programming. In this paper, we shall side-step human-level or general AI (and AI that is arguably sentient), concentrating instead on narrow (and non-sentient) AI that is developed and used for specific purposes (Russell, 2019), which is arguably of more pressing moral concern than the emergence of very human-like AI.
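
To make the contrast with explicit programming concrete, the following minimal sketch (in Python, using scikit-learn) induces a classification rule from a handful of labelled examples rather than from hand-coded rules. The features, labels, and data are invented purely for illustration and do not come from any real system.

```python
# A minimal sketch of supervised ML: the classification rule is learned
# from labelled examples rather than written as explicit if-then logic.
# Features (weight, activity score) and labels are invented for
# illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [weight_kg, activity_score]; label 1 = healthy, 0 = unwell
X_train = [[70, 0.9], [65, 0.8], [80, 0.85], [50, 0.2], [55, 0.3], [48, 0.25]]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# The trained model can now classify a case it was never explicitly
# programmed to handle.
print(model.predict([[60, 0.5]]))
```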

Some existing technologies used to manage animals, such as automation in chicken sheds and dairies, may be augmented by AI. Moreover, some robots, drones, and vehicles incorporate AI in ways that may benefit or harm animals. Often the intention in developing and using AI is to benefit animals. For example, smart home applications for animal companions (Bhatia et al., 2020) and smart agriculture (Makinde et al., 2019; Neethirajan, 2021b) are often marketed as boons for animal welfare through better monitoring and control of the conditions in which animals are kept. Another use that might benefit animals is AI image recognition to help detect illegal wildlife trafficking (O’Brien & Pirotta, 2022). Yet, as we show in some detail, AI can also act—both independently and with existing technologies—to create and amplify harms to animals (Sparrow & Howard, 2021; Tuyttens et al., 2022).

A tendency exists to see advances in AI as inevitably bringing ‘improvements across every aspect of life’ (Santow, 2020). For example, autonomous machine intelligence can seem more objective and less prejudiced than human intelligence. Nonetheless, society is increasingly recognising AI’s potential for ill (Pasquale, 2020; Tasioulas, 2022; Yeung, 2022). Despite this, the burgeoning scholarship in AI ethics (Bender et al., 2021; Buolamwini & Gebru, 2018; Eubanks, 2018; O’Neil, 2016), while vital and sometimes courageous in critiquing Big Tech power and algorithmic injustice, has largely ignored animals. While some ethicists, including Peter Singer (Singer & Tse, 2022), have recently begun to correct this oversight (see also, e.g. Bendel, 2016, 2018; Bossert & Hagendorff, 2021a; Hagendorff, 2022; Owe & Baum, 2021; Ziesche, 2021), dedicated work on AI and animals is relatively rare.

This paper’s systematic account of animal harm helps address that gap by setting out the breadth of contexts and plurality of ways in which animals may be harmed by AI. Drawing on the work of animal scientist David Fraser (Fraser, 2012), we develop a harms framework that includes intentional, unintentional, proximate, and more distant impacts of AI. While we do not propose specific ethical or legal responses, the framework provides a comprehensive and clear basis for crafting design, regulatory, and policy responses for animals.

The paper proceeds as follows. Section 2 outlines why concern for animal harms is warranted despite a general neglect of animals in AI ethics scholarship, explains the plural range of harms animals can arguably experience, and introduces a practical five-part harms framework or typology that recognises different types and causes of harm to animals from AI. The framework includes intentional harms that are legal or condemned, direct and indirect unintentional harms, and foregone benefits. Section 3 then uses the framework to identify and illustrate actual and possible AI harms to animals in each of the five categories, based on a narrative review of the literature. Section 4 concludes by considering implications of our framework and suggesting directions for further research.

2 Understanding Harms to Nonhuman Animals from AI

In this section, we explain why we need to investigate AI’s impact on animals and outline the plural range of harms animals can experience. We also construct our practical framework of five different types and causes of harms to animals, which we subsequently apply to AI.

2.1 Why Investigate Harms to Animals?

There are three key reasons for investigating AI’s risks for animals: concern for animals in and of themselves as opposed to their instrumental use by and for humans; concern to understand and respond in a systemic way to the entangled and mutual vulnerabilities of humans, animals, and our shared ecosystems; and the lack of attention to animals in AI ethics discourse. We discuss each in turn, though we note that space forbids a fuller treatment of complex issues, like that of moral status.

2.1.1 Concern for Animals Themselves

Billions of animals live in the ‘wild’ or near human settlements and billions more are directly used by humans (Fraser & MacRae, 2011). The biomass of ‘livestock’ today is up to 15 times the combined weight of all wild mammal species (Bar-On et al., 2018). Numerous ‘owned’ animals are also found in zoos, sanctuaries, circuses, entertainment, households, and science labs (DeMello, 2021). Anthropogenic (human-caused) harms to wild and domestic animals are ubiquitous, and their suffering dwarfs human suffering both numerically and in magnitude (Sebo, 2022a).

Animal welfare and treatment are matters of rising global concern. In the 1960s and 1970s, intensive animal production technologies that harmed animals by inflicting severe confinement, pain, and distress drew powerful critiques from seminal figures like Ruth Harrison and Peter Singer (R. Harrison, 1964; Singer, 1995). From this growing concern, the contemporary animal protection movement was born, along with an expanding field of animal studies in academia. Many countries have agencies for animal protection. The intergovernmental World Organisation for Animal Health (OIE) declares that ‘the use of animals carries with it an ethical responsibility to ensure the welfare of such animals to the greatest extent practicable’ (World Organisation for Animal Health, 2021, Article 7.1.2).

These developments recognise that sentient animals and their interests matter in themselves. To say that an animal has intrinsic moral status or worth is to say we have duties to them for their own sakes (Jaworska & Tannenbaum, 2021). Another way to put this is to say that (many) animals are morally considerable (Palmer, 2010, p. 10). Now, for many thinkers, a sufficient reason for holding that animals have intrinsic moral status is that they have a wellbeing and interests related to their sentience or ability to experience things, such as suffering and pleasure (Palmer, 2010, p. 11).

For example, some philosophers hold that if sentient interests like avoiding suffering morally matter when they occur in humans, they should also matter morally when they occur in nonhumans. To believe otherwise is to display inconsistency and prejudice (Singer, 1995). The so-called argument from species overlap—previously called the ‘argument from marginal cases’ (Pluhar, 1995)—has played a key role here in animal ethics (Horta, 2014). Very briefly and roughly, this argument (or a version of it) says that since we take the interests of human beings with mental abilities similar to sentient nonhumans—similar due to, e.g. severe cognitive impairment—to matter morally, consistency demands that we also take the interests of sentient nonhumans to matter morally (or alternatively, to matter equally). Although we cannot explore this argument here, it is important to note that it has proven difficult for opponents to undermine and has convinced many thinkers.

Moral theory has also been deployed to argue that animals ethically matter. For instance, some philosophers argue that utilitarian thinking applies to sentient animals (Sebo, 2022b), some argue that Kantian and rights thinking apply to animals (Korsgaard, 2004; Regan, 2004), while still others argue that a virtuous person would extend virtues of, say, benevolence and justice beyond humans to sentient nonhumans too (Hursthouse, 2011). While there are key differences between these theoretical approaches to animals, and while theorists may disagree on animals’ precise moral status or (as some say) their moral significance (Palmer, 2010, p. 10), there is nonetheless convergence among animal ethicists that sentient animals have intrinsic moral worth.

As noted, moral considerability for animals is often tied to a capacity for sentience. Sentient experiences may include feeling, sensation, emotion, and desire (Birch et al., 2020; D. R. Griffin, 2013; Marino & Colvin, 2015). The influential 2012 Cambridge Declaration on Consciousness states that mammals, birds, and octopuses possess neurological substrates for consciousness or sentience (Bekoff, 2012; Low et al., 2012). By contrast, simpler animals, like nematodes and jellyfish, are thought to lack these substrates. Recent studies suggest that fish and crustaceans can feel pain (Crump et al., 2022), and there is ongoing debate about insect sentience (Giurfa, 2021).

Sentient beings can be harmed and benefited in ways that bear on our moral duties to them. (We discuss the nature of harms and interests below.) Some philosophers, however, also argue that non-sentient animals can both be harmed and are morally considerable (Attfield, 2016; P. W. Taylor, 2011). In this paper, we shall focus our examples on harms to sentient animals, since this is the more commonly accepted view. (In the next subsection, however, we discuss shared impacts on humans, animals, and the environment, which includes beings not generally regarded as sentient.)

Some philosophers have argued that our duties regarding animals, such as the duty to not make them suffer without good reason (Engel, 2001), are not duties owed to the animals themselves because of their intrinsic moral status but are rather duties owed to human beings for instrumental reasons. Immanuel Kant, for example, famously argued that cruelty to ‘irrational’ animals is only wrong because it increases the likelihood of cruelty to ‘rational’ human beings (Gruen, 2021). However, the view that animals have intrinsic moral status rather than merely instrumental value is now accepted by most philosophers and scientists, including for reasons we touched on above.

Nonetheless, in practice, humans still often incline toward anthropocentrism (Steiner, 2010). Anthropocentrism can be understood as the view that human interests are far more important than even significantly more urgent animal interests and/or that animal interests can be often and largely disregarded (Santiago-Ávila & Lynn, 2020, p. 6). Indeed, anthropocentric societies typically treat animals as exploitable commodities, harming and disposing of them as they please like ‘natural resources’ (Wadiwel, 2015). This widespread treatment facilitates dismissal of, ignorance about, and downplaying of animal harms, which is further exacerbated by our vested interests in the exploitation. We shall argue below that we need to be alert to anthropocentrism being reproduced in AI technologies, especially where the development and application of AI is controlled by animal use industries.

In the following discussion, we assume nonhuman animals are morally considerable. However, we do not attempt to specify their precise moral significance. We assume that people will differ on moral significance even when they agree that animals are morally considerable. Because our harms framework is not contingent on a more precise moral position, people with differing views about animals’ moral significance can equally employ it. In addition to the fact that sentient animals have intrinsic moral status, a further reason for concern about animals flows from recognition of the deep entanglement of animal, human, and environmental vulnerability to harm.

2.1.2 Human and Animal Entanglement

Human and animal lives are entwined (Sebo, 2022a), and their respective harms often go together (Gruen, 2014b). For example, sick and stressed animals can transmit diseases to humans (Centers for Disease Control & Prevention, 2022), and both wild and domestic animals have emotional value for many people and inform cultural identities in many indigenous societies (Demuth, 2019; Fuentes, 2012; Ma et al., 2020). Other examples of entanglement include the fact that domestic violence perpetrators are frequently violent to both human and nonhuman family members (N. Taylor & Fraser, 2019) and that poachers may harm animals and their human protectors at the same time (Nandutu et al., 2021).

Additionally, various socioeconomic and political developments can simultaneously harm humans and animals. Factory farms, for instance, are frequently located in low socioeconomic districts which suffer severe air and water pollution, while migrant workers endure harsh conditions in meat-processing plants (Stoddard & Hovorka, 2019). Harming animals can damage shared environments and ecosystems upon which all depend (Crary & Gruen, 2022; Kemmerer, 2015).

Conversely, protecting animals can also protect humans from harm. As the interdependence of human and nonhuman health has become clearer, some scholars have recommended classifying human and nonhuman health as a ‘universal good’ to be generally protected (Degeling et al., 2016). Technology is one important way to both harm and benefit humans via its impact on animals (Lupton, 2022). For example, emerging technology might sever mutually beneficial human–animal connections (Cornou, 2009) or alternatively enhance human–animal relations (Mancini, 2011). For these reasons, entanglement with human wellbeing augments the case for considering animal interests when examining AI.

Lately, social scientists have begun to reveal close connections between the anthropocentric dismissal of nonhuman interests and the perpetuation of human-directed prejudices (Costello & Hodson, 2010). Some scholars, for instance, argue that anthropocentric denigration of nature helps culturally and politically to justify inferior treatment of women and racialised, indigenous, and lower-class people who are located on the ‘nature’ side of a constructed binary division between human and nonhuman (Adams, 2015; Kim, 2015; Ko & Ko, 2017). Other researchers suggest that denigration of animals is linked to the dehumanisation of people by way of an underlying social dominance orientation (Dhont et al., 2014). Though sometimes controversial, such views may lead to a fuller appreciation of how human-inflicted harms to animals come about.

As a result of deep conceptual and structural entanglements in the treatment of humans, animals, and/or the environment (including plants, less sentient animals, and ecosystems more broadly) (Crary & Gruen, 2022, p. 130), some approaches reject the notion that we can easily separate intrinsic and instrumental reasons for caring about animal harms (Sebo, 2022a). Rather than taking the moral considerability of animals as essentially pitted against human interests, these approaches emphasise the reality of shared human and nonhuman interests, experiences, and vulnerabilities to harm and exploitation.

Eco-feminists, for example, have stressed that animal harms often arise from oppressions and injustices that also have human victims, such as misogyny, racism, and neoliberalism (Adams & Gruen, 2014a, 2014b). This creates an ethical imperative to ‘work to identify political and economic mechanisms… that causally explain the linked disdain for nature and the subjection of women and members of overlapping, often racialized, outgroups… calling for a restructuring of our relationships with animals, the rest of nature, and fellow humans’ (Crary & Gruen, 2022, p. 130). Accordingly, it is essential to identify, understand, and respond to shared harms in ways that are holistic and that address multiple oppressions of multiple human and nonhuman beings. The approach also emphasises solidarity against such broadly unjust systems.

To be clear, not all nonhuman harms harm humans. But many do (Sebo, 2022a). Therefore, in conceptualising harms to animals from AI technologies, it is important to look for patterns of harm against humans, animals, and the environment. In general, we can expect that technologies that harm humans will often also—directly or indirectly—harm nonhumans, and that technology that harms nonhumans will often harm (at least some outgroups among) humans too.

2.1.3 Ignoring Animals in AI Ethics Discourse

Such crucial moral ideas about human and nonhuman harm are largely missing from AI ethics discourse. While there is some literature on animals and AI in specific domains such as precision ‘livestock’ farming (Herlin et al., 2021; Tuyttens et al., 2022), wildlife conservation (e.g. Nandutu et al., 2021), and automated vehicles (Black & Fenton, 2021), much less attention has been devoted to the broader impacts of AI on animals and to shared human and animal harms, notwithstanding some exceptions (Bendel, 2016, 2018; Bossert & Hagendorff, 2021a; Owe & Baum, 2021; Ziesche, 2021). Although AI ethics guidelines from organisations and governments advocate a range of ethical principles including beneficence, nonmaleficence, and justice (Jobin et al., 2019), these are mostly formulated explicitly for humans (and to a limited degree the environment) or are anthropocentrically applied by default (Hagendorff, 2021).

Similarly, the ‘AI for good’ movement (Taddeo & Floridi, 2018) is predominantly human-centred. Even the AI for Good Foundation, which promotes technology to serve the United Nations’ Sustainable Development Goals (SDGs), focuses on AI for humanity (AI for Good Foundation, 2022). Although the SDGs rightly address environmental issues, they effectively and anthropocentrically cast animals as sustainable natural ‘resources’ (Sebo et al., 2022; Torpman & Röcklinsberg, 2021; Visseren-Hamakers, 2020). In general, mainstream AI ethics discourse construes animal wellbeing as merely instrumental to human wellbeing—and it usually overlooks even that crucial connection. Problematically, overlooking animals in discussions of AI encourages an impression that AI can only be innocuous in its impact on animals, which, as we demonstrate, is false. The fact that animals possess moral status, have interests entangled with human interests, and are often neglected in discourse that could affect design, policy, and legal responses to AI necessitates a comprehensive account of animal harm from AI.

2.2 What Is the Nature of Harm to Nonhuman Animals?

2.2.1 Understanding Sentient Animals

Some people might doubt whether we can know what harms animals because they can be so different from us in physiology, appearance, and behaviour. While many traditional societies (Serpell, 1996; Ulicsni et al., 2019) and some traditional farmers (Rollin, 2006) have deep understandings of animals from their lived experience, modern western scientists and philosophers often denied that animals are conscious or sentient (or that we could ever know if they are). Descartes and his followers famously argued that the cries of dogs being dissected without anaesthesia are the mere reactions of insentient automatons rather than expressions of genuine pain (P. Harrison, 1992). Scepticism about animal sentience subsequently infected science, along with a tendency to underestimate the complexity of animal minds (Rollin, 1989). Some people still condemn as ‘anthropomorphic’ the idea that animals have beliefs, emotions, and feelings (Kennedy, 1992).

However, radical doubt about the presence and general nature of animal minds and sentience is waning (Broom, 2014), and contemporary science takes animal minds and wellbeing seriously. Ethology and animal welfare science, for example, are often considered important for identifying and measuring experiences like distress and anxiety in animals (Broom, 1991; Duncan, 2005). Indeed, scientific welfare assessments of sentient animals are now advocated by influential global bodies like the OIE (World Organisation for Animal Health, 2021, Article 7.1.3). These days, people have ever-closer relationships with animal companions, and digital technologies like YouTube are capturing and exposing intimate details of animal lives that are sometimes new even to animal scientists (e.g. Searle et al., 2022). Moreover, some emerging digital applications promise to better connect us with living animals and their day-to-day experiences, through the translation and interpretation of animal languages (Interspecies Internet, 2021) and the provision of tools for interaction (Mancini, 2011).

2.2.2 Meaning of Harm

Despite increasing human knowledge of animal minds, identifying harms for animals caused by AI faces the further obstacle that ‘harm’ is a contested evaluative or normative concept. (This is the case for human harm too.) The concept of harm is not merely descriptive; it is also normative or evaluative because it concerns what is bad for an animal or what makes them worse off (Bruckner, 2019). What some people evaluate as making animals worse off, others do not. The fact that the notion of harm is contested (in addition to there being possible empirical disagreements about when harm is present) threatens to complicate or damage an account of AI’s harms for animals (Dawkins, 2021). Consequently, we must briefly discuss what harm for animals is or can be and how we use that concept in this article.

First, our notion of harm applies to individual animals, not species or collectives. That which may harm a species (e.g. insufficient breeding that leads to extinction) may not necessarily harm individual animals. Conversely, what harms individuals (e.g. factory-farming conditions for domestic animals) need not harm their species.

Second, our notion of harm is connected to the evaluative idea of interests, which is in turn connected to the evaluative ideas of wellbeing or welfare (Crisp, 2017; J. Griffin, 1986). Animals, at least sentient ones, have interests in not being harmed and interests in being benefited. Here, ‘interest’ means having an interest rather than taking an interest (Palmer, 2010, p. 19) (although having desires is one way of defining wellbeing—see next section). Harming sets back an animal’s interests and wellbeing, while benefiting an animal promotes their interests and wellbeing (Bruckner, 2019). Removing or preventing harms (e.g. pain), and providing positive benefits (e.g. pleasure), can be good for an animal. Some entities, by contrast, presumably have no interests at all. Nothing can harm or benefit such entities; they have no wellbeing. For example, it is hard to imagine a rock having an interest in not being kicked or smashed or an interest in being ‘reunited’ with other rocks.

Third, our notion of harm has a certain kind of moral relevance. Some philosophers have claimed that animals can be harmed, but only in the way that, say, cars or bacteria can be harmed (Hsiao, 2017). Letting a car rust or killing a bacterium with an antibiotic may arguably harm them, but this does not mean we would wrong a car or a bacterium were we to ‘harm’ them in those ways. In contrast, the notion of harm we adopt here is a notion that is relevant to the possibilities that humans can, via their technologies, wrong animals or violate ethical duties to them.

Philosopher Clare Palmer confines the meaning of animal ‘harm’ to harms ‘carried out by a moral agent or agents’ (Palmer, 2010, p. 23). By contrast, our definition of harm follows the broader notions of wellbeing and interests, although it is true that we are specifically interested in harms due to human use of AI. That said, we are also interested in ‘harms’ or damage to interests that might occur without human causation; indeed, this is especially relevant to our notion of ‘foregone benefits’ that result from not developing certain kinds of AI (see below). We also do not say that for an ‘act to harm…it must be wrong’ (Palmer, 2010, p. 23): whether a harmful act is wrong depends on context and on views about animals’ moral significance.

2.2.3 Theories of Wellbeing

We noted above the problem of disagreement about what harm is. Such disagreement is reflected in philosophical theories concerning interests and wellbeing (J. Griffin, 1986). Here, we must distinguish between an animal’s ultimate (or non-instrumental) interests and their instrumental interests. Ultimate interests refer to that which enhances wellbeing in itself, whereas instrumental interests refer to that which does not benefit the animal directly but instead promotes an animal’s ultimate interests. For example, it might be instrumentally good, and in an animal’s interests, to receive an antibiotic; but it is (one might say) the restoration of health or the removal of suffering due to the antibiotic that is ‘ultimately’ (Crisp, 2021) good for the animal (see next section).

There are three well-known theories of ultimate harm and benefit. Hedonism locates ultimate interests solely in pleasure and pain. Under the label ‘pain’, a hedonist might also include negative experiences like distress, fear, anxiety, loneliness, frustration, and boredom. Under ‘pleasure’, a hedonist might include positive experiences like happiness, joy, and contentment. Fleeting pain may not rise to the level of a harm (Palmer, 2010, p. 23), whereas severe and prolonged pain (suffering) and the comprehensive absence of pleasure can seriously undermine wellbeing, and, at the limit, make life barely or not worth living (Mellor, 2017).

Desire theory locates ultimate interests in desires (or preferences) and their fulfilment. Sentient animals have a range of wants and motivations, sometimes partly furnished by their genetic and evolutionary inheritance (Rollin, 1992). These desires can be satisfied, unsatisfied, or thwarted. Momentary weak desires may not figure in wellbeing, but other desires do. Failure to satisfy stronger desires and/or a greater number of desires will, on this theory, constitute more severe harm to the animal.

Objective list theory locates interests not entirely in pleasures, pains, and desire satisfaction but also (or rather) in various states or activities, such as bodily integrity, health, play, social interaction, emotional expression, and control over one’s environment (Nussbaum, 2006). The more these valuable activities are missing from a life, the greater the setback to the animal’s interests. Note that on any of these three theories of wellbeing, harm can result from the absence or the deprivation of certain positive things (Green & Mellor, 2011), as well as from the presence of certain negative things.

A pluralist about wellbeing may embrace one or more of these theories of ultimate interests (Lin, 2014). Furthermore, different moral theories can variously accommodate different conceptions of wellbeing. For example, many consequentialists (who determine right action purely according to actions’ consequences) favour hedonistic or desire-based accounts of interests. Deontologists, care ethicists, and virtue ethicists could potentially adopt any one of these theories or else a more pluralist position on wellbeing.

We can now show how an important kind of disagreement about AI’s harms might arise. Imagine an AI system that promotes close confinement for a group of animals. Suppose the system, which monitors and cares for the animals, provides the captives with many pleasures and ensures little pain. Assume that despite having many pleasures, these individuals lack the opportunities for a range of activities, autonomous choices, and relationships enjoyed by their free-living cousins. Here, a hedonist might praise this AI system as fulfilling the animals’ interests, whereas an objective list theorist might condemn it as impoverishing their wellbeing.

The good news is that agreement between people with opposing conceptions of animal interests or wellbeing may often be possible in practice. For even when we disagree at the level of ultimate interests, we may agree at the level of instrumental interests. To illustrate, imagine now that the above AI system (which severely confines animals) substantially increases their distress, provides very few pleasures, and disallows a range of activities and natural behaviours. Although the different wellbeing theorists have different reasons for considering this confinement harmful, they nonetheless may agree that it is very harmful indeed.

Such practical agreement could apply to other conditions or forms of treatment. For example, animals may have ultimate and/or instrumental interests in obtaining food and keeping territory, exercising autonomous control over their lives and environment (Špinka, 2019), and maintaining specific social relations and groups (Gruen, 2014a). Wellbeing theorists might converge in their appraisal of these conditions and treatments.

Nevertheless, some disagreements can prevent agreement at a more practical level. We already saw this with respect to some forms of confinement. Another important example is death, which also raises difficult philosophical issues (see, e.g. Višak & Garner, 2016). Some thinkers deny that death itself (not just the manner of dying) is a harm (Belshaw, 2015), perhaps because animals, unlike humans, lack future-oriented desires which death could leave unsatisfied. Yet, others believe that death (while sometimes good for an animal afflicted by suffering) can often be a very severe harm (Yeates, 2010), either because some animals actually do have future-oriented desires (Singer, 2011) or because death deprives an individual of important future positive interests. Hedonist, objective list, and even desire theorists can (though need not) subscribe to this latter view about lost futures (Palmer, 2010, p. 134).

We are now ready to introduce our five-part practical harms framework. The framework allows us to classify and capture the many ways in which AI might create new harms, or amplify existing harms, for animals. It is intended to encompass all the types and causes of harms outlined in the sections above. Some readers may not be convinced by particular harms we identify, depending on which account of harm they prefer. We have chosen not to take a stand here on these debates, preferring a capacious approach that can support a range of views on why animals matter and how they can be harmed. As we stressed, when it comes to practical application, those with different views will still quite often agree on practical harms. On less tractable questions, our framework will assist in pinpointing issues on which further debate, discussion, and research are needed.

2.3 Harms Framework for Animals and AI

Animal harms have very many causes. Many setbacks to wellbeing occur ‘naturally’, without human involvement, and include disease, injury, disability, predation, thirst, and starvation in the ‘wild’. However, many harms to animals are anthropogenic—inflicted or caused by humans. To make better sense of these possible harms, we propose the following framework, adapted from animal welfare scientist David Fraser’s typology of anthropogenic animal harms (Fraser, 2012; Fraser & MacRae, 2011; see also Quain et al., 2018). The framework is summarised in Table 1 together with key illustrative examples of each harm, which are discussed in greater detail in Section 3.

Table 1 A practical framework for considering animal harm from AI

2.3.1 Intentional Harms

While AI is not generally designed to intentionally harm humans—autonomous killing machines being a notable exception (Noone & Noone, 2015)—deliberate animal harm is already routine and entrenched. Intentional harms are often inflicted on animals for purposes such as food and fibre production, scientific research, entertainment, and companionship (Fraser & MacRae, 2011). Many intentional harms, including confinement, husbandry procedures like tail-docking, and slaughter, are legal or socially accepted, while others such as wildlife trafficking and violence against companion animals are generally socially condemned and often illegal. AI can be designed or adopted by humans who harm animals to pursue their goals more effectively. We therefore distinguish AI-facilitated intentional harms that are currently socially accepted and generally legal, from uses and abuses of AI that cause harms that are not socially accepted and are often illegal.

2.3.2 Unintentional Harms

Unintentional harms can accompany activities pursued for other reasons. Examples include night-time lighting in chicken barns to stimulate productivity, poor husbandry practices, and crop harvesting that injures or kills field animals. These harms can affect owned and wild animals and can even result from the intention to prevent animal harm (Quain et al., 2018, p. 4). Unintentional harms are direct (causally or temporally proximate) or indirect (causally or temporally more distant). Indirect harms are often overlooked, less predictable, and sometimes even greater than intentional harms (Fraser & MacRae, 2011).

AI might unintentionally but directly harm animals while performing its primary purpose. This may occur because of programming that ignores animals, or that privileges a particular aspect or view of animal welfare while ignoring others, or because of mistake or misadventure arising from unintended ways in which humans or animals use the technology. The ubiquity of animals, animal use, and (increasingly) AI makes such unintended harms ever more likely. The possible unintentional and indirect harms of AI are also manifold. While digital technologies are often perceived as largely immaterial, they have real but indirect material impacts on ecological systems (Brevini, 2022; Crawford, 2021a; Taffel, 2022). And while considerable attention has been paid to AI’s unintentional indirect effects on civility, democracy, and discourses that support human dignity, little attention has been paid to the possibility that animals too can be indirectly affected via these channels. That is, AI-enabled systems can cause epistemic and representational harms to animals as well as to humans.

2.3.3 Foregone Benefits

Our fifth category, foregone benefits from AI, concerns the absence of positive outcomes that might have eventuated but for certain decisions. We suggest these can plausibly be counted as animal harms as they could result in suffering, frustrated preferences, absence of valuable activities, etc. Further, some omissions may be culpable, and certain AI applications may be morally worth incentivising, such as alternatives to animal testing and improved veterinary medicine.

3 Using the Animal Harm Framework for AI

We now show how the five-part framework helps classify and illuminate AI-based harms to animals, including those that might be overlooked. It is impossible to be exhaustive since possible uses of AI are vast and always emerging. Our aim is rather to include illustrative examples for each category, which emerged from a non-exhaustive narrative literature review.

3.1 Intentional Harms: Illegal and Generally Condemned

The intentional design and use of AI for illegal activities that harm animals will of course generally be surreptitious. However, AI applications are certainly already being designed and used to perform a range of illegal or generally condemned behaviours more effectively, such as the use of drones in trafficking of illegal drugs (Shields, 2017). AI is therefore also likely to be used in criminal activities that harm animals, such as illegal wildlife trafficking. For example, given the rapid development and wide deployment of AI-enabled trackers and drones to monitor and protect wildlife for conservation purposes (see e.g. Dauvergne, 2020, pp. 53–69), it is highly likely that badly motivated actors will use similar technology to track animals for the illegal wildlife trade or ‘trophy’ hunting. People are already using drones to fly illegally close to wildlife such as marine mammals, causing harassment (e.g. Crumley, 2021; Rebolo-Ifrán et al., 2019).

AI abuse occurs when AI is intentionally used in deleterious (and illegal or ethically problematic) ways against its design purpose. Consider potential harms arising from data collection on the whereabouts and behaviour of protected wild animals. Cooke et al. (2017, p. 1206) provide a number of examples of hunters, fishers, and poachers seeking to hack telemetric data collected for wildlife protection purposes in order to hunt or trade animals, including Bengal tigers in India. Poachers in South African game parks have tried to hack into tracking technology and wearables used by rangers to monitor endangered species (see also Nandutu et al., 2021).

Similarly, AI that processes basic tracking information from smart devices designed to help people look after their companion animals can be used to observe human and animal habits and harm them both. Current versions of microchipping and tracking are intended to keep companion animals safe but can already be abused by ill-intentioned actors to track down and harm family members seeking refuge from domestic violence (Humphreys & Diemer, 2021). Technologies such as Apple AirTags for tracking companion animals and video interfaces for remote human–animal communication are increasingly integrated into AI systems, such as smart home applications and the IoT, whose data processing and analysis provide detailed behavioural pictures. The inclusion of animals in these applications may thus render both animals and humans more vulnerable.

Governments may also obtain access to AI-enabled protected wildlife tracking data collected by researchers for (arguably) illegal and illegitimate ends (Cooke et al., 2017; Meeuwig et al., 2015). For example, in 2015, an endangered great white shark was identified from acoustic data collected by wildlife conservation scientists and subjected to a ‘kill order’ by the Western Australian state government in order to protect public safety at the beach (Meeuwig et al., 2015). As the scientists wrote, ‘the animal’s presence in the area was only known because it had been tagged for science and there was no evidence that it had posed a threat to public safety’ (Meeuwig et al., 2015, p. 151).

In this case, the Western Australian state government had implemented an ‘imminent threat’ policy that meant great white sharks near bathing areas could be subject to a ‘catch to kill’ order, despite this species being considered endangered under national law and despite the state environmental authority rejecting lethal mitigation programs (Meeuwig et al., 2015). While such use of the data was considered justified by the government involved, it raises ethical and legal questions about whether data originally collected for a beneficent purpose (as permitted by environmental authorities) can then be used to harm the otherwise protected animals, especially if alternatives are available.

Finally, AI and data systems for animals may be hacked into by national or foreign agents to commit, for example, cyberespionage. As Greenberg puts it in an article about the recent hacking of a livestock app in the USA, ‘no app is too obscure to be a target for a determined adversary’ (Greenberg, 2022). Hacking could thus allow foreign powers to damage animal care (e.g. in farming) in belligerent operations.

3.2 Intentional Harms: Legal or Generally Accepted

AI designed to promote legal and widely accepted intentional harms to animals may be justified or may reflect unjustified anthropocentric attitudes (and so constitute misuse of AI). Consider automated vehicles programmed to ignore small animals (e.g. birds, lizards, rabbits) in favour of speed, efficiency, or avoiding property damage. The Moral Machine experiment (Awad et al., 2018) for self-driving cars showed that various cultures prioritise human over animal life in unavoidable collisions (though some are more ‘animal-friendly’ than others).

AI is being strongly promoted for ‘livestock’ farming. Many ordinary people agree that intensive animal production, although legal, severely harms animals. For example, factory farms restrict animal behaviour and autonomy while causing harms like negative experiences, unsatisfied desires, deprivations, and death. Fraser and MacRae observe that animal keeping or ownership can readily facilitate harm to sentient nonhumans (Fraser & MacRae, 2011). This feature is especially pronounced in the case of factory-farmed animals who may have lives that are impoverished or barely worth living (Singer, 1995). AI is now being used in intensive animal production (e.g. to control lights, temperature, opening and closing of gates) and increasingly to observe and collect data on the animals themselves (e.g. to check cortisol levels, pain, excitement) (Bao & Xie, 2022). Precision agriculture aided by AI has the potential to enable additional and more efficient intentional harms (as well as benefits) through expanding human control of animals. This could both magnify and further ‘lock in’ industrial animal farming harms.

Still, one aim of precision farming is to promote animal wellbeing (Neethirajan, 2021b). Here, AI systems with sensors on or around animals might monitor and respond to their health and welfare. For example, ML could detect coughing, appetite loss, and lethargy and trigger automated responses like medication or increased feeding, with the intention of largely removing humans from the process. While smart farming may sometimes improve wellbeing (or at least some dimensions of wellbeing like health) (see Buller et al., 2020), it also facilitates intensive farming practices aimed primarily at maximising productivity rather than enabling lives well worth living. It is important to remember that production of animal products for profit is precision livestock farming’s underlying and main driving goal.
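
To illustrate the kind of monitoring-and-response loop just described, here is a hedged sketch of an automated welfare system; the function names, thresholds, and sensor readings are hypothetical stand-ins for a trained model and real telemetry, not a description of any actual product.

```python
# Illustrative sketch (not a real system) of an automated welfare loop
# in precision livestock farming: a risk score computed from sensor
# data triggers an intervention with no human in the loop. All names,
# thresholds, and readings below are hypothetical.
def predict_illness_risk(readings: dict) -> float:
    """Stand-in for a trained ML model returning a risk score in [0, 1]."""
    score = 0.0
    if readings["coughs_per_hour"] > 10:
        score += 0.5
    if readings["feed_intake_kg"] < 2.0:
        score += 0.3
    if readings["activity_index"] < 0.4:
        score += 0.2
    return min(score, 1.0)

def respond(animal_id: str, risk: float) -> str:
    # Automated interventions here replace human judgement entirely,
    # which is precisely what raises the welfare concerns in the text.
    if risk > 0.7:
        return f"{animal_id}: dispense medication"
    if risk > 0.4:
        return f"{animal_id}: increase feed ration"
    return f"{animal_id}: no action"

readings = {"coughs_per_hour": 14, "feed_intake_kg": 1.5, "activity_index": 0.3}
print(respond("sow-042", predict_illness_risk(readings)))
```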

Although agriculture comprises the bulk of intentional harm to land animals, science causes great harm too (Bossert & Hagendorff, 2021b; Nikooienejad & Fu, 2022). As with farming, such harms may be comprehensive, encompassing negative experience, deprivation, thwarted desires, and death. In medical research, AI-enabled experimentation and analysis may help power an expansion in the design of new drugs (David et al., 2020), vaccinations (Arora et al., 2021), or treatments—all of which would need to be tested on animals in order to obtain regulatory approvals for use in humans. Less obviously, AI may also help design new organisms to be used as test subjects in research or for food and fibre production (Blackiston et al., 2021; Coghlan & Leins, 2020). While such applications may initially involve creating simple ‘organoids’ that lack sentience, future AI might help create organisms with neurological structures that could support sentience (Koplin & Savulescu, 2019).

A high-profile example of AI medical research that allegedly harms animals is that of Elon Musk’s company Neuralink, which experiments with brain–computer interfaces. At the time of writing, Neuralink was being investigated for violation of animal testing laws by the US Department of Agriculture, which the company denies (Levy et al., 2022). The experiments reportedly harmed monkey subjects, some of whom died. Neuralink’s denial of animal cruelty (Ryan, 2022) arguably overlooks the point that death may be considered a harm.

Intentional but legal and socially accepted animal harms from AI extend to conservation, which has historically inflicted harms on animals that are seen as damaging to ecosystems. Some ecologists are developing AI systems (e.g. with facial recognition) that identify and capture ‘feral’ animals and/or spray them with poison (Meek et al., 2020; Slezak, 2016). While these smart traps are designed to protect endangered animals and release non-target animals, the harm of death and suffering to both targeted and misidentified animals is real (Braverman, 2019; Marris, 2021), as is the social harm (distress, grief, etc.) caused to surviving group members (Gruen, 2014a).

3.3 Unintentional Direct Harms

Unintentional direct harms to animals can occur because AI is designed or used in a way that shows an ignorant, reckless, or prejudiced lack of consideration of its impact on animals, or because of mistake or misadventure in the way the AI operates in practice, often arising from how other humans or animals interact with it. We discuss each in turn below.

3.3.1 Ignorance of Direct Harm to Animals

As mentioned above, automated vehicles could theoretically be programmed to prevent roadkill but in practice may ignore impact on animals (Bendel, 2018). Sometimes unique animal appearances and behaviour may be overlooked. For example, during testing in Australia, Volvo discovered that their northern hemisphere-trained self-driving technology was ‘fooled’ by kangaroos and their unusual hopping gait (Zhou, 2017), putting the vehicles at greater risk of collisions than, say, would occur with European deer. Similarly, underwater automated robotic systems (see generally Braverman, 2019) designed to pick up rubbish (e.g. broken underwater sea cables), or to catch one species (e.g. in fishing), may mistakenly harm non-targeted species (e.g. octopuses on the sea floor, dolphins).

The growing use of AI-enabled drones and wildlife surveillance technologies (telemetry) is significantly and directly disrupting animals’ environments. For example, a trial of delivery drones in Canberra was stopped when the machines were attacked by nesting ravens who apparently saw the drones as a threat to their young or territory (Mannheim, 2021). Automated telemetry can likewise disrupt animal behaviour. For example, it is well known that some nocturnal animals will change their habits to avoid the flashes of light emanating from automated cameras set up by wildlife scientists to capture their natural behaviour in their habitats (Caravaggi et al., 2020). Despite scientists’ best efforts to make attachable tracking devices unobtrusive, they may be unexpectedly disturbing. For example, a recent study was abruptly halted when a ‘mischief’ (family) of magpies collaborated to detach wearables (Potvin, 2022) they may have found uncomfortable or reminiscent of parasites (Crampton et al., 2022).

Proposals have also been made to deliberately harass birds with automated drones to prevent perching on buildings, sometimes with the claim that this is less harmful than alternatives (Schiano et al., 2022). More broadly, increasing tagging and tracking in the wider IoT could significantly disrupt animal activities, behaviour, and habitats. Individual unmanned aerial vehicles or swarms of them used to surveil and monitor ‘livestock’ might distress or even injure animals, particularly if the animals have evolved to fear predators in the air (Alanezi et al., 2022).

Smart devices are increasingly being used to keep domestic and zoo animals engaged (Webber et al., 2017). These are generally expected to increase animal enjoyment and activity but may also cause dysfunctional addictive and aggressive behaviour in animals—much as they can do in children, teenagers, and adults (Yang, 2022)!

3.3.2 Mistake or Misadventure in Operation

In deep learning (Russell & Norvig, 2010), an ML model’s ‘decisions’ are typically derived from processing of vast inputs in hidden neural layers, and the basis of the outputs may remain largely unknown or opaque. Due to these hidden layers, even the programmers of these deep learning systems may have no idea precisely why the models make a classification or inference. Consequently, harmful predictions made by this ‘black-box’ AI (Castelvecchi, 2016) can sometimes be difficult or impossible to detect and prevent. Consider a black-box AI system that provides too much or too little food or medicine to an animal in an automated environment such as a farm, zoo, or home without providing an understandable explanation for its actions (Miller, 2019). Such errors may only be discovered later (if at all) when animals get sick or sicker, by which time harm has been done.
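
As a hedged illustration of this opacity, the sketch below trains a small neural network on synthetic data to set a feed quantity. Its learned weights can be inspected in full, yet they offer no intelligible account of why any particular amount was dispensed. The task, data, and model are invented for illustration.

```python
# Sketch of the 'black box' problem: a small neural network 'decides'
# how much feed to dispense. Its learned weights are fully inspectable,
# yet they do not amount to an understandable explanation of any one
# decision. The task and data are synthetic, invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 4))  # e.g. weight, age, activity, temperature
y = X @ np.array([2.0, -1.0, 0.5, 0.1]) + rng.normal(0, 0.05, 200)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000).fit(X, y)

feed_kg = model.predict([[0.6, 0.4, 0.2, 0.9]])[0]
print(f"dispense {feed_kg:.2f} kg of feed")  # the action taken
print(sum(w.size for w in model.coefs_))     # hundreds of weights, no 'why'
```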

Relatedly, limited accountability mechanisms for minimising black-box opacity may compound harm. Where there is unclear assignment of responsibility or ineffective oversight of AI use (Reddy et al., 2020), the negative impact on animals may be greater. In automated farming, for example, total or partial removal of humans from farms poses unintentional risks. Farmers and stockpersons may have experience in reading signs of animal feeling, suffering, and ill-health which AI could miss (Werkheiser, 2018), especially if ML systems are not robust. This outcome is more likely than it might seem; we should not assume that AI will always be accurate and ‘objective’. Lack of robustness can occur, for instance, when ML is trained on unrepresentative or incorrectly labelled data about health and wellbeing, leading to unreliable outputs.

AI models can also be applied to target data for which they are not suitably trained (McGovern et al., 2022). Crucially, the real-world accuracy and actual benefit for animals of automated welfare monitoring, as opposed to their hypothetical or assumed potential, is largely unestablished (Tuyttens et al., 2022). Furthermore, there may be some animal signs (e.g. subtle facial and bodily expressions) that machines cannot accurately interpret. It is relevant that designing ML for reading human feelings has been dubbed pseudo-science (Crawford, 2021b). Similarly, claims that AI is a gamechanger in affective state analysis for animals (Neethirajan, 2021a) should be treated cautiously (Bos et al., 2018).

Future harm could result from AI operating with relatively high intelligence and autonomy in achieving its goals (Russell, 2019). Stuart Russell half humorously suggests that a robot chef which runs out of meat might decide to cook the cat (Havens, 2015). But something vaguely similar might occur and is worth pre-empting. For example, an advanced robot on a fruit and vegetable farm may decide to destroy small animals that enter the farm by ‘reasoning’ that they threaten the valuable produce.

A distinctive way that AI could amplify harm to animals is when algorithmically enabled recommender and feed systems on digital media platforms and search engines promote animal cruelty for entertainment. The Netflix series ‘Don’t F**k with Cats: Hunting an Internet Killer’ revolved around videos of brutal animal killings that went viral due to YouTube’s recommender systems (Bruney, 2019). This algorithmic propensity to promote performative violence against animals resembles the algorithmic propagation of online hate speech (Mathew et al., 2019). Such AI could induce copycat offenders who benefit from algorithms pushing troubling content and who may harm animals (and humans) in a ‘perverse desire for notoriety and fame’ (Bruney, 2019).

Algorithmic recommender systems may also help expand individual and business behaviours harmful to animals, such as animal ‘crush’ videos and the sponsoring of underground dog and cock fights (Gundy, 2020). Illegal trade in exotic animals is a major feature of the dark web (Lenzi et al., 2020). Reportedly, the sharing of selfies with exotic animals (e.g. tiger temple selfies in Asia and Tiger King type scenarios in the USA) in tourist and influencer posts on social media has promoted industries that use cruel and generally illegal practices to keep animals for tourist encounters and photographs (Coldwell, 2017; Lenzi et al., 2020).

3.4 Unintentional Indirect Harms

The possible unintentional and indirect harms of AI are manifold. We discuss three indirect harm categories—material harms, harms from estrangement, and epistemic and representational harms—below.

3.4.1 Indirect Material Harms

Infrastructure supporting AI is materially impactful, and its contribution to climate change may be the most significant impact. AI models are often computationally expensive and generate significant carbon emissions (Coeckelbergh, 2021; Schwartz et al., 2020), with potentially massive effects on living things. Furthermore, undersea cables to support the Internet are intruding into spaces previously undisturbed by humans (when laid and during ongoing maintenance) (Carter et al., 2014). Mining of rare earth minerals, use of plastics to produce and package digital devices, the energy and water needed for cooling in bitcoin mining and data centres, and the e-waste produced by AI-enabled devices all profoundly damage animal habitats over time.
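
A back-of-envelope calculation gives a rough sense of scale; every figure below is an explicit assumption chosen for illustration, not a measurement of any real training run.

```python
# Back-of-envelope sketch of the carbon cost of training a large model.
# Every number below is an assumption chosen for illustration, not a
# measurement of any real system.
gpus = 512            # assumed number of accelerators
kw_per_gpu = 0.4      # assumed average power draw per device (kW)
hours = 24 * 30       # assumed one month of continuous training
kg_co2_per_kwh = 0.4  # assumed grid carbon intensity

energy_kwh = gpus * kw_per_gpu * hours
emissions_t = energy_kwh * kg_co2_per_kwh / 1000
print(f"{energy_kwh:,.0f} kWh -> ~{emissions_t:,.0f} tonnes CO2")
# ~147,456 kWh -> ~59 tonnes CO2 under these assumptions
```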

Disruption of ecosystems can precipitate human migration and zoonotic disease emergence from stressed animals forced into greater competition with other wild and domestic species (Centers for Disease Control & Prevention, 2022; Thompson, 2013). AI applications can also accelerate personalised advertising, fuelling further production and consumption of material goods. They can help locate the hardest to find fossil fuels, build better factories, and intensify existing impacts of industrial technology. Such outcomes heighten climate change and habitat loss (Clutton-Brock et al., 2021).

AI-powered advertising of animal products, such as fast food and fast fashion, may promote greater use of animals in factory farms. Personalised gambling advertising will also encourage horse and dog racing and other forms of animal sport that rely on gambling revenue, along with their consequent harmful practices. In many of these examples, manipulative personalised advertising harms humans and ecosystems alongside animals (see generally Kingaby, 2021).

3.4.2 Harms from Estrangement

AI could gradually distance animals from farmers and other caretakers (Hemsworth & Coleman, 2010). That might sometimes be good for animals. But this estrangement might also forfeit opportunities for humans to notice individual animal needs (Werkheiser, 2018) and, moreover, to gain an intimate understanding of animals through experience and interaction, as many (say) farmers traditionally had. Indeed, AI systems may be used in contract farming in a way that operates on the farmer (as worker) as much as the animal, by telling the farmer how and when to look after the animals within strict parameters set to achieve certain results, much as algorithms manage Amazon warehouse workers (O’Neill et al., 2021). In time, this may undermine mutually beneficial relationships between humans and animals (Tuyttens et al., 2022).

Animals left unhabituated to humans by automation may be more stressed when humans must eventually appear (Buller et al., 2020). In fact, some commentators are highly sceptical that automated farms which replace human caretakers, notwithstanding any fine-grained ability to monitor and treat individuals rather than groups, will be a long-term benefit for animals (Cornou, 2009).

3.4.3 Epistemic Harms

Indirect harms may occur when AI promotes or reinforces attitudes that animals have no moral significance. Although such attitudes may also harm animals immediately and in hard-to-predict ways, the consolidation of anthropocentrism may harm future animals, perhaps on a grand scale. We call these harms epistemic harms, since they affect how we understand and regard animals. This potential harm is perhaps easiest to overlook, yet it is also vital because, as we discussed earlier, moral attitudes underpin our treatment of animals.

It is already well known that AI can cause representational harms to humans (Buddemeyer et al., 2021). Representational harms involve conveying factually or morally false views that embody or engender insufficient ethical respect. ‘Representation bias’ occurs, for example, when ML uses a training sample that ‘underrepresents some part of the [target] population, and subsequently fails to generalize well for a subset of the use population’ (Suresh & Guttag, 2021, p. 4). Groups underrepresented in training data, or represented in biased ways, can be subject to problematic classifications, as when ML models associate blackness with criminality or under-recognise non-white facial images (Buolamwini & Gebru, 2018). Such misrepresentations can render certain people or groups less visible and promote stereotypes, with unpredictable but real effects (Abbasi et al., 2019). In automated decision-making, algorithmic bias can lead to false perceptions based on gender, sexuality, race, socioeconomic class, age, etc. (Schwemmer et al., 2020).
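To make the mechanism concrete, the following minimal sketch simulates representation bias in the sense of Suresh and Guttag (2021). All of the data, group labels, and sample sizes are invented for illustration: a classifier trained on a sample dominated by one group generalises well for that group but poorly for the underrepresented one.

    # Hypothetical illustration of representation bias: the groups, features,
    # and sample sizes below are synthetic and chosen only for demonstration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Each group's two features are centred differently, so the true
        # decision boundary differs between the groups.
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
        return X, y

    # Group A dominates the training sample; group B is underrepresented.
    Xa, ya = make_group(n=5000, shift=0.0)
    Xb, yb = make_group(n=100, shift=2.0)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # On fresh samples, the learned boundary fits the majority group, so the
    # model 'fails to generalize well' for group B.
    Xa_test, ya_test = make_group(n=1000, shift=0.0)
    Xb_test, yb_test = make_group(n=1000, shift=2.0)
    print("Group A accuracy:", accuracy_score(ya_test, model.predict(Xa_test)))
    print("Group B accuracy:", accuracy_score(yb_test, model.predict(Xb_test)))

Because the decision boundary is fitted to the majority group, predictions for the minority group are systematically wrong; an analogous failure could afflict, say, image models trained on datasets containing few or mislabelled photos of certain animals.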

The field of AI fairness exhibits ‘insensitivity to discrimination against animals’ (Hagendorff et al., 2022, p. 1). Hagendorff and colleagues argue that AI image recognition, language models, and recommender systems manifest anti-animal bias that potentially normalises violence against animals (Hagendorff et al., 2022, p. 1). Again, this can arise from problems with training data (e.g. incorrect labelling, biased sample selection, and poor data curation) and with the field of application. Even with careful design, ML techniques may absorb societal biases entrenched in widely used datasets like ImageNet and so generate morally problematic portrayals of groups, including animals.

AI-facilitated prejudices may be particularly damaging due to their reach and their capacity to entrench values, combined with the human proneness to overconfidence in technology (Hagendorff et al., 2022, p. 5). AI outputs can variously classify animals using arguably prejudicial categorisations like ‘food animal’ and ‘working dog’; euphemistically portray (e.g. via images) livestock as free-range rather than factory-farmed; and associate some animals with terms like ‘disgusting’ (Hagendorff et al., 2022, pp. 9–14). Search engines can prioritise negative depictions of animals, which can in turn create damaging feedback loops when the accumulating biased data is used to train new AI models. AI could thereby progressively worsen prejudiced presentations of animals. A lack of fair and accurate digital representation of animals can also inhibit development of AI that promises to mitigate animal harm. For example, even if we tried to program automated vehicles or agricultural or cleaning robots to avoid harming animals, a deficiency in representative data (e.g. sufficient and accurate photos of certain animals) might prevent this.
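The feedback loop just mentioned can be sketched in a few lines. The numbers below are entirely hypothetical; the point is only that when a system that slightly favours negative depictions feeds its own outputs back into the corpus used to train its successor, the negative share drifts steadily upward.

    # Toy simulation of a bias-amplifying feedback loop; the initial mix and
    # the BOOST weighting are assumptions made purely for illustration.
    import random

    random.seed(1)

    corpus = ["negative"] * 30 + ["neutral"] * 70  # initial mix: 30% negative
    BOOST = 2.0  # how strongly the ranker is assumed to favour negative items

    for round_num in range(5):
        weights = [BOOST if doc == "negative" else 1.0 for doc in corpus]
        # Serve 100 items under the biased ranking ...
        served = random.choices(corpus, weights=weights, k=100)
        # ... and fold the served items back into the next training corpus.
        corpus.extend(served)
        share = corpus.count("negative") / len(corpus)
        print(f"round {round_num}: negative share = {share:.2f}")

Each round the corpus skews further toward negative depictions, so a model retrained on it inherits and then amplifies the original bias.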

Some AI-based systems, including in precision farming, recommender systems, and social media, could promote the notion that humans can completely control animals. Modern technologies for observing wild animals ‘are able to collect measurements 24 h per day, which enables a seamless observation, even during the night or in an inhospitable environment like the ocean or the arctic’ (Frey et al., 2017, p. 1). Such panoptical monitoring could conceivably generate the view that animals are mere objects to be tracked and exploited.

Additionally, AI technologies may suggest that harm to animals is normal or even exciting. Video games often portray animals as killable commodities (Coghlan & Sparrow, 2021), and making a sport of killing (virtual) animals might harm animals representationally (Abbate, 2020, p. 784). The context of technology use with actual animals may also influence our perceptions (Coghlan et al., 2021). For example, smart technologies that enable humans to play and interact virtually with animals in factory farms (Driessen et al., 2014) may aim to promote welfare but may end up reinforcing views of animals as exploitable. Such technologies may thus unintentionally tend to suppress compassion and distance us from animals. We should therefore be mindful that AI purporting to improve welfare and human–animal connections may fail or have perverse effects (Arts et al., 2015; von Essen, 2021).

At least since Ruth Harrison’s Animal Machines (R. Harrison, 1964), people have argued that intensification and mechanisation of animal production objectifies animals. Some suggest that AI animal farms, which could be very large, high-density, and mostly automated, may further turn nonhumans into quantified objects (Bos et al., 2018). An additional danger is that AI monitoring could begin to define animal welfare (Tuyttens et al., 2022) by delivering apparently objective, standardised, continuous, automated measurements, which may be skewed towards mere health or performance indicators (Buller et al., 2020). Yet, as we emphasised, animal welfare is an evaluative concept and a contested one.

Because the objectification of animals is already extreme, some may argue that intensified automation poses no additional concern. But even in (say) contemporary agriculture, farmers not only care directly for their animals to some degree but also occasionally show their concern and distress publicly, as when natural disasters or epidemics strike and animals suffer and die (Kevany, 2020). By contrast, it is significantly more difficult to imagine such compassion from animal caretakers, or perhaps from the public, when automated systems so radically distance humans from nonhumans. Such systems may not only alter existing roles but may also replace (say) farmers with engineers and managers (Werkheiser, 2018). This further movement away from traditional husbandry could even gradually erase the very idea of a person on the land who draws on their lived experience with animals to actively care for them.

3.5 Forgone Benefits

AI could do great good for animals. When beneficial AI (now or in the future) is not thoughtfully designed and adopted, this arguably constitutes a harmful lost opportunity and a disuse of technology (Parasuraman & Riley, 1997). We can mention only a few of the many possible benefits of AI here. The enormous toll on animals caused by vehicle accidents might be drastically reduced by appropriately programmed self-driving automobiles. Yet, if self-driving cars were insufficiently trained or adopted, many more animals might be injured and killed. Furthermore, despite its potential environmental harms, AI could also bring environmental benefits (McGovern et al., 2022). AI for tracking pollution, global warming, and pandemic zoonotic dangers heightened by climate change (Carlson et al., 2021) could simultaneously protect animals and humans, whose interests are interdependent and entangled.

AI may allow great strides in nonharmful medicine and research. For instance, increased uptake of organ-on-a-chip techniques for testing drugs and toxins (Danku et al., 2022) could significantly reduce harm to living research subjects. Carefully designed and implemented, AI could advance veterinary care (Ezanno et al., 2021), benefitting human carers as well as animals in the process. As Singer and Tse suggest, AI could be used in scientific research to identify plant-based proteins that could replace meat and dairy products, which might reduce animal suffering (Singer & Tse, 2022, p. 4) and improve human health.

Like some other technologies, AI might also be used to monitor and expose harm to animals. For example, AI could mine countless journal articles to determine what harms are being done to animals in research. Animal advocates might use AI to analyse animal wellbeing in farms—as they have done with drones (McCausland et al., 2018)—or to rate companies according to how they intentionally treat or indirectly affect animals. Such animal advocacy and activism might increase pressure to develop beneficial or less harmful technologies.

However, it is also possible that governments, tech companies, computer scientists, veterinarians, animal advocates, the public, etc. could reject or fail to adopt AI that might benefit animals. That might happen due to incautious development of AI and an associated backlash or lack of funding for research and development. The absence of animal-friendly AI could effectively result in many harms for animals, ranging from deprivation and negative experiences to premature death. Here is another area in which anthropocentrism and a failure to appreciate human–nonhuman entanglement could generate harm.

4 Discussion and Conclusion

Just as earlier technologies facilitated tremendous harm to animals, emerging intelligent technologies may also create new harms and amplify existing harms to nonhumans. As we saw, AI could cause negative experiences, frustrated desires, deprivation of activities, and death and so damage animal wellbeing, undermine interests, and even promote lives that are barely worth living. However, unlike the Chicago stockyards, sow stalls, and battery cages, AI promises significant benefits for animal wellbeing too.

Our harms framework helps identify various types and causes of animal harm, such as intentional harms, unintentional and indirect harms, and forgone benefits. Because some harms can all too easily be discounted or overlooked, it is helpful to pinpoint and highlight how various harms can come about. The framework identifies certain harms that are distinctive to how AI works, such as covert harms from black boxes and epistemic harms from ML’s capture of human biases. It also recognises that AI could significantly amplify more familiar and present harms, such as those due to environmental pollution and the intensification of farming.

AI can be a double-edged sword for animals as well as humans. For instance, AI to detect animal welfare issues may prevent some harms, yet a large-scale breakdown in automation technology might harm very many individuals at once (Tuyttens et al., 2022). The speed, reach, and easy duplication of AI means that harm that could have been relatively limited or circumscribed might scale up very quickly. Moreover, using AI to expand rather than limit some forms of exploitation such as livestock systems—as opposed to promoting alternatives such as plant-based foods—would enlarge and further entrench harm. When harms are ‘locked in’, it becomes harder to reduce harm to animals in the longer term.

AI’s positive potential raises the question of whether AI will result in net harm or net benefit to animals in various domains. Marian Stamp Dawkins (2021, p. 2) suggests it is too soon to know whether the overall welfare impacts of smart farming will be positive or negative for animals (Wathes et al., 2008). Yet there are reasons for thinking it might be negative. For instance, the move to automated farms is primarily driven by the goal of more efficient production rather than the goal of giving animals good lives. These systems may thus miss signs of poor welfare, further estrange humans from animals, and compound the objectification of animals.

In part, AI’s net impact in this or that domain will depend on the moral economy of its development. For example, if the meat industry is the major developer of AI for animal production, then the impact will surely be biased toward productivity and profit-seeking rather than comprehensive animal wellbeing. Similarly, AI used to monitor ‘invasive’ species may have different impacts depending on whether it is undertaken by organisations following traditional conservation strategies of poisoning and shooting (Doherty et al., 2019) or by those emphasising greater compassion and respect for sentient beings in conservation (Marris, 2021; Wallach et al., 2018).

As we explained in Section 2, a potential problem with identifying AI harms for animals is disagreement over what constitutes harm, particularly over the nature of ultimate (non-instrumental) interests or wellbeing. For example, some may claim that more intensively farmed animals have good welfare if their negative experiences can be kept minimal (to be sure, a very difficult task in practice), whereas others will claim that an inability to perform various activities on some ‘objective list’ of goods seriously impoverishes wellbeing. Some will argue that painless killing performed more reliably by technology is a significant benefit, whereas others will argue that such welfare gains are minimal because death itself is so severe a harm.

Therefore, we propose that, in addition to empirical studies into the effects of AI on animals, philosophical awareness of the nature of animal wellbeing and interests is important when considering AI’s impacts. It would be helpful for AI ethics researchers and others to appreciate and discuss the nature of harm and to investigate the full range of possible harms to animals, including entangled human-and-animal harms, that might result from the technology. Researchers and AI practitioners might also advocate for greater appreciation of AI’s manifold effects on nonhumans. We hope that papers like this one will stimulate such reflection, research, and advocacy.

A next step will be to carefully consider implications of AI harms for policy, law, and practice. Precisely when and to what degree we ought to limit animal harm depends in significant part on how we assess the moral status or significance of animals (and whether we properly grasp human–animal entanglement). We have not adopted a particular view on the intrinsic moral status of animals in this paper. The harms framework we proposed can support different ethical positions on the moral significance of animals—although it does, of course, assume that we have moral reason to care about animals in the first place and to reduce harm to them in some and perhaps many circumstances.

Harm to nonhumans from AI could be reduced in many ways. Designing ethical principles into AI technology from the outset may be one appropriate response (Bendel, 2018); including animals in AI ethics codes and guidelines may be another. However, we doubt that ethical principles, self-regulation, and risk assessment alone are sufficient to limit harm to animals (Bietti, 2021). Rather, concerns about harmful AI impacts on animals will likely need to be addressed in animal welfare laws and regulations and in a host of other laws and policies, such as traffic regulations, safety mandates for automated vehicles and drones, environmental protection laws, and laws designed to regulate AI more generally (see, e.g. Pasquale, 2020).

Finally, different types and causes of harm, as identified in our harms framework, may require different responses. For example, intentional harm to animals from AI might be addressed by criminal cruelty, animal welfare, and environmental protection laws. Unintentional direct harms might be considered in design standards and in the legal and ethical governance of new AI. Unintentional indirect harms should perhaps prompt consideration of how to develop new measures and monitoring schemes to identify harmful impacts, how to establish societal-level governance structures, and, perhaps most fundamentally, whether certain AI systems should be developed at all. Such issues constitute urgent but neglected areas for public discussion and research into AI’s likely profound impact on nonhuman animals.