1 Introduction

This article argues that humans could hate some robots, and that it matters that humans could hate some robots.Footnote 1 To defend this argument, the article proceeds as follows. Section 2 outlines why we should care about the arguments defended in this article. I begin by conceding that my thesis is only morally interesting if robots are morally considerable, as only then would it morally matter how we respond to them. I argue that morally considerable robots are not a distant possibility; we can make sensible and important moral claims about at least some robots. I then argue that a particularly pressing moral question concerns what relationships we could and should have with these robots. I explain that the existing literature on human–robot relationships focuses only on positive relationship types (e.g., love, friendship, etc.). By considering relationships characterised by hate, this article provides a novel, interesting, and timely addition to discussions of human–robot relationships. With the above in mind, I end Sect. 2 by concluding that we should care about my argument because it makes a significant original contribution to the robo-philosophy literature, and has morally important implications.

Sections 3, 4, 5, 6, 7, 8 then present and defend my central thesis. Section 3 begins by outlining two senses of ‘hate’—an everyday sense, where I can hate objects, events, etc., and a more philosophical sense, in which I can be in a relationship characterised by hate. It is only the latter type of hateful relationship that I consider in this article. I explain how relationships characterised by hate are the polar opposite of loving relationships. I then outline three conditions that must be met for x to be in a relationship characterised by hate with y. First, x must desire that things go badly for y. Second, x must view y as being inherently hateworthy. Third, x must maintain their hate for y through either direct or indirect interactions.Footnote 2

Sections 4, 5, 6, 7 argue that all three of the above conditions can be met in human–robot relationships. Namely, humans can desire that things go badly for robots. Humans can view robots as being inherently hateworthy. And humans can maintain their hate for robots through either direct or indirect interactions. Because all three conditions can be met, I argue that humans can (or at least could) have relationships with robots that are characterised by hate.

Section 8 argues that it matters that humans could hate robots in the ways outlined above. This is because human hatred towards robots leaves morally considerable robots at risk of being mistreated (e.g., by being excluded, put in danger, etc.). The section concludes by considering how discussions of human–robot hate have important implications for robot rights.

2 Why should we care about robot hate?

The arguments presented in this article depend upon (at least some) robots being morally considerable.Footnote 3 When an entity is morally considerable, it makes sense to make moral claims about them (e.g., that they can be wronged; that we have moral obligations to treat them in certain ways, etc.). Humans and nonhuman animals are morally considerable in this way, whilst toasters and armchairs are not. We cannot sensibly claim that we morally wrong a toaster by lying to it, neglecting it, etc.

My opponent could argue that robots are like toasters and armchairs—they are not morally considerable. If this is so, then my arguments (below) are morally uninteresting. Just like it does not matter if I hate my toaster, it does not matter if I hate a robot (or even could hate a robot).

In response to this initial objection, this section will consider two ways in which we could understand the moral considerability of robots. First, and in line with the above objection, robots are not, and likely never will be, morally considerable. Second, robots either currently are, or will soon be, morally considerable. I will outline evidence which suggests that we ought to favour the second position. As such, I will argue that robots either are, or soon will be, morally considerable, and so discussions of robot hate are relevant, timely, and morally interesting.

Position one: robots are not morally considerable

As Coeckelbergh (2018: 146) emphasises, in discussions of the moral considerability of robots, “…the “default” or “common sense” position denies that machines can ever have moral standing”.Footnote 4 On this view, robots simply are not the right sort of entities to be morally considerable. For defenders of this view, this is because robots do not meet any of the criteria for having moral standing: “sentience, consciousness, having mental states, having the ability to suffer, and so on” (Coeckelbergh 2018: 146). This ‘common sense’ view is discussed by, amongst others, Frank and Nyholm (2017: 316–317), Gunkel (2018: 89–91), Sparrow (2002: 313), Sullins (2011), and Torrance (2008).

A related, but slightly weaker, view can be seen in claims that, whilst it is not impossible for robots to have a moral standing, morally considerable robots are only a very distant possibility. Such a view is discussed (and ultimately rejected) by Danaher (2019a: ‘Robots can be our Aristotelian friends’). On this view, because the possibility of morally considerable robots is so remote, any claims about how we ought to treat and react to robots can be dismissed as unnecessary, irrelevant, and uninteresting.

For defenders of the above views, my arguments below (Sects. 3, 4, 5, 6, 7, 8) will likely seem unwarranted and unintuitive. For them, my claims about robots are no different to claims made about toasters and armchairs—all three sets of claims are morally uninteresting. In response, I concede that my arguments will not be of interest to those who accept position one: that robots are not, and likely will never be, morally considerable. However, as argued below, we do not necessarily need to accept position one. There are convincing reasons to favour position two: that robots either are already morally considerable, or will be morally considerable in the near future.Footnote 5

Position two: robots either are morally considerable, or will be morally considerable in the near future

As outlined above, position one argued that robots are not morally considerable because they fail to meet the relevant criteria for having a moral standing. These criteria are typically explained in terms of the possession of certain morally relevant properties: “sentience, consciousness, having mental states, having the ability to suffer, and so on” (Coeckelbergh 2018: 146). To argue against position one, we thus need to demonstrate that some robots either currently do have these properties, or at least will have these properties in the near future.Footnote 6

The claim that robots already have some of these properties is admittedly controversial. Nevertheless, there is at least some research which claims to show precisely that. For example, in their 2013 paper, Castro-Gonzales, Malfaz, and Salichs describe how they developed an autonomous social robot (Maggie) which they claim can implement fear, and can also display fear-reactive behaviour (such as moving away from a ‘fearful’ stimulus). They argue that “… Maggie is endowed with a decision making system based on drives, motivations, emotions, and self-learning” (139). If this is so, then Maggie would appear to possess an architecture that enables her to display at least some morally relevant properties. Namely, Maggie could be claimed to have relevant mental states (drives, motivations, emotions, and self-learning), or at least robot equivalents of these states.

Because the above claim is so controversial, many who discuss the moral status of robots instead make the weaker claim that there will likely be morally considerable robots in the near future. This weaker claim is well-discussed by Frank and Nyholm (2017), who state that “…we can imagine future robots sophisticated enough to enjoy a certain degree of consciousness” (313).Footnote 7 To support this claim, Frank and Nyholm emphasise that many researchers are either actively working to create robotic consciousness (Prabhaker 2017), or are discussing the conditions that would need to be met for a robot to be conscious (Bryson 2012; Dennett 1994). Further evidence of current attempts to create conscious robots can be seen in the work of Reggia et al. (2019). They argue that:

“…developing neurocognitive control systems for cognitive robots and using them to search for computational correlates of consciousness provides an important approach for advancing our understanding of consciousness, and… provides a credible and achievable route to ultimately developing a phenomenally conscious machine” (Reggia et al. 2019: 18, my emphasis).

Given that there is ongoing research into developing robots with relevant properties (consciousness, emotions, etc.), and that this research appears to be making headway (see the Maggie example, and Reggia et al.’s claims about the credibility of creating phenomenally conscious robots), I argue that we should accept position two. We should accept that, if there are not already morally considerable robots (see Maggie), then there could be morally considerable robots in the near future (if certain conditions are met). This position is also accepted by, amongst others, Danaher (2019b), Gordon (2018), and Laukyte (2017).Footnote 8

At this point, my opponent may object that, even if robots could be morally considerable in the near future, we are not justified in making moral claims about them now. In other words, it still does not (currently) matter if I could hate a robot; this will only matter in the future when the robot becomes morally considerable. There are two main responses to this objection. First, as mentioned above, there may already be robots who are morally considerable now, at least to some extent (see the Maggie example). It matters that we could hate these robots (for the reasons outlined in Sect. 8).

Second, even if robots will only become morally considerable in the near future, this ought not prevent us from making moral statements about robots now. This is nicely expressed by Neely (2014), who argues as follows:

“The time to start thinking about these [moral] issues is now, before we are quite at the position of having such beings to contend with. If we do not face these questions as a society, we will likely perpetuate injustices on many who, in fact, deserve to be regarded as members of the moral community” (109).

Similar arguments will be presented in Sect. 8. For now though, it will suffice to reiterate that we can make sensible and important moral claims about at least some robots (those that either are or will soon be morally considerable) now.

One of the most important moral questions we can ask about robots is what form human–robot relationships ought to take. Current research has examined whether robots can be (i) lovers, (ii) companions, (iii) friends, (iv) caregivers, (v) nannies, (vi) teachers, (vii) reverends, (viii) colleagues, and (ix) teammates.Footnote 9 All of this research is necessary and important as we need to properly clarify and categorise human–robot relationships to determine how robots fit into our moral community (if indeed they do) and how we ought to treat them as a result. For example, if we accept that we can have reciprocal friendships with robots, then this could entail that we have certain beneficent duties towards them (and they to us) (Tistelgren 2018: 6–8).

The remainder of this article aims to contribute to the ongoing discussions of human–robot relationships, and to add further clarity to these debates. As shown above, the existing literature has focused largely on examining positive human–robot relationships (e.g., whether humans and robots can be friends and love one another). What is missing is a discussion of more negative human–robot relationships, for example, those predicated on human hate. By addressing this omission (Sects. 3, 4, 5, 6, 7, 8), this article makes an original contribution to the robo-philosophy literature, specifically the literature on human–robot relationships.

In sum, this section has outlined why we ought to care about the arguments that will be defended in this article. I have explained that the existing literature on human–robot relationships has not yet considered how these relationships might be characterised by human hate. The arguments of Sects. 4, 5, 6, 7 are thus an important addition to the existing literature. Further, I have argued that robots either are or soon will be morally considerable, and so we can make sensible moral claims about how we ought to treat them. The arguments of Sect. 8—which outline why it matters that humans could hate robots—thus have notable moral implications. To make these arguments, the next section will briefly outline what it means for a human to be in a relationship characterised by hate.

3 Hate

I hate garlic, small talk, and rush hour trains. You probably hate many things too. In everyday life, we use this colloquial sense of ‘hate’ to express a negative reaction to certain objects, people, events, etc. It is obviously possible for humans to hate robots, in this everyday sense of the word. This, however, is not the type of hate that this article will focus on.

Instead, our focus will exclusively be on relationships that are characterised by hate. This is because, as mentioned in Sect. 2, my interest is in the relationships that humans can have with morally considerable robots. By examining relationships characterised by hate, we can begin to consider how these relationships might be negative, and what effects this might have on our treatment of robots.Footnote 10

In the existing philosophical literature, relationships characterised by hate are viewed as the polar opposite of relationships characterised by love (Ben-Ze’ev 2018: 323; Kauppinen 2015: 1721–1722). Kauppinen (2015) explains this as follows: whereas love is concerned with seeking the best for a loved one, “if I hate someone, I want him or her to do badly, whether or not it is of instrumental benefit for me. I feel bad if the person does well, get easily angry with him or her, and may be delighted if misfortune befalls him or her” (1721). This explanation of what follows when we hate someone seems intuitively correct, and we can suppose that my hateful responses to someone can vary in intensity depending on how much I hate them. For example, suppose that I mildly hate a colleague. I might want things to go slightly badly (but not terribly) for them, and be more easily irritated by them and their successes than I would normally be by other people. Conversely, if someone is my nemesis, then I may want things to go appallingly for them, I might be perpetually infuriated by them, and I may actively root for them to suffer misfortunes (and perhaps even orchestrate such misfortunes). This degrees-of-hate idea runs parallel to the intuitive idea that there are degrees of love. For instance, I may love my colleagues, my friends, and my family, but love my family the most. From this, it follows that, whilst I may seek the best for all of my loved ones, I may be particularly invested in seeking the best for my family.

Using this initial idea—that hate is the converse of love—the existing literature goes on to suggest three distinguishing features of relationships characterised by hate. First, x (the hater) must desire that things go badly for y (the hated). As explained above, this desire can vary in intensity. At the weakest level, x may desire that y is embarrassed or ridiculed. At the most extreme level, x may desire that y is annihilated. Fischer et al. (2018: 311) argue that all of these negative desires (from humiliation to annihilation) ought to be understood in terms of x’s desire to destroy y. They claim that “…the emotivational goal of hate is not merely to hurt, but to ultimately eliminate or destroy the target, either mentally (humiliating, treasuring feelings of revenge), socially (excluding, ignoring), or physically (killing, torturing) …” (ibid). Fischer et al.’s analysis, however, is overly strong, as it seems to presuppose that every instance of x hating y will be connected to x wanting to destroy y. In contrast, I will suppose that, whilst in relationships characterised by hate, x will always desire something bad to happen to y, this desire will not always be connected to annihilation or destruction.

The second defining characteristic is that x must judge y to have an inherently ‘hateworthy’ nature. Fischer et al. (2018: 310–311) explain that the hater’s appraisals of the hated person(s) have two main features. First, the hated person(s) are viewed as being a threat or inconvenience to the hater. They may be viewed as dangerous, immoral, malicious, evil, etc. These perceived character faults (however small or imagined) are viewed as reasons to hate the hated person(s). Second, the hated person(s)’ hateworthy nature is judged to be a stable attribute of them—they are inherently dangerous, immoral, malicious, evil, etc. In the eyes of the hater, the hated person(s) will always be dangerous, evil, etc. Importantly, these negative appraisals of the hated person(s) will typically be accompanied by feelings of powerlessness and lack of control. The hater (x) believes that the hated person (y) is inherently bad (dangerous, evil, etc.); that y will never change; and that they (x) are in danger of being a victim of y’s dangerous/immoral/evil plans and actions. As Szanto (2018: 10–20) explains, these appraisals generate an us–them mentality. The hated person(s) are a dangerous ‘them’, who are inherently different to the safe ‘us’ (the hater, and all those who share their hatred). Unlike ‘them’, the ‘us’ group are judged to be kind, good, moral, etc.Footnote 11

Finally, in relationships characterised by hate, hate is maintained through interaction.Footnote 12 As Fischer (2018: 325–326) emphasises, “hate needs to be fed, either by direct or indirect interactions related to the object of hate”.Footnote 13 Direct interaction occurs when the hater interacts with, or is around, the hated person(s). Indirect interaction occurs when the hater discusses the hated person(s) with others who also hate them (the ‘us’ group, above), or when the hater has indirect contact with the hated person(s), e.g., by seeing their social media profiles. In both cases (direct and indirect), hate is maintained because the hater can sustain and reinforce their negative appraisals of the hated person(s).Footnote 14

In sum, this section has outlined two senses of ‘hate’: the everyday sense in which I claim to hate objects, events, etc., and a more specific philosophical sense in which I can be in a relationship characterised by hate. I have explained that it is only the second sense of hate that I am interested in. I clarified how, according to the existing literature, there are three conditions that ought to be met for x to be in a relationship characterised by hate with y. First, x must desire that things go badly for y. This is a characteristic behavioural tendency of relationships characterised by hate. Second, x must view y as having an inherently hateworthy nature. This is a characteristic appraisal, seen in relationships characterised by hate. Finally, x’s hatred of y must be maintained through direct or indirect interactions. This condition outlines the characteristic connections between the hater and the hated. Sections 4, 5, 6, 7 will argue that all three of these conditions could be met in a human’s relationship with a robot.

4 Humans could be in a relationship characterised by hate with robots

Section 3 outlined three conditions that must be met for x to be in a relationship characterised by hate with y: (i) x must desire that things go badly for y, (ii) x must view y as having an inherently hateworthy nature, and (iii) x must maintain their hatred for y through direct or indirect interaction. Sections 4, 5, 6, 7 will argue that all three of these conditions could be met in human–robot relationships. To show this, I will examine how humans currently respond to robots, and how these current responses meet these three conditions.Footnote 15 As humans can and do show hateful responses towards current robots, it is conceivable that we could also be in a relationship characterised by hate with morally considerable robots. Recall that Sect. 2 suggested that there either already are morally considerable robots (e.g., Maggie), or that morally considerable robots will exist in the near future. Section 8 will argue that it matters that we could hate these morally considerable robots.

5 Humans could desire that things go badly for robots

Section 3 argued that, to be in a relationship characterised by hate, x must desire that things go badly for y. I explained that this desire can range in intensity, from a desire to humiliate y to a desire to annihilate y. This section will focus on the most intense desire: that humans could desire the annihilation or destruction of robots. There are two reasons for this focus. First, most of the existing literature on relationships characterised by hate does discuss the desire for annihilation. This can be seen in the works of Ben-Ze’ev (2018: 323), Fischer (2018: 325), Szanto (2018: 2–9), and Van Doorn (2018: 321). As the desire for annihilation is commonly referenced in existing discussions, it seems like a viable starting point for our robot discussion.Footnote 16

The second reason to focus on the desire for annihilation, rather than less extreme desires (like the desire that y be humiliated or socially excluded), is that the desire for annihilation makes the strongest and most interesting case for potential robot hate. If we can show that humans could desire that robots be annihilated, it seems likely that we would also be able to say that humans could have the less extreme desires—that the robot be humiliated, socially excluded, etc. We could not make the same argument the other way around, i.e., that, because humans could desire that robots be humiliated, they could also desire that they be annihilated. As there is not space to consider every way in which humans could desire that things go badly for robots, it makes sense to focus on the strongest and most extreme claim. With this in mind, let us examine how at least some humans seem to currently desire that robots be annihilated or destroyed.

Since 2015, there has been a well-publicised ‘Campaign against sex robots’.Footnote 17 In essence, the campaign argues against the development of sex robots on the grounds that the use of such robots perpetuates dangerous attitudes, such as the objectification of women, and the blurring of sex and rape (as sex robots typically do not consent to sexual acts).Footnote 18 What is particularly interesting for our purposes is what the campaign suggests we ought to do in reaction to sex robots. In an article on the campaign website, Florence Gildea and Kathleen Richardson (2017) make the following claim:

“It might be argued that the solution, then, is to encourage the production of sex robots designed to appear male. But to argue for an equality of the lowest common denominator—where everyone relates to all others as an object—is to exacerbate the problem, not provide a solution”.

To me, the above implies that no modifications to the production of sex robots would remove the problems with objectification. Consequently, it seems that Gildea and Richardson are implicitly suggesting that the only solution would be to eliminate all sex robots by discouraging or banning their production.Footnote 19 We can reach this conclusion if we follow the logic of the campaign arguments. The campaign begins by making a claim—the use of sex robots is dangerous (due to concerns about objectification, etc.). This perceived negative evaluation (danger) extends to all sex robots (of all gender appearances), and so all sex robots are viewed as dangerous (in some way). Because the danger of sex robots is so extensive, we should remove or eliminate all tokens of the dangerous object (all sex robots). This argument is structurally sound (its conclusion follows from its premises), even if we do not agree with its central premise (that the use of sex robots is dangerous).Footnote 20

As presented above, Gildea and Richardson’s arguments work by emphasising a supposed inequality between humans and robots. First, they accept that sex robots are created to fulfil human needs and desires, whilst the same is not true of the human (who does not exist to fulfil the needs and desires of the sex robot). An upshot of this—as suggested above—is that, because humans created sex robots for this aim (human fulfilment), humans can also destroy sex robots when said fulfilment has unintended negative consequences (like sexual objectification). Second, Gildea and Richardson implicitly emphasise the inequality in vulnerability between robots and humans.Footnote 21 They suggest that although a sex robot is not harmed when a human uses it, the use of sex robots can indirectly harm the most vulnerable humans (e.g., women, children) by creating societal issues (like objectification and issues with sexual consent) that disproportionately put them at risk. If one adopts this line of thought, then it is at least plausible to claim that the inclusion of sex robots in human society is dangerous and problematic, and that the best solution is to remove the sex robots (by destroying and/or banning them).

The above has used the ‘Campaign against sex robots’ to suggest that current attitudes towards sex robots can cause at least some humans to develop ‘hateful’ desires to eliminate or destroy all sex robots. If this is so, then this suffices to show that at least some humans can develop desires that things go badly for at least some robots (here, in the extreme sense that the robot be annihilated). As shown above, this desire for destruction seems to follow a basic logic. The robot becomes an object of hate (in the sense that one can desire its destruction) if the robot’s inclusion in human society is at least widely perceived as a danger or threat that needs to be eliminated (however elimination is understood). This same logic can arguably extend beyond sex robots. Simply put, humans could desire that any robot be destroyed if said robot is perceived as a danger or threat that needs to be eliminated. The perceived danger of the robot (or of its use by humans) could be understood broadly and include both minor threats (e.g., robots could have ‘offensive’ glitches, like accidentally swearing in front of children), and major threats (e.g., robots could collect personal data about human users). It does not seem implausible to suppose that this perception of robots, and the subsequent desire to destroy them, could extend to the morally relevant robots discussed in Sect. 2.

6 Humans could view robots as having an inherently hateworthy nature

Section 3 explained that, to be in a relationship characterised by hate, x (the hater) ought to perceive y (the hated) as having an inherently hateworthy nature. One consequence of this aspect of hate is that all members of the hated group (all who are like y) are tarred with the same brush. If there is something inherently wrong with one token object of hate (e.g., one advanced social robot), then there will be something wrong with all similar tokens (all other advanced social robots). In what follows, I will draw on current negative reactions to robots to explain how humans can come to view robots as having this inherently hateworthy nature.

First, there is the uncanny valley effect (Lay 2015). This occurs when humans find human-like robots eerie and disturbing. This is because, whilst the robot looks human in its features, it may not react, behave, or speak in a naturally human way. As Mathur and Reichling (2016) explain, the uncanny valley effect can cause humans to view robots as inherently untrustworthy. This is a generalised reaction. All humanoid robots could be viewed as inherently eerie, disturbing, and untrustworthy simply because they are humanoid robots. Such general negative appraisals can be used to ground hate towards humanoid robots if the robots’ inherent eeriness, disturbing-ness, or untrustworthiness is taken to be dangerous or threatening in some way (see Sect. 5, above).Footnote 22

This view—that at least some types of robots are inherently eerie, disturbing, and untrustworthy—also seems to extend to robots that are not humanoid in appearance (and so are not part of the uncanny valley effect). For example, the dog-like robots developed by Boston Dynamics are often described as ‘creepy’ or ‘terrifying’ (DeCosta-Klipa 2019; Titcomb 2016). This suggests that, for at least some people, all robots (humanoid or otherwise) have an inherently disturbing or threatening nature. If so, then this could sustain an us–them mentality (Sect. 3, above) whereby robots are a threatening ‘them’ who ought to be hated because of the danger that they pose to human society. Once again, as this negative view of robots seems to potentially extend to all robots, it could apply to the morally relevant robots discussed in Sect. 2.

7 Human hatred towards robots could be maintained through interaction

It is a unique feature of human–robot relationships that many of our preconceptions about these relationships have been developed through fiction. Many, but not all, science fiction and fantasy plotlines about robots present a dystopian view whereby advanced robots clash with humans, and are ultimately viewed as a ‘dangerous threat’. Examples of this can be seen in The Terminator films, Westworld, and Humans.Footnote 23

Research by the Leverhulme Centre for the Future of Intelligence has emphasised the significant negative effects that these preconceptions have on our views about robots. Cave et al. (2019) surveyed 1078 UK citizens to examine how negative preconceptions of artificial intelligence (A.I.), created via interaction with media, affect perceptions of A.I. They found that 51% of respondents were concerned that the rise of A.I. will lead to the alienation and obsolescence of human beings, with 45% concerned that there will be an A.I. uprising (ibid: 4). As “25% of respondents explained A.I. in terms of robots” (ibid: 5), this implies that a non-negligible number of people view robots as a danger or threat, in virtue of preconceptions developed through dystopian narratives.Footnote 24

As fiction is a key way in which humans understand, and indirectly interact with, robots, dystopian narratives could be a significant factor in the maintenance of human hatred towards robots. This will be particularly true in cases where humans use their fears and negative preconceptions to avoid directly interacting with social robots. This is because if such humans fail to directly interact with social robots, they are unlikely to be exposed to evidence which could contradict the fears generated by dystopian narratives.Footnote 25

Human hate towards robots (in terms of viewing robots as threats that need to be eliminated, above) can also be sustained through direct interaction with robots. This can happen in cases where direct interactions with robots reinforce the belief that robots pose a specific threat (e.g., that they are unsafe, or unpredictable, etc.). For example, consider the current American response to robots entering the workforce. The rise of robot workers is typically linked to the threat that existing human workers will be made redundant and will experience a worse quality of life as a result. News reports on these fears often frame them in terms of ‘hate’ (Condliffe 2019; Matyszczyk 2019). Indeed, a 2017 study by the Pew Research Center emphasised that, out of 4135 respondents, “85% of Americans are in favour of limiting machines to performing primarily those jobs that are dangerous or unhealthy for humans…” (Smith and Anderson 2017). This suggests that those who will directly interact with advanced, social robots (the human workforce) can come to view social robots as a threat (to their quality of life) and see the robots as expendable (only to be used for dangerous jobs). This latter concession (that robots could do the dangerous jobs) could again be taken to show support for (and potentially reinforce) the ‘hateful’ idea that robots can (and perhaps should) be harmed, destroyed, or otherwise eliminated from mainstream society.

In sum, Sects. 4, 5, 6, 7 have argued that humans can (and perhaps already do) meet the conditions for being in a relationship characterised by hate with robots. I explained how humans can desire the total elimination of certain types of robots because said robots are viewed as inherently dangerous, eerie and untrustworthy. I suggested that this hate can be maintained through indirect interaction (via dystopian fiction) and direct interaction (e.g., by robots entering the workforce). As humans can already meet the above three conditions in their relationships with existing robots, it is not unreasonable to suppose that humans could also be in relationships characterised by hate with morally considerable robots. As with Sect. 2, I leave it as an open question whether these morally considerable robots already exist (and so we could already be in relationships characterised by hate with them), or whether morally considerable robots will be developed in the near future. The next section (Sect. 8) will outline why it matters that humans could be in relationships characterised by hate with morally considerable robots.

8 It matters that humans could be in relationships characterised by hate with robots

Most of this article has been concerned with demonstrating that humans could be in relationships characterised by hate with robots. I have explained how current responses to robots meet the conditions for being in such a relationship (Sects. 4, 5, 6, 7), and I have suggested that these same responses could be extended to morally considerable robots either now or in the near future (Sects. 2, 4, 5, 6, 7). This section will outline why it matters that humans could be in relationships characterised by hate with morally considerable robots.

First, it matters because it shows that humans could feasibly have negative relationships with robots. This is important because the existing literature on human–robot relationships has focused on positive relationships, like friendship, love, etc. (Sect. 2). As human–robot interaction becomes more commonplace, it is vital that we have an in-depth understanding of how these relationships work. Our understanding is incomplete if we do not acknowledge the very real possibility that some of our relationships with morally considerable robots will be negative. For example, my arguments on hate (Sects. 3, 4, 5, 6, 7) suggest that we ought to also consider whether robots can be enemies, opponents, competitors, etc. It is only by considering these negative relationships that we can understand the problems, as well as the benefits, of human–robot interaction. For instance, it is likely that, by further examining human–robot relationships involving hate, rivalry, etc., we will get a better understanding of the conflicts that can arise between humans and robots (in terms of resources, opportunities, rights, etc.).Footnote 26

Second, it matters that humans could hate advanced, person-like robots because there is an imbalanced relation between us (humans—the haters) and them (robots—the hated). At present, it is only we who can create advanced, person-like robots, and only we who can destroy robots or eliminate them (by ceasing production). Regardless of how advanced, person-like, or functionally similar robots are to us, or how similar they will become to us (Sect. 2), they are currently not equal to us in this regard. Robots’ continued existence is entirely dependent upon us and our good will. As explained in Sects. 3, 4, 5, 6, 7, relationships characterised by hate explicitly involve bad will towards robots (via a desire that bad things happen to the robot). Given the enormous power that we wield over robots, it is important that we acknowledge how hate (bad will) could bias or prejudice our perceptions of them. Their existence could depend upon our doing this.Footnote 27

Finally, and in relation to the above, the fact that humans could be in relationships characterised by hate with morally considerable robots has significant applications for the ongoing robot rights discussions. If morally considerable robots are to enter human society and have relationships with us (Sect. 2), then we ought to be clear about how we should treat these robots (and how they should treat us). An important consideration here is whether morally considerable robots could themselves have rights, in the sense of having claims that others have duties to fulfil. For instance, we might consider whether a morally considerable robot has a legal status, a nationality, a right to privacy, etc.Footnote 28

All of the existing research in favour of robot rights is predicated on the following claim: If robots are sufficiently person-like, then they are morally considerable, and ought to have rights.Footnote 29 The above discussions of hate (Sects. 3, 4, 5, 6, 7) suggest that this claim is too quick. This is because, as outlined in Sects. 4, 5, 6, 7, it is when robots are sufficiently person-like (in terms of appearance, behaviour, attributes, or skill-set) that humans can view them as inherently hateworthy and (consciously or otherwise) seek their destruction. Consequently, it is the very conditions for being a rights-holder (being sufficiently person-like) that put robots at risk of human mistreatment (unjust discrimination, exclusion, destruction, etc.—the very things that rights are supposed to protect against). This is important because it is humans who determine the conditions for being a rights-holder (being sufficiently person-like), and also humans who can hate robots when they meet these conditions.Footnote 30

To explain this claim further, we can draw on Manne’s (2016) critique of humanism. Manne begins by outlining an oft-presented view that inhumane conduct (oppression, genocide, rape, etc.) can best be explained in terms of dehumanisation—x is able to mistreat y because they see y as less than human (and akin to a nonhuman animal, an object, etc.). In reaction to this view, Manne suggests that at least some cases of inhumane conduct do not show dehumanisation: “Their actions often betray the fact that their victims must seem human, all too human, to the perpetrators” (391, and again at 399, 400, 401, 403 and 404). Here, it is because the hated subject is sufficiently person-like (or human-like, in Manne’s sense) that they are hated and mistreated. Manne explains this further as follows:

“Under even moderately nonideal conditions, involving, for example, exhaustible material resources, limited sought-after social positions, or clashing moral and social ideals, the humanity of some is likely to represent a double-edged sword or outright threat to others. So, when it comes to recognising someone as a fellow human being, the characteristic human capacities that you share don’t just make her relatable; they make her potentially dangerous and threatening in ways only a human being can be—at least relative to our own, distinctively human sensibilities” (Manne 2016: 399–400).

Whilst Manne takes humanness (or being sufficiently person-like, in our sense) to be an exclusively human trait, we can extend her arguments to the morally considerable robots discussed in Sect. 2. These robots are sufficiently person-like in the sense that they have relevant high-level properties (rationality, emotions, etc.), abilities, behaviours, skill sets, etc. In virtue of being sufficiently person-like, these robots can be viewed as dangerous or threatening to human beings. Using Manne’s arguments, it follows that this perception (that the robot is dangerous or a threat) could become increasingly virulent in nonideal cases (as in the cases in Sects. 4, 5, 6, 7, where robots were perceived to increase the risk of sexual violence and to contribute to job losses).

So, what does the above mean for robot rights? By and large, proponents of robot rights views seem correct to suggest that robots ought to have rights when they are sufficiently person-like (however these person-like criteria are understood). If we accept that all morally relevant entities ought to have at least moral rights, then it follows that person-like robots (who are morally considerable, see Sect. 2) ought to also have these rights. However, as emphasised above, the current robot rights views seem to miss an important connection between a robot having rights and a robot also being an object of hate. In what follows, I will explain how the discussions of hate presented in this article (Sects. 3, 4, 5, 6, 7) could actually help the robot rights views, rather than undermine them.

First, robot rights views should explicitly state how the conditions for a robot being a rights-holder (being sufficiently person-like) are also the conditions that could allow a robot to become an object of hate (see the above discussions of Manne and Sects. 3, 4, 5, 6, 7). We need to make this statement explicit so that we can be aware of the potential biases and prejudices that we may (consciously or otherwise) have against robots. Note that I am not suggesting that proponents of robot rights (or anyone who discusses the rights of robots, whether in support or not) hate robots (intentionally or otherwise). Nor am I claiming that the connection between the conditions for being a rights-holder and being an object of hate means that it is acceptable to deny rights to robots simply because we (could) hate them and want to destroy them. That would be akin to genocide or racism, and is obviously not a morally acceptable stance to take. Instead, my claim is simply that, by acknowledging this connection, proponents of robot rights views might be able to identify some of the biases and prejudices (unconscious or otherwise) that may prevent others from supporting robot rights. If we can identify the biases that cause people to have negative reactions towards robots (e.g., seeing them as eerie and threatening, see Sect. 6), and dispute these negative perceptions, then it may be easier to get the robot rights movement off the ground when the time comes (i.e., when we are satisfied that there are morally considerable robots who ought to have rights).

Second, by examining relationships characterised by hate, it may be possible for proponents of robot rights to identify specific human behaviours that robots ought to be protected against. For example, drawing on the arguments of Sects. 4, 5, 6, 7, we might suppose that morally relevant robots ought to be protected against destruction, or forced labour in dangerous environments, etc. In other words, by drawing on discussions of hate, we could learn what human threats robots are particularly vulnerable to, and what moral (and perhaps legal) protections they ought to have as a result. These considerations could help proponents of robot rights views to identify the scope of robots’ rights (if and when they have these rights). As the Neely quote in Sect. 2 emphasised, it is important that we start engaging with these moral considerations now “before we are quite at the position of having such beings to contend with” (Neely 2014: 109).

9 Conclusion

This article has argued for two claims. First, humans could be in relationships characterised by hate with morally considerable robots. Second, it matters that humans could hate these robots. This is at least partly because such hateful relations could have long-term negative effects for the robots concerned (e.g., by encouraging bad will towards them). The article ended by explaining how discussions of human–robot relationships characterised by hate are connected to discussions of robot rights. I argued that the conditions for a robot being an object of hate and for having rights are the same—being sufficiently person-like. I then suggested how my discussions of human–robot relationships characterised by hate (Sects. 4, 5, 6, 7) could be used to support, rather than undermine, the robot rights movement.