1 Introduction

For the purpose of this paper, I assume that humans will form loving relationships with social robots and that, for some people, there will be no relevant difference between the love they have for their robot partner and the love they might have for a human partner.

In one sense, this is not a straightforward assumption. Much has been written on the very possibility of our loving robots: on whether humans can love robots (de Graaf 2016; Levy 2009; Nyholm 2020; Viik 2020) and on whether humans should love robots (Levy 2008; Frank and Nyholm 2017).

Despite this important theoretical work, it is clear that no number of papers written on the proper conditions of love will stop a person from being in what, for them, is a loving relationship with a robot, or convince them that it is not really love. Emotions will be felt and relationships will be formed and, if it seems like love to the human involved, then the loss of the robot will cause psychological harm similar to that caused by the loss of a human partner, regardless of whether the relationship at issue is technically one of love. Therefore, for the purposes of this paper, theories of what conditions must be met in order for a relation to count as a loving relation will largely be placed to one side. In particular, we will not assume that the robot is itself able to reciprocate with emotions, or even that the robot has any inner mental life at all.

There are other areas of research where the possibility of human–robot love is an underlying assumption. For example, there are substantial examinations of the possibility of human–robot marriage and the impact that this would have on family law (Ryznar 2019; Yanke 2021), a legal arrangement that surely has love as a precursor, and Saetra (2022) examines how our forming loving relationships with robots might change what love is, again assuming that such a relationship is possible.

Beyond these examples, there is much supporting literature that explores and documents our propensity to form emotional attachments to social robots. Research in human–robot interaction has shown that, regardless of their beliefs about what is ‘on the inside’, human users report feelings of reciprocity in their interactions with robots (de Graaf et al. 2016; Lammer et al. 2014). Other empirical research has shown that humans flourish in the presence of robots (Broadbent et al. 2009; Wada and Shibata 2007). While these studies are not about love, they do paint a picture in which future robot love is feasible. Weber-Guskar (2021) questions how we should feel when robots become affective partners and focusses on three real-life-inspired cases in which the humans involved develop special relationships with robots, chatbots or other AI companions. The examples are not specifically ones in which the humans would describe themselves as being in love, but they are clearly examples in which an AI system becomes emotionally important for the person using it.

Finally, much has been written on the more general question of whether the human propensity to form an emotional attachment to robots is indicative of the need to introduce robot rights. Such an approach proposes that the mistreatment of robots under conditions where we feel an emotional attachment to them can cause an indirect harm to humans (Darling 2016, 2021; Nyholm 2020). Coeckelbergh (2010) and Gunkel (2018) defend a ‘relational’ view according to which robots might be candidates for moral consideration based on their ability to engage in social relations, and Coeckelbergh (2021) identifies our feeling that we have a relationship with another entity as sufficient justification for the entity’s moral protection.

Therefore, although there are a relatively small number of existing cases in which humans are reporting substantial and significant loving relations with robots, the seeds are sown. It is against this background that I assume that future humans will form substantial loving relationships with robots. The question is, should the existence of such a relationship be viewed as a reason for the robot or the human to be given special protections or to be granted additional moral considerations? I first consider Mamak’s (2021a) claim that a loving or ‘significant other’ relationship can turn the intentional destruction of a robot into a criminal act, beyond the destruction of property, in particular by granting rights to protect loving relations between humans and robots (Sect. 2). In Sect. 3, I argue that it would be a mistake for such a special relationship status to introduce protection within the legal system. However, I argue that we should take seriously the fact that the destruction or demise of some robots will bring about feelings of bereavement and that there is a risk of the additional harm of ‘disenfranchised grief’ if such loss is not validated (Sect. 4). Finally, in Sect. 5, I consider future circumstances in which the destruction of a beloved robot might be classed as a hate crime.

2 The destruction of one’s significant other

Mamak (2021a) asks us to consider the following scenario.

Imagine that a woman—Inga—has been living with a robot—Otto—for 40 years. She treats the robot as a partner and constantly says that she loves him, spends time with him, eats meals with him, talks with him, and cuddles with him while watching movies. Inga also has a sexual relationship with Otto. She understands that the robot has no inner life. As she experiences things, her need to be in a meaningful relationship is fulfilled. Her friends and family know that she treats her robot as a partner, and they also treat him as her partner. He even receives presents from them. She and her robot occasionally spend time with their mutual friends. Unfortunately, during one such meeting at their house, one of the guests starts to misbehave and attack Otto in front of Inga, kicking and damaging him. Her screams do not stop the guest, and Otto is completely destroyed. There is no chance to rebuild him or collect data from him. Inga is devastated and, crying, calls the police, which sets a case in motion. After brief proceedings, they conclude that, from the perspective of criminal law, nothing actually happened. (2021, 1–2).

Mamak considers this to be an unacceptable outcome. He argues that our robot significant other deserves special treatment in the eyes of the law. Mamak argues, persuasively, that the wrongness of destroying the robot lies in the value of the robot to the human, not in any monetary value that the robot might have. It turns out that, as Otto is a very old model, he has very little monetary value. But the point is that, even if Inga were compensated for her financial loss, that could in no way compensate her for the loss of a much-loved partner. Inga has lost a life partner because of the violent act of another. If Inga’s life partner had been a human, she would be able to appeal to criminal law for some form of retribution. We can imagine that, were Inga able to do this in response to Otto’s destruction, it might provide some comfort to her. It is entirely understandable for Inga to feel the need for her life partner’s destruction to be recognised as a significant harm. In addition, Inga might want to ensure that others do not have to experience the loss that she has had to endure and might believe that, if there were some law to protect loved robots from vicious attack, this might act as a deterrent to future destructive acts.

Mamak is clear that the wrongness of the act of destroying a beloved robot lies in the emotional value (and resulting loss) to the human and not in any independent moral status that the robot has gained. In addition, Mamak is clear that affording robots protection does not itself confer moral status on robots. In this way, the protection that Mamak wants to afford robots flows from a different source than the protection that robot rights theorists argue for when they base their claims on properties that robots may present as having. In addition, the protection differs from the protection that the law affords to humans and other animals, as that protection does flow from assumptions that agents have significant moral properties.

As I argue in the next section, I think that Mamak is wrong in his claim that the love relation can itself be a reason for the granting of protective status to some robots. However, Mamak’s paper is important as it raises the matter of the substantial emotional harm that can come to those who love robots and the question of how society might respond to such harms.

3 Love and the law

Whilst acknowledging the significant harm that has come to Inga (see Sect. 2), I do not think that granting legal protection to some robots on the basis of their being loved by a human is the right step. First, recognising being loved as affording special legal status is certain to have unwelcome consequences. Once we have recognised love as being legally relevant in the case of robots, it is not clear that we could stop it from creeping into other legal cases as a justification for additional rights. As Mamak himself notes in quoting Fairfield, ‘Law progresses by analogy, not formal logic: by relating old stories to new contexts’ (Fairfield 2021, 55). In this case, the fact that introducing love as a legally relevant factor into the legal system would bring unfairness, as I argue below, should be an indicator that love is an unwelcome and complicating legal consideration.

Consider first a case which draws on a particular feature of the Inga/Otto case: the intentional destruction of Otto as a beloved object. Imagine that Inga did not have a relationship with a robot at all but instead had devoted her life to writing what she considered to be a masterpiece of literature. All of her spare time was spent writing and perfecting it, and the process had played such a large role in her life that she now considered the work to be a part of her identity. She never had children and never had a serious romantic relationship because she did not want those to get in the way of her important project. All her friends, including Tristan, know what the manuscript means to her. Tristan deeply dislikes Inga and, to hurt her, he destroys her manuscript: an evil act and devastating for Inga. As he suspected (and hoped), this sends Inga into a long-lasting depression.

As despicable as this act is, and as horrific as the consequences are for Inga, a person loving something is not in itself a reason for that thing to be afforded further legal protection. We can imagine a hundred plausible varieties of this story by substituting different much-loved objects of central value in Inga’s life. In each version of the story, we feel a huge amount of empathy for Inga when the object is destroyed. But in none of the cases do we have the intuition that Inga’s love alone can transform the object into something that criminal law should afford special protection. And if we do have that intuition, then we might ask why the call for legal protection for loved robots should not be extended to other much-loved things. To put this another way, what is it about our love for robots that would make them a special case?

In the considerations above, I purposefully screened off the fact that Otto is more than a beloved object to Inga: he is her life partner. Perhaps this is what makes robots a special case. Mamak explicitly builds his case for rights on the claim that ‘significant others’ deserve special treatment, where ‘significant other’ is intended to pick out something beyond a mere object, even a much-loved one. The moral relevance of significant others is that, in a situation in which two agents are in danger, it is expected that we would be motivated to save our loved one before turning to save the other. However, although this much is true, the bare claim that our significant others deserve special treatment must be qualified. What is the normative force of the ‘deserving’? Are we morally obliged to give our significant others special treatment? Or is this a permissible action? Or is it only excusable? In addition, under what conditions can the ‘special treatment’ kick in? I consider this last question first. If two children are in danger and one of them is the child of the potential rescuer, it is morally permissible for the person to save their child over the other. If two dogs are in danger and one of them is the rescuer’s dog, it is likewise morally permissible for the person to save their dog over the other dog. Similarly, if two robots are in danger and one of them is the rescuer’s, it is morally permissible for them to save their own robot first. In that sense, it is permissible for significant others to be given special treatment. However, consider the following scenario:

Tiên has had her pet dog Bo for ten years. He is by far the most important being in her life. Tiên had early life experiences that made her withdraw from relationships with humans and, beyond engaging with her work colleagues, she does not place much value on the humans in her life. In fact, Tiên has a low opinion of people in general. Bo, on the other hand, is her best friend and constant companion. Tiên’s love for Bo is immense. One day, while Tiên is exercising Bo on the beach, he and a small child are swept into the sea by a freak wave. Tiên is the only adult around and it is likely that she only has time to save one being.

What should Tiên do? First, we must accept that losing Bo will be devastating to Tiên. She is Bo’s human carer and she feels that the relationship she has with him is akin to the relationship that a parent has with their child: she is responsible for his wellbeing. The human child who is in danger, on the other hand, has no significance for Tiên and is in no conceivable way her responsibility. Despite all of this, despite Bo being Tiên’s closest companion, it is right that she should save the child first. The fact that Tiên loves Bo is not taken as a legitimate reason for her decision to let a child die. To do so would not be morally permissible behaviour. One might think that this is because we implicitly assume that the child is also loved by someone and that this love offers additional protection to the child. But even if we stipulate that the child has no living relations and is loved by no one, and Tiên knows this, it is still the case that Tiên must save the child. This will require a great deal of personal sacrifice, but being part of a moral community requires that Tiên puts what is in her own best interests to one side for the sake of the values of that community and what it recognises as the greater harm.

What does this tell us about the principle that we can give preferential treatment to our significant other? It tells us that love can permissibly act as a tie-breaker when we are faced with a decision over which of two beings of the same moral standing to prioritise, but not in cases in which there is a misalignment of moral standing: more specifically, when our loved one is an entity deemed to have less moral standing than the other entity at risk.

What about the normative status of the claim that our significant others are deserving of special treatment? Williams considers an example in which it seems absurd to expect a man faced with two people in danger, one of whom is his wife, to treat both equally: it is clear that he can give preferential treatment to his loved one in his actions (1976, 17). As Williams sees it, it is morally permissible for the man to save his wife. Perhaps inconveniently, ‘one reaches the necessity that such things as deep attachments to other persons will express themselves in the world in ways which cannot at the same time embody the impartial view, and that they also run the risk of offending against it’ (1976, 18). Williams reaches the conclusion that, to have substance, a life cannot grant supreme importance to an impartial system, and that the ‘system’s hold on [life] will be, at the limit, insecure’. But that insecurity itself cannot be limitless: it must retain a tension with the impartial view. As such, it is not clear that decisions following from feelings of love themselves have positive moral value rather than being justifiable exceptions to the impartial moral system. That is, there is a sense in which love is permitted to get in the way of the system working fairly: a recognition that, as Williams put it, the impartial system is insecure. But it does not follow from that permissibility that love, or being loved, can be seen to impart some additional moral worth to an entity such that they are then deserving of special treatment. It is not even clear that choosing to save your loved one over another is the morally right thing to do, rather than simply an excusable thing to do. Understood in this way, it becomes clearer why loving a person or thing, or categorising some entity as a significant other, is not an appropriate candidate feature for additional objective legal protection but is rather something accommodated within the moral system.

We might even go further and recognise that being unloved is a particularly cruel reason for losing out on some legal protection. It is certainly not something that we would tolerate for other beings: I do not think anyone would want to live in a society in which the beloved are given special status while the unloved are seen to be less in need of protection. We certainly would not want to see such thinking extended to the criminal law that protects humans or other animals. In these cases, if anything, we might be inclined to see the unloved as in need of additional protection, because the loved already have their supporters. Returning to the robot case, a scenario in which a beloved robot has an indirect protected status purely in virtue of being loved would lead to an outcome which is intuitively unfair: a two-tier society in which some robots fall under protective rights while others can be mistreated. Not only are they unloved, but being unloved legitimises their mistreatment: a double blow.

Love itself should not be seen as imparting additional protection, or as giving those who love additional legal claims. Having said that, I do believe that the loss of a loved one must be counted. Below, I argue that there are other ways for robot love to be counted, perhaps even within the legal system, that do not require robots to be afforded protective rights in criminal law.

4 The harm of unacknowledged loss

It could be that, at the heart of Mamak’s plea for robot love to be protected, is the concern that loving relationships with robots will not be taken seriously. Here, I consider one potentially damaging consequence of those in robot relationships being a minority group whose needs are not taken seriously by the broader society.

Consider a scenario in which Tristan is Inga’s friend but he simply does not understand what it is for Inga to love Otto and does not consider this to be a valid loving relationship for Inga. In this scenario, Tristan does not attack Otto, but Otto is involved in an accident in which he is irreparably damaged. When Inga turns to Tristan for support, he does not understand her grief. He wonders how she can be so upset. After all, Otto was only a robot. Tristan finds Inga’s ongoing grief confusing and wonders whether he should buy her a new model robot as a replacement for Otto.

In this scenario, although Tristan does not bring about the initial harm to Inga, he adds to her suffering by bringing about disenfranchised grief. Disenfranchised grief, a term introduced by Doka (1989, 2008), is defined as ‘grief that results when a person experiences a significant loss and the resultant grief is not openly acknowledged, socially validated, or publicly mourned […] [A]lthough the individual is experiencing a grief reaction, there is no social recognition that the person has a right to grieve or claim for social sympathy or support.’ Much has been written on the condition and Doka himself gives a range of examples in which we can find disenfranchised grief, not all of which involve bereavement.

One particularly poignant context in which we find disenfranchised grief is in the loss of life partners and spouses in sexual minority communities. In the movie A Single Man, the main character George, who has been in a long-term homosexual relationship with Jim, receives a phone call informing him of the death of Jim, his life partner, in a car accident. We discover that, rather than hearing of Jim’s death directly, it is Jim’s cousin who is calling to tell George about Jim’s death: Jim’s parents do not inform him due to their prejudice. Broken-hearted, George showers, shaves, dresses and prepares himself for his work day. Despite the death of a loved one being one of the most painful experiences that a human can face, sexual minorities very often report the impact of disenfranchised grief as they are left attempting to grieve the death of a partner or spouse while others deny the legitimacy of their love and portray their mourning as illegitimate. ‘Socially unacknowledged mourners face increased difficulties in recognising their own grief and achieving a sense of emotional closure after the death’ (McNutt and Yakushko 2013, 88). This leads to psychological distress and a risk of further emotional complications.

Another prominent example of disenfranchised grief is in the context of the death of a family pet. Cleary et al. (2021) report the experiences of many pet owners who claim that the distinction society draws between grief for a human and grief for a pet made the experience of grieving a beloved pet more complex for them. Bereaved pet owners ‘struggled with their feelings of overwhelming grief in the face of societal disapproval, and even acted to minimise or hide their true grief to fit with societal norms’ (2174). Those who reported the resulting disenfranchised grief after the loss of a pet also displayed increased symptoms of anxiety, depression and functional impairment.

By analogy, although Inga’s loss should not be reflected in special protection for Otto, the significance of her loss must be socially recognised if she is to be protected from further unnecessary harm. One might argue that Inga believed that Otto was not sentient and so, from her point of view, he would not have felt pain or fear on his death, and that this might help to diminish her grief when compared to the loss of a human partner. This might be right to an extent: when we experience the death of a loved one, our grief is compounded with worries about how the experience felt to them. On the other hand, we can imagine many scenarios in which a person might be assured that their human loved one felt no pain or awareness of impending death and, although that might comfort them to some extent, it surely does not diminish their loss nor their need for bereavement support. If we accept that Inga can have felt love for Otto and acknowledge that she has experienced 40 years of her life with him as her life partner, then we must also accept that her loss on his eventual destruction will be devastating, and that must surely be socially recognised. More generally, in a society where humans form loving long-term relationships with artificial agents, we must be prepared to recognise the emotional impact on people when, for whatever reason, that relationship ends.

5 Prejudicial hate and robot love

I turn now to a scenario in which Tristan does not misunderstand the significance of Inga’s relationship with Otto: he sees its significance and it makes him angry. Tristan thinks that loving a robot is weird, and seeing Otto and Inga together makes him feel deeply uncomfortable. Despite Inga’s proclamations to the contrary, Tristan refuses to see her feelings as love as it ‘should be’, holding on to the view that romantic love should only be between two humans. Tristan has a human partner and sees his own robot as simply a replaceable tool so, although he can see that Inga acts as if she really loves Otto, he simply cannot validate the relationship as one in which there is love. It angers him that Inga keeps declaring her ‘love’ for Otto: he sees it as somehow ‘spoiling’ love for him. It is this anger that motivates his destruction of Otto.

We have seen denials of love played out with other protagonists, with devastating consequences. Members of the LGBTQ+ community have faced horrific struggles to have their loving relations recognised as legitimate: their right to simply live their lives loving whom they wish is still denied in many places today. Loving relations between individuals from different races have similarly faced overwhelming discrimination (Caballero and Aspinall 2018). In both of these cases, there are countless examples of individuals being the targets of criminal acts where prejudicial hatred is the motivation. Such acts are now recognised in many jurisdictions of criminal law as ‘hate crimes’. In the UK, a hate crime is defined as ‘Any criminal offence which is perceived by the victim or any other person, to be motivated by hostility or prejudice based on a person’s race or perceived race; religion or perceived religion; sexual orientation or perceived sexual orientation; disability or perceived disability and any crime motivated by hostility or prejudice against a person who is transgender or perceived to be transgender.’

Is there a future in which we would describe Inga as being the victim of a hate crime? I think that there could be, if those who choose to love robots become a minority that is recognised as being in need of protection. It is not far-fetched to think that resistance to robot relationships would arise. As the examples from minority relationships show, humans do not have a great track record of accepting love that does not conform to their own ideals. In addition, as Weber-Guskar notes, ‘[t]here is a rather strong intuition that there is something wrong about getting attached to a machine, about having certain emotions towards it, and about getting involved in a kind of affective relationship with it’ (2021, 601).

If, in the future, the destruction of a robot partner under certain conditions could be counted as a hate crime, then it is possible that Otto’s destruction could be penalised under criminal law. Note that this is a solution that does not require Otto to be granted any rights. The crime is committed against Inga. Her property has been destroyed because of prejudicial hatred against those humans who have robots as significant others. It is interesting to note that this move would also protect Inga from other forms of abuse and injustice. For example, Tristan’s use of the term ‘robot lover’ as a slur could be criminalised as hate speech; Tristan’s setting up a website that encouraged the hatred of those who form relationships with robots could be criminalised as incitement of hatred; and so on.

It seems to me that, if the harm that has been inflicted on Inga were to be recognised as a criminal act, it should be as a hate crime. In this way, Inga’s love for Otto is recognised and protected by criminal law, but Otto himself is not.

6 Concluding summary

I have argued that, as despicable as the intentional destruction of Inga’s beloved Otto is, we should not afford robots protective rights based solely on their being a human’s ‘significant other’. To do so would result in an intuitively unfair system in which some beloved objects are afforded protective rights while others are left unprotected. In addition, bringing love into the legal system in this way would likely lead to further undesirable situations in which beloved humans or animals are afforded more protection than their unloved counterparts, in virtue of the legal precedent that protection for beloved robots would set. However, I have also argued that the possibility of humans forming loving relations with robots will require social acceptance of robot love if further harms are to be avoided. First, if loving robot relations are not deemed valid, there is the risk of the bereaved person experiencing the further harm of disenfranchised grief. Second, without social acceptance, there is a possibility that humans who love robots will become a minority group in need of protection. I have argued that this could lead to certain acts against members of the group being criminalised under hate crime legislation.