Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?
The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally, we canvass the most influential existing literature on the ethics of sex with robots.
Keywords: Sex robots · Rape · Artificial intelligence · Consent · Free will · Legal community
…creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations… (Delvaux 2016, p. 12).
In the report, there is a list of key characteristics of the sorts of robots Delvaux has in mind: they need a certain amount of functional autonomy, capacities for learning, “physical support”, and the capacity to adapt to the environment (Ibid., 6). The main examples of such robots given are autonomous vehicles, care robots, medical robots, drones, and robotic technologies for human repair and enhancement (Ibid., 8–9).1
A kind of robot that is not on the list of examples is the sex robot. But humanoid sex robots are being developed, and there is a widespread interest, or at least fascination with them (Danaher 2017). Presumably, sex robots can be functionally autonomous, capable of learning, have “physical support”, and adapt to their environment (Levy 2008). Hence they can be “smart” in the way that robots that are candidates for “electronic personality” should be. So, if we follow the logic of the reasoning in Delvaux’s report, it would be reasonable to put forward smart sex robots as candidates for the electronic person-status—with the appropriate rights and obligations—that the report discusses. This observation is the first main motivation for our discussion below.
This perspective from the EU is very different, it seems, from a point of view associated with the roboticist Joanna Bryson. According to Bryson, the appropriate metaphor for the relationship between humans and robots is that of master and slave; robots should be “servants you own” (Bryson 2008, p. 65). This claim is predicated on a previous argument that we have a moral obligation not to create robots that could have moral status (as agents or patients), personhood, or to which we could have moral obligations (Gunkel and Bryson 2014). Bryson is explicitly not suggesting that robots should be “people we own.” Nevertheless, she argues, we will own them, and are responsible for their existence, capacities, and even potentially their “goals and desires” (Bryson 2008, p. 72).
Combining the EU perspective with Bryson’s, we would get a situation with a human–robot legal community featuring a set of legally incorporated robotic sex-slaves (or sex-servants).2 This strikes us as an undesirable result, not the sort of legal human–robot community we should strive to bring about. So we here want to discuss options for avoiding this outcome, while taking seriously the idea that somewhere down the line, it may make sense to attribute a significant legal and moral status to smart robots, including humanoid sex robots.
In the human case, the key issue that typically separates legally permitted sexual relations from legally forbidden sexual relations is consent. If a person (e.g. a child) is unable to give/withhold consent, it is illegal to perform a sex-act on that person. Or if a person is able to give consent, but does not give his or her consent, it is also illegal and an act of rape to perform a sex-act on that person. This suggests to us that one possible avenue for incorporating robots into sexual community is via the concept of consent.
In discussing the topic of robotic sex and consent, we approach the question of robot-rights via the idea of human–robot communities.3 We do so because recent work that takes seriously the idea of extending moral and legal categories to the robotic domain proceeds precisely in this way. For example, Mark Coeckelbergh argues that when it comes to considering whether robots can ever be accorded rights and moral consideration, the key question should be what type of relations and communities can be established with robots (Coeckelbergh 2010). Similarly, when David Gunkel argues that we can “vindicate” the idea of rights for robots, he argues for this idea by considering what kind of relationships and companionships there can be between people and advanced robots (Gunkel 2012, 2014).
The type of robot that seems most likely to be incorporated into our moral and legal communities are social robots. A social robot is defined by Kate Darling as “a physically embodied, autonomous agent that communicates and interacts with humans on a social level” that is “specifically designed to elicit” anthropomorphic responses from users (Darling 2016, pp. 214–215). Humanoid sex robots clearly fit this description.
Elsewhere, we have recently discussed whether it is possible to bring sex robots into the romantic community by means of advanced technology that would enable mutual love between humans and sex robots (Nyholm and Frank 2017). Our question there was whether it is conceivable, possible, and desirable to create relations of mutual love between humans and sex robots.
Here, we want to ask a similar question regarding how and whether sex robots should be brought into the legal community. Our overarching question is: is it conceivable, possible, and desirable to create autonomous and smart sex robots that are able to give (or withhold) consent to sex with a human person? For each of these three sub-questions (whether it is conceivable, possible, and desirable to create sex robots that can consent) we consider both “no” and “yes” answers. We are here mainly interested in exploring these questions in general terms and motivating further discussion. However, in discussing each of these sub-questions we will argue that, prima facie, the “yes” answers appear more convincing than the “no” answers—at least if the sex robots are of a highly sophisticated sort.4
The rest of our discussion divides into the following sections. We start by saying a little more about what we understand by a “sex robot”. We also say more about what consent is, and we review the small literature that is starting to emerge on our topic (Sect. 1). We then turn to the questions of whether it is conceivable, possible, and desirable to create sex robots capable of giving consent—and discuss “no” and “yes” answers to all of these questions. When we discuss the case for considering it desirable to require robotic consent to sex, we argue that there can be both non-instrumental and instrumental reasons in favor of such a requirement (Sects. 2–4). We conclude with a brief summary (Sect. 5).
1 Basic issues
1.1 Sex robots and sexual consent
Consider, for instance, the sales pitch for the sex robot “Roxxxy”, who is advertised as a robot that knows your name, your likes and dislikes, can carry on a discussion and express her love to you and be your loving friend. She can talk to you, listen to you and feel your touch. She can even have an orgasm!5
The company promises that soon, they’ll also start selling a male counterpart to Roxxxy, called “Rocky.” Other companies are developing similar sex robots.
We’re skeptical that any current robot could really do all the things Roxxxy is said to be able to do in the sales pitch (Cf. Nyholm and Frank 2017). But we view Roxxxy—or the way she is advertised—as a good illustration of what we have in mind when we talk about a sex robot here. We mean a robot that looks like a human person, that has a fairly sophisticated type of artificial intelligence (AI), and that can perform various types of functions and exercise agency, including but not limited to sexual acts and behaviors (Cf. Danaher 2014).
We are particularly interested in robots created to be highly versatile and human-like in the way that they interact with human users. We are here less interested in robots that neither look like humans nor act in ways that replicate human actions. It is the more sophisticated, human-like sex robots where it makes most sense to take seriously the question of whether it is conceivable, possible, and indeed desirable to have a robot capable of consenting (or not consenting) to sexual encounters. The more a robot approximates a human individual (in terms of how the robot seems or in terms of its capacities), the more it starts making sense to seriously consider whether we should extend moral and legal norms that apply to humans to the interaction between humans and these advanced robots. That is why our focus is on sex robots designed to be similar to human sex-partners.6
Having explained what we mean by a sex robot, we now turn to the crucial issue of what consent is in the context of sexual interactions. Consent in general is often taken to be a “morally transformative act” (Wertheimer 1996, p. 342).7 This means that giving our consent to something another person does to/with us can change their action from being morally impermissible to permissible. A doctor’s action of cutting a patient open with a scalpel and removing an organ is morally permissible and sometimes morally obligatory because the patient (in most cases) has consented. It becomes surgery instead of assault.
Robin West points out that consent not only demarcates sex that is a crime from sex that is not, it also has a “legitimizing function” (West 2009). The presence of consent is taken in the contemporary western legal context to mean that the sexual activity, such as same-sex sexual activity, should not be regulated, interfered with, or prohibited by the state.
Alan Wertheimer distinguishes between the “ontology of consent” (features an act must have to be considered consent at all) and the “principles of consent”: moral considerations in light of which consent carries significant moral weight and can be thought of as “transformative” (Wertheimer 1996, p. 347). The ontology of consent includes the following basic elements: (1) Consent must be an act of some kind; in other words, it “is performative rather than attitudinal” (Ibid., 346). (2) Consent can be “explicit or tacit, verbal or nonverbal” (Ibid.). And (3) consent can only take place when “certain background defects” (e.g. coercion or lack of competence) are absent (Ibid., 347).
The principles of consent on the other hand cannot be so easily rattled off; they are largely dependent on the context in which the consent is given and the overarching normative theory that is best (Ibid., 350). For the purposes of our discussion, Wertheimer’s point is that whether or not something counts as consent is usually not the definitive moral question. Rather, what we consider as morally transformative (or even meaningful) consent is the product of previous moral theorizing (Ibid., 344).
In sexual, medical, and other contexts, the person giving consent must meet certain criteria to be able to do so, sometimes referred to as decisional capacity or competency. Appelbaum (2007) discusses the components of decisional capacity and how to assess it in the medical context. He focuses on “the abilities to communicate a choice, to understand the relevant information, to appreciate the medical consequences of the situation, and to reason about treatment choices” (Appelbaum 2007, p. 1835). Having this ability takes more than simply stating one’s choice and grasping basic medical facts. Patients need to be able to “explain how their values relate to the facts of the situation and connect to a conclusion in a way that makes sense” (Rhodes and Holzman 2004, p. 371). Underlying the ethical importance of establishing competence is the basic principle of respect for autonomy in medicine. Having these capacities is taken as a proxy for being autonomous in a sense that requires that others do not interfere with your decisions (Rhodes and Holzman 2004).
There are at least three striking differences between consent within medicine and within sexual relations. First, the epistemic standards for consent are ostensibly higher when it comes to consent to a medical procedure than when it comes to consenting to sex. Consenting to sex does not seem to require more than a minimal understanding of the risks and benefits of the sexual encounter. And with the exception of certain very relevant facts, for example, that they are HIV positive, initiators of sex do not seem obliged to disclose much information in order for the other person to be able to meaningfully consent.
Second, in medicine the refusal to consent to a procedure a physician thinks will be beneficial for a patient is generally taken as a red flag that the patient’s decision-making capacity should be assessed. Not only is it the physician’s obligation to solicit consent from the patient; they are also obliged to investigate the reasons behind a patient’s refusal to consent. In the case of sex, there is typically no obligation on the part of initiators of sex (or necessary expertise) to investigate the other parties’ reasons for consent or lack thereof.
Third, the physician–patient relationship is guided by a set of robust ethical commitments to the good of the patient and respect for the autonomy of the patient. Two persons considering having sex are not necessarily in the same position. A physician is often taken to have fiduciary obligations to their patients, partly because of a radical power-imbalance between doctor and patient. The doctor, in most cases, knows more than the patient, is in a position of greater social status in the medical context, and is not rendered vulnerable by illness, pain, or disability. The much-less-powerful patient has to put his or her trust in the physician’s competence and benevolence in order to receive care. In the sexual context, one person does not automatically have a fiduciary obligation to the other. This seems to be the case even though in many cases there is a power-imbalance between the parties to the sex-act. Further, the sex-act is not necessarily in the interests, in any sense, of the person being propositioned.
1.2 Robots and consent: short literature review
There is a small, but growing literature on legal and ethical issues relating to sex robots. Some contributions already briefly touch on the topic of consent. However, consent has not yet been thoroughly discussed in this literature.
The contributor to the discussion of sex robots whose work is perhaps most often discussed is David Levy. In Love and Sex with Robots, Levy predicts that in the future, people will not just want to have sex with robots. They will also want to marry robots and have them as their romantic partners. When Levy discusses consent, his main focus is not on whether the robot would consent to sex. It is rather whether the robot would consent to—or be able to consent to—marriage with a human. Notably, Levy thinks that many norms and values related to marriage and romantic partnership will change. But consent, he nevertheless writes, is something that will and should remain an important component of a marriage (Levy 2008).
Kathleen Richardson (2015), who is campaigning against sex robots, is very critical of Levy’s defense of sex robots and his analogy between the human–robot relationship and the customer-prostitute relationship. The attitude of a customer (usually male) to a prostitute (usually female) is morally problematic. The customer, Richardson argues, lacks empathy for the prostitute and fails to engage with the prostitute as an autonomous agent with a distinct subjectivity of her own, instead treating the prostitute as a mere thing (Richardson 2015, p. 291). This morally problematic and gendered attitude is likely to be “reproduced” in the relationships between humans and sex robots (Ibid., 292). Although Richardson does not explicitly address the issue of consent, she is engaging with closely related moral concepts like autonomy, agency, and subjectivity. Her position is compatible with the view that consent is a key moral requirement for some things that we do to others precisely because it is a way of showing respect for their autonomy.
Feminist critic of humanoid sex robots Sinziana Gutiu (2016) also discusses consent. Gutiu worries, in her “The Robotization of Consent”, that the introduction of humanoid sex robots will have erosive effects on many people’s attitudes towards consent in human–human sexual interactions. In the robot-case, consent will not be an issue, Gutiu argues, and this may lead a significant group of people to have distorted attitudes regarding whether consent is needed from potential human sex-partners.
Another important voice in the debate about the legality and ethics of sex robots is John Danaher. He critically examines the arguments of Richardson and Gutiu against humanoid sex robots. Danaher also discusses the question of whether the absence of consent in human–robot sex-relations would set a problematic “symbolic” standard, whereby consent is represented as not being important within sexual relations (Danaher 2017). Danaher argues that the symbolic meaning of human interaction with non-human artifacts—such as sex robots—can change over time. So even if current attitudes towards sex robots among many people interested in having sex with them may have ethically problematic undertones of disregard for consent, there is no guarantee that sex robots will continue symbolizing such sexual relations in the future. Therefore, Danaher argues, we should not be too quick to infer broad conclusions about whether sex robots are good or bad from what they currently symbolize (Danaher 2017).
These are the most substantial contributions so far in the literature on consent in sexual interactions between humans and robots. These are all important contributions on which to build. But we think a more deep-reaching discussion of whether consent is conceivable, possible, and desirable in human–robot sexual relations is called for. For example, when Levy discusses whether a robot could consent to marriage, all he says is: “if [the robot] says it consents and behaves in every way consistent with being a consenting adult, then it does consent” (Levy 2008, p. 159). While this is an interesting remark, we think it is both too brief and too behavioristic. More discussion is needed before we can settle whether a robot can consent. And whether a robot can consent doesn’t only depend on outward behavior. What goes on “on the inside” matters, too, and this requires that we think about what consenting would involve for a robot and what it would mean.
Also, if it is possible to create sex robots who could consent (or not) to sex with human partners, this could block the worry Gutiu and Richardson have about human–robot sex. If the robot—as well as the human—could consent to it, sex between mutually consenting humans and robots could set a good, rather than a bad, example for sex between humans. The worry that human–robot sex sets a bad example for human–human sex can either be dealt with by not having any human–robot sex or, alternatively, by creating morally and legally defensible types of human–robot sexual relations. If what sex robots symbolize becomes something desirable and justifiable—i.e. consensual sex—this would change their symbolism for the better, in line with how Danaher argues that we should try to make robot sex symbolize something positive, rather than something negative.8
Of course, with a very simple type of sex robot, it might seem silly to investigate consent between humans and robots. However, suppose the robots start being equipped with very advanced AI and that organizations like the EU start attributing rights and personhood to other advanced robots. Then the prospect of sex robots that can consent to sexual interactions with humans becomes a much more pressing topic to investigate. As noted above, we are here primarily interested in robots constructed to be very human-like in their appearance and behavior.9 When it comes to such robots, we should take Levy’s idea that consent matters within human–robot interactions seriously. Hence the discussion of the conceivability, possibility, and desirability of consent within human–robot sex-relations below.
2 Is it conceivable that a sex robot might be able to consent to sex?
We turn now to the question of whether it is conceivable that sexual interactions between a human and a sex robot might involve mutual consent. As to whether a human can consent to such a sexual encounter, we think that there is less room for a negative answer. However, there is room for sensible discussion about whether a robot could be advanced enough that a human could grant consent to it, making the robot the recipient of this consent rather than any human person who owns or operates the robot. One can imagine disagreement about this, whereby some would say that the consent is always offered to whoever proposes the robot as a sex-partner, whereas others might think that the robot is an independent enough agent that it can be the agent who seeks consent. Our main focus here, however, is on whether the robot could be the party who grants consent.
The first thing we need to do is to explain what we mean by the question of whether it is conceivable that a robot might consent to sex. We are asking whether or not there is anything conceptually incoherent about imagining a situation in which a robot gives consent. As we understand the question, then, it is conceivable that a sex robot consents to sex in case there is no conceptual contradiction or incoherence in imagining this. At this point, we are not asking whether it is likely or probable that robots might be able to consent to sex. We will save that question for the section below that discusses whether it is possible to have a robot that can consent to sex.
2.1 Consent and consciousness
The first possible “no” answer to the conceivability-question we will evaluate is inspired by Danaher’s above-mentioned discussion and focuses on consciousness and its relation to consent. Danaher rules out the possibility that sex robots might be harmed because he thinks of them as lacking conscious experience, and he rules out the possibility that sex robots might consent for the same reason (Danaher 2017). The first argument for a “no”, then, goes like this: first, giving consent requires being conscious; second, sex robots lack consciousness; therefore, sex robots cannot be conceived of as being able to consent or not consent to sex. Does this argument give us reason to regard a consenting sex robot as an inconceivable notion?
We think there are two reasons for rejecting this argument. Reason number one: we can imagine future robots sophisticated enough to enjoy a certain degree of consciousness. Researchers (e.g. Arati Prabhakar) are currently working on creating artificial consciousness, for example via human–machine interfaces (Prabhakar 2017). Daniel Dennett argues that whereas robots may not become conscious “in just the way humans are”, it is possible to set out practical requirements for making a robot conscious in a robot-specific way, enabling the robot to do most of what humans do with the help of our consciousness (Dennett 1994, p. 133). Ray Kurzweil thinks that, when we figure out how to “upload” the “salient contents” of our brains onto “another computational substrate”, this will create a conscious computer (Kurzweil 2013). Joanna Bryson, in turn, argues that on some plausible definitions of consciousness, some machines with AI can already be said to be conscious—for example, if we define consciousness as the ability to report verbally what one has perceived (Bryson 2012). On the whole, then, we see no principled reason why it should not be possible to create artificial consciousness.
If this succeeds, there is no reason why such technologies could not be incorporated into the internal architecture of humanoid sex robots. On the contrary, sex robots are among the likely candidates for robots into which this technology might be incorporated—certainly if the sex robot is also supposed to be a “loving friend” of the sort Roxxxy is advertised as being. So even if current sex robots like Roxxxy lack consciousness and are therefore unable to have the sort of subjective states we associate with consent, future sex robots may have the relevant forms of consciousness (Lumpkin 2013).
At the same time—and here comes reason number two—we doubt that consciousness, at least if by consciousness we mean something like subjective feelings or “qualia”, is even a necessary requirement for consent in the first place. Consent is primarily related to the will, and it is not clear to us that the will is first and foremost characterized by subjective feelings and qualia. The will is a mental capacity to endorse possible suggestions and courses of action, and to make judgments about what to do or accept (O’Connor 2016). This may require the ability to process information and to engage in reasoning or deliberation based on that information and certain goals or values. None of this seems to require, by necessity, that the subject of the will has any particular feelings or qualia while exercising their will. So even if consent has subjective components related to the ability to take in and deliberate on the basis of information-inputs and internal representations of alternatives and values, none of this seems to go beyond what we can imagine that a sophisticated AI-system can become able to do.
So we think there are two reasons why lack of consciousness in at least current robots should not be taken to render consenting sex robots inconceivable. Firstly, future robots are likely to have a form of consciousness.10 Secondly, it is not clear that conscious feelings and qualia are necessary components of the sorts of operations of the will we associate with giving or withholding consent.
2.2 Consent and free will
Speaking of the will, there is a second possible “no” answer we now wish to turn to. This one has to do with the idea of free will. The relationship between consent and free will is given surprisingly little attention in the philosophical literature; more commonly the focus is on the relationship between free will and moral responsibility (Strawson 1962; Frankfurt 1969; Dworkin 1970) or the relationship between consent and autonomy (Grisso and Appelbaum 1998; Dworkin 1988). Nevertheless, we take it that one possible argument for a “no” answer to whether robots could consent could go like this: Step 1: The ability to give consent requires possession of free will. Step 2: It is inconceivable for robots to have free will. Conclusion: It is inconceivable that robots could give or not give consent. How should this argument be evaluated?
Of course, one might think that it is inconceivable or incoherent to think that anyone has free will, humans and robots alike. On this view, robots cannot consent, but neither can we. Maybe this is because for any given decision, in order for it to be an exercise of free will, there must be “alternative possibilities” available to the agent and because this is thought incompatible with the best formulation of determinism (Kane 1996, p. 37). Alternatively, one might think that some free-will undermining form of determinism applies in some way to robots, as artefacts, as things made of metal rather than flesh and bone, or as objects designed by humans, that does not apply to human beings. Clearly, whether we think it is even conceptually possible that robots could possess free will depends on what we think free will is in the human case. It is beyond the scope of this paper to fully discuss the extensive literature on free will. We will confine ourselves to three key points.
First, there is a wide range of plausible compatibilist views of human free will, from Daniel Dennett’s free will “worth wanting” (1984), to Strawson’s analysis of the reactive attitudes (1962), to Michael Smith’s rational “raft of possibilities” (2003), to the “reasons responsiveness” theories of McKenna and Coates (2016) and Fischer and Ravizza (1998). If we accept one of these compatibilist understandings of free will, it seems less fantastic to claim that future robots might come to possess free will (McCarthy 2000).
Second, consider Eddy Nahmias’s characterization of free will, according to which: [f]or an agent to have free will is for her to possess the psychological capacities to make decisions—to imagine alternatives for action, to select among them, and to control her actions accordingly—such that she is the author of her actions and can deserve credit or blame for them (Nahmias 2016).
If free will is a set of psychological or cognitive capacities, it can come in degrees, develop over the course of childhood into young adulthood, and be more or less impaired by a variety of conditions, e.g. addiction (Ibid., 12–13). On this kind of view, it is conceivable that robots may possess some of the capacities that constitute free will, without possessing others, just as children do.11
Third, it should be noted that we need not necessarily accept that free will is a requirement for someone to be able to consent. At least, this is open for discussion. For example, Benjamin Vilhauer argues that free will is not a prerequisite for the ability to give rational consent and subsequently be held morally responsible for one’s actions (Vilhauer 2013). His position is rooted in a revisionary Kantian conception of persons as ends-in-themselves, regardless of whether they have free will or not (Ibid., 142).
As we see things, what is most important here is really what agential functions a robot could perform. Can it take in information about alternatives open to it and then evaluate those alternatives on the basis of certain values and priorities on which it operates? Can the robot take a stance, based on the information it processes and its evaluation of its options? If a robot is able to perform these agency-functions, we think that it has enough by way of basic free will for it to make sense to regard the robot as giving consent. Granted, on some conceptions of free will, this may not yet count as free will in a strong sense. However, it would be enough to make it plausible to conceive of the robot as being able to give/withhold consent. Ultimately, it is not important, as we see things, whether such abilities are sufficient for the robot to fall within the extension of the concept of free will. It is much more important whether these abilities are sufficient for the robot to fall within the extension of the concept of consent.
To conclude this section: we don’t think that worries about whether robots might possess a strong form of free will should lead us to determine that robots could never be able to consent to sex. What matters is instead whether they acquire a sufficient “degree” of free will and agential capacities of the sorts most typically related to the concept of consent. We see no conceptual incoherence in imagining that robots could come to acquire such features.
3 Is it possible to create a sex robot that can consent (or not consent) to sex?
We have interpreted the conceivability-question as being about whether there is any conceptual incoherence involved in the thought of a robot giving consent. In this section, when we turn to the question of whether it is possible to create such a robot, we interpret this question in the following way. We first ask whether current technological advances make it likely or probable that sex robots can be created with the ability to consent, or not consent. We then turn to the question of whether future technological advances might make it likely or probable that such robots could be constructed.
A “no” answer to the question of whether it is possible to build a sex robot that can give/not give consent appears most plausible if we limit ourselves to what is feasible with currently available technologies and the forms of AI they use. It is hard, with current technologies, to build a flexible robot that can respond to novel situations in adaptive and creative ways (Mindell 2015). Typically, current “autonomous” robots need a large number of humans to operate as a support-system behind the scenes to guide and adapt the robot’s responses to novel situations and environments (Ibid). Given this, it can plausibly be argued that robots are not yet sophisticated or autonomous enough for it to make sense that a current sex robot would truly be able to deliberate in the sort of way we might think is required for giving, or withdrawing, consent to sex in an unfamiliar situation.
Some even argue that it is not possible to build an autonomous robot that would be able to act on the basis of reasons for or against options, even if technology develops far beyond what is currently possible. As we understand them, this is the perspective Purves et al. (2015) take in their recent discussion of whether AI-equipped autonomous weapons would be able to act on the basis of reasons for or against targeting certain enemies. If that thesis generalizes, then perhaps even future AI-equipped technologies won’t be able to give/not give consent, given that this requires the ability to make decisions based on reasons.
However, we are skeptical of the claim that neither current nor future AI-equipped autonomous robots could be said to be able to act for reasons for or against options. Our skepticism is based on the functionalist perspective sketched at the end of the foregoing section. If the robot has an internal value-system along with a set of priorities, and is able to take in information about action-options and then test those action-options against its values and priorities and select a course of action, it strikes us as plausible to construe this robot as acting on the basis of reasons. And it does not seem outlandish that a robot could do this—either with currently developing technologies or with future advancements.12
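To make this functionalist picture more concrete, the decision procedure just described can be sketched in code. This is an illustrative toy model only: all of the names, values, and thresholds below are hypothetical, and nothing in the sketch is claimed to amount to consent in any legal or moral sense. It merely shows an agent that takes in options, tests them against an internal value-system, and takes a stance on that basis.

```python
# A toy "functionalist" agent of the kind described in the text.
# All values and names are hypothetical illustrations, not a proposal
# for how an actual consent-capable robot would be built.

from dataclasses import dataclass


@dataclass
class Option:
    """An action-option, described by the degree to which it has certain features."""
    name: str
    features: dict  # feature name -> degree (0.0 to 1.0)


class ValueBasedAgent:
    """Takes in options, evaluates them against an internal value-system
    with priorities (weights), and takes a stance based on that evaluation."""

    def __init__(self, values, acceptance_threshold=0.0):
        # values: feature name -> weight (positive = valued, negative = disvalued)
        self.values = values
        self.acceptance_threshold = acceptance_threshold

    def evaluate(self, option):
        # Test the option against the agent's values and priorities.
        return sum(self.values.get(feature, 0.0) * degree
                   for feature, degree in option.features.items())

    def take_stance(self, option):
        # The agent's "stance": accept only if the option scores above threshold.
        return self.evaluate(option) > self.acceptance_threshold


agent = ValueBasedAgent(values={"safety": 1.0, "coercion": -2.0})
proposal = Option("proposal", {"safety": 0.8, "coercion": 0.9})
agent.take_stance(proposal)  # 1.0*0.8 + (-2.0)*0.9 = -1.0 -> declined
```

On the functionalist reading sketched in the text, an agent of this general shape is selecting among options on the basis of reasons supplied by its values; whether that suffices for consent in the morally relevant sense is precisely what is at issue in this paper.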
As we move from what systems can currently do to what they are likely to be able to do as technologies develop, the “no” answer to the question of whether robots able to give consent can be created becomes less and less convincing, and the “yes” answer more and more convincing (Cf. McCarthy 2000; Lumpkin 2013). Above we argued that there is no conceptual incoherence in imagining a robot able to give consent along the lines described. Here we conjecture that, with further advances in what autonomous and smart robots can do on their own, it is also likely that future robots will be able to perform the functions we associate with deliberating about a sexual proposition and then either giving, or not giving, consent. However, we leave it to technical experts to make well-informed estimates of how soon the necessary forms of AI will be widely available and cheap enough to be put into sex robots.
To summarize this section: with current technologies, it may be unlikely that we can construct robots autonomous enough that they can be considered able to give or refuse consent to sex with humans. However, as technology advances, we do not see any principled reason for thinking it unlikely that the right kind of AI will be developed within the not-too-distant future.
4 Is it desirable to have a sex robot who can give or refuse consent to sex?
We come now to the issue that is perhaps most pressing from an ethical and legal point of view: namely, whether it is desirable to create the sort of sex robot we have been envisioning above. That is, is it desirable to create a sex robot able to either give, or refuse, consent to sexual propositions from humans? As before, we will first consider possible “no” answers to this question, before considering reasons why we might answer “yes” to this question. We will start with the issue of whether consent is the right issue to focus on in the first place.
4.1 Is consent all that is needed?
So far, our discussion has been premised on the assumption that consent is a good standard for legally and ethically non-objectionable sexual interaction between humans, which is why we have investigated whether it makes sense to apply the idea of consent within the robotic domain. However, feminist critics argue that even in the human case, consent is not as satisfactory a condition for sexual relations as it is commonly thought to be.
Michelle Anderson, for example, proposes replacing the consent standard with a model of sexual negotiation,
in which individuals would be required to consult with their partners before sexual penetration occurs…it would require only what conscientious and humane partners already have: a communicative exchange, before penetration occurs, about whether they want to engage in sexual intercourse (Anderson 2005, p. 1407).
Robin West, in turn, argues that consensual sex may still be “unwanted and unwelcome” and that this kind of sexual contact “often carries harms to the personhood, autonomy, integrity and identity of the person who consents to it—and that these harms are unreckoned by law” (West 2009, p. 224). The mere fact of consent does not tell us that the sex act was ethically permissible, harmless, good for the persons involved, or good in any other sense. Analogously, an employment arrangement may be consensual, and thus not slave labor, yet still be unfair, unsafe, exploitative, etc. (Ibid).
Catharine MacKinnon also argues for a welcomeness requirement on sexual relations (2005). The mere existence of consent, even if it is verbal, explicit, and affirmative, is not sufficient as a defense against a rape accusation, she thinks, because the consent-requirement fails to take into account the background conditions of women’s social and economic inequality.
Supposing that we agree with the reasoning of these authors, should we conclude that consent is not a desirable marker of morally and legally justifiable sex—either between humans or between humans and robots? We think that one should not be too quick to draw that conclusion.
Firstly, the negotiation-model Anderson offers can be seen, not as an alternative to the requirement of consent, but rather as a more elaborate idea of when a person really fully consents to sex. On this interpretation, consent requires a shared negotiation. It’s not enough that a sex-partner doesn’t resist or seemingly plays along. To secure consent, some sort of shared negotiation has to occur.
Secondly, it is possible to interpret West’s point as being that the range of wrongful sexual behaviors is not exhausted by acts of rape (= non-consensual sex). It also includes a wider range of possible harms and misconduct: viz. further ways in which a person’s autonomy or status can be harmed by somebody else’s unwanted sexual act or advance. Thus one form of very grave sexual wrongdoing consists in not seeking a sex-partner’s consent—but there are also other forms. The ideal forms of sexual interaction, we can agree with both West and MacKinnon, are the ones welcomed by all involved parties.
We agree with these feminist legal critics, then, that the almost exclusive legal and ethical focus on the importance of consent to sex leads to neglect of other significant elements of sexual morality and perhaps even the way sex-crimes should be adjudicated. This leads us to conclude that consent may not be sufficient for sex to be ethical. The mere presence of consent does not rule out the possibility that a sex-crime has occurred. However, we maintain that consent is necessary for both ethically and legally permissible sex. So if and insofar as the human–robot case should be modelled on the human–human case, consent would be a necessary, though perhaps not sufficient, requirement in order for sexual acts to be justifiable and desirable.13
4.2 We should make sex robots that can consent
That consent is desirable in the human–human case does not yet automatically mean that it is also desirable in the human–robot case. In order to argue that it is, we first wish to briefly return to our starting point above.
As we saw in the introduction, there is serious discussion—both in the EU and within academic ethical and legal theory—about the prospect of extending rights and person-status to advanced robots. This means, as we noted above, that robots would be brought into the legal community. What this means, furthermore, is that what rights and status we give to these advanced robots has direct implications for what type of legal community we create.
If we incorporate sex robots into the legal community, but we don’t require that consent—or something similar to consent—be required in the context of human–robot sex, this has the following implication. It means that the legal community does not take a strong stance against non-consensual sex with human-like members of the legal community. We think that this is an unacceptable implication.14 The legal community should make it very clear that any member of the legal community who enjoys the status of personhood needs to give his, her, their, or its consent before any sexual acts are performed on them. It cannot be that the legal community does anything that can be construed as condoning what is sometimes called “rape-culture”, i.e. a mindset by which non-consensual sex is normalized or otherwise implicitly or explicitly approved of, largely as a result of sexist attitudes, institutions, and patterns of behavior (Cf. Buchwald et al. 2005 on rape culture and Danaher 2014 for the connection to sex robots).
In other words, our recommendation is for the legal community not to include any new members without also requiring that other members seek their consent before attempting to have sex with these newly incorporated members of the community. It is desirable to have a legal community in which the general message is that, if anybody wishes to have sex with any person within the legal community, then it is necessary—but perhaps not sufficient—that the person gives consent first. That is the kind of legal community we have and should continue to have.
Notice that this is a conditional claim. The claim is that if it makes sense to incorporate robots with advanced AI into the legal community, then, firstly, sex robots with advanced forms of AI would be suitable candidates for inclusion into this community and, secondly, we should then require that they not be subjected to non-consensual sex. However, if sex robots lack sophisticated AI enough for them to be suitable candidates for inclusion in the legal community, it will make less sense to view it as desirable that the robots should need to be able to give consent. The less human-like robots are—the more they are like dolls that can move in certain ways—the less of a need there is to extend human moral and legal categories to them. However, the more human-like robots—including sex robots—are, the more it becomes desirable to extend legal and moral categories to them.
We think Levy (2008) and Turkle (1984) are right that once robots reach a certain level of sophistication, people will intuitively start treating them like persons/agents. And we worry that it will be confusing for people if it is okay to do things to certain persons/agents (e.g. sex robots) that it is not okay to do to human persons. We follow Gutiu and Richardson in believing, as one might put it, that by interacting with very human-like robots in certain ways, humans will be “trained” to also interact with other humans in that same way (Gutiu 2016; Richardson 2015; Cf. Kant 1997, p. 212).
As P.F. Strawson notes, people cannot help but respond to each other with a range of social emotions (“reactive attitudes”) when others they interact with appear to be “normal” adult human agents (Strawson 1962). And, we can add, we typically receive what might be called “peer feedback” on our reactive attitudes from those around us. That is, people tend to approve or disapprove of others’ reactions and emotions in relation to each other. This contributes to legitimizing and reinforcing these attitudes and emotional responses. We need a situation that helps to legitimize and reinforce justifiable attitudes towards human-like persons.
The prevalence of sexual assault and rape on college campuses suggests that many young people—particularly young men—are confused about the boundaries of morally and/or legally acceptable sexual conduct and that some blatantly disregard these boundaries. This suggests to us that introducing a category of rights-holding persons into the sexual community for whom different rules apply, since their consent need not be sought, is likely to create even more confusion about or disregard for consent within this sub-set of young people. It would “teach” them, as we might put it, that the sexual community is a hierarchy whose members differ in their rights and duties. This is the wrong message to teach young people. Rather, they should be taught that anybody who is a rights-holding person within the sexual community has a right to refuse consent to sex. Having very human-like robots with rights and person-status, but whose consent need not be sought, is likely to counteract this part of sexual education of the young, instead teaching the message that persons differ in their rights within the sexual community.
So in addition to the non-instrumental argument above about how the legal community should take a strong stance against allowing any of its members to lack sexual autonomy, we also think there is an instrumental argument to be made here. The legal community should not condone rape-culture in any way (non-instrumental argument). Nor should it allow its members to be “taught” and “trained” to disregard consent, as might happen if we create a category of robotic rights-holders whose consent to sex need not be sought (instrumental argument). For these reasons, we think that requiring consent is desirable within the context of sex between humans and sex robots—at least when the sex robots have advanced enough AI that it makes sense to seriously consider bringing them into the legal community.
As discussions about whether or not to include robots with advanced AI into the legal community start to develop, we cannot leave sex robots out of this discussion—especially not if the sex robots are made to be similar to human beings in their level of sophistication. And if sex robots are brought into the legal community, and the question of extending a form of “electronic personality” is taken seriously, this raises the question of how we can avoid the outcome that the legal community ends up having a class of legally incorporated sex-slaves. A possible solution is to introduce the notion of consent into the domain of human–robot sexual relationships, in a way similar to how consent is crucial for legally and morally permissible sexual relations between human persons. Hence we have been discussing whether it is conceivable, possible, and desirable that humanoid robots should be able to consent to sex. For each of these three questions, we’ve considered reasons for answering “no”, and found that there are important reasons why one might want to answer “no” to these questions. In the end, however, we have argued that there are also important reasons why one might answer “yes” to all three questions.15
Regarding examples of human repair- or enhancement robotics the report is unspecific. It simply refers to robotics used for “repairing or compensating human organs and functions” and associated “possibilities of human enhancement” (Delvaux 2016, p. 9).
As we read Bryson, her account is primarily concerned with the moral status of robots, not their legal status. Yet the issues about ownership that she discusses are very closely related to legal issues, as ownership is both a moral and a legal concept. Moreover, if Bryson’s moral account is correct, this could offer support for a certain position on the legal issue of the status of robots. For these reasons, we find it interesting to ask what the outcome would be if we use both the suggestion from the EU and the arguments from Bryson in a larger, combined argument about what the moral and legal status of sex robots should be.
An objection that is sometimes posed to any discussion of robot rights is that worrying about robot rights is premature given that the basic human rights of many human beings are not respected. We agree that promoting human rights should be the priority. But we do not think this means that one needs to set aside all (theoretical) questions about non-human rights until all human rights are realized; the same goes for discussions of animal rights.
A clarification: in taking seriously the prospect of creating a kind of person-status for sex robots, our idea is not to treat these robots in a way that completely matches up with what is appropriate in relation to our human person-status. Rather, the sort of person-status it might make sense to attribute to a robot would be of a more limited kind, appropriate to the given level of AI and the specific human-like capacities that the robot would have. For different kinds of robots, this might mean different things, depending on what capabilities and features the robots would have.
http://www.truecompanion.com/shop/about-us (accessed on March 27, 2017).
To clarify further: we are here focusing specifically on sex robots designed to both look like humans and be human-like in their behaviors and capacities. There can of course also be robots (or dolls) that look like humans but that lack any artificial intelligence that makes them similar to humans in their capacities. Likewise, there can also be robots whose artificial intelligence makes them similar to humans in certain respects, but that don’t look like humans. As we see things, a mixture of a human-like appearance and human-like AI-capacities creates a particularly heightened emotional and moral tension, making it particularly pressing to address the issue of whether there should be a requirement for consent in case people wish to have sex with such robots. However, we acknowledge that a human-like appearance and human-like AI can both be emotionally and morally loaded on their own, even if and when these properties do not appear together in one robot.
Here we consider sexual and medical consent. We exclude consent to be governed because it is not consent between individuals, and there is less focus in that literature on the capacities necessary to be capable of consenting.
Notably, sex between humans and robots could set a good example for humans in some ways (e.g. by being consensual) while simultaneously having other features that do not function to set a good example for humans. For instance, if the sex robot is built to look like a human child or a non-human animal, this can set a bad example for humans even as that same robot is designed to be able to give consent and in that respect be designed so as to set a good example. We are here focusing on the issue of consent in particular, and we are narrowing down our discussion to sex robots designed to look like adult human beings.
We want to remind the reader at this point that in addition to a sex robot’s human-like appearance and its capacity to perform a certain range of human-like behaviors and actions, we follow Delvaux’s above-cited report in treating functional autonomy, capacity for machine learning, embodiment, and the capacity to adapt to the environment as further key characteristics of any robots to which we might attribute certain rights. All of these things can come in degrees. This makes it hard to specify thresholds robots must reach in order to be suitable candidates for any robotic rights. For the purposes of this paper, viz. to investigate whether it is at all conceivable, possible, and desirable to have sex robots able to give consent to sex, we think that the discussion can justifiably be kept at a fairly general level, which corresponds roughly to the level of abstraction in Delvaux’s report for the EU.
And as we noted above, writers like Bryson (2012) even argue that some robots already possess a form of consciousness.
A possibility also worth taking into account is that certain kinds of robots might come to possess a capacity for free will that is only comparable to that of an infant or perhaps a small child. Importantly, this could already provide grounds for attributing moral status to the robot even if this would not be enough of a capacity for free will for the robot to qualify as being able to give/not give consent. The “degree” of free will we are most interested in here, however, is whatever minimal degree is necessary in order for it to make sense to regard the robot as being able to give or not give consent.
On the general issue of whether robots can exercise agency, Pettit (2007) argues that even a very simple robot can be said to exercise agency. In his example, a robot performs the function of picking up objects with certain shapes and putting them into a box, doing so in a way that discriminates among different shapes. This is already, Pettit argues, an example of an agent who acts in the service of a goal in a way that is sensitive to its environment. If we add to this example that the robot’s choice of which shapes to put into the box is regulated by a set of values and priorities that have been programmed into the robot, we can add that the robot is not only pursuing a goal in an intelligent way, but that it is also choosing how to do this on the basis of reasons provided by its values and priorities. For more on artificial agency, see also Floridi and Sanders 2004.
This paper does not discuss harms caused by rape that relate specifically to human biology. We thank an anonymous reviewer for raising this point. The reviewer suggested that part of the reason consent is required for sexual contact in the human case, but not in the case of robots, is that in some cases sex may result in pregnancy or in contracting sexually transmitted infections. We do not think that these capacities and vulnerabilities are the central moral reasons why consent is required in the human case. Rather, the need for consent has to do with respect for personal autonomy, which is not essentially tied to one specific set of biological features. Since a human being’s biological nature is not the main reason why consent is needed in the human case, it should also not be the main reason why consent is needed (or not) in the robot case.
Let us remind the reader at this point that by a “human-like” robot, we don’t mean a robot that is exactly like a human in all respects. Rather, we mean a robot that is designed to look like a human, and that also has AI that enables it to perform a wide range of behaviors and functions that we associate with human beings. This creates a spectrum where different humanoid robots can be more or less human-like. Thanks to an anonymous reviewer for prodding us to clarify this.
We are grateful to three anonymous reviewers for their very useful and detailed feedback.
- Anderson M (2005) Negotiating sex. South Calif Law Rev 78:1401–1438
- Bryson J (2008) Robots should be slaves. In: Wilks Y (ed) Close engagements with artificial companions. John Benjamins Publishing Company, Amsterdam, pp 63–74
- Buchwald E, Fletcher PR, Roth M (eds) (2005) Transforming a rape culture. Milkweed Editions, Minneapolis, p XI
- Danaher J (2017) The symbolic-consequences argument in the sex-robot debate. In: Danaher J, McArthur N (eds) Robot sex: social and ethical implications. MIT Press, Cambridge
- Darling K (2016) Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In: Froomkin M, Calo R, Kerr I (eds) Robot law. Edward Elgar, Cheltenham
- Delvaux M (2016) DRAFT REPORT with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN. Accessed 27 Mar 2017
- Dennett DC (1984) Elbow room: varieties of free will worth wanting. MIT Press, Cambridge
- Dworkin G (1970) Determinism, free will, and moral responsibility. Prentice Hall, Englewood Cliffs
- Grisso T, Appelbaum PS (1998) Assessing competence to consent to treatment: a guide for physicians and other health professionals. Oxford University Press, New York
- Gunkel D (2012) The machine question. MIT Press, Cambridge
- Hern A (2017) Give robots ‘personhood’ status, EU committee argues. The Guardian. https://www.theguardian.com/technology/2017/jan/12/give-robots-personhood-status-eu-committee-argues. Accessed 27 Mar 2017
- Kane R (1996) The significance of free will. Oxford University Press, Oxford
- Kurzweil R (2013) How can my consciousness survive indefinitely? http://www.kurzweilai.net/ask-ray-how-can-my-consciousness-survive-indefinitely. Accessed 27 Mar 2017
- Levy D (2008) Love and sex with robots. Harper, London
- Lumpkin J (2013) Are robots the future of sex? TEDxSiliconAlley presentation. https://www.youtube.com/watch?v=nxVBjfHzdI4. Accessed 27 Mar 2017
- McKenna M, Coates DJ (2016) Compatibilism. In: Zalta EN (ed) The Stanford Encyclopedia of Philosophy (Winter 2016 edition). https://plato.stanford.edu/archives/win2016/entries/compatibilism/
- MacKinnon CA (2005) Women’s lives, men’s laws. Harvard University Press, Cambridge
- Mindell D (2015) Our robots, ourselves. Viking, New York
- Nahmias E (2016) Free will as a psychological accomplishment. In: Schmidtz D, Pavel C (eds) The Oxford handbook of freedom. Oxford University Press, Oxford
- Nyholm S, Frank L (2017) From sex robots to love robots: is mutual love with a robot possible? In: Danaher J, McArthur N (eds) Robot sex: social and ethical implications. MIT Press, Cambridge
- O’Connor T (2016) Free will. In: Zalta EN (ed) The Stanford Encyclopedia of Philosophy (Summer 2016 edition)
- Prabhakar A (2017) The merging of humans and machines is happening now. Wired. http://www.wired.co.uk/article/darpa-arati-prabhakar-humans-machines. Accessed 27 Mar 2017
- Smith M (2003) Rational capacities, or: how to distinguish recklessness, weakness, and compulsion. In: Stroud S, Tappolet C (eds) Weakness of will and practical irrationality. Oxford University Press, Oxford, pp 17–38
- Turkle S (1984) The second self: computers and the human spirit. MIT Press, Cambridge
- Vilhauer B (2013) The people problem. In: Caruso G (ed) Exploring the illusion of free will and moral responsibility. Lexington Books, Lexington, pp 141–159
- West R (2009) Sex, law and consent. In: Wertheimer A, Miller F (eds) The ethics of consent: theory and practice. Oxford University Press, Oxford
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.