1 Introduction

The increasing number and sophistication of robots are bringing about changes in humans' lives on many levels. Changes are discussed on the macro level—for example, how robots are changing the job market (c.f. [1, 2])—but also on the micro level, such as appropriate responses to individual robots. Some human–robot interactions seem relevant to our moral practice and are already under discussion. Experts believe that the discussion is unlikely to stop at ethics and will probably precipitate changes to the scope of the law [3, 4, 5]. Various guidelines have already been proposed on artificial intelligence (AI) ethics issues [6, 7, 8]. One ethical problem on which to focus is the question of violence against robots and possible legal responses to it. There are examples of both positive and negative treatment of robots by humans [9], but the negative behaviors are often ethically more interesting than the positive ones.

Questions about whether it is acceptable to mistreat robots are not new. In 2014, Knight stated that it might seem ridiculous to consider whether the treatment of machines should be regulated ([10], 9). The issue has long been deliberated by ethicists, philosophers, and lawyers, as well as raised in popular media (c.f. [11, 12]). However, these questions become more pressing with the increasing complexity of robots and the fact that they share an expanding number of features with humans and animals. A recent example of public concern on this topic relates to the Boston Dynamics robots (c.f. [13, 14]). Videos in which robots were kicked and pushed caused a significant stir, and PETA, an organization fighting for animal rights, was even asked to intervene [15]. A more recent example, described in Wired, was seen in California in 2019, when a drunk human knocked down and repeatedly kicked K5, a security robot from Knightscope [16]; see also [17]. Wired discussed the K5 case under the title “Of Course Citizens Should Be Allowed to Kick Robots.” This paper defends the opposite view: Violence against robots should be prohibited, at least partially.

The first section of this paper is devoted to the requirements for introducing a general ban on violence against robots. It is difficult to imagine such a ban coming into force until there is a consensus on the issue of robots’ moral status. The second section considers the implications of focusing on public violence, in particular how this shift moves the point of the discussion from robots’ moral status to protecting the public sphere against antisocial behaviors. The final part of the paper contemplates how such a change in the law could affect the position of robots.

2 A Ban on Violence and the Moral Status of Robots

Introducing a prohibition on violence against robots into the legal system might be perceived as an example of granting robots rights [18]. Granting rights to robots could seem unthinkable, but, as Stone has pointed out, rights have been extended to new entities throughout the history of law ([19], 2). Discussion of robots’ possession of rights is strongly connected with deliberation on their moral status, more precisely on their moral patiency, that is, the capacity to be a target of right or wrong ([20], 505). Four possible approaches can be considered. The first is based on the intrinsic properties of a candidate for the moral circle, the second on an interpretation of Kantian indirect duties of humans to animals, the third on virtue ethics, and the fourth on a relational perspective.

The usual procedure for attributing moral status, which could take the form of some rights, to an entity entails looking at the entity’s intrinsic properties. Properties are a robot's intrinsic characteristics, defining what it is ([21], 16). If the entity has a number of crucial features, it could be said to have moral status and deserve certain rights (c.f. [22]). A candidate for inclusion in the moral circle needs to be characterized by a certain ontology [23]. It is necessary to consider which properties are important in determining moral status, for example consciousness or the ability to feel pain (c.f. [24,25,26,27,28,29,30]). A position based on properties is often accepted as legitimate in a discussion on rights. Commentators rarely contest that if a robot has particular qualities, it is acceptable to grant it rights (c.f. [31, 32]).

In 1964, Hilary Putnam argued that the material of which an entity is composed should not determine the possibility of its possessing rights; rather, it is its properties that matter [33]. Granting moral status based on the possession of certain qualities could be an encouraging stance for those who believe robots need legal protection, but closer examination reveals problems. For example, some question which qualities would be sufficient, how to understand those qualities (c.f. [34]), and even whether we should create robots with such qualities in the first place (c.f. [35,36,37]). This approach also tends to defer discussion of the topic to an unspecified future time [38]. The key problem relates to epistemological limitations ([39], 212), for example how to assess whether an entity possesses certain qualities, such as the ability to feel pain. In Why You Can’t Make a Computer That Feels Pain, Daniel Dennett asserted that there must be doubt that such knowledge of another entity is possible [40]; see also [41]. As Gunkel has explained, however, Dennett does not prove machines’ inability to suffer but rather our difficulties in explaining the experience of pain in the first place ([18], 147).

A second popular argument for attributing moral status to robots centers on indirect duties, which are usually associated with Immanuel Kant and his views on nature and our obligations toward animals. Kant stated,

So if a man has his dog shot, because it can no longer earn a living for him, he is by no means in breach of any duty to the dog, since the latter is incapable of judgment, but he damages the kindly and humane qualities in himself, which he ought to exercise in virtue of his duties to mankind. Lest he extinguish such qualities, he must already practice a similar kindliness toward animals; for a person who already displays such cruelty to animals is also no less hardened toward men. ([42], 212)

Kant believes that we have indirect duties toward animals not because of their qualities, but because of our own. As this argument shows, an analogy can be drawn between such discussion of animals and the debate over the status of robots (c.f. [43, 44]; in some sense also [22]). In that context, it could be asked whether we should treat robots as Kantian dogs [45]. However, other commentators have critiqued such an analogy as an unreasonable starting point for a discussion of the moral status of robots (c.f. [46]; see also [21]). Another work with implications for this debate is the recent book by Joshua Smith, who argues that the proper treatment of robots could positively impact humans’ dignity and value [47]. Like the other indirect-duty arguments, this approach aims to avoid the harm to humans that the mistreatment of robots could cause.

The third approach is virtue ethics. Virtue ethics focuses on the character of agents, not on their individual actions (see further [48,49,50]). In the context of robots’ moral status, we could ask what the mistreatment of robots tells us about a person’s character. Sparrow argues from virtue-ethical premises that even if “cruel” treatment of a robot has no implications for a person’s future behavior towards people or animals, it may reveal something about their character, offering us reason to criticize those actions [51]. Sparrow states, moreover, that “Viciousness towards robots is real viciousness” ([52], 23). This approach does not claim that mistreatment of robots is “bad” for robots or bad in a utilitarian sense, but rather that it is incompatible with the model of the virtuous agent [53]. As Coeckelbergh puts it, mistreatment of robots damages the moral character of the person engaging in the behavior [54].

The final approach to robots’ moral status is relational and is mostly represented by Coeckelbergh and Gunkel (c.f. [18, 38, 39, 55,56,57]). In their view, the source of moral consideration lies not in how the entity is built but in our relationships with it. The key is social relations between humans and robots ([39], 217). As Gunkel writes,

According to this alternative way of thinking, moral status is decided and conferred not on substantiative characteristics or internal properties that have been identified in advance of social interactions but according to empirically observable, extrinsic relationships. ([18], 241)

In answer to the question of whether violence against robots should be banned, each of the four approaches to robots’ moral status presented above can theoretically justify such a ban.

Practical aspects of the topic are even more problematic. The approach based on intrinsic properties is the most widely accepted in the literature, but it is difficult to show that robots possess the relevant qualities. The other approaches are not widely accepted, and it is difficult to imagine legislators introducing changes in the law when there is no common acceptance of their necessity among experts in the field. However, as I argue in the next section, changes in the law could be introduced from a different perspective, one that avoids the discussion of moral status.

It is necessary to consider how exactly a robot can be defined. Many machines could be considered robots: Vacuum cleaners, autonomous vehicles, humanoids, military robots, bots on social media, and even smartphones might be classified as such. Jordan identifies three main reasons for the difficulty in defining the term “robot.” The first is that the definition is not settled, even among experts in the field; the second is that the definition is continually evolving due to changing social contexts and technologies; and the third is that science fiction determined the conceptual framework before engineers addressed it [58]. Science fiction’s role might seem irrelevant, but as Adams et al. have pointed out, science fiction’s influence over AI and robotics is substantial. Science fiction has inspired research when, usually, inspiration would be assumed to travel in the opposite direction ([59], 30). Considering such aspects is also important in the regulation of robots. A particular regulatory decision could depend on how society perceives robots. Consideration of this issue cannot focus only on the technical aspects of such entities but must also take into account social perceptions and the fact that those perceptions depend upon how robots are portrayed.

Some definitions of robots exclude certain types of artifacts. For example, Winfield’s definition of a robot emphasizes embodiment, entailing that bots are not robots: “A robot is an AI with a physical body” ([60], 8). As Gunkel has pointed out, researchers wrestling with definitions when writing on robots usually resort to operational definitions, which are then used for further deliberation [18]. This approach is also evident in the discussion of legal implications in robotics. For example, in his introduction to Robot Law [61], Froomkin defined a robot thus:

The three key elements of this relatively narrow, likely under-inclusive, working definition are: (1) some sort of sensor or input mechanism, without which there can be no stimulus to react to; (2) some controlling algorithm or other system that will govern the responses to the sensed data; and (3) some ability to respond in a way that affects or at least is noticeable by the world outside the robot itself. ([62], XI)

My own position in relation to any proposed regulation is that a decision on what counts as a robot must be based not on the robot's intrinsic qualities but on its appearance. The argument supporting this claim is introduced in the next section.

3 A Ban on Public Violence Against Robots

From a practical viewpoint, policymakers could ignore philosophical deliberations, as they occasionally seem to do, and could immediately introduce legal protection against violence toward robots. Law is conventional, and rapid changes are possible. However, certain commentators have argued that, even if it were possible to do so, legal rights should not be given to robots [35, 63]. Brożek and Jakubiec have argued against ascribing legal responsibility to autonomous machines [64]. They observe that any such law could only be “law in books” and could not be used in real life (i.e., could not be “law in action”). If policymakers want to change the law in this area, the change should cohere with folk psychology [64]. According to Hutto and Ravenscroft, “Folk psychology is a name traditionally used to denote our everyday way of understanding, or rationalizing, intentional actions in mentalistic terms” [65]. Brożek and Jakubiec’s argument indicates that the decision should depend on consensus. In the case of contemporary robots, even experts are extremely divided on both the moral status of robots and how the law should react.

This paper focuses on violent behavior toward robots. Proposals have already been made to legally limit the treatment of robots, but these proposals are based, at least partially, on different arguments than those presented in this paper. Two papers are particularly relevant to this question: Whitby’s “Sometimes it’s hard to be a robot: A call for action on the ethics of abusing artificial agents” [66] and Kate Darling’s “Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects” [43]. Whitby has emphasized that his publication is only an invitation to discussion. However, he does not merely formulate an abstract idea but argues for changes in the treatment of robots. He suggests that we should limit our behavior toward them because, if we do not, the outcome could be violence against humans. To support his position, Whitby refers to the allegedly negative impact of violent video games on gamers’ behavior ([66], 329). He is concerned that if there is a possibility of mistreating robots, violence against humans could ensue. Similar arguments in other studies focus on the abuse of robots by children [67]. In Whitby’s logic, if we do not stop children from abusing robots, they could become violent toward other people. This argument is uncertain (see, in that context, [68]). Many studies have examined the impact of video games on violence, and the results have not provided strong evidence that games have such an impact (see, e.g., [69]). Darling has proposed extending legal protection to one specific kind of robot, namely social robots, which are designed to interact with humans on a social level ([43], 213). She uses the argument of indirect duties discussed above. The justification for my own thesis differs from that presented in these two papers, although it is partially connected with the “indirect duties” approach in the form used by Darling. Violence against robots could be partially prohibited while the question of the moral status of robots is avoided. For example, we can ask instead, “Should we ban public violence against robots?” Such a reframing removes the moral status of robots from the discussion: It is “public morality” that is being protected, rather than the robots’ moral status. My view should not be seen as indirect protection of humans’ moral status, because I distinguish the legal treatment of public and private violence against robots. In my view, private violence should not be banned. My approach focuses on regulating robots’ presence in public spaces (see also [70, 71]).

I argue that public violence against robots is contrary to public morality and should be prohibited. To support this argument, I consider other prohibitions and show that the logic behind the existing rules should extend to violence against robots, provided such violence is perceived in a similar way. Because this is an argument from the coherence of the legal system, the Polish criminal law system is used as an illustration, but similar provisions are present in many other legal systems.

In Polish criminal law, prohibited acts are either crimes or contraventions [72]. More serious behaviors are crimes, usually situated in the criminal code, while pettier behaviors are contraventions. There is a special code of contraventions, but many contraventions are also defined in other legal acts. Three examples of contraventions can be considered, two of which are listed in the chapter of the code of contraventions entitled “Contraventions against public decency”:

Article 140. Whoever publicly commits an indecent act shall be punishable by detention, restriction of liberty, a fine of up to PLN 1,500, or a reprimand.

Art. 141. Whoever displays an indecent advertisement, caption, or drawing in a public place, or uses indecent words, shall be punishable by restriction of liberty, a fine of up to PLN 1,500, or a reprimand.

The third provision is from the Act on Sober Upbringing and Combating Alcoholism of 26 October 1982:

Art. 14 clause 2a. It is forbidden to consume alcoholic beverages in a public place, except in places intended for their consumption on the spot, at points of sale of such beverages.

Legal scholars have commented on the provision prohibiting an “indecent act,” addressing not only the wording of the provision but also the name of the chapter in which it is situated. They have argued that public decency is, in other words, public morality [73]. The perpetrator of an indecent act violates a moral norm, leading to feelings of shame and embarrassment on the part of witnesses [74]. According to the Polish Supreme Court,

An indecent act is a behavior that, in the specific circumstances of time, place, and surroundings, should not be expected due to the customary norms of human coexistence, which therefore causes widespread negative social assessments and feelings of disgust, anger, and indignation. An indecent act is therefore characterized by a sharp contradiction to generally accepted norms of behavior. (see the judgment of the Supreme Court of 2 December 1992, III KRN 189/92)

Public nudity is an example of behavior prohibited under this provision. As we can see from how the provision is understood, the wrongness lies not in the act as such. Any person can be naked in their own home. What makes such activity illegal is the context in which it happens, because it causes discomfort for witnesses of the act. The same is true of the public use of swear words or the drinking of alcohol. These behaviors are not intrinsically bad, nor are they illegal regardless of context: People may do these things, but the acts are limited in public spaces because they can discomfort potential observers. In other words, we protect potential witnesses from acts that could be unpleasant for them to watch in public spaces. The common ground of these prohibitions is that the acts are perceived to be morally wrong when performed in public, making them contrary to public morality. Anyone may perform these actions in private spaces, and there is nothing intrinsically immoral about any of the listed behaviors. The prohibition concerns only their public performance.

Looking at the public sphere as something special in this way is not new. The Italian thinker Cesare Beccaria, in his treatise “On Crimes and Punishments” (first edition 1764), mentions crimes against “public peace” as one of the three basic types of crime: “Finally, in the third type of crimes we find particularly those which disturb the public peace and the calm of the citizenry, such as brawls and revels in the public streets which are meant for the conduct of business and traffic” ([75], 29). His understanding also covers more serious behaviors, but he recognizes that the public sphere, as a place of calm for citizens, should be protected by law.

It is necessary to question whether violence against robots is an act similar to those subject to the prohibitions discussed above and whether it is perceived by the public as something morally bad. The media have reported on a number of reactions to violence against robots, as discussed above, for example when the Boston Dynamics robots were kicked [76]. Another example of violence against robots was the case of hitchBOT, a hitchhiking robot destroyed by vandals in the United States after successfully travelling through several other countries [77]. It is interesting to consider why the media highlighted the story of the destruction of hitchBOT but would not cover the destruction of a toaster. As Coeckelbergh has pointed out, “Many people respond to robots in a way that goes beyond thinking of the robot as a mere machine” ([78], 142). Robots are not perceived simply as tools [79], and the key concept here is empathy. Humans can empathize with robots [80]. In the debate on granting rights to robots, Turner has termed this the “argument from compassion” ([96], 155). Psychological research has demonstrated that humans can empathize with robot “pain” [81], and other research has confirmed this finding [82, 83]. Many people believe that it is morally questionable to act violently against robots. Researchers have analyzed tweets about hitchBOT, and a qualitative analysis of Twitter users’ reactions to the destruction of the robot suggests that they perceived the actions of the vandals to be morally corrupt [84]. People experienced discomfort at the idea of robot pain and believed that the perpetrators’ acts were immoral. These responses do not depend on the intrinsic qualities of robots, such as the ability to feel pain.

One might wonder: if robots cannot suffer, why not educate people about that instead of banning such behaviors? The proposed provision responds to how people react to such acts, independently of whether that reaction is justified on the grounds of the moral status of robots. Violence is perceived as something unpleasant to see. It is also not entirely obvious that this feeling results from a lack of knowledge about whether robots can suffer. A recent study by Lima et al. reports that “Even though people did not recognize any mental state for automated agents, they still attributed punishment and responsibility to these entities” ([85], 1; see also [86]). This observation suggests that the reaction to robots is independent of knowledge about their ontology. Even if people understand that robots do not have an inner life, they still feel empathy toward them. What the discussed law aims to achieve is to protect the public sphere from behaviors that make people feel uncomfortable. In this way, banning violence against robots while not recognizing their moral status is not a contradiction. The law should take into account how people function, and an empathic response to robot pain seems to be part of us.

It is unclear whether people would accept such a law. One survey asked whether robots should have rights, listing a range of specific rights, including one addressing violence. Its findings indicated “that even though online users mainly disfavor AI and robot rights, they are supportive of protecting electronic agents from cruelty (i.e., they favor the right against cruel treatment)” [87]. Thus, public acceptance of such a law seems possible, but before any legal decision is made, more research is necessary. For example, the scope of acceptable limitations on humans’ behavior toward robots needs to be empirically investigated.

In summary, violence against robots is regarded as morally wrong by the public. It is perceived as something unpleasant to watch, like other acts against public morality, and should therefore be prohibited. This approach complements existing rules, making the system more coherent.

It is reasonable to question why a proposal prohibiting violence against robots should be advanced rather than a proposal concerning other (embarrassing or discomforting) behaviors that could be perceived in a similar way. Other behaviors could also be interpreted as contrary to public morality (such as spitting or screaming) but are not currently widely prohibited. One issue is timing. Violence against robots is a relatively new phenomenon for society to deal with. Discussions of this topic and of how the law should react continue. This social problem might also become more important as robots become more animal-like and human-like and as we encounter more of them in everyday situations. It is important to discuss potential solutions to this problem well in advance, and my proposal offers a possible solution that could be introduced at any time.

The other issue to contemplate is why we should consider the use of repressive instruments to regulate human behavior. Some scholars have argued that our lives are already over-criminalized (c.f. [88]). The literature attests that enforcing morality by law is a direct path to over-criminalization (c.f. [89, 90]). Excessive use of criminal law creates significant social effects, for instance extensive incarceration (c.f. [91, 92]). These problems are critical, but rejecting the possibility of regulating human behavior toward robots could also entail rejecting the validity of other prohibitions (such as the prohibition of public indecency). I argue from the standpoint of the existing legal system and its internal coherence. The further question of whether criminal law should regulate such behaviors at all concerns not only behaviors against robots but also similar behaviors against public morality. Importantly, the behavior subject to the proposed ban would not be a crime: A person punished for such an act would neither have a criminal record nor end up in jail.

It is useful at this point to return to the issue of definition. Introducing a provision against violence toward robots, especially in the sphere of prohibitions, requires terms that allow a relatively easy distinction between prohibited and non-prohibited acts. For that reason, the criterion of distinction should lie not in the intrinsic qualities of robots or their social role but in their external appearance. I propose to use the term “life-like” robot in the provision. That choice coheres with the justification of the ban presented earlier. This descriptor also marks the point at which my justification differs from Darling’s view on protecting social robots. According to Darling, “A social robot is a physically embodied, autonomous agent that communicates and interacts with humans on a social level” ([43], 215). Arguably, social robots need special treatment (c.f. [93, 94]), including protection against violent behavior, but there are practical obstacles to introducing such a postulate into law. The social role of a robot is almost impossible to recognize ex ante in every case, so it would be challenging to introduce a provision based on that feature. In some cases, it would be possible to determine only ex post whether the attacked robot had a social role. In the proposal advanced in this paper, this problem does not exist: An assessment of whether the robot is protected can be made ex ante simply by looking at it.

The justification based on wanting to eradicate antisocial patterns in the public sphere is phenomenological: It relates to how the behavioral act is perceived. Under such a prohibition, robots such as social robots would be protected, but not only them and not all of them, since a demarcation criterion is needed given the multitude of forms in which robots are created. There could be a sophisticated robot that looks like a stone or a loaf of bread; even if it is kicked, such a robot is unlikely to provoke empathic feelings. People show empathy towards robots particularly when their outer appearance is similar to that of living beings [5]. Therefore, the criterion should be based on appearance. This criterion should cover human-like and animal-like robots, and the “life-like” category includes both. This part of the deliberation resembles John Danaher’s view that robots should be welcomed into the moral circle based on ethical behaviorism, which focuses on how we perceive robots [22]. However, Danaher is discussing moral, not legal, status. In a critique of Danaher’s approach, Smids argues that knowledge of the design process and the robot’s ontology is also highly relevant [95]. Danaher is not necessarily arguing against Smids’ position, but he focuses on observable behavior, a feature to which we have access.

Some arguments used here would support a ban on violence against virtual robots (bots); however, in my proposal I use the legal concept of public spaces (generally understood by lawyers as physical places) and the regulation of behaviors in such places. Until the understanding of public places changes to include virtual public places, the proposed provision will not cover violence against bots. Thus, the proposal concerns only embodied robots. Embodiment is not mentioned directly in the proposal but follows from the earlier justification. The previously discussed provisions regulating behaviors in public spaces (swearing, drinking, and indecent acts) are concerned with physical places, and the argument concerning the coherence of the legal system is valid insofar as it concerns such places. Additionally, the provision would ban only acts of violence performed by the perpetrator. Projecting acts of violence on a big screen in a public place is a matter for further and separate consideration; it might also upset bystanders, but it would not be covered by the proposed provision. Some may argue in support of such a ban, but it is beyond the scope of the proposed law.

In regard to how the regulation of violence would change after the proposed provision is introduced, it should be recalled that violence against humans is already a crime under the criminal code. Violence against animals is, in general, prohibited under the Act on the Protection of Animals. Violence against owned robots to the point of their destruction is already a crime under provisions for the protection of property. The proposed change would cover violent behavior toward robots that does not cause significant destruction. It should be added that it concerns only intentional violence; that is, the perpetrator must have a specific mental attitude [72], meaning that unintentional violent behavior, as well as artistic performances of violence (even though intentional), would not be treated as prohibited acts. Punishments should match those in the existing provisions. Under the legal system discussed here, the punishment would be a reprimand or a fine, thus symbolic rather than harsh.

To summarize, the ban should concern embodied life-like robots, regardless of their level of sophistication and intrinsic qualities. There could be other reasons to change the law, aiming to prohibit misbehavior toward robots, but the reason presented in this paper is limited to public violence and is independent of the internal features of robots or their role. Based on the abovementioned examples of provisions stating prohibitions, the new provision, focused on robots, could read roughly as follows: Whoever publicly treats life-like robots violently shall be subject to punishment.

4 Moral Side Effects of Prohibition

On my analysis, the justification for making changes in the law and banning public violence against robots does not relate to the moral status of robots. Such a regulation could be introduced into law immediately, based on the existing logic of the legal system and in the interest of legal coherence. However, such changes would not make the question of robots as entities with moral status irrelevant. They would partially satisfy demands to grant moral status to robots and address concerns about human epistemological limitations. At the same time, the deliberations presented in this section could be seen as arguments against the proposed provision by those opposed to granting moral status to robots. Such laws could extend moral consideration to entities that are seen, by critics of granting rights to robots, as undeserving of special treatment (see, e.g., [35]). The prohibition of public violence against robots has two possible moral side effects.

One effect of the legislative decision outlined above is that robots are granted legal protection, at least partially. Although the aim of the provision is different (it directly protects public morality, understood as protecting society from unpleasant experiences in public places), if people obey the law, robots gain protection in public places. This outcome could be treated as a side effect. It is an extremely limited protection, but it is a starting point for granting robots stronger protections. Other entities whose moral status is not in doubt, such as animals, are also not totally protected. On the one hand, the mistreatment of animals in many countries is legally curtailed; on the other, in the same countries it is possible to kill animals for safety reasons, for food, and for clothing. For example, a farmer could be criminally liable and sentenced for mistreating a cow, yet retain the right to kill the same cow for meat without any legal consequences. From that perspective, other provisions already protect robots as a side effect. Robots are someone’s property and are usually expensive. If someone kicks and damages a robot on the street, that act could be considered a crime. The value protected in such cases is not the moral status of the robot but property (see [96], 165). It is thus possible for the system of norms to introduce actual protection for robots on the grounds of public or private law without a single provision aimed at improving the situation of robots as entities with moral status.

The other possible moral side effect is a potential change in public morality, manifesting itself in the recognition of the moral significance of robots. Such a change could help the wider acceptance of laws whose justification is connected solely with the moral status of robots. For the law to be effective, it should be based on widely accepted views. It may be possible for the law to change the perception of robots and cause people to see their intrinsic moral significance. The law can change morality.

The legal philosopher H.L.A. Hart deliberated on the connection between law and morality, stating that morality impacts law and that law impacts the development of morality ([97], 1). One way to understand that the law has moral value is to say it has the potential to achieve moral goals; Green considers this an instrumentalist thesis about the law [98]. Brownlee and Child have discussed the instrumental value of law in three categories: law as a moral advisor, law as a moral example, and law as a moral motivator. From the perspective under discussion, the most important of the three is the “moral advisor” function, and there are two ways for law to play that role. The authors refer to the “coordination function” and the “moral leadership function” of law, the second of which is crucial here. The law can change public perception of public values, and the authors give examples of such change, including the recognition that rape should be prohibited in marriage or that children should be provided with special protections ([99], 33), along with the promotion of moral norms endorsed by only part of the population. In their words, “the law serves us by expanding our moral horizons and prompting us to attend to issues to which we might not otherwise have given much thought” ([99], 3).

Research on opinions about granting rights to robots suggests there is no common belief that robots are entities deserving rights based on their intrinsic qualities [87]. At the same time, a number of experts promote the recognition of the moral significance of robots (c.f. [18, 22, 39]), and tension results from this variation in positions. It seems that few people hold the view that robots should be recognized as possessing moral status, at least for now, and the majority of people think otherwise. However, attitudes might change, and the wider public might think differently about robots if the proposed change in the law, which directly concerns robots, is implemented. This possibility problematizes my legal proposal, especially from the perspective of positions that oppose treating robots as belonging to the moral circle (c.f. [36]).

5 Conclusions

This paper has considered whether violence against robots should be banned, a question usually connected with the matter of the moral status of robots. Limitations on how humans can behave toward robots are one consequence of their possessing moral status. I have noted that a positive answer to this question is possible on the grounds of the four discussed approaches, which focus on the intrinsic properties of the robot, indirect duties toward them, virtue ethics, and a relational approach. The most widely accepted way of ascribing moral status is the first approach, but it is simultaneously the most problematic. There is no consensus on which properties are crucial, what exactly they mean philosophically, or how we could know that robots have those properties. For now, at least, it seems that robots lack the qualities required to grant them status based on their ontology. Indirect duties, virtue ethics, and relational approaches are less popular, and it could be problematic to apply them directly in law without achieving an acceptance threshold among both experts and the public. However, if the concern is public violence rather than violence more generally, a discussion of robots’ moral status can be avoided. Prohibition of public violence against robots focuses not on the robots themselves but on public morality. The wrongness of such acts is not connected with the intrinsic characteristics of the acts but with the fact that they are carried out in public. Furthermore, such a prohibition is coherent with existing regulations in the legal system that aim to eliminate certain behaviors in public places, such as prohibitions against swearing, going naked, and drinking alcohol. The proposed change could be introduced into law immediately. Even though this regulation would be detached from the discussion of robots’ moral status, it could bring about “moral side effects” for robots and afford them certain partial rights. If people behaved according to the new regulation, robots would be protected in some spheres. The change of law suggested in this paper could also prompt changes in moral attitudes toward robots, based on the notion that changes in the law can lead to changes in morality. However, this could also be seen as an argument against introducing such laws by those opposed to granting moral status to robots.