Humans, Neanderthals, robots and rights
Kamil Mamak (kamil.mamak@gmail.com)

Robots are becoming more visible parts of our lives, a situation which prompts questions about their place in our society. One widely discussed group of issues concerns robots' moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights, whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will never have all human rights, even if we accept that they are morally equal to humans. I focus on the role of embodiment in the content of the law. I claim that even relatively small differences in the ontologies of entities could lead to the need to create new sets of rights. I use the example of Neanderthals to illustrate that entities similar to us might have required different legal statuses. Then, I discuss the potential legal status of human-like robots.


Introduction
The growing number and sophistication of robots raise questions about their place in our social life. Increasing attention has been paid to the moral and legal status of robots in recent years (cf. Gunkel 2018; Darling 2016; Gellers 2020; Turner 2018; Balkin 2015; Abbott 2020; Nyholm 2020; Smith 2021; Bennett and Daly 2020; Pietrzykowski 2018; Kurki 2019; Darling 2021; Calo 2015; Pasvenskiene 2021; Mamak 2022). One of the issues resulting from that discussion is that of "robot rights." The debate over "robot rights" contains different positions, some of which are mutually exclusive (for more about these standpoints, see Gunkel 2018). At one end of the spectrum are those who wonder whether robots could be granted human rights (Miller 2015; Brooks 2000). At the other end are scholars who not only oppose granting robots rights (cf. Bryson 2010; 2018) but even "deny that robots are the kinds of beings that could be granted or denied rights" (Birhane and van Dijk 2020).

Presuppositions in law about human beings
Law is a social technology (Fairfield 2021). It is created by humans for our own purposes. The law is created in language by legislators who aim to partially organize the world through it. Lawgivers cannot escape including some features of the world in their work. Some characteristics are apparent and intentionally embedded in provisions, but some could be included in law unwittingly, which could be partially connected with the way we experience the world. Popper described the nature of observation and pointed out that when we experience the world, we use sets of theories that we are not always aware of, which he called "background knowledge" (Popper 1996). Observations are the basis for drawing conclusions in different areas of our lives. "Unspoken" assumptions also exist in law (Hart 1963, 11).
Sarkowicz identifies three levels on which the law can be interpreted: first, the descriptive level, which concerns the exact meaning of the words; second, the normative level, which allows us to know the legal norm embedded in the interpreted law; and third, the level of presuppositions (Sarkowicz 1995). According to Sarkowicz, the level of presuppositions "[…] comprises all kinds of information about the world which surrounds the lawgiver, its society, and man with his goals, desires, and system of values" (Sarkowicz 1995, 231). There are also ontological assumptions about the human body (Sarkowicz 1995, 155), as well as presuppositions about technological reality (cf. Mamak 2019). The lawgiver may not even be aware of all the assumptions about the world reflected in the provisions (cf. Gizbert-Studnicki and Płeszka 1990).
The assumptions in law about human beings are of different kinds. We can find explicit, implicit, and even unconscious information about human beings by looking at criminal law provisions. Let us imagine that we are from another planet and the only information we can obtain about humanity is the provisions of criminal law. We could find out a couple of things about ourselves: for example, that humans are mortal and that killing someone is a crime. The human body can be damaged, sometimes irreversibly. Humans are sensitive to some substances, as suggested by the impact of alcohol and drugs on the responsibility and situation of the defendant: they behave differently and have limited control of their bodies. We could estimate more or less how long people live through hints in the provisions about the age of criminal responsibility, the age at which children have special protection, and even in sanctions; the possible punishments would look different if we were able to live around 10,000 years. Humans reproduce sexually, and the sex drive is a strong motivation for human action, but one that is possible to control. Children are delivered by mothers, who may "not be fully themselves" just after delivering a baby: infanticide is treated by law as a milder kind of homicide if carried out by the mother in the puerperium. Article 149 of the Polish Criminal Code states that "A mother who kills an infant during the period of delivery under the influence of its course, is subject to the penalty of deprivation of liberty for between 3 months and 5 years" (translation: Wróbel, Zontek, and Wojtaszczyk 2014), whereas "typical" murder is subject to the penalty of deprivation of liberty for no less than 12 years, the penalty of deprivation of liberty for 25 years, or the penalty of deprivation of liberty for life.
One way in which we could be legally excused for committing a crime is acting in self-defense, a legal construction built upon the instinct of self-preservation (cf. Ashworth 1975). We accept that the drive to stay alive cannot be stopped by the law, and that laws working against that drive could not be socially accepted.
Philosophical presuppositions are also made about the nature of human beings. In a paper on the philosophy of biology, Andersen et al. show that some nonempirical aspects of research cannot be avoided, which they call "philosophical biases" (Andersen et al. 2019). In criminal law, some assumptions are also philosophical. One of these, at the core of criminal law, is the presupposition of free will (cf. Jones 2002). Criminal law in its current form would not make sense if human behavior were entirely determined. Some question those assumptions and propose alternative responses to the actions traditionally associated with crimes (cf. Caruso 2021). Other important assumptions about the nature of human beings are associated with Cartesian dualism, which distinguishes between body and mind. It is pointed out that dualism has a central place in law (cf. Benforado 2010; Fox and Stein 2015; Lawrence 2020). In the light of research, these assumptions could be called into question (cf. Clark 2008; Dent, Nielsen, and Ward 2020; Damasio 1995). The places in the legal system in which we could "detect" the impact of dualism are not obvious. For example, in the provisions that govern the procedural collection of information about crimes, it is possible to take a blood sample from the alleged perpetrator by force, but it is generally prohibited to force someone to testify against themselves (cf. Redmayne 2007). In the language of Cartesian dualism, we could say that the content of the mind is protected and the content of the body is not. We could also find dualism in thinking about punishments (cf. Mamak 2021a). However, these are certainly not the only places in the legal system where dualism is present. Further analysis could reveal more such legal institutions, because Cartesian dualism is embedded in many spheres of our lives as if it were an obvious truth about human nature (Brożek 2016).
The point of referring to this is to show that the law can reflect assumptions about human beings, some of them unconscious and some of which may not even be true, like Cartesian dualism. Behind the provisions stands the human being, but in the version which the legislator presupposes. The fact that specific laws are about humans is not always apparent at first glance, which will be an important notion for further deliberations. The human body, as well as how human nature is imagined, impacts the law. In some places in the legal system this may be apparent, but in others we may not even realize that the content of the law is determined by human biology.

Neanderthals from the perspective of human law
To illustrate how a human-like embodiment is essential for the content of the law, it is useful to consider the potential place of entities that are similar to humans but not completely the same (I discussed this issue in a popular science blog in Polish; Mamak 2017). To learn something about ourselves and our surroundings, it is good to have another point of view, and the Neanderthals seem to offer such an alternative viewpoint through which we could look at ourselves and the law. The discussion about bringing extinct species such as mammoths and Neanderthals back to life touches upon issues such as the technical and ethical aspects of such procedures (cf. Cottrell, Jensen, and Peck 2014; N. Levy 2013). The technology that is currently considered, and seems to be within our reach (at least theoretically), involves modifying the cells of existing species so that the modified entity is as similar to the original as possible. For example, modified cells of elephants could be used to "revive" the woolly mammoth (cf. Mezrich 2017; Zimmer 2021).
Let us assume for the purposes of this paper that Neanderthals are back. We could then ask whether they would be humans in the understanding of the law. I will focus on criminal law. The legal provisions use the word "human"; for example, the provision of the Polish Criminal Code which forbids homicide starts with the words "Whoever kills a human." One systematic name for Neanderthals is Homo neanderthalensis. The Latin word "homo" is translated as "human," so a literal reading of the mentioned crime could interpret it as covering the killing of both Homo sapiens sapiens and Homo (sapiens) neanderthalensis. Would, then, that kind of homo be obliged to respect, and be protected by, existing provisions on the same grounds as humans, or would it be necessary to establish another legal framework for them? Also worth mentioning, in the context of the potential moral and legal status of Neanderthals, is the fact that Neanderthals were also "sapiens" (not only "homo"). This characteristic could make the task of situating the Neanderthals in the legal/moral realm even more problematic. The discussion here is divided into two groups of problems: first, whether Neanderthals could be perpetrators of crimes, and second, whether they could be considered victims of the same.
In criminal law, the perpetrator is a natural person who can be held responsible for their crime. It is pointed out that the requirement for criminal responsibility is moral agency (cf. Brożek and Janik 2019; Asaro 2007). Only moral agents can be responsible for their actions, as agents understand good and wrong and are able to choose between them. Would Neanderthals be moral agents in that sense? The answer to this question requires knowledge of the ontological characteristics of Neanderthals. We cannot describe Neanderthals wholly - at least for now - but research on them could give us some hints in that respect. It is often claimed that the crucial feature separating humans from other animals is language (cf. Cox 2018). So, one of the questions about the potential status of Neanderthals could relate to their language abilities. The results of a recent analysis show that Neanderthals had the ability to speak and understand speech, like modern humans (Conde-Valverde et al. 2021). It is also pointed out that Neanderthals buried their dead, and, as Mellars indicates, "[…] we must assume that the act of deliberate burial implies the existence of some kind of strong social or emotional bonds within Neanderthal societies" (Mellars 1995, 381). Voluntary burial of their dead could attribute transcendental thought to them; it may also imply a concern about death, about what lies beyond death, and the meaning of existence (Ayala and Cela-Conde 2017, 478). There is also discussion about the kind of art that Neanderthals created (cf. Callaway 2014; Ayala and Cela-Conde 2017). Thus, Neanderthals could use language, had a social life and maybe even transcendental thoughts, and created art, but can all this help us conclude that they would have moral agency? It is doubtful. Although similar to humans, Neanderthals would not be completely the same as us. What is especially important is that Neanderthals had a different brain architecture (Ayala and Cela-Conde 2017). Pearce and colleagues claim that morphological differences between the brains of Neanderthals and anatomically modern humans could lead to differences in social cognition (Pearce, Stringer, and Dunbar 2013).
But what do brains have in common with moral agency? As mentioned above, moral agency requires the capacity to distinguish between good and wrong actions. What we mean by good and wrong is embedded in human moral practices. The question here is not whether Neanderthals could hold moral values, but rather whether their moral values would be exactly the same as ours. Churchland claims that our morality is connected with the way in which we are built, with a special focus on our brain (Churchland 2011). She uses research on the brain but does not claim that brain science can answer every question concerning morality: "Rather, the point is that a deeper understanding of what it is that makes humans and other animals social, and what it is that disposes us to care about others, may lead to greater understanding of how to cope with social problems" (Churchland 2011, 4). As she adds, social practice and culture are hugely influential in human moral practice, but morality may not be limited to those aspects (Churchland 2011, 3). At least partially, therefore, our morality depends on how our brains are built. In that sense, there is no universal morality common to all entities; what we call morality is human-centered. Not all entities whose decisions can be evaluated through human morality are moral agents. For example, some actions carried out by animals could be evaluated as morally wrong from our perspective, such as surplus killing, or the killing by animals of more prey than they can eat (cf. Kruuk 1972; Appleby and Smith 2018). However, this type of killing will not lead them to be held morally responsible, let alone ascribed criminal responsibility. This example does not mean that animals do not have morality. Animal studies suggest that morality does not belong exclusively to humans and that at least some animals have a morality of their own.
What should be mentioned is that morality does not seem to evolve in only one direction; it could evolve in different directions. Differences are possible even among species that are very similar from the biological point of view, like chimpanzees and pygmy chimpanzees (bonobos) (for more on the moral life of animals, see Bekoff and Pierce 2009).
We do not know for sure whether the Neanderthals or other hominin ancestors had brain parts similar to those that shape our moral practices. However, if we are at least partially different in that respect (anatomically), holding them responsible for what we believe is wrong, and they do not, would amount to imposing our moral values on them. Doing so may be in our interest, but it does not seem right not to take their perspective into account. In that sense, they would not be fully moral agents from a human perspective. It could be said that there is a moral threshold that needs to be crossed for an entity to be treated by law as a human being with full mental capacities.
Yet, not all humans are moral agents. In the case of small children or people with mental deficits, we do not talk about criminal responsibility, but the law still reacts to their actions (cf. Yaffe 2018; Packer 2009; Wróbel and Zoll 2014). In the case of humans who are dangerous to others but cannot be held responsible, we sometimes put them in psychiatric hospitals. If Neanderthals were not moral agents in the human sense but committed acts that we consider crimes, would we treat them as mentally ill? Those kinds of deliberations could be seen as speculative, but there are no other existing legal frameworks that could be used for analyzing the legal answer to harm caused by agents that do not meet the criteria for criminal responsibility (Dennett used a similar framework to discuss the legal response to the killings by HAL 9000, the computer from Stanley Kubrick's movie 2001: A Space Odyssey; Dennett 1997). Treating Neanderthals as mentally ill as a reaction to harm or potential harm arising from their actions does not require the law to be changed. The existing provisions could incorporate them as humans with mental deficits. In my opinion, such a response to the discussed problem does not seem right. It would be extremely anthropocentric and humiliating for Neanderthals to treat them as second-class citizens. Our "merit" and their "deficiency" lie in how our brains are built. The law should not treat the whole species of Neanderthals as the worst humans. In my opinion, contemporary criminal law is not ready to incorporate Neanderthals as perpetrators.
The claim that Neanderthals would not be perpetrators in the meaning of contemporary criminal law does not necessarily entail a similar conclusion regarding the possibility of their being victims of crimes. To be a victim of a crime, there is no need to be a moral agent. Children and the mentally ill are protected by law, sometimes even more strongly, because they are the most vulnerable. The crime mentioned above forbids killing humans. Would Neanderthals be included in the meaning of "humans" from the perspective of the possibility of being victims of crimes? In my opinion, humans and Neanderthals would require different statuses in this regard, too.
Even if we decide that Neanderthals are another kind of human and, due to being part of the human family, they have human dignity (more on dignity, cf. Riley 2018) that justifies their treatment as humans, it does not mean that the law will be ready for them. First, at least to some extent, some provisions concerning crimes are addressed to humans (if the Neanderthals did not meet humans' requirements for moral agency). Second, and connectedly, the same "crimes" committed by Neanderthals against Neanderthals would not be covered by law. Third, some behaviors that Neanderthals could perceive as wrong may not be covered by today's law. As mentioned before, our morality could be at least partially dependent on the structure of our brains, as Churchland pointed out: "The truth seems to be that the values rooted in the circuitry for caring - for well-being of self, offspring, mates, kin, and others - shape social reasoning about many issues: conflict resolution, keeping the peace, defense, trade, resource distribution, and many other aspects of social life in all its vast richness" (Churchland 2011, 8).
This notion could mean that the morality of Neanderthals, who have a different brain, could be different. Actions that are fine from our perspective could be treated by them as morally wrong, and vice versa: things that are not acceptable to us, which we treat as crimes, could be acceptable to them.
To sum up, if the differences in our brains cover the parts responsible for morality, the potential legal status of Neanderthals would not be the same as that of humans. Despite the biological closeness, the nuanced differences in brain structure could mean that Neanderthals have a slightly different morality. Morality and law are intertwined (cf. Hart 1963; Fuller 1964; Dworkin 2013); at least parts of the law, especially the foundations of criminal law, have a source in our morality. This means, among other things, that Neanderthals may not be moral agents in the sense that humans are: they could not be held responsible for at least some actions. The other issue is that it does not seem right to treat them as mentally ill; in order to prevent them from doing harm, a completely new framework would be required. The law is also not ready for the protection of Neanderthals. Even if we grant them the status of humans, the current legal framework would need some improvements. For example, some actions may not be perceived as harmful by them, and some behaviors which they may treat as morally wrong may not be prohibited by today's law.
Those deliberations show that slight differences in anatomy could lead to different legal statuses of entities. The law is built in the image of human beings and is dependent on the human body: it is tailored to humans in their current form. We have compared humans and Neanderthals. Neanderthals also have bodies and brains, could use language, and so on. Nonetheless, their legal status, set of rights, and duties may be different. Bearing this point in mind, we can turn to deliberations about the legal status of robots.

Robots and human laws
In this part, I would like to discuss the potential legal status of human-like robots. In recent years, the legal status of robots has been the subject of extensive discussion. It is pointed out that the moral status of robots is one of the main topics in the ethics of artificial intelligence (Gordon and Nyholm 2021), and legal scholars are also interested in this area. Schröder indicates that "Controversies about the moral and legal status of robots and of humanoid robots in particular are among the top debates in recent practical philosophy and legal theory" (Schröder 2020, 191). Strongly correlated with the topic of moral and legal status is the issue of robot rights, which has also been the subject of recent scholarly attention (Gunkel 2018; Gordon and Pasvenskiene 2021; Schröder 2020; Harris and Anthis 2021; Lima et al. 2020).
In the context of this paper, it is useful to refer to the discussion about the moral standing of robots. It is mentioned that the recognition of moral status is a natural starting point for granting legal rights to an entity (Danaher 2020). However, the notion that an entity deserves some legal recognition does not settle the content of such rights (cf. de Graaf, Hindriks, and Hindriks 2021). There are different approaches to moral patiency: properties-based, indirect (based on the Kantian indirect duties toward animals), relational, virtue-ethical, or approaches that look for sources of robots' moral status in the broader environmental context (cf. Gunkel 2018; Gellers 2020; Nyholm 2020; Smith 2021). For further deliberation, however, I will focus on the properties-based approach, which focuses on the robot's ontology.
The properties-based approach to moral status addresses what robots are. According to this view, if a robot is characterized by a particular ontology, then it is eligible for the moral circle. Different properties are discussed as crucial to determining the moral status of robots, such as sentience, consciousness, the capacity to feel pain, and intelligence (cf. Véliz 2021; Gibert and Martin 2021; Kingwell 2020; Himma 2009; D. Levy 2009; Floridi and Sanders 2004; Mosakas 2020; Sparrow 2004; Hildt 2019; Torrance 2014). This approach seems to be less controversial than the other approaches. The first provision of the Polish act concerning the protection of animals starts: "An animal as a living being capable of suffering is not a thing. Humans owe them respect, protection, and care." As we can see here, an ontological feature - the capacity to feel pain - is given as a reason for introducing those provisions. For example, if we know that robots can feel pain, we should avoid causing that pain, which does not require additional moral justification (but see Dennett 1978; Bishop 2009).
This approach could be helpful for law: it has the potential to be treated as a reasonable source of change in the law, and it is to some extent easily translated into concrete legal postulates. In theory, if robots share the qualities of dogs, then the law should treat them like dogs. If robots are ontologically equivalent to humans, we should give them human legal status. But would it be possible, in practice, to give human-like robots the legal status of humans? Miller, discussing the legal status of human-like robots, wonders whether sophisticated robots (automata) should have full human rights: "My concern is whether automata that exhibit all (or sufficiently close to all) traits considered to be distinctive and necessary for being a human should thereby enjoy full human rights" (Miller 2015, 375). He answers the question negatively, basing his claim on an ontological concern: what is crucial, he points out, is that robots (automata) have a constructor and a given purpose, and humans do not. However, other scholars discussing the ontological differences between humans and robots in the context of moral status have come to opposite conclusions about their meaning. Putnam indicated decades ago that the materials used in the construction of a robot should not matter; what should matter is the qualities the robot possesses (Putnam 1964). Danaher, discussing the issue of the moral status of robots, suggests that a particular ontological essence is not necessary for such status (Danaher 2020, 2032). Are, then, full human rights for robots possible?
I also - like Miller - ground my claims in ontology, but my approach is different. Independent of the question of whether robots "should" have a legal status equal to humans, I claim here that human laws and rights cannot be applied directly to robots. Laws, as noted before, are built for humans with specific bodies. I point to the embodiment of humans as an obstacle to equating the legal status of robots and humans, or of any other entities that may appear. Even if robots were indistinguishable from humans, their legal status would still be different from that of humans. Robots will not share the same human legal rights and duties because the content of human law is tied to human biology, which robots do not have. In other words, "robot rights" will never contain the full scope of "human rights," due to the ontological differences between us.
Assuming that robots would be equal in almost all respects with humans, and even if we accept as a society that such robots deserve moral and legal recognition equivalent to humans, the legal response will need to be nuanced. It will not be possible simply to enact an act stating that "robots and humans are equal under the law." Such a position would be anthropocentric. There will still be a need to enact new laws. The sets of laws and rights of humans and human-like robots may overlap, but it will never be possible for one to be a subset of the other. Some human rights could be relatively easily transferable to robots, such as the right to life (Mamak 2021b). Some could be irrelevant for them, such as the right to privacy or the laws that protect freedom of religion; maybe there will be no context in which these could be applied to robots. There could also be rights that will not be needed until such robots appear, covering issues essential for them which we cannot even imagine currently. The need for such rights could be connected with their different ontology. In practice, if we want to protect robots with criminal law, some crimes would also be applicable to them, some could be irrelevant, and there could be a need to enact new ones tailored to them.
To be fair to the authors who have suggested that robots could be human rights holders, I do not think that they would defend the thesis that robots will have exactly the same set of rights as human beings. Instead, I believe that they wanted to show openness to the possibility of granting robots some human-like legal status, without necessarily thinking in depth about the exact content of those rights.
One further thing I want to discuss here, which was also a matter of concern in the section on Neanderthals, is criminal responsibility. As I discussed, Neanderthals may not be human-like moral agents, despite their biological closeness to humans, due to differences in the construction of their brains. The moral system of robots could be different from ours because our system is partially related to our biology, which robots would not share. How could this impact the discussion of the moral agency of robots and, closely related to that, the problem of responsibility (for more about the discussion on moral responsibility and AI agents/robots, see cf. Matthias 2004; Danaher 2016; Hakli and Mäkelä 2019; Gunkel 2020b; Babushkina 2020; Kraaijeveld 2020; Gogoshin 2021)? Would robots be moral agents in the same sense as we humans are (for more about the discussion about making robots moral, see cf. Wallach and Allen 2010; Kokkonen 2020)? This seems possible if we deliberately implant our moral values in robots or they decide to adapt to our moral practices. However, if their moral agency appears independently, as a by-product of their development, their moral values may not be exactly like ours. That notion should be worrying; our values, including the value of human life, may not have a similar status among other entities that develop a kind of moral agency.

Conclusions
The place of robots in the legal universe depends on many things. One is our decision about their moral status, but even if we accept that some robots are equal to humans, this does not mean that they would have the same legal status as humans. Law, as a human product, is tailored to a human being who has a body. Embodiment impacts the content of law, and entities with different ontologies are not suited to human law. As discussed here, neither Neanderthals, who are very close to us from a biological point of view, nor human-like robots can be counted as humans by law. Doing so would be anthropocentric and harmful to such entities because it could ignore aspects of their lives that are important to them. It is certain that the current law is not ready for human-like robots.
Acknowledgments I want to acknowledge my fellow RADAR researchers at the University of Helsinki, especially Raul Hakli and Pekka Mäkelä, for their helpful comments.
Funding Academy of Finland, decision number: 333873

Open Access funding provided by University of Helsinki including Helsinki University Central Hospital.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.