1 Introduction

In the research field of machine ethics, Moor (2006, 2009) and Allen and Wallach (2011) define four types of artificial moral or ethical agents, the most advanced of which is the full ethical agent (Moor 2006, 2009), or full-blown Artificial Moral Agent (Allen and Wallach 2011).Footnote 1 This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness,Footnote 2 including (at least) a subjective self, intentionality and moral emotions such as compassion, the ability to praise and condemn, and a conscience.

When we speculate about the future development of such agents, we cannot help but think about the issue of control, also known as The AI Control Problem (Bostrom 2003b, 2012, 2014; Yudkowsky 2008; Russell 2020): how can we ensure that such autonomous intelligent artificial agents – we assume here that future conscious artificial agents will be (approximately) as intelligent as humansFootnote 3 – remain under our control? In other words, how can we guarantee that autonomous intelligent systems do not behave in unintended, undesirable, or even offensive ways toward humans? One prominent, and perhaps the leading, solution to the control problem is the idea of value alignment (Yudkowsky 2016; Christian 2020; Russell 2019), i.e., the idea that we can teach AI systems specific human values and instruct them to take these values into account as part of their decision procedures. Some thinkers believe that the control problem poses an existential risk that must be mitigated in various ways, and researched and addressed as part of a (relatively) new field of inquiry—AI Safety.Footnote 4 Notwithstanding, value alignment itself, as a prominent solution to the control problem, suffers from a few problems of its own (Gabriel 2020; Firt 2023b).

This paper aims to discuss various aspects of full-blown Artificial Moral Agents (AMAs). We begin from a certain starting point, one that presupposes, for the purposes of this discussion, our ability to construct value-aligned full-blown AMAs. Let us clarify our premises:

(a) Full-blown AMAs can be implemented. This means that we are able to construct artificial agents that are autonomous, i.e., able to perform complex tasks, including those demanding human-level cognitive abilities, without human intervention; have moral understanding, i.e., are able to act from morality and not just according to morality (Moor 2006, 2009); and are conscious in the sense mentioned above (to be further discussed in the following sections).

(b) Value alignment is resolved. That is, we are technologically able to teach AI systems a set of predefined values and preferences and direct them to incorporate these values and preferences into their decision-making processes. Note, however, that this does not entail that any system, however advanced and capable, will ultimately follow those instructions to the letter. To further clarify this last point, it might be helpful to think about the analogous case of educating young humans: during our childhood and adolescence, dedicated human beings (parents and close family members, teachers, and other role models) teach us, in various ways, the values that society has decided should guide our behavior. However, when we become adults, we might decide that we know better, or that some forms of social conduct are morally wrong, and that perhaps we should behave differently, as we see fit; we shall revisit this analogy in the concluding section.

Having depicted our presuppositions, the argument we put forward in this paper is the following: contrary to some current beliefs,Footnote 5 the creation of full-blown artificial moral agents, which (or should we say, who?) are endowed with a subjective self and moral emotions, as presupposed, and trained to be aligned with human values, does not in itself guarantee that these systems will have human morality; it is therefore questionable whether they will be inclined to honor and follow values they believe to be morally wrong.

It is crucial to note at this point, and this remains true throughout this paper, that we do not mean to claim that there is such a thing as a universally shared human morality or a strictly defined set of moral values that all human beings agree on; perhaps there are a few moral values that all humans share, but this is irrelevant in the context of this paper. What we mean to argue is that, just as different human communities hold different sets of moral values, the moral systems or values of the artificial agents under discussion would differ from those held by human communities, for reasons we discuss below.

Thus, to further explicate the argument, the paper is structured as follows: in section two, we examine whether and to what extent full artificial moral agents are moral agents in the human sense. In section three, we put forward our argument, namely that the type of morality full-blown AMAs possess is very likely to be different from human morality. In section four, we discuss some implications of our argument and conclude.

2 How full ethical agents meet the criteria for moral agency

In this section, our aim is to describe full-blown artificial moral agents and to show how their features, as so defined, meet the criteria for moral agency, in the same sense that normal adult humans do.

As aforementioned, we begin with Moor’s (2009) taxonomy of artificial ethical agents; we are hereinafter specifically interested in the fourth and most advanced type, i.e., the full ethical agent. These agents are distinguished from less advanced types by their metaphysical features: “full ethical agents have those central metaphysical features that we usually attribute to ethical agents like us—features such as consciousness, intentionality and free will. Normal adult humans are our prime example of full ethical agents.” (p. 1) Allen and Wallach (2011) refer to a similar (if not identical) type, the full-blown artificial moral agent, “(which depends on ‘strong’ AI) or even ‘weak’ AI that is nevertheless powerful enough to pass the Turing Test” (pp. 105–106); they are, however, reluctant to commit to capacities other than autonomyFootnote 6 and moral decision-making capacities.Footnote 7 We shall henceforth define this type of full ethical agent as autonomous, in the sense of being able to operate as designed without human intervention; capable of moral decision-making, in the sense of being able to “identify and process ethical information about a variety of situations and make sensitive determinations about what should be done… [and] when ethical principles are in conflict… work out reasonable resolutions.” (Moor 2009: 1); and finally, conscious to a certain degree—at least to the degree of having intentionality, and morally related emotions such as compassion, empathy, and the ability to praise and condemn.

Now, to examine how these features satisfy the criteria for (the standard view of) moral agency, let us look at the following: “It is generally thought there are two capacities that are necessary and jointly sufficient for moral agency. The first capacity is not well understood: the capacity to freely choose one’s acts.” (Himma 2009: 22) Without getting into debates regarding the concept of free will, this requirement means that no external influence is a direct cause of a moral agent’s action, as Himma (ibid: 23) explicates: “it is a necessary condition for being free and hence a moral agent that one is the direct cause of one’s behavior in the sense that its behavior is not directly compelled by something external to it.” Additionally, and equally important in this context, is the requirement that the moral agent’s action should be the result of a certain deliberative process, which in turn indicates some kind of rationality: “Insofar as one must reason to deliberate, one must have the capacity to reason and hence be rational to deliberate.” (ibid.) The second capacity is loosely described as “knowing the difference between right and wrong.” Obviously, this is too loose a description; no normal adult human being (our paradigmatic example of a moral agent) is able to infallibly determine right from wrong in every circumstance; what we require here is the capacity for moral reasoning, i.e., the ability to use moral concepts, to understand moral principles, and to know when and how to apply them in different circumstances. In conclusion, the necessary and sufficient conditions for moral agency can be summarized as follows: “for all X, X is a moral agent if and only if X is (1) an agent having the capacities for (2) making free choices, (3) deliberating about what one ought to do, and (4) understanding and applying moral rules correctly in paradigm cases.” (ibid: 24) But this is not the entire story, for the aforementioned conditions can be argued to presuppose consciousness as well. Let us see how.

First, when we assert that an agent is acting without external intervention, after a certain kind of deliberation, as defined above, we implicitly presuppose some kind of volition or an intentional state of mind; hence, we presuppose conscious mental states. Also, the very notion of deliberation, i.e., of deciding between alternatives based on rational abstract reasoning, presupposes understanding and the manipulation of concepts and symbols. Second, moral reasoning and moral understanding require, at least when referring to moral agents, conscious mental states (Behdadi and Munthe 2020; Firt 2023a). In any case, the standard view of moral agency, which we presuppose for the remainder of this discussion, acknowledges consciousness as a necessary condition for moral agency (Himma 2009; Behdadi and Munthe 2020; Floridi and Sanders 2001). Note that we do not argue for or against the standard view; we follow Moor’s (2006, 2009) and Allen and Wallach’s (2011) definition of full ethical agents, and we accept the standard view of moral agency as a premise. Next, we shall see how this definition of the full ethical agent meets the conditions for moral agency.

As previously mentioned, full ethical agents are endowed with consciousness, intentionality and free will; moreover, they are autonomous and capable of moral decision-making. These features make them the perfect candidates—alongside humans—for moral agency: conscious intentional agents, making free choices, following moral deliberation, and having the capacity for understanding and correctly applying moral principles. In the next section we argue that although full ethical agents are moral agents in the same sense that normal adult humans are, they will probably not have the same type of ethics or morality.

3 A different kind of morality

In the previous section, we showed how full ethical agents are in fact moral agents, in the same sense that normal adult humans are. In this section, our aim is to argue that while this is true, i.e., that full ethical agents and normal adult humans are moral agents in the same sense, it is very likely that their ethics or morality will be substantially different from human morality in general.Footnote 8 We do this by examining the prominent views regarding the foundations of morality in humans, and showing that these foundations are radically different in the case of artificial agents, or cannot be applied to them at all. Note that we do not aim to provide support for any of the views regarding the basis of morality, or to claim that there is one correct view; our aim is to briefly review them and show how each of them is either not applicable, or is applicable but in a fundamentally different way, where artificial agents are concerned. In what follows, we briefly explore what we consider to be the salient views regarding the basis of morality.

First, evolutionary biology, i.e., the view that moral behavior is a product of evolutionary pressures (Alexander 1989; Moll et al. 2003; Hauser 2006; Joyce 2005; Krebs 2008; FitzPatrick 2021). The essential idea behind this view is that the human traits that underlie moral behavior are the outcome of evolution and were driven by natural selection. In other words, the human brain has been biologically prepared by natural selection to engage in moral judgment, and the underlying neurobehavioral processes were shaped and adapted by social pressures and cues over many years of evolution, producing “social instincts” and decision-making strategies that enabled early humans to optimize social living and resolve conflicts in adaptive ways.

Second, the neuronal basis of morality, i.e., the claim that morality is rooted in our brain structure and neuronal activity (Moll et al. 2005; Greene et al. 2004; Forbes and Grafman 2010; Casebeer 2003; Churchland 2012). Since we are biological creatures, it seems obvious that, at least in some respects, brain activity and neurological processes are causally related to moral thinking, reasoning and behavior.

Third, the essential role of emotions in our moral judgements (Prinz 2007; Roberts 2013; Drummond and Timmons 2023: §2.2; Schmitter 2021). This broad view includes more specific sub-views that support a relatively wide variety of claims, ranging from the hypothesis that morality has an emotional foundation to the claim that ethical value judgments depend on moral emotions, with the strength of this dependence varying from view to view.

Fourth, the claim that morality is rooted in our culture and in human social structures and practices. In other words, morality is based on a social contract or agreement between humans. The idea behind social contract views is to derive the content of morality, and the justification for being obligated to follow moral principles, from the notion of an agreement between all those in the moral domain (Scanlon 1998; Gauthier 1986; Rawls 1971).Footnote 9

Clearly, the previous paragraphs only sketch the outline of each approach. Each of these views has several sub-views and numerous ins and outs; some of the views are interconnected and have complicated mutual dependencies.Footnote 10 Our argument does not depend on any of these inter-dependencies or on the details of any of the views; simply put, our goal is to claim that for any of these views, if assumed to be a foundation of human morality, we can show how it does not hold or cannot be applied in the case of artificial moral agents. For example, if we assume that evolutionary biology has shaped human morality as we understand it today, then the claim is that it cannot be applied in the same way as a foundation for the morality of full ethical agents; in this case, this is because full ethical agents are not biological entities and their evolution will most probably be radically different from human evolution.

Before delving into the details, we should note that these comparisons (i.e., between human morality and the morality of artificial agents) are more complicated than they may seem, simply because humanity and artificial agents may take several different forms in the future. First, there is the issue of human evolution. Certain views (Bostrom 2003a; More and Vita-More 2013) support the claim that human evolution has yet to reach its final state, in which case certain changes, physical as well as mental, are still to occur; future humans may contain certain elements, features or characteristics that we now attribute to artificial systems. In this case, future human morality may manifest itself in diverse ways, as we discuss below. Second, the previous point can be equally applied to the case of artificial agents. In fact, it may even be difficult to distinguish or set definitive boundaries between humans and artificial entities. Will a human with synthetic implants in their brain be considered artificial? Imagine a digital emulation of a human brain, to which the mind of a real person is uploadedFootnote 11; in such a future, can we unequivocally determine the difference between humans and artificial agents? In the following sub-sections, we discuss the main possibilities.

3.1 Contemporary humans, full ethical agents

In this sub-section, we discuss the relatively straightforward case: contemporary human beings as the product of biological evolution as we know it, and future non-human artificial moral agents, meaning those that do not contain human-originated features such as uploaded minds or biological organs.

With regard to the view that evolutionary biology underlies our moral behavior, we can state the following: based on the definitions of this section, artificial moral agents have no biological parts and therefore cannot be subject to evolutionary biology. However, they may still experience certain evolutionary pressures.Footnote 12 It is important to note that the evolutionary pressures that shaped human beings and those that will shape full ethical agents are inherently different: the stuff they are made of is different (biological vs. non-biological), and the environmental and evolutionary processes involved differ significantly. As a result, this approach cannot serve as a foundation for the morality of full ethical agents. Moreover, we think that the fundamental disparities between the entities and their evolutionary paths preclude meaningful comparisons.

Regarding the neuronal basis of morality, we can divide the considerations into two parts: simulation vs. emulation, and the underlying stuff. The first part, simulation vs. emulation, concerns the issue of whether the specific implementation and (hardware and software) architecture of the discussed artificial system is a simulation or an emulation of the human brain. Here, a simulation of the human brain and its activity refers to a system that can replicate many critical aspects of human brain behavior, without necessarily reproducing the brain’s underlying structure or dynamics. Thus, we think it is reasonable to assume that even if a sufficiently accurate simulation of the entire human brain were achievable, its underlying structure and activity would still differ in some potentially significant ways from those of the human brain. Therefore, any morality derived from such a simulation of the neuronal structure and activity is likely to be different.

When we refer to the emulation of the human brain, we mean something similar to Whole Brain Emulation (WBE; Sandberg and Bostrom 2008; Sandberg 2013). Under this option, the specific implementation and (hardware and software) architecture of the discussed artificial system closely matches that of the human brain up to a certain resolution. The primary distinction lies in the underlying material, as we have assumed that this section deals with systems composed solely of non-biological parts. The question then boils down to whether a difference in the underlying material entails a difference in the morality that dependsFootnote 13 on it. Note that under the assumptions of this section, the difference is substantial. Human brains consist of biological cells, mainly neurons and neuroglia, and brain activity is electrochemical. In contrast, artificial systems are built from silicon-based chips manufactured with conventional complementary metal–oxide–semiconductor (CMOS) technology.Footnote 14 We believe that this difference in material is significant and plausibly leads to differences at the dependent higher level (i.e., morality).Footnote 15

Regarding the essential role of emotions in our moral judgements, we present the following arguments: we assume here that full ethical agents possess a certain level of consciousness, which includes morally related emotions such as shame, regret, compassion and empathy, but may encompass a range of other emotions as well. However, we must also consider the underlying foundation of these emotions, without committing to a specific viewpoint. This foundation could involve our physical body, our brain, its structure, neurological activity, evolutionary factors, psychological influences, or a combination of these elements. In all of these cases, whether taken individually or collectively, distinctions exist between human beings and full ethical agents. Consequently, it is reasonable to argue that the emotions rooted in these differing foundations would also differ from human emotions. We do not think we can say anything substantial about the specific nature of these differences, and any speculation about their precise character is beyond the scope of this paper. Nonetheless, we claim that if the emotions experienced by full ethical agents differ (whatever that entails), it follows that the morality based on these emotions will also differ.

With respect to social contract theories, which posit that morality is based on an agreement among members of a moral community driven by self-interest and rationality, we can assert the following: Given that social contracts serve as the foundation for our moral behavior and that full ethical agents are rational beings with their own interests, it is likely that they will establish some form of agreement. This agreement may be motivated by their mutual self-interest or by their rationality and autonomy, mirroring the behavior of human moral agents. However, the key difference lies in the content of this agreement; for all the reasons discussed above, it is reasonable to infer that the moral values upheld by these agents will differ from those of human beings. While some values, such as equality, reciprocity, the rule of law, and consent, are inherent to the concept of a social contract, these values, as well as other values such as justice, freedom, safety and more, may be given different interpretations or varying degrees of importance, or even be replaced or omitted altogether, within moral communities composed of artificial agents, for reasons we might not comprehend.

3.2 Hybrid humans, hybrid full ethical agents

In this sub-section we discuss the general case of hybrid human beings and hybrid artificial agents. First, let us clarify what we mean by these terms: Hybrid humans represent a speculative future stage in human evolution; the term refers to any future stage in which humans commonly use technology to enhance their bodies and cognition. Hybrid artificial ethical agents refer to any artificial full ethical agent that may have a physical embodiment (whether biological or non-biological) and/or contain human-originated features, such as uploaded minds or biological organs.

To be sure, there are endless versions of hybrid humans and full-blown AMAs on the continuous spectrum of human enhancement: between contemporary humans and super- or trans-humans, whose bodies and cognition are technologically enhanced to the fullest; or between software-based full ethical agents and hardware-based, WBE-based, embodied agents with uploaded minds. Clearly, we cannot analyze all these endless versions, but we can reach certain interesting conclusions regarding (what we believe to be) the more interesting and relevant cases, i.e., when humans and full ethical agents become sufficiently similar. Let us clarify why we think these cases are interesting and relevant, and what we mean by ‘sufficiently similar’. As argued in the previous sub-section, there are compelling reasons to think that contemporary humans and full ethical agents (as portrayed in the above sub-section) will differ significantly in terms of morality, due to the dissimilarities described above.

We shall further assume that as humans and full ethical agents become more and more similar, the likelihood of their ethics remaining as dissimilar as before will decrease.Footnote 16 By ‘sufficiently similar’, we mean that the two discussed entities share a certain number of essential properties. We have already discussed different views regarding the foundations of morality, so we have a reasonable understanding of what counts as essential for morality. In the following paragraphs, we examine a case where humans and full ethical agents are sufficiently similar and discuss whether this is enough for a common moral ground.Footnote 17 Note that we do not claim that constructing full ethical agents to be similar to humans is optimal in terms of performance, level of intelligence, functionality, etc. We examine this option because it appears to be the most promising one for yielding a similar morality.

Let us outline the specifics of our case: humans have evolved and are able to enhance their bodies and cognition. They can use artificial organs made of living cells, biodegradable polymers and other materials developed by future bioengineering. More plainly, humans have largely become cyborgs or ‘augmented humans’. Additionally, cognition can be enhanced using implants and wearables. Full ethical agents can now be embodied in a physical body, possibly composed partly of biological parts. Their cognitive abilities are implemented through the emulation of the human brain (WBE), possibly using biological neurons; by definition, this implementation produces consciousness of the kind previously mentioned. In what follows, we present a few significant insights derived from our analysis of this case:

First, depending on the stage of technological development, hybrid humans may exhibit differences from contemporary humans in various respects. Most obviously, their physical features may vary. Physical aspects like the motor system, the perceptual system, and interactions with the environment influence some of our cognitive functions, including perception biases, memory recall, comprehension and high-level mental constructs (such as meaning attribution and categories), reasoning and judgment. Even if one does not completely accept the presuppositions of embodied cognition (Shapiro and Spaulding 2021), the following claims can reasonably be accepted as holding, at least to some extent: our body constrains the concepts we can acquire, i.e., the concepts through which we understand the environment; more specifically, computationally inspired concepts, such as symbol, representation, and inference, may not retain the same meaning, or may need to be replaced entirely, when bodily informed cognitive systems change their ‘infrastructure’; and the body plays a constitutive role in cognition—it is an integral part of a cognitive system. Therefore, we presuppose that a substantial change to the body—e.g., brain implants, brain-computer interfaces, implants affecting the nervous system, or the replacement of significant body parts such as sensory organs, limbs, internal organs, bladders, etc., with artificial parts—will influence the cognitive system.

Consequently, when one accepts these presuppositions to a certain degree, one accepts that embodiment and cognition are interlinked, and as a result so are embodiment and morality. To illustrate, let us explore a few probable features of such a hybrid future, with an emphasis on those related to bodily and cognitive enhancements. In the future, synthetic implants and body organs will be commonly available for transplantation. This could include sophisticated and accessible 3D-printed organs, bioengineered organs, artificial organs, nanotechnology for organ repair, and more. These advancements promise improved health and, together with other medical and scientific advances such as anti-aging pharmaceuticals, genetic interventions and optimized lifestyles, can also extend the human lifespan significantly. Cognitive enhancements will also be commonly available: human–machine interfaces; substances or drugs enhancing cognitive function, leading to improved memory, focus, and problem-solving abilities; genetic engineering; neural implants enhancing cognitive capacities; and integration with digital systems such as AI. In this evolving landscape, it is reasonable to assume that a few of our moral values will be updated or even completely replaced. These changes may affect values associated with our perception of aging; our understanding of sickness, injuries and bodily damage; the meaning of life; and our attitudes towards individuals with unaltered intelligence, such as those without brain enhancements.

Following this, we shall claim that hybrid humans, and specifically those portrayed in the case described above, may have different views regarding (artificial) moral agency; after all, hybrid humans themselves qualify as artificial moral agents to a certain extent – both types of entities are a mix of artificial and biological parts, and both are full moral agents. Consequently, we should consider the following points: First, if hybrid humans are indeed to be considered full ethical agents, then the discussion in the previous section applies to these ‘products’ of near-future human evolution. We will explore this point in more detail shortly. Second, we should keep in mind that hybrid humans will have a different perspective toward other types of artificial moral agents, such as full ethical agents. Several different reasons contribute to this outlook, some of which have already been mentioned: (a) hybrid humans will likely have a different moral system than contemporary humans. Consequently, their decision-making procedures and the values guiding these procedures will differ, possibly to a degree that may appear incomprehensible to us; (b) as presupposed in this section, they will resemble other forms of full ethical agents vastly more than we, contemporary humans, do; and (c) following (a) and (b), they will probably adopt a wholly different viewpoint regarding other types of full ethical agents. For example, they may not perceive artificial agents as entirely foreign or non-human forms of intelligence, particularly in cases where they themselves have brain implants or brain-machine interfaces, allowing for a new biological-digital integration of thought processes.

Notwithstanding, we should keep in mind that hybrid humans and future full ethical agents, even in their similar versions, i.e., the augmented humans and the emulated, embodied artificial agents portrayed at the beginning of this section, will still exhibit certain physical and mental differences, which might vary depending on the specific implementation. In the following and concluding section, we explore the ramifications of the various options outlined in the preceding two sections.

4 Concluding remarks

This concluding section has two main objectives: First, to bring our argument on the morality of full ethical agents to its conclusion, namely that, from the perspective of contemporary humans, these artificial agents are unlikely to share our moral values. Second, to analyze and draw conclusions about the future state of affairs discussed in the preceding sections, where hybrid humans and artificial agents coexist.

Let us begin with the morality of future full-blown artificial moral agents. Based on the discussion in sections two and three, it is reasonable to conclude that these agents will not share a common moral system with contemporary humans. This means they will not adhere to the same values, behaviors, and principles that current human communities consider moral or ethical. This point is particularly important in light of statements like the one made by Meta's chief AI scientist, Yann LeCun, in a recent debate:Footnote 18 “they [future AI systems] will have emotions, they will have empathy, they will have all the things that we require entities in the world to have if we want them to behave properly. So, I do not believe that we can achieve anything close to human-level intelligence without endowing AI systems with this kind of emotions, similar to human emotions. This will be the way to control them.”

According to LeCun, future AI systems will possess empathy and all the qualities necessary for proper behavior, as they will be equipped with emotions similar to human emotions, which he believes will also be our way to control them. This underscores the significance of our argument and its conclusion, namely that the future ethical agents we discuss will lack moral emotions similar to ours to guide their behavior. Consequently, they will not conform to our understanding of proper behavior, making it infeasible to control them based on emotional parallels. In fact, this raises the question of whether we can hope to control them at all. Why so? Because if we create full ethical agents endowed with a different moral system and different moral values, perhaps even foreign values, in the sense of being incomprehensible to us to a certain degree, will it really be possible to align their values with ours? We encourage the reader to engage in the following thought exercise: think of a normal human being, a member of your moral community. Imagine aligning her values with a set of incomprehensible moral values, so that from that moment onwards, she would behave properly according to this set of moral values, which are beyond her understanding and unintelligible as far as she is concerned; remember, these agents are as sophisticated as human beings, capable of moral decision-making as we are, and supposedly care about their moral values as we do. Although we assumed, for the purposes of this paper, that value alignment is solved, note that once full ethical agents are deployed—analogously to young humans reaching “adulthood”—they should be free to shape their inner convictions and act upon them, as long as they stay within the boundaries of the laws of their state and society. Again, this is analogous to adult human beings,Footnote 19 who may decide that they know better, or that they disapprove of some social norms, and choose to behave differently, as they see fit. Consequently, can we trust a full-blown artificial moral agent to follow moral principles that might be contrary to its own inner moral principles?

Now, let us conclude the case of hybrid human beings and full ethical agents. As previously mentioned, hybrid human beings can be considered artificial moral agents to a certain degree, and in any case, it is likely that their perspective on full ethical agents will be different from ours, for various reasons already discussed above. This has a bearing on the previous point as well; when we consider how to deal with full-blown AMAs having a different moral system than ours, we should also take into account the possibility that it might not be us, contemporary humans, who will have to deal with these future full ethical agents, but our future selves, the next step in our evolution, the hybrid humans.

Clearly, a detailed analysis of a speculative future stage is not likely to be accurate, but we find it reasonable to claim the following: (a) as discussed in sub-section 3.2, hybrid humans are likely to exhibit moral differences from contemporary humans; (b) these partially artificial entities, who are also full moral agents, will likely hold a different, less fearful and more sympathetic, view of full ethical agents and their (different) moral values; (c) lastly, hybrid humans will still likely be somewhat different from full ethical agents, but as they come closer and closer to them in terms of their physical and mental structure, we may need to reconsider the distinction between them; in all practical respects, they may become one species.

There is one last point to consider. In this paper, we discuss the difference between our moral values and the moral values of future artificial moral agents. We conclude that they will have different, perhaps foreign, moral systems, which may make it challenging for us to control and trust them. In the previous paragraph, we also noted that hybrid humans, often referred to as cyborgs or trans-humans, may eventually merge or integrate with what we currently define as artificial moral agents. On one hand, some thinkers believe that this future step in evolution—hybrid humans, cyborgs, trans-humans—may eventually act against certain interests of contemporary humanity (Warwick 2003; Fukuyama 2002). Their nature and morals may become detached from those of contemporary humans to such a degree that their actions and interests might not align with our own. On the other hand, they will be the contemporary humans of their future time. Who are we to interfere with their considerations?