In this section we develop the main arguments against the possibility of AI being genuinely virtuous, at least in the present and near-future state of technology. We hold that AI-based social robots can only (be taught to) behave in a virtuous way (the externally observable output) but cannot genuinely be virtuous (the internal dimension of virtue). This is because there are three major limitations in the current and foreseeable deployment of AI, linked to the three requirements that an entity needs to fulfil in order to act in a virtuous way. In short, the virtuous agent will perform (1) the right actions, (2) with the right feelings, (3) in the right way. We leave aside the second condition, given the limitations of social robots in fulfilling it, as discussed at the end of the previous section. Furthermore, given that the third condition of acting in the right way includes both acting for the right reasons and acting in the right circumstances, we split it into two further conditions. We thus reach a set of three conditions that robotic AI systems need to satisfy in order to be virtuous: (a) performing the right actions, (b) for the right reasons, and (c) in the right circumstances (for variations of these conditions see [28, 31, 40]). We discuss each of these three requirements below.
Right Actions
To discuss the possibility of AI performing the right actions, we build our argument based on the virtuous—virtuously (VV) distinction introduced by Roger Crisp [31] to account for the specificity of virtue ethics in relation to deontology and utilitarianism. Crisp ([31], pp. 269–270) defines the VV distinction this way: “A virtuous action in certain circumstances is what is required in those circumstances and what a virtuous person would do in those, or relevantly similar, circumstances. A virtuous action is done virtuously (at least in part) when it is done, for the right reasons, from a firm disposition to perform actions of such a kind (that is, from a virtue)” (emphasis added).
The VV distinction starts from Aristotle’s note that actions done in accordance with virtue require that the agent “acts in a certain state, namely, first, with knowledge, secondly, from rational choice, and rational choice of the actions for their own sake, and, thirdly, from a firm and unshakeable character” (NE, 1105a30–33). As Crisp puts it, there is a difference between “doing the right or virtuous action, and doing the action in accordance with virtue or ‘virtuously’” ([31], p. 269).
But before asking whether robotic AI systems can act virtuously by satisfying the three conditions above, we need to take a stance on the very possibility that AI can act. Although this is not the place to contribute to this ongoing debate, nor to the way it is linked to the debate around autonomous AI, we briefly note a point regarding the possibility of AI acting. In our view, AI ‘acts’ in a way situated in between the broad sense in which humans act and the sense in which, for instance, a stone ‘acts’ when it ‘breaks’ a window. Imagine a child needs to be pushed out of the way of an oncoming vehicle. I might see this and intentionally save the child. Alternatively, a stone might roll down from the mountainside and push the child out of the way, with the same result minus the intention. Or a social robot relying on AI algorithms might push the child and thus save the day. Now the question is: does the robot act more like the human, or the stone? If the robot is programmed based on a machine learning AI algorithm, which is not limited to basic programming rules such as ‘push the child out of the way of an oncoming vehicle’ but is rather based on complex programming such as ‘protect and save children’s lives’, which requires intelligent reasoning to be instantiated as action in particular situations, then we might accept that the robot acts more like the human and less like the stone. This is the case with robot companions such as Jibo [41], or the loneliness robots that we can envisage in the near future given current developments in social robotics.
Nonetheless, the robotic AI system seems to be far from the point at which we might drop the quotation marks from its ‘acting’ and equate it with human acting, given complex issues related, for instance, to intentional, cognitive and psychological mechanisms. From a virtue ethics perspective, this has to do with the difference between its doing the right action (pushing the child) and its doing it in the right way (pushing the child for the right reasons, in the right circumstances, in the right manner, etc.). We turn to the possibility of AI performing an action in the right way in the next sections of our article.
At this point, for the sake of the argument, let us accept that AI can act in a way that at least externally resembles human acting sufficiently to drop the quotation marks. In this case, is the robotic AI system able to act virtuously? Arguably, the first two conditions for acting virtuously, namely, (i) acting with knowledge and (ii) from rational choice of the actions for their own sake, may currently or in the near future be accomplished by AI. However, we hold that the third condition, (iii) acting from a firm and unshakeable character, imposes stronger requirements, which are unattainable by AI at present or in the near future. This is because condition (iii) states that acting virtuously or rightly is related to the character of the virtuous agent, understood as a “fixed and permanent state” ([42], p. 136). Virtue is a stable and enduring trait of a person, and in praising or blaming an agent for acting virtuously we are considering to a significant extent the agent themselves as possessing such a virtue or disposition ([33], p. 26). Because virtue is related to character, it cannot be evaluated in isolation or in a fragmented manner, but only by considering someone’s life as a whole [25, 27, 28, 34, 40]. This emphasis on the intrinsic value of the virtuous character “rests on the plausible assumption that we care about what people—ourselves or others—are like, and not simply about what they do” ([32], p. 38).
Right Reasons
When someone performs a virtuous action, they act for the right reasons and with the right motivation (NE, 1139a32–37). While robotic AI systems have the capacity to display externally observable virtuous behaviour, things become complicated when it comes to reasons or motivations.
But is reference to reasons and motivations necessary, as long as AI systems behave in a way functionally indistinguishable from that of a moral human person? Many would answer in the negative [4, 18, 43]. For instance, Danaher ([3], p. 2023) endorses an “ethical behaviourism” view, holding that “robots can have significant moral status if they are roughly performatively equivalent to other entities that are commonly agreed to have significant moral status”. Similarly, Howard and Muntean ([44], p. 220) hold that morality can be quantified through moral behaviour: “the moral decision-making process is ultimately a computational process, similar to the classification of complicated patterns, playing games, or discovering strategies, creating new technologies, or advancing new hypotheses in science”.
However, there are those who hold that we need to take into account not only the what, but also the why of AI moral behaviour and decision-making. Despite the possibility that AI systems might turn out to make moral decisions that are extensionally indistinguishable from, or even better than, those of humans, their decisions could not be made for the right reasons ([45], p. 36): “AI cannot be motivated to act morally; it simply manifests an automated response which is entirely determined by the list of rules that it is programmed to follow. Therefore, AI cannot act for reasons, in this sense. Because AI cannot act for reasons, it cannot act for the right reasons”. No matter how appealing this argument sounds, it tends to over-simplify recent developments in ML-based AI, especially research on trained neural networks, which demonstrates that AI may display unpredictable behaviour. This may be taken to suggest that AI may act for reasons. But is AI able to act for the right reasons?
In the Aristotelian virtue ethics framework, the fact that the virtuous person acts for the right reasons means that these reasons have been integrated or embedded in their way of being. Virtuous agents phenomenally perceive their situation [30, 46], including reasons and motivations for acting virtuously. When a virtuous person acts virtuously, they do not need to evaluate each time whether they have good reasons for doing x, because this is already an implicit motivator for them [34]. This idea rests on the point highlighted by virtue ethicists that there is a difference ‘on the inside’ between someone who is good or virtuous and someone who is not, with the implication that there is something characteristic of “what it is like to be a good person” ([34], pp. 21–22). Such a difference resides in the way the virtuous person deliberates.
Virtue requires the right or appropriate attitude to the virtuous action, resulting in an inner harmony of the virtuous person with their choice of action. The mere performance of a good action does not make the person virtuous, because the action could have resulted from the wrong reasons. This is a requirement that, for instance, the merely continent person cannot satisfy [32]. While the enkratic or self-controlled person behaves virtuously against their inclinations, the virtuous person acts virtuously in harmony with theirs, without the need to put effort into the way they deliberate on possible courses of action [34]. In Aristotelian terms, the former takes pain and the latter pleasure in acting virtuously, an idea generally shared by most accounts of virtue ethics [34].
But if we agree with Annas that there is such an internal difference between the virtuous and the vicious, besides the external difference reflected in their different actions, would it be right to accept that a similar internal difference also obtains between humans and robotic AI systems performing virtuous actions?
The use of deep neural networks in non-embodied algorithms such as AlphaZero does indeed demonstrate the growing autonomy of AI, which can be trained by mixed methods of supervised learning and reinforcement learning, resulting in AI that generates its own strategies for action [47]. Such strategies remain to an important extent opaque and unpredictable. Machine learning based AI systems are in fact designed with the very intention that they train themselves to reach unexpected and unpredictable results [48]. Nonetheless, whatever the unpredictable output, it relies heavily on the datasets fed to the training algorithm. Unlike human beings, AI systems deliberate on the right reasons by constantly and instantly calculating the right thing to do out of the (potentially) infinite possibilities embedded in the huge set of data used for training. And the way the AI algorithm arrives at its strategies from these datasets is, at bottom, a mathematical calculus.
But this mathematical calculus resulting in virtuous output behaviour does not amount to being virtuous, despite the impressive rational abilities it involves. Ethical deliberation is inherently different from mathematical deliberation because it takes into account the particular contexts, including the reasons, of particular people ([49] cited by [17]). The virtuous agent exerts moral judgement that cannot be reduced to a “fixed, psychologically detached, and entirely transparent rational mechanism” ([7], p. 14). Aristotelian virtue ethics highlights that “moral knowledge, unlike mathematical knowledge, cannot be acquired merely by attending lectures” ([30], p. 24). We further explain moral knowledge in the next section of our article, where we discuss the role of practical wisdom in connection with acting in the right circumstances. We end this section by highlighting that the virtuous person differs from a non-virtuous one not only in behavioural or performative aspects but, more importantly, in their inner deliberative process, which is such that it makes moral knowledge possible. The fact that artificial agents lack moral motivation makes them, rather, artificial psychopaths [50].
Right Circumstances
Doing the virtuous or right action also implies the ability to develop and exert phronesis or prudence or practical wisdom, namely, wisdom to discern rationally the proper course of action relative to a specific situation [28]. The guiding power of virtues is related to exemplary models: the right or good action is that which a virtuous person would choose to do in the given circumstances (NE, 1105b5–7), and this choice is guided by practical wisdom. We argue that phronesis requires a form of deliberation that is inaccessible to AI.
Unlike other virtues such as courage, justice or temperance, practical wisdom is an intellectual (dianoetic) virtue, as it pertains to the rational part of the soul rather than to character. “Intellectual virtue is acquired primarily through teaching, while the virtues of character arise through habit” ([29], xiv). Although it is an intellectual virtue, practical wisdom “operates within the moral realm, uniting cognitive, perceptual, affective, and motor capacities in refined and fluid expressions of moral excellence that respond appropriately and intelligently to the ethical calls of particular situations” ([15], p. 99). For this reason, as Aristotle contends, without phronesis, ethical virtues and virtuous actions are impossible: “Virtue makes the aim right, and practical wisdom the things toward it” (NE, 1144a10).
Practical wisdom involves the capacity for moral reasoning and decision-making, that is, being able to deliberate morally in specific contexts. It is “less a capacity to apply rules than an ability to see situations correctly” ([29], xxiv). The person who is practically wise “sees what to do in an immediate way and does the good thing in a close to automatic way, as if it were second nature” ([9], p. 206). This happens because the phronimos has internally embedded, through prior experience, that which makes an action right relative to particular circumstances, and acts almost intuitively, though not unknowingly or unconsciously [51]. It takes prudence to discern the right or virtuous course of action, and this involves a form of intuitive understanding “of the right aspects of particular situations” ([32], xx). This form of understanding further rests on moral habits, involving a combination of practical skills and implicit knowledge that contribute to developing stable dispositions to act ethically in complex circumstances [41].
The type of deliberation required by prudence cannot be replicated by AI. Deliberation is “a part of being practically wise” ([29], xxv) and is concerned with variables, with things that are inexact. For this reason, practical wisdom cannot simply be taught, but must instead be learned in real-life situations, through exercise or practice. Phronesis is not a form of propositional knowledge, quantifiable in a set of programmed rules [51]. Virtue cannot be defined strictly in terms of behavioural rules without the exercise of moral judgement [32]. It takes prudence to perform a virtuous action, not some set of general rules to be applied mechanically to specific life contexts [32, 46]. Of course, we have various examples of robotic AI systems being responsive to some forms of stimuli, from complex processing of natural language to facial expressions and emotion-like reactions such as smiling or whimpering, but this remains a limited input–output relationship. Phronesis, instead, is directed at oneself in relation to one’s whole life and “involves a practical knowledge about oneself from the inside out, and from within the particular situation in which one exists” ([51], p. 215).
To deliberate correctly on the proper course of action, the virtuous agent relies on practical wisdom that further stems from a particular conception of how one should live [27, 30] and of the nature of the good life specific to human beings [8]. Understanding how to act in particular circumstances is informed by this type of moral knowledge, which is knowledge of the good, and which requires knowledge about how the world works [8]. Paying attention to the specific circumstances of right action is part of acting in the right way and this is an important element of exercising virtues in the broader goal of pursuing the good life, in the Aristotelian tradition of virtue ethics. And it is in this context that the life history of agents becomes important to understanding the relevant elements conducive to right actions done in the right way [17, 26, 42]. This personal life history is needed for agents to embed context sensitivity in their deliberations, enabling the exercise of practical wisdom.
Is phronesis therefore completely out of reach for AI at present or in the very near future? Using the concept of functional morality coined by Wallach and Allen [39], we hold that AI can at most display functional, that is, apparent, phronesis. The idea of endowing AI with some capacity for moral reasoning implies that we have a clear picture of what the correct moral truth is [17], so that we can code a form of moral epistemology on the basis of which AI can learn [52]. However, making the right moral decision “is not a chess game where the outcome is a win or loss” ([52], p. 731). Practical wisdom is essential in deciding upon the right or virtuous action, and this is a major difficulty for the possibility of AI systems being virtuous. There is no fixed corpus of ethical truths (be it in the form of textbooks or examples of human responses) to be used as a training dataset for deep learning algorithms [17]. The form of training that AI could undergo to display functional prudence would at most equip it with some form of behavioural competence, but not comprehension, much like neurons in a brain [53]. Functional morality does not imply that AI understands the tasks it performs, but only that it carries out ethical determinations. Apart from the ongoing discussion concerning the (im)possibility of implementing ethics into machines [17], such a divide means that AI is not a genuine phronimos, as this would require a complex and profound, including moral, understanding of the situational context, in addition to performing the right actions for the right moral reasons.
Having now argued that robotic AI systems cannot be virtuous, based on their incapacity to satisfy the three conditions of (a) performing the right actions, (b) for the right reasons, and (c) in the right circumstances, we return to the supporter of moral AI and their possible objections envisaged at the end of the first section. At this point in the discussion, it is fair to expect that the supporter of moral AI will further object to our distinction between being virtuous and behaving in a virtuous way (in addition to their objection to the requirement that a virtuous entity acts with the right feelings). They will probably argue that this distinction, too, rests on an anthropocentric and all-too-demanding understanding of virtuous action as embedding an inner dimension that obviously cannot be fulfilled by robotic AI systems. Indeed, even though we initially refrained from dismissing the possibility of AI being virtuous on the basis of the inner dimension of virtue (that is, feelings), we ended up arguing that even the external dimension of virtue (that is, actions) rests on inner requirements that AI simply cannot fulfil in order to genuinely be virtuous.
But this does not necessarily mean that we must keep bringing further arguments to resist the objection of anthropocentrism. It might simply mean that, given the virtue ethics understanding of virtue in the Aristotelian tradition, AI is not a fit candidate for what it means to be a virtuous entity, and that it has to settle for a less demanding (but nonetheless important) moral goal, namely, to behave in a virtuous way. Or it might well be the case that we need to develop other types of virtues, specific to AI, to escape anthropocentrism, such as android arête, where, however, the use of ‘virtue’ is rather metaphorical [54]. Or the focus might instead be placed on the human user and their character, on deploying social robots (including isolation robots) to develop virtues in humans [7, 8, 15]. But if we are to evaluate the possibility of current or near-future robotic AI being virtuous within the framework of Aristotelian virtue ethics, then the answer is negative. Our point is consistent with the view advanced by other scholars in the broad discussion concerning the moral status of robots and AI, namely, that what “goes on ‘on the inside’ matters greatly” ([55], p. 223), somewhat mirroring the (older) debate around philosophical zombies. All things considered, we may currently and in the near future only speak of human-like AI systems much in the way we speak of human-like zombies. And zombie agents are not virtuous, at least not in the way virtue ethics understands being virtuous.