At the end of the movie I, Robot, the robot Sonny looks at Detective Spooner and asks him whether they are now friends. Up to that point, Spooner has despised all robotic beings and shown them nothing but hostility. Sonny, however, has proven himself a loyal friend to him over the course of the film. Spooner reaches out his hand to the robot. In a close-up, we see his human hand shaking the robot’s mechanically constructed metal hand. Yes, friendship between AI and humans is, or one day will be, possible and desirable, at least that is what director Alex Proyas tells us. But what does reality look like? Can we humans actually call robots our friends?

Philosophically, the condition for friendship is first of all that there is a reliable moral practice between the two potential friends, based on mutual recognition as agents. This mutual recognition presupposes that we trust one another to have reasons for our actions. In a sense, we presuppose the integrity of the other. We assume that the individual elements that determine the other’s actions and his life as a whole fit together, that we are not dealing with independent elements that are activated depending on the situation. A person who always says what he believes his counterpart expects to hear would no longer be perceived as having integrity.

Motives that do not seem to fit with other motives give us a reason for inquiry. We want to know how this motive for action fits in with others we already know. Or, to put it another way: we want to understand why a person acts in a certain way. We feel perplexed when this is not possible, when we see contradictions that cannot be resolved. It is essential to our connection with other people, whether they are close to us or not, that we trust and expect them to give their lives a coherent, reasoned structure.

AIs do not act according to their own reasons. They have no feelings, no moral sense, no intentions, and they cannot attribute these to other persons. Without these abilities, however, proper moral practice is not possible. To be able to distinguish a justified from an unjustified request, it is necessary to assess the requesting person correctly, to recognize her motives, and to consider her interests. The special obligations to loved ones can only be determined on the basis of shared intentionality and shared emotions. The motive of benevolence presupposes a certain degree of empathy, the ability to put oneself in the shoes of others (Footnote 1). Since a digital computer does not have qualia, it lacks the crucial ingredients of moral judgment; it does not have moral judgment but could at best simulate it.

Assuming an optimization calculus could enable such a simulation—which ethical “program” would one resort to? The two dominating paradigms of ethics are oriented either towards classical utilitarianism, which aims to optimize one’s actions in such a way that the best consequences result, or towards Immanuel Kant’s Categorical Imperative, which demands that one’s motives for action (maxims) be examined for their universalizability: “Act only according to that maxim through which you can at the same time will that it become a universal law.” Which of the two is the right one from the point of view of digital humanism?
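To see what such an ethical “program” would amount to in its crudest form, consider the following deliberately simplistic sketch (in Python; all names, actions, and utility numbers are hypothetical illustrations, not part of the original text). It reduces every candidate action to a single welfare score and picks the maximum, precisely the kind of flattening of moral reasons that the arguments below take issue with.

```python
# A deliberately naive sketch of a "utilitarian optimization calculus":
# each candidate action is reduced to a single number, the sum of its
# expected welfare effects on everyone concerned, and the maximum wins.
# All names and numbers are hypothetical illustrations, not a real model.

from dataclasses import dataclass


@dataclass
class Action:
    description: str
    welfare_effects: dict[str, float]  # person -> expected change in welfare

    def aggregate_welfare(self) -> float:
        # Classical utilitarianism: only the summed consequences count.
        return sum(self.welfare_effects.values())


def choose_action(actions: list[Action]) -> Action:
    # Pick whichever action promises the best overall consequences.
    return max(actions, key=lambda a: a.aggregate_welfare())


if __name__ == "__main__":
    options = [
        Action("keep my promise to help a friend move",
               {"friend": 3.0, "me": -1.0}),                    # total: +2.0
        Action("break the promise and volunteer elsewhere",
               {"strangers": 5.0, "friend": -1.5, "me": -1.0}),  # total: +2.5
    ]
    # With these made-up numbers, the calculus recommends breaking the
    # promise: the commitment itself carries no weight beyond its welfare
    # effects, which is exactly the flattening the arguments below criticize.
    print(choose_action(options).description)
```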

The answer is neither, since both the utilitarian and the Kantian criteria are hopelessly overburdened in the face of the complexity of ethical deliberations. The following arguments can be made in favor of this view.

I. The fact that I am asked to do something by a person is a good reason to comply with that request. This is true independently of whether I thereby do the person some good, and also independently of whether general compliance with such requests is desirable. The request itself constitutes a reason for action. This is where utilitarianism fails.

II. I have a good reason to do something if I have committed myself to it. One’s obligations constitute good, morally binding reasons. This applies quite independently of whether the obligation is connected with sanctions or whether I must expect disadvantages if I do not fulfill it. This is where the Categorical Imperative reaches its limits.

III. I have duties that come with my social and cultural roles. A teacher has special duties towards her students. This constitutes her role as a teacher. Parents have special duties towards their children. This constitutes their role as parents. Neither the teacher nor the parents have the same duties towards children from other classes or another family. The fact that children from another class or another family might be more in need of help than one’s own students or children does not change the special moral bond towards one’s own students or one’s own children.

At the same time, however, moral judgment must take into account the fact that particular obligations limit the principle of equal treatment. Thus, no one will doubt that there is a special degree of reciprocal obligation between persons who are friends or relatives that does not exist to this extent between persons who do not share this kind of bond. Duties that come with social roles, we might say, systematically violate the principle of equal treatment. If we treated all people equally, there would be no bond, no community, no friendship, no humane society.

These criteria of moral judgment can collide. Suppose a fire breaks out in the school building. The teacher must make sure that her class gets out of the building as quickly as possible, but her own child, who attends the same school, is in the next room. Whom should she save first, her child or her school class?

IV. Equality before the law is an expression of an attitude of equal respect and dignity that we (should) accord to all people. This also applies to everyday situations. When tourists ask for directions, we should not make our willingness to help dependent on the color of their skin. A discriminatory everyday practice, such as not wanting to sit next to people of a different skin color on the bus, is incompatible with a humane society and with democracy as a way of life.

It is not inclinations and momentary impulses but our ability to take an evaluative stand that characterizes us as rational beings. This evaluative stand is based on judgment, that is, the capacity for deliberation. This capacity for complex weighing of moral reasons cannot be replaced by an optimization criterion, just as a genuine analysis of the ethical determinants of moral practice cannot take the form of an algorithmic rule, however sophisticated it may be. Moral deliberation can only be done by human beings.

The attractive robot woman Ava from Ex Machina has learned to correctly interpret people’s facial expressions and gestures as well as the modulations of human voices. She knows when her counterpart is angry, sad, or in love (Footnote 2). However, she “knows” this only in the form of abstract knowledge that she uses to achieve her own goal, namely to free herself from her prison. Just as she can read her counterpart, she can also use her own facial expressions and gestures to make her counterpart believe that she herself has feelings. She succeeds in making Caleb believe that she is in love with him and wants to be with him. What Caleb does not understand until it is too late is that there is more separating them than just a glass wall. Ava has no feelings of her own. Like an intelligent autistic person, she has only learned what it is like to objectively “understand” people’s feelings. This enables her to manipulate others but not to have those feelings herself.

“Do you want to be my friend?” Ava asks Caleb about halfway through the film.

“Of course,” Caleb replies.

“Will it be possible?”

“Why would it not be?” he asks.

Caleb falls for Ava’s manipulations. He thinks a friendship between them is possible, even that it already exists. He trusts what she tells him and believes that he can trust her in turn. In the end, this trust turns out to be a fatal misjudgment. To Ava, Caleb is an object like any other; he was merely a means to free herself. When she ends up leaving him locked behind a thick glass wall to his fate, she has no sympathy for him whatsoever. Caleb desperately pounds on the glass and screams out for her. In his face, one can read not only the despair of having to meet his certain death here but also the despair of having been so wrong about her. Ava, now leaving her prison, walks through the forest until she comes to the clearing from where a helicopter will take her to civilization. As the helicopter takes to the skies, the film cuts to Caleb one last time. He tries in vain to shatter the bulletproof glass with a stool. The computer screen in the room remains black, while the light surrounding it is red. These two colors, associated with hell in Christian iconography, are not chosen randomly. His death is horrible, but the real hell is recognizing that Ava, whom he believed to be a sentient being and whom he wanted to help, actually has no feelings or moral judgment at all.

In the last scene, we first see the shadows of people standing at a crossroads. Shortly after, we also see Ava’s shadow. To simply stand at a crossroads one day: that is exactly what Ava had wished for. Now she has fulfilled that wish. The camera suggests to us that Ava also perceives people as if through a thick wall of glass. Like the researcher Mary, who knows all about colors and the neurological concomitants of color perception but has never seen anything colored, Ava may know all there is to know about human behavior but will neither feel like a human nor make moral judgments. So, like all AIs, she will never be able to be a reliable friend.