Early morning somewhere in a future American city. Detective Spooner, a cool guy in a leather coat, Converse All Stars, and a baseball cap, gets ready to go to work. As he opens the door, he flinches. In front of him stands a humanoid metal FedEx robot with a parcel under its arm.

“Good morning, Sir,” the robot greets him politely. “Yet another on-time delivery from…” But that’s as far as he gets.

“Get the hell out of my face, canner,” Spooner tells him while pushing him aside. The robot looks at him, seemingly confused, but wishes him a nice day anyway.

This is the year 2035. Robots are not only used in factories but also in private households. They walk alongside people on the street, take out the garbage, do the shopping, and walk their owners’ dogs. At least, that’s what the world looks like in the movie I, Robot (Alex Proyas. USA, 2004). These robots are presented to us as submissive servants who are not treated particularly well. When they are bumped into, they are the ones who apologize. Their status is that of slaves, whose only purpose is to be used by humans. At the beginning of the film, the following sentences are projected onto the screen:

  1. A robot shall not harm a human being or, through inaction, allow a human to come to harm.

  2. A robot must obey the orders given to it by humans except for those orders that conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.Footnote 1

These laws, adapted from Isaac Asimov’s short story “Runaround” (1942), make it quite clear how robots are supposed to function: as a strict hierarchy of constraints, each law subordinate to the ones before it.
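Read this way, the Three Laws describe a lexicographic priority ordering: a lower-ranked law only comes into play among the options that already satisfy every higher-ranked one. The following Python sketch is purely illustrative, not from Asimov or the film; the Action fields and the choose_action function are hypothetical stand-ins for whatever a robot’s control system would actually evaluate.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    """A candidate action, described only by the properties the Three Laws weigh."""
    name: str
    harms_human: bool       # would injure a human, by act or by inaction
    obeys_order: bool       # carries out a standing human order
    self_destructive: bool  # endangers the robot's own existence

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Apply the Three Laws as lexicographic filters over candidate actions."""
    # First Law: an absolute filter; actions that harm humans are never eligible.
    safe = [a for a in candidates if not a.harms_human]
    # Second Law: prefer obedient actions, but only among the safe ones.
    obedient = [a for a in safe if a.obeys_order] or safe
    # Third Law: prefer self-preservation, subordinate to Laws One and Two.
    surviving = [a for a in obedient if not a.self_destructive] or obedient
    return surviving[0] if surviving else None
```

The `or safe` and `or obedient` fallbacks encode the subordination clauses: a robot in this sketch may end up disobeying an order or sacrificing itself, but only when every alternative would violate a higher-ranked law.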

Spooner, the protagonist of the film, feels nothing but contempt for robots. For him, every robot is under general suspicion: when a theft occurs in the city, he always suspects robots first, not humans.

Not only Spooner but society as a whole has little compassion for its mechanical slaves. Once they are no longer needed, they are disposed of on the outskirts of the city and put into containers, where they must spend the rest of their probably eternal digital existence. There they stand, nestled closely together as if to comfort each other. The robots’ “faces” reflect a kind of noble capacity for suffering. They are sad robots, robots that don’t understand why they are treated so badly. Proyas wants us to conclude that this discriminatory treatment of robots is unjust and inhumane.

In reality, however, no one has yet come up with the idea of applying the norms of the Animal Welfare Act to robots, let alone granting them human rights. There is a practical consensus that computers and robots have no mental states. We agree that robots, unlike animals (to which the capacity to suffer is attributed), are not sentient. So far, there has been no serious initiative to grant rights to computers or software systems based on their sentience.

There is nothing to suggest that even today’s most complex software systems possess consciousness. If they did, we would have to strictly regulate their further use with immediate effect and attribute fundamental and human rights to them. Painless killing, which is permissible in the case of animals but ethically and legally impermissible in the case of humans, would then be prohibited. In analogy to the Great Ape Project, which seeks to overcome speciesism and grant human-like animals human rights to the extent that they have comparable characteristics, robots and autonomous software systems would also have to be granted human rights. If we assume that the robots we create are personal beings endowed with an identity, responsibility for their actions, autonomy, and the accompanying individual dignity (a so-called e-person, or electronic person),Footnote 2 then the software systems in question could no longer be manipulated, in analogy to the right of informational self-determinationFootnote 3 of human individuals, because this would contradict the Kantian principle of the non-instrumentalization of rational beings.

And yet some proponents of Artificial Intelligence claim that it is fundamentally impossible to distinguish between a human brain and a computer. Thus, lawyers and sociologists are increasingly concerned with the question of the extent to which (future) robots can be held liable in the event of errors, i.e., bear legal responsibility. In international research institutions, lawyers are asking whether robots are to be regarded as mere tools for which their owners or manufacturers must be liable, or whether, depending on their degree of autonomy, they will at some point enjoy a special status that grants them responsibility and rights. After all, according to this legal argument, robots would also have duties to fulfill.

In October 2017, Saudi Arabia officially granted a robot citizenship for the first time in history. The robot in question was “Sophia,” an android with a female face and body that mechanically simulates facial expressions. Citizenship theoretically gives Sophia not only rights but also duties. The very fact that she is allowed to move about unveiled, unlike Saudi Arabian women, caused much discussion in Saudi Arabia and beyond.

In I, Robot, robots have a lot of duties. If they don’t fulfill them, they are prosecuted just like humans. But then, by implication, shouldn’t they also have rights like humans? This correspondence of rights and duties is, after all, the foundation of ethics and law in civil and democratic societies.

Just like I, Robot, the film A.I. Artificial Intelligence (Steven Spielberg. USA, 2001) imagines a future in which robots have become a normal part of our everyday life. They are slaves and service providers. Sad service providers, one might add, as they are presented to us as sentient beings who suffer from being treated as second- or even third-class humans. Spielberg makes his position quite clear, using melodramatic means to make the viewer believe that in the near future it will be essential to grant robots not only legal rights but also, and especially, the right to (human) dignity.

Anyone who takes seriously Spielberg’s idea that a robot has the same dignity as a human being must assume that humans and computers or software systems are indistinguishable. But anyone who thinks that there can be no categorical difference between human brains and computers is denying the foundations not only of scientific practice but of the human way of life in general. Whoever resents his PC for being disobedient has a problem with rationality and reality: he is attributing properties to his computer that it does not have. Only in philosophy seminars can the indistinguishability of humans and machines be asserted. Outside them, this assertion seems grotesque, as it is incompatible with the actual practice of those who advocate it. Of course we turn off our computers when we no longer need them, and we dispose of them at the junkyard without shedding a tear. The computer is not an Other but a tool, far more complex than a shovel, far surpassing some human capabilities, but still just a physically describable apparatus without desires or beliefs. With this in mind, we should not strive to make robots as human-like as possible.

In one of the most emotional scenes of Spielberg’s film A.I., we see discarded robots being brought to a kind of circus arena. Under the eyes of a roaring crowd, they are loaded into a cannon and shot into the air. “But I still function perfectly,” one robot protests in despair as he is led into the arena. Obviously, the robots don’t want to die. But the drunken crowd has no sympathy; to them, robots are just an accumulation of metal. To the viewer, however, the robots are presented as sentient beings who suffer from unjust and inhumane treatment. Just because robots are machines, the film tells us, doesn’t mean they are worth less than humans: they have the same dignity.

In philosophy, it is quite controversial what constitutes human dignity. Some believe that it is a special sensitivity and the capacity for suffering that demand special consideration. Others believe that human beings have (basic) rights by nature, or by God, which are inalienable and which constitute the special dignity of human beings. Those who stand in the tradition of Immanuel Kant base dignity on the autonomy that is inherent in human beings. Accordingly, it is the human capacity to weigh up reasons that makes humans autonomous agents and gives them their special status as beings who have dignity.Footnote 4

In his book The Decent Society, the Israeli philosopher Avishai Margalit has placed human dignity and self-respect at the center: we must not treat anyone in such a way that he has reason to feel humiliated and harmed in his self-respect. Artificial intelligences have no self-respect, no feelings that we can hurt. Their personal identity is not vulnerable, and they do not have the ability to reflect on their lives. The preconditions for ascribing dignity to them are not fulfilled.

Since human dignity and human rights are so central to our very understanding of ourselves, but also to the legal and political order in which we live, we should be careful not to jeopardize this core of human ethos by overextending it. Populating the world with Artificial Intelligence, to which we attribute abilities and characteristics comparable to those of humans, would inevitably lead to the destruction of this ethos. Seen in this light, it makes more sense to read Spielberg’s oppressed robots as a metaphor for the treatment of African American slaves in history than as a realistic depiction of an abusive treatment of robots.

At the end of A.I., there are no humans left on earth. Not a big loss, it seems to the viewer, who has encountered only cold-hearted human beings throughout the film. The only beings who showed compassion in A.I. were robots, robots that had been oppressed and abused. At the end of his long story of suffering, the protagonist David, the little robot, is finally redeemed by angel-like alien robots who have come to Earth. He, who has longed for the love of his long-deceased human mother all his life, is now given the opportunity to be reunited with her, as the alien robots bring her back to life through a DNA reconstruction. At last David can be happy. Although this bliss will only last one day (the reconstruction cannot survive longer than 24 hours), his wounds can now heal. The film thus joins the ranks of melodramatic, Christian-influenced narratives of the nineteenth century such as the novel Uncle Tom’s Cabin (Harriet Beecher Stowe, 1852), in which the African American protagonist Uncle Tom must endure great hardship, suffering, and even death in order to receive salvation (and make the readers understand that racism is bad). Seen from this perspective, Spielberg’s A.I. needs to be read not as a realistic and serious assessment of the status of robots, but as perpetuating the Christian narrative of suffering and resurrection and as a metaphorical comment on racism.