This chapter introduces the theories that form the basis of the ethical review of robots and AI systems. We introduce the major approaches to moral theory (deontology, consequentialism and virtue ethics) and discuss the relation of ethics to law. Finally, we discuss how these theories might be implemented in a machine to enable it to make ethical decisions.

The terms “ethics” and “morality” are often taken as synonyms. Sometimes they are distinguished, however, in the sense that morality refers to a complex set of rules, values and norms that determine or are supposed to determine people’s actions, whereas ethics refers to the theory of morality. It could also be said that ethics is concerned more with principles, general judgements and norms than with subjective or personal judgements and values.

Etymologically, the word ethics goes back to the ancient Greek “ethos”. This originally referred to a place of dwelling or location, but also to habit, custom and convention. It was Cicero who translated the Greek term into Latin with “mores” (customs), from which the modern concept of morality is derived (Cicero 44BC). The German philosopher Immanuel Kant (see Fig. 3.1) characterised ethics as dealing with the question “What should I do?” (Kant 1788). There are several schools of thought on ethics, and we introduce them here in no particular order.

Fig. 3.1 Immanuel Kant (1724–1804) (Source Johann Gottlieb Becker)

3.1 Descriptive Ethics

Most people, when thinking of ethics, have normative ethics in mind, as described below. Like ethnology, moral psychology or experimental economics, descriptive ethics deals with the description and explanation of normative systems. For example, experimental results exhibit certain features of people’s moral intuitions: studies using the “ultimatum game” show that many people have certain intuitions about fairness and are willing to sacrifice profits for these intuitions (Güth et al. 1982). These empirical insights form the basis of descriptive ethics, which in turn provides essential input for normative ethics. Normative evaluation of actions is not possible without such descriptive, empirical elements. In recent years, “experimental ethics” has even emerged as a sub-discipline in its own right (Lütge et al. 2014). For the remainder of the book, we use the term ‘ethics’ to mean ‘normative ethics.’

3.2 Normative Ethics

Ethics can be defined as the analysis of human actions from the perspective of “good” and “evil,” or of “morally correct” and “morally wrong.” If ethics categorises actions and norms as morally correct or wrong, one speaks of normative or prescriptive ethics. An example of a norm is that the action of stealing is morally wrong. Normative ethics is usually not regarded as a matter of subjectivity but of general validity: stealing is wrong for everybody. Different types of normative ethics judge actions on the basis of different considerations. The most important distinction usually made here is between two types of theories: deontological and consequentialist ethics.

3.2.1 Deontological Ethics

Deontological ethics evaluates the ethical correctness of actions on the basis of characteristics of the action itself. Such a feature may be, for example, the intention with which an action is performed, or its compatibility with a particular formal principle. The consequences of an action may be considered in addition, but they do not form the exclusive basis of the judgement. The term deontology, or deontological ethics, derives from the Greek “deon”, which essentially means duty or obligation. Deontology can thus be translated as duty ethics.

To give a practical example of deontological ethics: since the 2000s, large and medium-sized companies have increasingly tried to project a social or environmentally friendly image through certain marketing and PR measures. Often, as part of these measures, companies donate sizeable sums to combat certain social ills, improve their environmental footprint, or work with NGOs to more effectively monitor the conditions of production among suppliers. Nevertheless, many citizens refuse to assess this commitment as ethically genuine. Public discussion sometimes ridicules such programmes of Corporate Social Responsibility (CSR). Critics argue that in these cases companies are not really concerned with rectifying grievances, but only with polishing up their own image and ultimately maximising their bottom line, albeit in a more sophisticated way. Regardless of whether the CSR projects in question help to improve some of the (environmental or social) issues, critics are more concerned with the companies’ motivations than with their actions or the results. The companies’ motivations are the key deontological element in this argument.

Kant developed one of the most frequently cited deontological theories. He argues that an action is only obligatory if it satisfies the “categorical imperative”. There are many different wordings of the categorical imperative, which is best understood as a way of determining ethically permissible types of behaviour. The most frequently cited version states, “Act only according to that maxim you can at the same time will as a universal law without contradiction.” (see Sect. 4.3.3).

3.2.2 Consequentialist Ethics

Consequentialism is another important ethical theory. Consequentialist theories determine the ethical correctness of an action or a norm solely on the basis of its (foreseeable) consequences. The difference between consequentialism and deontological ethics can be seen in the previous example. From the perspective of consequentialism, the motives of a company to invest in CSR play no role. For the ethical evaluation of a company’s CSR programme, the only decisive considerations relate to its impact on society, wildlife, nature or social harmony. As long as a CSR programme promotes certain values or, more generally, helps to solve certain social problems, the programme can be considered ethical. This also applies if a particular CSR programme was merely motivated by the desire to improve the image of a company or increase sales.

3.2.3 Virtue Ethics

The concept of virtue ethics mainly goes back to the Greek philosophers Plato (see Fig. 3.2), who developed the concept of the cardinal virtues (wisdom, justice, fortitude, and temperance), and Aristotle, who expanded the catalogue to eleven moral virtues and even added intellectual virtues (such as sophia, i.e. theoretical wisdom). The classical view held that acting on the basis of the virtues was equally good for the person acting and for the persons affected by their actions. Whether this is still the case in modern differentiated societies is controversial.

Fig. 3.2 Plato (Source Richard Mortel)

3.3 Meta-ethics

If ethics can be regarded as the theory of morality, meta-ethics is the theory of (normative) ethics. Meta-ethics is concerned, in particular, with matters of existence (ontology), meaning (semantics) and knowledge (epistemology). Moral ontology is an account of what features of the world have moral significance or worth. Moral semantics is an account of the meaning of moral terms such as right, wrong, good, bad and ought, to name the most prominent. Moral epistemology is an account of how we can know moral truth.

3.4 Applied Ethics

Normative ethics and meta-ethics are usually distinguished from applied ethics. Applied ethics refers to more concrete fields where ethical judgements are made, for example in the areas of medicine (medical ethics), biotechnology (bioethics) or business (business ethics). In this sense, general normative considerations can be distinguished from more applied ones. However, the relation between the two should not be seen as unidirectional, in the sense that general (“armchair”) considerations come first and are later applied to the real world. Rather, the direction can run both ways, with the special conditions of the area in question bearing on general questions of ethics. For example, the general ethical principle of solidarity might mean different things under different circumstances. In a small group, it might imply directly sharing certain goods with your friends and family. In a larger group or in an entire society, however, it might imply quite different measures, such as competing fairly with each other.

3.5 Relationship Between Ethics and Law

Often, ethics and law are seen as clearly distinct from each other, sometimes even as opposites, in the sense, for example, that ethics starts where the law ends. Persons or companies would then have legal duties and ethical duties, which have little relationship with each other. However, such a view can be challenged in several ways. First, legal rules often have an ethical side, too. For example, legal norms that make environmental pollution illegal remain ethical norms as well. Much of the legal framework of a society (like anti-trust laws) has great ethical importance for it. Second, ethics can, and to some extent has, become a kind of “soft law”, in the sense that companies need to follow certain ethical standards even if the law in a particular country does not strictly require it. For fear of damaging their reputation or decreasing the value of their stock, for example, companies in many cases adhere to ethical rules which for them have nearly the same consequences and impact as legal rules (“hard law”). At times the specific ethical practices of a business are even used as a unique selling point, as when companies sell “fair trade” coffee instead of just plain, legally compliant coffee.

3.6 Machine Ethics

Machine ethics attempts to answer the question: what would it take to build an ethical AI that could make moral decisions? The main difference between humans making moral decisions and machines making moral decisions is that machines do not have “phenomenology” or “feelings” in the same way as humans do (Moor 2006). Nor do they have “moral intuition” or “acculturation”. Machines can process data that represents feelings (Sloman and Croucher 1981); however, no one, as yet, supposes that computers can actually feel and be conscious like people. Life-like robots have been developed (e.g. Hanson Robotics’ Sophia, see Fig. 3.3), but these robots do not possess phenomenal consciousness or actual feelings of pleasure or pain. In fact, many argue that the robot Sophia represents more of a corporate publicity stunt than a technological achievement and, as such, shows how the mystique around robots and artificial intelligence can be harnessed for attention. In this section we discuss how to design an AI that is capable of making moral decisions, presenting both technical and philosophical elements. We should note, however, that the goal of creating machines that make moral decisions is not without detractors (van Wynsberghe and Robbins 2019). Van Wynsberghe and Robbins note that, beyond intellectual curiosity, roboticists have generally failed to present strong reasons for developing moral robots.

Fig. 3.3 The Sophia robot (Source Hanson Robotics)

The philosophical element is a detailed moral theory. It will provide us with an account of what features of the world have moral significance and a decision procedure that enables us to decide what acts are right and wrong in a given situation. Such a decision procedure will be informed by a theory as to what acts are right and wrong in general.

For philosophical convenience, we assume our AI is embedded in a humanoid robot and can act in much the same way as a human. This is a large assumption but for the moment we are embarking on a philosophical thought experiment rather than an engineering project.

3.6.1 Machine Ethics Examples

The process for an ethical AI embedded in a robot starts with sensor input. We assume sensor input can be converted into symbols and that these symbols are fed into the moral cognition portion of the robot’s control system. The moral cognition system must determine how the robot should act. We use the term symbol grounding to refer to the conversion of raw sensor data into symbols. Symbols are used to represent objects and events, properties of objects and events, and relations between objects and events.
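
As an illustration, the following is a minimal sketch of symbol grounding in Python. The Detection class, the ground() function and the confidence threshold are hypothetical stand-ins for a real perception pipeline, not part of any existing robot API.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "car" or "toddler"
    confidence: float   # classifier confidence between 0 and 1
    speed_kmh: float    # estimated speed from object tracking

def ground(detections, speed_limit_kmh=50.0):
    """Convert raw detections into symbolic facts for the moral reasoner."""
    facts = set()
    for d in detections:
        if d.confidence < 0.9:   # ignore uncertain detections (arbitrary threshold)
            continue
        facts.add(("is", d.label))
        if d.label == "car" and d.speed_kmh > speed_limit_kmh:
            facts.add(("speeding", d.label))
    return facts

print(("speeding", "car") in ground([Detection("car", 0.97, 72.0)]))   # True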

Reasoning in an AI typically involves the use of logic. Logic is truth-preserving inference. The most famous example of logical deduction comes from Aristotle: from the two premises “Socrates is a man” and “all men are mortal”, the conclusion “Socrates is mortal” can be proved. Given the right logical rules, the premises needed to deduce an action can be built from the symbols the robot senses in its environment.
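
The syllogism can be reproduced with a toy forward-chaining step, sketched below under the assumption that facts and rules are simple tuples; this is an illustration, not a full logic engine.

facts = {("man", "Socrates")}
# One rule: if X is a man, then X is mortal.
rules = [(("man", "X"), ("mortal", "X"))]

def infer(facts, rules):
    """Apply each rule to each matching fact and return the enlarged fact set."""
    derived = set(facts)
    for premise, conclusion in rules:
        for pred, subject in facts:
            if pred == premise[0]:
                derived.add((conclusion[0], subject))
    return derived

print(("mortal", "Socrates") in infer(facts, rules))   # True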

To illustrate, assume that our robot is tasked with issuing tickets to speeding cars. We can also assume that the minimal input the system needs to issue a ticket is a symbol representing the vehicle (e.g. the license plate number) and a symbol representing whether or not the vehicle was speeding (Speeding or NOT Speeding). A logical rule of inference can be stated as “If driver X is speeding, then the robot U is obligated to issue a ticket to driver X.” In much the same way as we can deduce “Socrates is mortal” from two premises, we can derive a conclusion such as “the robot U is obligated to issue a ticket to driver X” from the rule of inference above and a statement like “the driver of car X is speeding.”
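
A sketch of this ticketing rule in the same style is shown below; the predicate names ("speeding", "obligated_to_ticket") and the example plate are invented for illustration rather than drawn from any deontic-logic library.

def obligations(facts):
    """Derive the robot's duties from grounded facts."""
    duties = set()
    for pred, plate in facts:
        if pred == "speeding":
            # If driver X is speeding, the robot is obligated to ticket X.
            duties.add(("obligated_to_ticket", plate))
    return duties

facts = {("vehicle", "ABC-123"), ("speeding", "ABC-123")}
print(obligations(facts))   # {('obligated_to_ticket', 'ABC-123')}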

Now consider a more difficult problem for the machine that is still morally obvious to a human. Imagine a robot is walking to the post office to post a letter. It walks along a path by a stream. Suddenly a toddler chases a duck, which hops into the stream. The toddler slips and falls into the water, which is one metre deep. The toddler is in imminent danger of drowning. The robot is waterproof. Should it enter the water and rescue the toddler, or should it post the letter? The question is morally obvious to a human, but a robot has neither empathy nor feelings of urgency and emergency; it needs rules to make the decision. To solve the toddler problem the robot must have an understanding of causation: if the toddler remains in the water, he or she will drown, but the letter can be posted at any time. If the robot rescues the toddler, the toddler will not drown, but the robot may miss an opportunity to post the letter, and as a result the letter will arrive a day late.

How does the robot determine what to do? It needs to represent the value of a saved life compared to the value of a one-day delay in the arrival of a posted letter at its destination. In deontological terms the robot has two duties, to save the toddler and to post the letter. One has to be acted on first and one deferred. To resolve the clash between duties we need a scale on which the value of the consequences of the two actions can be compared. Such a scale would value saving the toddler’s life over promptly posting a letter. One can assign a utility (a number) to each outcome. The robot can then resolve the clash of duties by calculating these utilities. Thus, to solve the toddler versus delayed letter problem, we compare the two numbers. Clearly the value of a saved life is orders of magnitude larger than that of a delayed letter, so the duty to post the letter yields to the duty to save the toddler.
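
A minimal sketch of this resolution strategy follows; the utility values are the illustrative numbers used in this section, not a calibrated moral scale, and the function names are invented.

# Illustrative utilities for the outcomes of the two candidate duties.
UTILITIES = {
    "save_toddler": 1_000_000,   # value assigned to a saved life
    "post_letter": 1,            # value of the letter arriving on time
}

def choose(duties):
    """Act on the duty whose outcome has the highest utility; defer the rest."""
    return max(duties, key=lambda duty: UTILITIES[duty])

print(choose(["save_toddler", "post_letter"]))   # save_toddler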

Now suppose that the value we give to the living toddler is \(+\)1,000,000 and the value we give to an on-time letter is \(+\)1. Clearly there are orders of magnitude of difference between the value of the toddler and that of a promptly posted letter. Now consider a scenario in which the robot is travelling to the post office driving a truck with a million and one letters, each valued \(+\)1. The robot sees the toddler fall into the stream and, using the same logic as in the previous example, determines not to stop and help the toddler. The moral arithmetic in this case is 1,000,001 for posting the letters versus 1,000,000 for saving the toddler. It is a narrow call, but posting the letters wins by one point. To implement deontology in machines one needs a way to resolve clashes between duties. If you take a naive consequentialist approach and simply assign and sum utilities over outcomes, you risk counter-intuitive dilemmas such as this illustrative example.
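
The counter-intuitive arithmetic can be reproduced directly, as in the sketch below, which naively sums the per-letter utilities (the values are again the illustrative ones from the text).

LETTER_VALUE = 1
TODDLER_VALUE = 1_000_000

def naive_choice(num_letters):
    """Naive consequentialist comparison: sum of letter utilities vs. one life."""
    post_all_letters = num_letters * LETTER_VALUE   # 1,000,001 in this scenario
    save_toddler = TODDLER_VALUE                    # 1,000,000
    return "post letters" if post_all_letters > save_toddler else "save toddler"

print(naive_choice(1_000_001))   # post letters -- the counter-intuitive answer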

3.6.2 Moral Diversity and Testing

One of the main challenges for machine ethics is the lack of agreement as to the nature of a correct moral theory. This is a fundamental problem for machine ethics. How do we implement moral competence in AIs and robots if we have no moral theory to inform our design?

One could design ethical test cases that an AI has to pass. Ideally we would create many test cases. Moral competence can then be defined with respect to the ability to pass these test cases. In theory, as an agent or robot goes through iterative cycles of responding to new test cases, its moral competence would expand. In so doing, one might gain insights into moral theory.
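
One way to picture such test cases is as a simple test harness, sketched below; the situations, the expected answers and the decide() interface are all hypothetical.

# A tiny bank of moral test cases: (situation, expected decision).
TEST_CASES = [
    ({"duties": ["save_toddler", "post_letter"]}, "save_toddler"),
    ({"duties": ["post_letter"]}, "post_letter"),
]

def moral_competence(decide):
    """Return the fraction of test cases the agent under test answers as expected."""
    passed = sum(1 for situation, expected in TEST_CASES
                 if decide(situation) == expected)
    return passed / len(TEST_CASES)

def example_agent(situation):
    # A stand-in agent that always prioritises saving a life when it can.
    duties = situation["duties"]
    return "save_toddler" if "save_toddler" in duties else duties[0]

print(moral_competence(example_agent))   # 1.0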

Testing and even certifying whether an AI is fair and ethical is currently an important area of research. In 2017, the Institute of Electrical and Electronics Engineers (IEEE) announced a Standards Project that addresses algorithmic bias considerations. Toolkits have been created that help developers to test whether their software is biased, such as the AI Fairness 360 Open Source Toolkit and audit-AI. Some companies offer services to test the bias of algorithms, such as O’Neil Risk Consulting and Algorithmic Auditing, and even big companies like Facebook are working on Fairness Flow, a tool to test for biases. Keeping in mind that this is an area of ongoing inquiry, it should be noted that some researchers are pessimistic about the prospects for machine morality. Moreover, a number of research groups have developed or are developing codes of ethics for robotics engineers (Ingram et al. 2010) and the human-robot interaction profession (Riek and Howard 2014).
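
To make the idea of a bias test concrete, the sketch below computes a statistical parity difference by hand, the kind of group-fairness metric that toolkits such as AI Fairness 360 automate; it does not use their APIs, and the data is invented for illustration.

def statistical_parity_difference(decisions, groups):
    """Difference in favourable-outcome rates between group 'a' and group 'b'."""
    def rate(group):
        favourable = sum(d for d, g in zip(decisions, groups) if g == group)
        return favourable / groups.count(group)
    return rate("a") - rate("b")

decisions = [1, 1, 0, 1, 0, 0, 1, 0]          # 1 = favourable decision
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(decisions, groups))   # 0.5, a clear disparity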

Discussion Questions:

  • Explain the difference between normative ethics and meta-ethics.

  • How would you judge the behaviour of a manager of an AI company who improves the fairness of their algorithms in order to increase the profit of their company? Discuss both from a deontological and from a consequentialist point of view.

  • Do you think AI can become ethical? Can ethics be programmed into a machine? Discuss.

Further Reading: