1 Introduction

In his seminal 1980 paper “Do Artifacts Have Politics?,” Langdon Winner discusses the famous case of Robert Moses’s overpasses. Moses was a highly influential urban planner who worked in the state of New York during the first half of the twentieth century and helped shape some of its most important spaces, from Central Park in New York City to Jones Beach, the widely acclaimed recreational park on Long Island. The overpasses in question, also located on Long Island, have a peculiar feature: they are so low that automobiles can easily pass under them, while trucks and buses cannot access the roads over which they are built. Notably, some of these roads are the ones leading to Jones Beach. Far from being an unintentional mistake in Moses’s design process, these overpasses are rather the expression of his racial prejudices (Winner, 1980). Moses designed them precisely to make access to areas such as Jones Beach easy for automobiles and particularly difficult for public transportation. The reason? To prevent people with low incomes, who at that time often did not own private cars, from accessing these recreational areas. In his view, the overpasses were built so as to reserve access to these areas for “automobile-owning whites of ‘upper’ and ‘comfortable middle’ classes, as he called them […]. Poor people and blacks, who normally used public transit, were kept off the roads because the twelve-foot tall buses could not get through the overpasses. One consequence was to limit access of racial minorities and low-income groups to Jones Beach” (Winner, 1980, p. 124).

This case clearly exemplifies how design entails moral considerations, although here in a very negative sense. It is also an instance of the claim that morality is not only a matter of humans but also a matter of how artifacts are designed and of how they shape human perceptions and actions, as many scholars have discussed, among others the famous sociologist Bruno Latour. For example, speed bumps, according to Latour, incorporate the normative prescription that drivers should slow down before reaching them. Hence, how artifacts are designed can deeply influence human actions, including moral decision-making. This does not mean that artifacts are capable of moral reasoning, but that they can be designed so as to shape human moral decision-making.

This chapter deals with the ethics of design and, in particular, with so-called moralizing technologies. According to Verbeek (2011), the moralization of technologies is the deliberate attempt to design them so as to shape moral decision-making. This makes moralizing technologies deeply connected to responsibility in the design of technologies, and in particular to active responsibility, that is, the deliberate attempt to design technologies both to avoid negative consequences and to promote positive ones. The overall goal of the chapter is to critically analyze the promises and perils of moralizing technologies, with particular attention to computer and digital technologies. Beyond presenting some of the traditional issues extensively discussed in the literature so far, the chapter aims to highlight some novel issues that have not yet received the attention they deserve. The structure of the chapter is as follows. Section 2 illustrates the conceptual framework of the discussion, in particular the invisibility factor and the notion of experimental technology. Section 3 presents moralizing technologies through two thought experiments. Section 4 focuses on the critical issues and challenges in the moralization of technologies. Finally, Sect. 5 concludes the chapter by summarizing its main content and considering some open issues.

2 Conceptual Framework

The idea that artifacts are “bearers of morality,” designed so as to shape human decision-making, is not new, as we briefly discussed in Sect. 1. In this section, we focus on computer technologies and, in particular, on two features that have an impact on their role qua moralizing technologies.

The first feature is the invisibility factor, described for the first time by the computer ethicist Jim Moor (1985). According to Moor, computer operations are invisible: one can know the inputs and outputs of a computer but be only dimly aware of its internal processing. This invisibility contributes to generating policy vacuums concerning the use of computer technologies and their ethical significance. Moor distinguishes three kinds of invisibility: invisibility of abuse, invisibility of programming values, and invisibility of complex calculation.

Invisibility of abuse covers unethical behaviors that exploit the invisibility of computer operations. An example is a programmer who steals money from a bank by writing a program that transfers excess interest to their own account. Not only is this abuse very different from walking into a bank with a gun and demanding money from the teller, but it is also more difficult to detect, because the computer operations that make it possible are mostly invisible.

Invisibility of programming values concerns the values of programmers, which are usually embedded into their programs in an invisible way. As programs are the result of human processes, Moor stresses, they contain human values, both in a positive and in a negative sense. Moreover, these programming values can be inserted into programs both intentionally and unintentionally. For example, an airline reservation program can be designed to present the flights of a particular airline as the best results, even when they are not the most convenient ones. Programming values can also enter a program without deliberate intent, for instance when programmers are unaware of their own biases. A minimal sketch of how such a preference might be silently embedded in ranking code is given below.
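The following sketch is purely illustrative and assumes a hypothetical reservation system: the airline names, the scoring formula, and the 20% boost are inventions meant only to show how a programming value can hide inside an apparently neutral ranking function.

```python
from dataclasses import dataclass

@dataclass
class Flight:
    airline: str
    price: float        # ticket price in euros
    duration_min: int   # total travel time in minutes

PREFERRED_AIRLINE = "AcmeAir"  # a value chosen by the programmer, invisible to the user

def ranking_score(flight: Flight) -> float:
    """Lower is better: nominally combines price and travel time."""
    score = flight.price + 0.5 * flight.duration_min
    # The embedded programming value: quietly favor the preferred airline.
    if flight.airline == PREFERRED_AIRLINE:
        score *= 0.8  # 20% artificial advantage, never shown to the user
    return score

flights = [
    Flight("AcmeAir", price=220.0, duration_min=310),
    Flight("BudgetFly", price=180.0, duration_min=290),
]

# BudgetFly is cheaper and faster, yet AcmeAir ends up ranked first.
for f in sorted(flights, key=ranking_score):
    print(f.airline, round(ranking_score(f), 1))
```

A user who only sees the ordered list of flights has no way of knowing that the ordering criterion includes a commercial preference.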

Invisibility of complex calculation describes how computers are capable of calculations so complex that they go beyond human comprehension, which is, after all, one of the reasons why computers were created. An interesting example is the four-color conjecture, proved in 1976 with the help of a computer program at the University of Illinois. Although proposed in 1985, the three kinds of invisibility remain valid. It is not difficult to recognize how they apply to many of the situations we experience today, from the various types of bias in artificial intelligence (AI) algorithms to the complexity of deep learning techniques.

The second feature of digital technologies we wish to highlight in this section is connected to what the ethicist of technology Ibo van de Poel has labeled experimental technologies (van de Poel, 2016). Experimental technologies are those whose risks and benefits are hard to estimate before they are actually inserted in their context of use: “I will call technologies experimental if there is only limited operational experience with them, so that social benefits and risks cannot, or at least not straightforwardly, be assessed on basis of experience” (van de Poel, 2016, p. 669). According to this characterization, nanotechnologies, algae based on synthetic biology, autonomous vehicles, and human enhancement drugs are examples of experimental technologies. In this chapter, however, we focus exclusively on experimental computer technologies. For instance, several applications adopting AI or machine learning (ML) techniques are experimental in the sense suggested by van de Poel. The inherent complexity of these technical artifacts, together with the uncertainty of their interaction with the environment and with users, makes it very difficult to predict their benefits and risks precisely. This is potentially true of any sufficiently complex technology. Indeed, many technologies in their initial phases of development are affected by the famous Collingridge dilemma (Collingridge, 1980). The dilemma contrasts the early phases of a technology, when its social embedding is still highly uncertain, with the later stages, when this uncertainty may have decreased but the technology is so entrenched in society that it is too late to overcome its negative effects. In the case of some current computer technologies, this experimental nature is very evident: it is not by chance that they are sometimes labeled emerging technologies, to further stress it. This nature raises several concerns about the possibility of anticipating and predicting their risks, which, in the case of computer technologies, are particularly serious because their extensive diffusion means they are likely to affect very large portions of the population.

The invisibility factor and the notion of experimental technology can be profitably used as interpretative frameworks for some current computer technologies. Moreover, they have an impact on the notion of moral responsibility as discussed in the current ethics of technology. The traditional paradigm of responsibility is usually centered on what is called the passive approach: when something undesirable has occurred in the development or use of a technology, the idea is to look backward and reconstruct who is responsible for the negative outcome. Beyond passive responsibility, a different approach has been proposed in recent years: active responsibility, that is, the responsibility that is relevant before something negative has occurred. In other words, active responsibility is about both preventing the negative effects of a technology and designing it to realize its positive effects. Active responsibility thus promotes a proactive approach to technological development and shows how technological design can play an essential role in addressing responsibility (van de Poel & Royakkers, 2011). Responsibility here is not only a form of backward-looking accountability, in the sense of being held to account for, or having to justify, one’s actions toward others, but a proactive attitude according to which designers are morally accountable already at the beginning of the design process. The idea of designing technologies to avoid negative effects and to promote positive ones is very powerful, and it tries to anticipate the solution of some issues already at the design level. At the same time, this anticipation is extremely delicate: steering technological development is always difficult because of its high level of unpredictability, and it becomes particularly difficult when dealing with technologies that are both experimental and invisible in the sense outlined above.

In the next section, we will move further in this direction and focus on moralizing technologies, that is, technologies designed to promote positive effects and to steer human moral decision-making. The idea is that moral decision-making can be the result of human processes together with human interactions with technologies. In other words, the moralization of technologies exploits the possibility of moralizing our material environment, technologies included, in addition to the usual possibility of moralizing people.

3 Moralizing Technologies

To better understand the nature of moralizing technologies, let us introduce a couple of thought experiments. Thought experiments are traditionally used in philosophy as “devices of imagination” for various purposes (Brown & Fehige, 2022). A long tradition of thought experimentation also characterizes scientific reasoning, including prominent natural philosophers such as Galileo Galilei, Gottfried Leibniz, and Isaac Newton, and scientists like Albert Einstein. Here, we devise two thought experiments based on realistic and partly already existing technologies, but conceived so as to stretch our imagination in some interesting directions.

The first one is an alcohol lock for cars. Existing alcohol locks analyze the driver’s breath, check the alcohol level in their body, and signal whether this level is above the limit imposed by the law. If the alcohol level is beyond this threshold, the car stays locked, and the driver cannot use it. Let us now suppose that cars equipped with this alcohol lock were no more expensive than cars without it. Let us also suppose that they had some other desirable features, difficult if not impossible to obtain in reality. First, all the personal data collected during the analysis of the alcohol level would stay completely private: only the user could know and access them. Second, the alcohol lock would work perfectly, producing neither false positives nor false negatives. Finally, the process of analyzing the alcohol level would be so smooth and fast that the driver would spend only a minimal amount of time on the check. As should be clear, these last three features are imaginary in one way or another: we are well aware that, in reality, technologies cannot work without the possibility of any mistake, nor can personal data be 100% protected. However, the goal of this thought experiment is not to focus on the details of the design of such a device but rather to investigate the opportunities and challenges of moralizing technologies. And here it is very clear that the alcohol lock for cars is a technology designed to moralize drivers. Similarly to Latour’s speed bumps telling the driver “slow down before reaching me,” alcohol locks for cars incorporate in their design the maxim “don’t drive when you have drunk too much” (a minimal sketch of this logic is given below). It is important to stress that today, notwithstanding an increased awareness of the dangers of drunk driving, many accidents still occur for this reason, and the efforts made to change the cultural attitude toward this problem do not seem sufficient to solve it. Automobiles equipped with alcohol locks possessing the desirable features listed above appear to be a promising way to solve this issue definitively.
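To make the structure of this moralization concrete, here is a minimal sketch of the decision logic the thought experiment stipulates. The function names, the assumed legal limit, and the perfect measurement are idealizations introduced for illustration, not the interface of any real device.

```python
LEGAL_LIMIT_G_PER_L = 0.5  # assumed legal limit; the actual value varies by jurisdiction

def may_start_engine(measured_alcohol_g_per_l: float) -> bool:
    """The maxim 'don't drive when you have drunk too much', written as code.

    The thought experiment stipulates a perfect sensor (no false positives
    or negatives), so the measurement is taken at face value.
    """
    return measured_alcohol_g_per_l <= LEGAL_LIMIT_G_PER_L

def start_car(measured_alcohol_g_per_l: float) -> str:
    if may_start_engine(measured_alcohol_g_per_l):
        return "Engine started."
    # The norm is not merely stated: violating it is made physically impossible.
    return "Car stays locked: alcohol level above the legal limit."

print(start_car(0.2))  # Engine started.
print(start_car(0.9))  # Car stays locked: alcohol level above the legal limit.
```

The point of the sketch is simply that the legal norm is not suggested but enforced: when the condition fails, starting the car is not discouraged but made impossible.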

The second thought experiment also focuses on a serious and very urgent issue: the scarcity of water and the consequent need to save and efficiently manage it. We are well aware today of the importance of water and how vital it is to save it, particularly in some areas of the world. Although the problem of water might seem not to be a matter of life or death, unlike the drunk-driving case, water is an essential resource, and its scarcity has a profound impact on human lives at both the individual and the collective level, as in the migrations, wars, and other tragedies caused by drought. Individual behaviors can make a difference in water preservation; at the same time, many of us are used to having plenty of water available and, for example, to taking very long showers with scarce attention to the amount of water consumed. Here, again, we can imagine a moralizing technology that supports us in saving water without reducing the comfort of our long showers. Let us imagine, in this case, a smart showerhead that, once fitted to our shower, can reduce our daily water consumption by up to 50%. Once again, this device would be economically affordable and very easy to use. The label “smart” aims to stress two important elements. The first is the idea of a technology that solves the problem in a smart way. In terms of the goal of saving water alone, a shower programmed to stop after a fixed time, say two minutes, when the allowed daily consumption of water has been reached, would do just as well. But of course, it would not be the same in terms of comfort: no one would buy a shower that risks cutting off the water while, for example, there is still shampoo to rinse out of one’s hair. The second element is the idea that the imagined showerhead can learn from our habits, so that the shower experience is tailored to our preferences and, at the same time, allows us to save water. Here, once again, implementation details are not the core of our thought experimentation. Rather, what matters is the goal of this exercise in imagination: the smart showerhead, once fitted to our shower, can save water in a smart way without reducing the comfort of the shower experience. For example, the device could learn that we do not like much water while soaping, and adapt the flow of the shower accordingly, while providing a strong flow when we rinse our hair (a minimal sketch of this adaptive logic is given below). Finally, imagination is important to stress, exactly as in the previous case, that this technology works smoothly, without any error, and protects the collected data perfectly.
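As with the alcohol lock, a small sketch can make the stipulated behavior concrete. Everything in it, the phase names, the default flow, the simple running-average learning rule, and the daily budget, is an assumption introduced for illustration; the thought experiment deliberately leaves the real implementation open.

```python
from collections import defaultdict

class SmartShowerhead:
    def __init__(self, daily_budget_liters: float = 40.0):
        self.daily_budget = daily_budget_liters
        self.used_today = 0.0
        # Learned preference: average flow (liters/min) the user chose per phase.
        self.preferred_flow = defaultdict(lambda: 8.0)  # assumed default flow

    def observe(self, phase: str, chosen_flow: float) -> None:
        """Update the learned preference with a simple running average."""
        self.preferred_flow[phase] = 0.8 * self.preferred_flow[phase] + 0.2 * chosen_flow

    def flow_for(self, phase: str, minutes: float) -> float:
        """Deliver the learned flow, scaled down if the daily budget is nearly used up."""
        remaining = max(self.daily_budget - self.used_today, 0.0)
        flow = min(self.preferred_flow[phase], remaining / minutes if minutes else 0.0)
        self.used_today += flow * minutes
        return flow

shower = SmartShowerhead()
shower.observe("soaping", 4.0)   # the user likes little water while soaping
shower.observe("rinsing", 10.0)  # ...and a strong flow while rinsing
print(round(shower.flow_for("soaping", minutes=2.0), 1))
print(round(shower.flow_for("rinsing", minutes=3.0), 1))
```

The moralization is gentler here: the water budget constrains the flow, but the learned preferences decide where the constraint is felt least.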

These two fictional but realistic cases illustrate the power of moralizing technologies: they show how moral decision-making can become a matter of both humans and technologies. This does not mean, of course, that technologies are capable of moral reasoning, but that they constrain, influence, and shape our moral decision-making in some decisive ways. Both illustrate the moralization of technologies as “the deliberate development of technologies in order to shape moral action and decision making” (van de Poel & Royakkers, 2011, p. 207). It is not by chance that in this section we have imagined two moralizing technologies that, contrary to Moses’s racially motivated overpasses, steer our moral action in the direction of positive values, such as avoiding alcohol-related car accidents and saving water. However, as we will describe in the next section, designing technologies to achieve positive outcomes is not enough to eliminate the many critical reactions this approach can provoke.

4 Exploring the Promises and Perils of Moralizing Technologies

Moralizing technologies offer several promises in terms of positively influencing human actions by moralizing the material environment, technologies included, in which humans live. Both thought experiments of Sect. 3 show how the solution of very serious problems can be pursued by means of technologies designed to promote an active approach to responsibility. In the first case, the alcohol lock for cars tells you “don’t drive when you have drunk too much”; in the second case, the smart showerhead tells you “don’t waste water.” Yet, there is a very significant and immediate difference between these two technologies. In the first case, the goal of avoiding alcohol-related car accidents is attained by means of a strong limitation on our actions, whereas in the second case there is no apparent limitation to our freedom: we can take showers as long and comfortable as we like. This difference is reflected in people’s reactions: when asked whether they would buy the alcohol lock for cars, most answer negatively, while when asked whether they would buy the smart showerhead, almost all answer positively.

There are also some common elements worth considering when analyzing the critical aspects of moralizing technologies. The first concerns the fear that technologies, and not humans, are in control: such a fear is usually perceived more strongly in the alcohol lock example. This is a key point for at least two reasons. First, technologies in control, particularly in sensitive contexts, raise concerns about possible technocratic drifts in which humans might be governed by machines. There is no need to appeal to science fiction or imagine dystopian future scenarios: it is enough to observe how many decisions impacting both individuals and societies (e.g., police profiling or court sentencing) are increasingly delegated to decision systems based on algorithms (Crawford, 2021; O’Neil, 2016; Scantamburlo et al., 2019). Second, at least in recent history, human autonomy is strongly and deeply intertwined with dignity. Even if the relationship between autonomy and dignity has a long tradition, it has recently been revived by the debate on recommender systems that, learning from our previous choices, suggest what movie to watch next, what song, what book, what purchase, but also which friends to make or which romantic relationships to engage in, thus shaping how we see the world (Zuboff, 2019). Are we still autonomous in a context in which the fabric of our societies is woven with these silent and invisible computer technologies? Every time human autonomy is touched upon, the risk of losing dignity emerges, in a way that easily drifts toward complete dehumanization when technologies are in full control. A case in point is the discussion of self-driving cars and the idea of programming them to decide whom to kill (a young kid or a group of elderly people?) in the case of unavoidable accidents, transforming ethical reasoning into a calculation while dismissing complex human deliberative processes (Fossa, 2023).

The second negative reaction toward the moralization of technologies, and of computer technologies in particular, concerns the risk of losing the capability of moral decision-making. The worry is that the constant and increasing delegation of our decisions to machines, including those with a strong moral impact, could make us incapable of exercising our moral competence. As moral decisions are complex and the result of articulated processes of deliberation, the risk is that we become incapable of dealing with this complexity if our moral competence is not constantly exercised. One could argue here that moral decision-making is a sort of innate human capability and that, even if delegated to technological artifacts, it will not disappear. Yet, the risk of becoming lazy and unaware of the moral scope of many of our decisions is real and could move us toward a possible de-responsibilization. Moreover, there are situations in which it is crucial to suspend this delegation to technologies and exercise one’s own judgment (Nowotny, 2022).

Before delving into the third type of negative reaction, where the key question is one of power, of who decides how moralizing technologies are to be shaped, it is worth stressing some further elements of the two thought experiments introduced in Sect. 3. Both involve limitations to human autonomy. The limitation is evident in the first case, where it is physically impossible to drive when drunk, while in the second case it is subtler: to save water, the user has to defer to the preferences learned by the smart showerhead, which, in order to achieve this saving, adapts to their habits while preserving the comfort of the shower experience. Limitations to human freedom are, of course, common experiences in everyday life, and we live in societies where laws constitute an example of such limitations. There is, however, a substantial difference between the limitation imposed by the law, which prohibits driving with a certain amount of alcohol in the blood, and that imposed by a technology, such as the alcohol lock for cars, that implements this law. In the case of the law, one has the freedom to decide not to follow it (with all the risks and possible consequences of this decision), whereas in the case of the alcohol lock for cars it is precisely this possibility that is eliminated: if the level of alcohol in their body is beyond the legal limit, it is physically impossible for the driver to use the car. Such physical impossibility does not hold for every moralizing technology. In the smart showerhead case, for example, the moralizing technology clearly does not prevent one from taking a shower; it only shapes how one takes it. This probably explains the different attitudes and reactions people have toward the two thought experiments. At the same time, it shows the importance of how these moralizing technologies are designed. Would it be possible to conceive an alcohol lock for cars that works in a different way? A better solution would probably be a design closer to the insistent chime of seat belt reminders, which do not prevent one from driving when the belt is not fastened but constantly recall this fact.

This is not the place to investigate possible better designs for moralizing technologies (one crucial question would be whether it is possible to identify the boundary between the benefits of these technologies and the attempts to escape the limits to freedom they impose). Rather, it is the place to discuss one critical element of moralizing technologies that has not been sufficiently debated so far: whether there is a way to moralize technologies democratically. And here, hopefully, our thought experiments will be useful in illustrating the problem.

A further critical element in the moralization of technologies is the fact that this process is usually the result of invisible decisions by small groups of people and not of public deliberation carried out in democratic terms. In this respect, the first case (the alcohol lock for cars) is paradoxically less problematic than the second (the smart showerhead). Indeed, the alcohol lock for cars implements a law that is the result of a democratic process. In democracies, at least ideally, laws are decided by elected representatives. Therefore, there should be a clear sense of responsibility in deciding and setting up any law: in theory, this process should be transparent and those who decided it accountable. Of course, it is in the passage from the level of the law to the level of the technology that implements it, as in the case of the alcohol lock for cars, that human autonomy is limited. The smart showerhead is different. One has the choice of whether or not to buy it, and we have seen that the perceived and effective degrees of freedom are wider. However, in this case, who decides how the technology should be moralized and which values should be embedded is not only opaque but also not the result of a democratic and publicly debated process. This problem arises whenever the choices behind the selection of certain values and their technological implementation are mostly invisible and not subject to public discussion, oversight, and control. Whether technologies can be moralized in a democratic way is an open question that cannot be solved in the space of this chapter. A good starting point is the awareness that the issues at stake are not only moral but also political.

The discussion of these critical elements shows that designing technologies for the good is not enough. First of all, unintended consequences can always arise if we consider the design process merely as the translation of constraints (even of a moral nature) into a technical artifact. For example, it might be the case that, in order to save water, the smart showerhead increases overall energy consumption, because training the algorithm capable of “smartly” regulating the flow of the water requires a large amount of energy. Moreover, the invisibility factor, typical of any computer technology, plays a major role in the case of moralizing technologies: it is not only the inner workings of the algorithm that are opaque but also the socio-technical process shaping the moral character of these technologies. Finally, given that moralizing technologies can be experimental in the sense discussed in Sect. 2, the high degree of uncertainty makes it very difficult, if not impossible, to assess their risks and benefits at the design level.

5 Conclusions

Artifacts do have politics, and today this is even more evident, as many of our decisions are made through technologies, and through computer technologies in particular. Technological design is a complex process that requires moral choices and not merely technical ones. In this chapter, we have discussed how the moralization of technologies, in accordance with active responsibility in the ethics of technology, is a promising approach. At the same time, we have highlighted some important critical elements that should be taken very seriously at this stage. These criticalities show how technological design is a complex socio-technical process that cannot be reduced to its technical elements. Not only should the people who will use these technologies play a role in this process, but the intrinsically moral and political connotation of the process should also be clearly recognized.

This awareness can be translated into action at different levels: designers cannot simply inscribe a technical function into the design of a technology; policy makers need not only to regulate but also to intervene in the co-shaping of technologies from the design phases onward; citizens must be aware that it is not enough to have technologies designed for the good, and that it is essential to know and discuss who decides which values are embedded in technologies, and how.

This is a quite radical shift of perspective, particularly at a time in which new policy vacuums emerge every day. One role for philosophy is thus to fill these vacuums by means of conceptual clarification (Moor, 1985): to regulate technologies, it is essential to understand their nature. This is not a job for philosophy alone but rather an interdisciplinary effort devoted to asking questions, analyzing problems, and discussing possible solutions, building on the strengths of many different disciplines.

Discussion Questions for Students and Their Teachers

  1.

    Can you think of examples of the invisibility factor connected to current computer technologies? Do you think new kinds of invisibility (beyond invisibility of abuse, programming values, and complex calculation) should be proposed to describe current computer technologies?

  2.

    Discuss possible ways to moralize digital technologies in a democratic way by means of examples.

Learning Resources for Students

  1.

    Kroes, P. and Verbeek, P.P. (2014) (eds.) The Moral Status of Technical Artefacts. Springer.

    A book containing several arguments and counterarguments on the moral status of technology and technical artifacts. One of the foundational books in the analytical approach to the philosophy of technology.

  2.

    Johnson, D. (2008) Computer Ethics. Fourth Edition. Prentice Hall.

    One of the first textbooks in computer ethics adopting a socio-technical approach. A bit outdated with respect to the examples, yet very interesting in terms of theoretical frameworks.

  3.

    Pelillo, M. and Scantamburlo, T. (2021) (eds.) Machines We Trust. Cambridge (MA): MIT Press.

    Edited volume presenting contributions that consider the “ethical debts” of AI systems. It presents a variety of issues and approaches.

  4.

    Peterson, T., Ferreira, R. and Vardi, M. (2023) ‘Abstracted Power and Responsibility in Computer Science Ethics Education’ in IEEE Transactions on Technology and Society, 4:1, 96–102.

    A paper discussing the concept of abstracted power to describe how technology may distance computer scientists from consequences of their action. It stresses how abstracted power impacts on responsibility.

  5.

    Taebi, B. (2021) Ethics and Engineering. Cambridge: Cambridge University Press.

    A comprehensive view on the ethical issues of engineering with an attention to engineering practice. An advanced textbook with a scholarly approach.