Morality is commonly understood to consist of the values, norms, and habits that are taken to be self-evident in a specific group of people. In daily life, morality is thought to show itself in people’s judgments and their expressions of praise and blame, but it can also remain implicit in what people think they have to do, what they find worthwhile to strive for, their feelings of achievement and satisfaction, and their sense of shame, guilt, regret or uneasiness about how things are going. Ethics, by contrast, is taken to be a discipline that reflects on this morality. Ethics investigates morality and asks what our moral judgments actually mean, what they are about, and how we can justify them or find out that something should be changed.

What these presuppositions show is that, in the most common understandings, morality and ethics figure as distinctively human affairs. This humanist understanding of ethics is a heritage of our Enlightenment past, in which the modern sciences were constructed on the basis of an ontological distinction between subjects and objects, a distinction that also shaped the most influential approaches to ethics formed during that period. Kantian deontological ethics focuses on the subject as the source of ethics, making autonomous choice the basis of moral agency. Utilitarian consequentialist ethics, such as that proposed by Jeremy Bentham, shapes ethics according to a scientific model. By means of an analogy between the meaning of ‘good’ and ‘bad’ and experiences of pain and pleasure, Bentham gave moral judgments a basis in experiences of the objective world.

In view of this past, it is unsurprising that ethical reflections on technology have most commonly focused on human beings as well. Ethics of technology has largely been a variety of applied ethics—biomedical ethics, ethics of information technology, ethics of nanotechnology—investigating the specific problems that technologies raise for human beings, such as designers or users of technology. In these approaches, technologies figure as instruments which, just like other objects, are morally neutral in themselves. Ethical reflection then concentrates on the choices that human agents make when they create new technologies or use them to realize their purposes, or on the pleasure and pain (risks) that technologies bring about for members of society at large.

It is very rare that ethics of technology focuses on the morality of technology itself. This, however, is the ambition of Peter-Paul Verbeek in his book Moralizing technology: Understanding and designing the morality of things. The project flows naturally from his earlier work, most notably his book What things do: Philosophical reflections on technology, agency and design (2005), in which he studied the relation between human beings and material culture. Building on Bruno Latour’s perspective on actor–networks, in which human and non-human agents (artifacts) are thought to co-shape each other’s identities and activities, and on Don Ihde’s intricate analyses of the ways in which technologies shape human experiences of reality, Verbeek develops his own philosophy of technological mediation, in which he shows how understandings of human life—including understandings of the good life—are always generated in contexts in which other human beings and artifacts are also present. These contexts of interaction shape the lives that people are able to lead, because they influence interactions, daily routines, and perceptions of reality. Such influences may also be morally relevant, for new technologies can make people responsible in ways that would be inconceivable without the technologies available in a specific context.

While Verbeek’s approach in What things do was analytic and descriptive, in Moralizing technology he calls for the development of a normative ethics that takes the moralizing powers of technologies seriously. The need for such a normative ethics is illustrated with his example of the prenatal ultrasound scan, to which he returns several times in the book. In the Netherlands this scan is done when a woman is 20 weeks pregnant, in order to find out whether the unborn child has Down syndrome or spina bifida. But the scan does not fulfill only a medical purpose; it also changes the moral relationships of all human agents involved. While the scan visualizes the child as an individual being, thus breaking open the privileged child–mother relationship and helping fathers to imagine their future fatherhood, it also makes the parents responsible for this being in new ways: even before its birth, the child is looked at as a potential patient, and parents may be confronted with a decision about whether or not this child should be born. Furthermore, this responsibility for the life of the child is inescapable: even though prenatal screening is voluntary, parents who refuse it are responsible for accepting the risk of bringing into the world a child with Down syndrome or spina bifida. There is no way for parents to return to a time when there were no ultrasound tests and no such decision had to be made.

An example such as this one shows, according to Verbeek, that we need an ethics of technology that not only focuses on the decisions that human beings take, but also investigates how technologies bring about situations that impose new moral demands on human beings. We need to investigate the moral significance of technological artifacts themselves. In Moralizing technology, Verbeek argues that this demands a posthumanist ethics, which no longer takes the Enlightenment subject–object distinction as its self-evident starting point, but leaves room for the possibility that morality takes a material shape. For this posthumanist ethics, Verbeek builds on Peter Sloterdijk, who in Rules for the human zoo criticized the humanist understanding of human beings as animals gifted with reason and speech for neglecting the “biological and ‘material’ aspects of human beings”, aspects which will in the near future become part of the domain of ethics because it will become possible to shape humankind with a combination of genetic and reproductive techniques (p. 36).

From Heidegger and Latour, Verbeek adopts a practice-based approach to human–technology interactions, which is considered prior to the ontological subject–object opposition that was the starting point of Enlightenment ethics. In daily occupations—such as cooking, conversing, playing, or carpentry—human beings do not relate to things as subjects toward objects, but simply use them in the course of what they are doing. The objectifying gaze that a subject can adopt towards objects only emerges in situations in which the interaction with things is somehow disturbed, for example because a thing has a defect or is broken. The practical being-in-the-world that Heidegger described develops in Latour’s work into actor–network theory, in which the way human beings deal with their material environment is understood as a web or network “(…) of relations in which humans and world are intertwined and give meaning to each other” (p. 28).

These everyday relations between human beings and technologies are lost in the subject–object distinction, but according to Verbeek we need them if we want to understand the way in which technologies matter morally. They form the basis of the approach that he develops in Moralizing technology, in which he carefully takes the reader through a series of argumentative steps. After his defense of a posthumanist ethics, he specifies how artifacts can be part of the moral community, how within this human–technology interaction there is still room for an ethics for human agents, and how the morality inherent in things should be ethically dealt with in design.

A short review like this cannot do justice to the fine way in which Verbeek brings forward these original ideas, building on important predecessors but also distancing himself from them when that seems fitting. But it is clear that his characterization of the morality inherent in technological artifacts and of the moral attitude that human agents are able to develop presupposes that the moral character of both takes shape in relation to each other in networks of interaction. The ethics that he proposes is therefore an approach that takes contexts of interaction as its starting point. This cannot be a version of Enlightenment ethics; instead it takes its inspiration from an ethics of the good life, which starts from the interwoven character of subjects and objects and helps human beings to develop a thoughtful relation to both. A good life approach leaves more room to see morality not only as the result of human decision-making, but also to think through how moral action takes shape in practices that connect people to the material world in which they live. The later work of Foucault, according to Verbeek, provides the most inspiring starting point for the development of such a good life ethics of technology. While Foucault did not primarily focus on technologies, the approach to ethics that he adopted in his later work takes as its starting point a human subject who acquires meaning in social contexts characterized by societal forces and structures, an approach that could accommodate technological mediating powers as well.

Verbeek’s book is a very interesting and inspiring piece of work. As for shortcomings, there is only one point to raise: Verbeek does not actually question whether we want to engage in some of these interactions with technologies at all. Of course, he is right that technology is already pervasive in our lives and that it is impossible to take it out of our world. It may be good and wise to anticipate new interactions with technologies and to integrate new technologies into our perspective on the good life. But we might also think about the limits that we want to impose on their influence on our lives. Do we want to shape all aspects of our lives—including, for example, our reproduction—in connection with technologies? Or do we want to restrain or diminish the effects of technology on some aspects of our lives? Verbeek’s ethics of technology seems to provide little basis for defining such limits. Verbeek tries to improve the moral character of the technologies that will come about, and seems to presuppose that they will indeed come.