Science and Engineering Ethics, Volume 15, Issue 1, pp 51–68

Technological Delegation: Responsibility for the Unintended

Open Access
Original Paper


This article defends three interconnected premises that together demand a new way of dealing with moral responsibility in the development and use of technological artifacts. The first premise is that humans increasingly make use of dissociated technological delegation. Second, because technologies do not simply execute our actions but rather mediate them, the initial aims change and the outcomes often differ from those intended. Third, since the outcomes are often unforeseen and unintended, we can no longer simply apply the traditional (modernist) models for discussing moral responsibility. We need to reinterpret moral responsibility. A schematic layout of a model of Social Role Responsibility that incorporates these three premises is presented to allow discussion of a new way of interpreting moral responsibility.


Keywords: Moral responsibility · Technological artifacts · Dissociated technological delegation · Social role responsibility


In a wealthy neighborhood, a small committee complained about the amount of traffic in their streets. The City Council responded and asked one of its civil servants to solve the problem. He ordered traffic locks to be built throughout the neighborhood and promised to assess the situation in a year. His evaluation showed that, instead of using the often jammed main route, many drivers took a detour through a poorer area where many young families live. The report showed that in this neighborhood significantly more accidents had occurred, several of them fatal for children playing in the street. Quickly, the question arose: who is morally responsible for this unintended and undesired outcome? Are the drivers to blame for not taking the main route, or is it the civil servant who ordered the building of the traffic locks? Or was he just following orders, so that the City Council is to be blamed? We could also consider the parents: were they watching their children closely enough?

Because no one foresaw (and could have foreseen) what would happen, it is hard to place blame on a specific actor. So, should we simply not try to assign moral responsibility? Retrospectively, this may be wise, as pointing out who is to blame is perhaps less fruitful than solving the problem. But prospectively, just because the outcome is hard to foresee, it is not desirable to promote an attitude of “trial and error”—or even worse, an attitude of “anything goes”—by giving up on moral responsibility.

Problems with moral responsibility are common in our current technological culture, mainly because we increasingly make use of technological artifacts.1 Movies have replaced traditional theater plays, delivering entertainment from a distance. With the help of the internet, we can chat in real time with people on other continents and order products from around the world. In warfare, we can push a few buttons and, thousands of miles away, a bomb will drop. For almost everything, we rely on technological artifacts. We delegate, so to speak, many tasks (fully or partially) to technology. We do not deliver messages ourselves, but delegate their delivery to the Internet and to computer programs like Windows and Outlook.

Due to this process of delegation, our actions have an impact over a greater distance and a longer period of time. A remoteness arises between the initial actor and the eventual action that is uncommon when acting directly. Without the remote mediation of technological artifacts, it is impossible for humans to destroy whole cities within a few minutes. Further, technological artifacts can make a difference over a long period of time. For instance, to improve the economy of a rural region, a single-track railway bridge was built over a river about a century ago. Freight could travel from the south of the Netherlands to the Ruhr Area in Germany and back, and people could commute between two Dutch provinces. The new railway brought, together with several other initiatives, economic growth. However, the bridge has just one set of tracks, so only one train at a time can cross the river. Nowadays, the bridge has become an obstacle and often causes delays, as one train every 5 min is no longer enough. But demolishing the old bridge and building a new one is virtually impossible, because the trains would not be able to run for a long time. So the decision to build the single-track bridge had consequences for hundreds of years.

To address the relevance of remoteness in time and space, I will use the term dissociated technological delegation. Dissociated technological delegation involves entrusting a task to an artifact with the aim of affecting others over a greater distance or a longer period of time.

Of course, I am not the first to argue that technologically mediated actions differ from direct actions. Hans Jonas already claimed in the late 1970s that we need to rethink moral responsibility, since the nature of human action has altered [2]. Traditional models for assigning moral responsibility were developed for direct actions in which one or several well-defined agents do something that has certain well-defined consequences. But technological actions are not the same, Jonas argues. First, the “problem of the many hands” has to be considered. Since many people are involved in the development and use of an artifact, assigning moral responsibility becomes more complex. Second, the people involved are often not in each other’s company and are unknown to each other. Nevertheless, their actions are interrelated and lead to new and unexpected outcomes. Third, technologies provide humans with new powers, like flying and curing deadly diseases. Because of these changes, we need a new view on moral responsibility [2].

Thirty years after the first presentation of Jonas’ work, his threefold diagnosis of the problem remains relevant, but new insights call for some nuances. This article discusses Jonas’ diagnosis, using insights produced by Science and Technology Studies (STS) scholars and philosophers of technology on the social role of technological artifacts. In the 1990s, their studies showed that the mediating role of technology is even more complex than Jonas assumed. Among the more comprehensive accounts, Bruno Latour’s [3] and Peter-Paul Verbeek’s [4] views on technological mediation are central in the third section of this article. The mediating role of technological artifacts is explored to sketch the complexity of the relation between the initial action and the eventual outcomes. Because of this complexity, traditional moral responsibility models are no longer helpful; the fourth section discusses this more systematically. The fifth section shows that drawing up static moral rules for dealing with technology, as Hans Lenk suggests [5, 6], will not solve the problem either, since technological mediation constantly changes our actions and our moral viewpoints. Therefore, strict rules or ethical codes are not sufficient over the passage of time. We need another model of moral responsibility, and some features of such a model are sketched in the last section. First, to make clear why assigning responsibility is so problematic with technologically delegated actions, we need to examine how classical theories of moral responsibility perceive direct human action.

Moral Responsibility

To explain why dissociated technological delegation leads to problems in assigning moral responsibility, it is important to see what responsibility means. One of the most complete lists of elements for analyzing the idea was compiled by Günther Ropohl [7]. This list consists of six parts: (1) the subject of responsibility; (2) the object of responsibility (which can refer to the act, its consequences or the attitude of the subject); (3) the petitioner or petitioners; (4) the consequences; (5) the time perspective; (6) the moral norms used for the evaluation.

The first three form the minimal model of moral responsibility, as described by Walther Zimmerli [8] and Joseph Bochenski [6]. This minimal model states that if the actor2 performs an action that has a certain impact on morally relevant entities, the actor can be held responsible [8]. So ‘being responsible’ means the actors (the subjects of responsibility) have to justify their actions (the objects of responsibility) to the affected parties (the petitioners). To complete the understanding of the notion, Ropohl has extended this basic model with three additional elements. First, responsibility can be assigned retrospectively (afterwards) or prospectively (in advance), so the time aspect needs to be added. Second, attributing moral responsibility involves making a value judgment. As Kurt Bayertz argues [9], a strict distinction has to be made between arguing that someone has done something, which is stating a fact, and arguing that someone is morally accountable, which is a normative statement. The final element is that responsibility always refers to the consequences of actions. If an act does not have any (possible) consequences, it is unlikely that questions of responsibility will arise. This does not mean the consequences are the object of responsibility; it just means that we consider questions of responsibility because something happens or could happen. Based on this analysis, being responsible means (given certain norms and values) having to justify one’s acts that have a certain impact on a morally relevant entity.

Ropohl’s analysis and the definition abstracted from it are useful in settling what responsibility amounts to. However, they do not tell us who is responsible for what: who is responsible, or who should take responsibility, in which cases. Hans Lenk has extensively analyzed the existing moral responsibility theories [5, 6]. He distinguishes two “real” types of moral responsibility models: Action Responsibility and Universal Moral Responsibility. Action Responsibility stresses the connection between the actors and the consequences of their actions. Authors defending Action Responsibility argue that actors are responsible if there is a clear causal relationship between their actions and the consequences of those actions. Causality and intentionality are the key notions here. According to Universal Moral Responsibility, “being responsible” means the actors have to justify their actions by appeal to moral rules or prima facie principles; it stresses obedience to moral norms or principles. Action Responsibility and Universal Moral Responsibility share at least four preconditions of the traditional models for assigning moral responsibility [10].
  1. The actor has acted himself or herself.3

  2. The actor has acted freely and willingly.

  3. The actor knew or could have known the relevant facts.

  4. The actor knew or could have known the relevant norms and values.

By understanding the complexity of dissociated technological delegation, however, it becomes clear that the preconditions cannot be fulfilled. To explain this, I will first discuss the characteristics of dissociated technological delegation.

Technological Delegation

Hans Jonas recognized that managing, using and developing technology is, unlike direct acting, a multi-actor process with remote and unforeseen consequences. As a result, technological delegation leads to complex motivational problems and raises questions about responsibility. It is important to distinguish the two, because one should not appeal to motivational issues to escape moral responsibility.

The motivational problems of technologically delegated acts are caused by the detachment of the actor from the eventual outcome. Günther Anders [11] described this effect in a macabre way when he discussed the bombing of Hiroshima. To drop the bomb, the pilot only had to press a button; he had to face neither the victims nor the consequences of the bombing. Without hearing or seeing the effects, he was able to kill tens of thousands of people while, as Anders claims, listening to some classical music. Due to remoteness, people become emotionally detached.

Anders’ negative attitude towards modern technologies has faced extensive criticism [12]. Technologies like television and the internet have increased emotional engagement in many national and international political debates on animal and child abuse, starvation in the Third World, and war. Technology in general does not make us less engaged with the world. But Anders is right insofar as experiencing the impact of the acts one initiates is relevant for our moral motivations. When we delegate actions, we do not experience their impact (positive or negative) as we do when acting directly. Hence, the emotions we feel are different. This is not only the case in warfare: teachers often feel bad when communicating poor results to hardworking students face-to-face. But when filling in digital forms on the intranet, typing anonymous student numbers and grades, most teachers only feel bad about the poor statistics of the group. The disappointment, frustration or indifference of the students is not witnessed.

Emotionally, there is a huge difference between direct acting and dissociated technological delegation, but that difference is not relevant for assigning moral responsibility. It does not matter whether you tell a lie face-to-face or by telephone. Of course, lying on the telephone is easier because you do not have to look your recipient in the eyes. Still, you are equally morally responsible for lying. One should not confuse the relevance of emotions for moral motivation with the idea that emotions are relevant for our moral responsibilities.

Besides remoteness in space and time, dissociated technological delegation decreases engagement because it often affects people who are not involved in the initial actions or decisions and who remain anonymous in the process. Even when it is clear in advance that some groups will be worse off while others will benefit, it is often not clear which individuals will enjoy the benefits and which will suffer the disadvantages: names and faces are not shown. In his famous article “Do artifacts have politics?”, Langdon Winner describes the bridges built over the Long Island parkways [13]. He claims that, for political purposes, the bridges were just nine feet above street level,4 preventing black people from visiting the beach. In general, black people were poor and had to travel by buses, which were too tall to pass under the bridges. As Winner shows, the design of the bridges exerted an enormous political power over largely unknown individuals who were affected simply because they used a certain means of transport.

However, is it relevant for assigning moral responsibility that the actor knows the petitioner? One could argue that humans have more caring duties towards their siblings and loved ones than towards strangers. This may be true; however, this argument does not show that one is not responsible for the impact of one’s (in)actions on others just because one does not know them. There are good reasons to reject such a conclusion: if a bartender puts arsenic in the beer of a randomly chosen customer, he is just as responsible for killing that customer as he would have been had he put the arsenic in the drink of a carefully chosen colleague.

Summarizing, remoteness and affecting anonymous people raise motivational problems for acting responsibly: one does not experience the impact and often does not know the people involved. This is important for understanding why people act detached, but it does not say anything about moral accountability. To return to the case with which this article began: for the question of moral responsibility, it is not relevant whether the civil servant knew the children who died, nor is it relevant whether the district in which the accidents happened was part of his jurisdiction.

However, when it comes to moral responsibility, there is a significant difference between dissociated technological delegation and direct action. Latour and other contemporary continental philosophers like Verbeek have shown that the behavior or acts of the initial actor are mediated by technological artifacts [3, 4]. For example, the television with its remote control does not only facilitate watching preferred programs and shifting from one channel to another; it has radically changed the leisure time of many people and has even led to the new verb “to zap”. Also, Freud was one of the first to make use of couches in his practice, because he realized that lying down would relax his patients, making them tell more than they otherwise would.

Latour describes the change in people’s behavior due to the mediation of technology [15] by explaining that young people frequently act much more assertively, or even aggressively, when driving a “sporty” car than when driving a plain family car. The car mediates their ends: they do not only want to arrive at their destination, they also want a tough or sporty driving style.

Based on these kinds of observations, many philosophers of technology [3, 4, 16, 17, 18] conclude that artifacts mediate our behavior. Technological artifacts are not just passive tools: they affect our actions and intentions. Latour’s speed bump case shows why this mediation is relevant [19]. He describes a speed bump as a constantly present police officer, a “sleeping policeman”, who prevents speeding by forcing drivers to slow down. Police officers become less involved in enforcing speed limits. Regulating other people’s behavior is delegated over a longer period of time, while the initial actors (like policy makers, civil servants and police officers) are absent.

Drawing the analogy of the sleeping policeman further, Latour argues that distinguishing the technical and the human dimensions by means and ends is a mistake [19]. He describes humans and technological artifacts by their program of action, which consists of the agent’s series of goals, steps and intentions. If human agents are not capable of reaching their goals, they fall back on technical artifacts. At that moment, the agent takes a detour with technological mediation and the original ends change: the agent will reach a new goal instead of the original one. Latour spells out this abstract notion of mediation in four accumulating forms: translation, composition, black boxing and delegation [19].

Translation is about the detour a human agent takes when getting involved with a nonhuman such as a technological artifact. Humans and nonhumans each have a program of action that consists of a series of goals, steps and intentions. If human agents are, for instance, not strong enough to reach their goals, they fall back on another human or on a technology, which also has its own program of action. As a result, the initial ends of the agent change and the human and nonhuman together reach their communal new aim. For instance, in the Castle of Bad Bentheim (Germany), the first staircase behind the main entrance consists only of oblique and curved steps that differ in shape and height. The stairs were built to make the walls reachable from the inner court, but of course, enemies were less welcome on those walls. Designing the stairs so irregularly slowed heavily armored men substantially, changing their program of action from “running up the stairs as fast as possible” to “carefully climbing the stairs, while using a shield to protect oneself against arrows”.

With “mediation by composition” Latour explains that many actions and agents are already the result of a collection of programs of action of several humans and nonhumans. When new technologies are developed, many translations take place between different humans and nonhumans. For example, the design of new computer programs needs to take existing software and user-friendliness into consideration. New technologies should be compatible with existing technologies and with embedded human customs. As a consequence, norms that are embedded in the supporting technologies are also included.

The third form of mediation is black boxing, which makes the joint production of new aims opaque. The acts of a driver are the result of a combination of the car, the driving lessons, the traffic rules, the behavior of other drivers, and the construction works that produced the road. Simply “being in a hurry” is not a full explanation for speeding if the road or the other road users make it impossible to drive too fast. The road, the car, other drivers, and the traffic rules co-define what speeding is. “Black boxing” refers to the idea that when all these facets come together, unexpected outcomes arise.

The last form of mediation Latour distinguishes concerns the meaning of technologies in the sense that they can be seen as signs or messengers: technologies can have a symbolic function. Latour uses the speed bump, in comparison with the traffic sign and the police officer, as an example of this last form of mediation. Regulating speeding, which is a task of the police, is delegated to a signal: paint on the road or a traffic sign. But when it becomes clear that signs do not have enough normative power, we delegate regulating speed to the “sleeping policeman”. When we see a speed bump, we slow down, even if it is relatively low.

Mediation and the Four Preconditions for Moral Responsibility

These four accumulating forms of mediation create a real problem for assigning moral responsibility. Below, the four necessary conditions of Action Responsibility and Universal Moral Responsibility are contrasted with the characteristics of technological delegation.

Ad 1. The Actor has to have Acted Himself

First, according to the classical responsibility theories, you should have acted yourself, for it would not be fair if people blamed or praised you for something you did not do. Hans Jonas showed that many actors are involved in technological development and use, which leads to the “problem of the many hands” [2, 20, 21]. This makes localizing moral responsibility rather problematic. Every human actor is just a cog in the machine, making it difficult to address someone in particular.

Recognizing the translational role of technologies makes this point even more complex, because other interacting entities are added. Often, more than one human and one artifact are involved, as mediation by composition makes clear. When discussing artificial intelligence, Floridi and Sanders have addressed this point extensively, arguing for assigning moral responsibility to artifacts [16]. But even artifacts without intelligence can be understood as co-shapers of action, though of course they can never be held morally responsible in any sense. The stairs of the Castle of Bad Bentheim are responsible for slowing down the enemy as well as for the stumbling of many contemporary tourists, but they are not morally responsible. Nonetheless, they blur the question of moral responsibility. Who is to blame when a tourist breaks a leg? Who is to blame for unfortunate forms of mediation? Realizing the active role of technologies adds even more hands to the problem of the many hands and makes the matter even more complex.

Ad 2. The Actor has to have Acted Freely and Willingly

Second, the main responsibility theories claim that no one and nothing should have forced you to act in a certain way. If someone puts a gun to a cashier’s head and demands the money in the till, the cashier will not be punished for theft. Likewise, if a psychotic patient tries to strangle a nurse because the voices in his head tell him she is a malicious alien, he will be declared to be of unsound mind.

Mediation by composition shows that, for instance, engineers are not free to design whatever they want, and users cannot use technological artifacts as merely neutral means. The technological and social network defines the field in which they work. The QWERTY keyboard was designed to deliberately slow down our typing. When people used typewriters with small hammers that stamped the letters through the ink ribbon, it was necessary not to type too fast; otherwise, several raised hammers would jam together. Smits and Leijten [22] have described how hard it is, now that there is no need to slow down our typing, to introduce a new, more efficient keyboard: the already established technical (software and hardware) and social (e.g. touch-typing courses) context makes it virtually impossible. Therefore, arguing that Apple, Dell or Asus are morally responsible for producing “inefficient” keyboards is problematic.

Ad 3. The Actor is or could have been Aware of the Relevant Facts of the Case

Third, Universal Moral Responsibility and Action Responsibility point out that it is unjust to assign moral responsibility to someone who lacks the capability or opportunity to access and process the available information. Ray Spier makes the moral relevance of knowledge clear in the case of nuclear programs [23], explaining that it would be unfair to hold the general public responsible for what happens in a top secret laboratory.

Within technological development, what constitutes the relevant facts is a complex question. Who is to blame for the use of asbestos in so many buildings? Who should pay for the safe removal of this material? Who should compensate the construction and mine workers who fell ill because they were exposed to asbestos too often? When asbestos was widely introduced at the beginning of the twentieth century, no one imagined that it carried serious health risks. On the contrary, adding asbestos to paint and construction materials reduced the risk of fire. People even cooked on elements of asbestos. It took some decades for people to realize that asbestos could cause cancer. Who is responsible for all the casualties?

Within STS, David Collingridge was one of the first to explain that technological development and use are often hard to foresee at the beginning of the development. Only as time progresses can more adequate knowledge about the impact be developed, but by then malleability has decreased substantially and negative effects are hard to redeem [24]. We now know that asbestos is toxic; however, even in the most optimistic scenarios, it will take many decades before asbestos is removed from all buildings. If it is unclear how certain technologies will develop and what the consequences will be, one can wonder how anyone can ever be held responsible for the consequences. Black boxing, the result of the accumulating forms of mediation, puts even more stress on this point than Collingridge did. If it is indeed only retrospectively possible to understand what happened during the black boxing, as Winner argues [25], then one can wonder whether it is fair to assign moral responsibility retrospectively.

Ad 4. The Actor is Aware of the Relevant Norms and Values

The last condition of the classical responsibility theories resembles the third one, but it focuses on moral rather than factual awareness. If the actor is not (and could not have been) aware of the relevant norms and values, he or she cannot be blamed for having violated them. For instance, traditionally in some Asian countries, burping after dinner is a compliment to the cook. In Western societies, since Victorian times, it has been considered bad manners, even in a suppressed form. An Asian gentleman who burps after dinner, trying to please the host, cannot be held responsible for being impolite.

Tsjalling Swierstra and Arie Rip elaborate on the technological mediation of norms and values when discussing nanotechnology and NEST ethics [17]. They argue that new technologies do not only change our more factual perceptions; they can also change what we believe is decent or morally right. It is not likely women would have worked outside the home without modern technologies like contraceptives, washing machines, microwaves and heating systems. Another good example can be borrowed from the movie Gattaca (1997). Suppose that it were possible to genetically screen for intelligence and that a patent-owning laboratory offered to screen twelve-week-old fetuses for a large amount of money. Most probably, this would provoke considerable moral indignation. First, many believe that ending a pregnancy just to get a more perfect child is immoral. Further, such a scenario would create a new social divide, which is considered very unjust. But suppose that new, cheap screening methods were developed that are effective before fertilization takes place (which is the case in the movie) and soon all parents were screening their children. Would it not become immoral not to give your children the best opportunities? In such a scenario, it could easily become a moral obligation for parents to screen for intelligence.

According to most theories, one can only be responsible if one could have known the relevant moral norms [7, 26]. When technologies alter our values and principles, assigning moral responsibility to human agents becomes even more complex.

Since all four preconditions of Action and Universal Moral Responsibility are problematic, these models are not suitable in cases of dissociated technological delegation. Their conditions are too strict. Still, especially prospectively, it is important to be able to assign moral responsibility. It is not desirable to promote acting without considering the outcomes of delegated acts, but we also do not want to forbid all kinds of technological developments in advance.

Task Versus Social Role Responsibility

Although traditional models of moral responsibility cannot cope with the complex issues of technological delegation, there is still an urgent call from society for moral responsibility. A history of ideas in the practice of engineering and science shows that this call is often directed at engineers and scientists, assigning a special Role Responsibility to the people who invent our devices [27]. One of the most prominent signs of this increased attention to Role Responsibility5 is the growth in ethical codes, but role responsibility has more sources than ethical codes alone. Role Responsibility states that human actors are responsible for their actions and the consequences, in the sense that they have to justify their actions considering their special role or task. These kinds of responsibility are institutionalized in, for instance, contracts, moral codes, laws, common practice and bilateral agreements. In the strictest sense, the point of this form of moral responsibility is that one should act by the practical rules that come with the task, job or role one fulfills.

The social and moral importance of moral codes, contracts, laws and the like is undeniable, but they solve only part of the problem. The problem with restricting Role Responsibility to the rules that belong to a task is that, in the dynamic context of technological development, such static rules soon become inadequate. It is, for instance, hard to design a comprehensive moral code or legal rule for how to develop and use biotechnology, IT or nanotechnology while such technologies are still young.

Another problem with rules and codes is identified by Lenk [5] and Zimmerli [8]. They admit that role responsibility is important, but they argue that Role Responsibility is not a “real” moral responsibility. In their analyses, role responsibility is mistakenly understood as Task or Functional Responsibility, or, in German, Aufgabenverantwortlichkeit [5]. Now, Task or Functional Responsibility is often used, or misused, to avoid responsibilities. Functional reasoning easily leads to statements like “this is not my job” or “I am just following orders”. One can be a “good” executioner in the sense that one is able to execute convicts efficiently while giving a great show to the audience. But of course, this does not say anything about whether being an executioner is morally right or wrong.

Lenk is right in pointing out that task-responsibility does not necessarily equal moral responsibility. But Role-Responsibility—understood as Social Role Responsibility—is not the same as Task or Functional Responsibility [27].6 Task or Functional Responsibility is a minimalistic form of responsibility comprising the tasks one has to fulfill in a certain occupation. It concerns the minimal norms that apply to people in that specific practice; legal norms, contracts and orders define such duties. The model of Social Role Responsibility defended here is a far more proactive approach, which also includes the desirability of one's role in society. A proactive responsibility model calls for a guiding attitude with a broader scope than just the minimum requirements [28], and addresses the attitude of the initial actor.

The point made here is closely connected to earlier discussions in this journal [27, 29, 30] on encouraging engineers to consider more than their minimal responsibility. As Bird writes, the task of the Educational Forum of Science and Engineering Ethics is to help engineering students consider their responsibilities as part of a larger society and the human community. Designing a construction or gardening tool that fulfills the legal safety requirements is part of Functional Responsibility. But designers can make it their responsibility to design a tool that is much safer than legally required. Taking this Social Role Responsibility seriously, however, also means being willing to consider technological mediation.

Creating a model of Social Role Responsibility requires a vocabulary with a broader focus than just functions or professions. Cooper [31] has argued that replacing the central notion of profession with the MacIntyrean notion of practice [32] in business ethics provides room for the development of an alternative ethics [31, p. 321]: "The conception of practice is more constructive than that of profession; it is a larger framework within which to develop a normative perspective…Profession, unfortunately, may connote self-protection and self-aggrandizement and produce images of paternalistic expertise." By introducing this notion of practice, we can slightly alter the meaning of responsibility: it is no longer a matter of whom to blame or praise. The question shifts toward the proactive account of what it means to be conscientious in one's social role. Here, attitude and intentions become the prime focus of attention, instead of action.

This notion of responsibility—shifting the emphasis from blame and praise to being a responsible actor—relates differently to the four strict preconditions described in the previous section. The "whodunit" approach of the classical responsibility theories recedes to the background, creating more room to reflect on desirable ways of dealing with technology. Social Role Responsibility thereby also has greater potential than task responsibility. In the following subsections, the four preconditions are reformulated for this alternative model.

From Action to Social Role

The most obvious criticism of Social Role Responsibility is that asking for a proactive attitude is begging for altruism. However, this is not necessarily the case. By definition, actors act responsibly if they act in a reflective and inquiring way, so that they can respond to questions about why they acted the way they did. Similarly, Social Role Responsibility states that human actors are responsible for what they do, in the sense that they have to justify their actions considering their special social role. What does this mean? In daily life, the answer to this question depends on the people involved and the contexts in which they find themselves. Or, to put it differently: what counts as a responsible attitude depends at least on one's social role. The object of responsibility should be defined by the roles of the subject, making people responsible for the morally good fulfillment of their special social role. When referring to these roles, one implicitly refers to practices, which are a valuable source of our morals, for they provide a coherent network of norms and values, often based on an internally shared worldview [32].

As previously stated, "to act responsibly" means "to act in an inquiring and reflective way". Alongside the norms and values of the practices of the petitioners, one has to try to understand how, in the specific situation, human actors and technologies mediate each other and influence the eventual outcome. The actors involved—scientists, engineers, users, managers, commissioners, governmental institutions, etc.—have to enquire about and reflect on the possible unintended outcomes of their actions. At the very least, we have to consider whether the actors' intentions remain the same when actions are delegated to technologies.

As shown, most problems with technologically delegated actions stem from the fact that human-technology interaction is highly dynamic and therefore rather unpredictable and hardly malleable. It is therefore sensible to give a central place to the effort actors are willing to make, instead of focusing solely on their actions. Are they prepared to seriously collect and consider all kinds of information in their decisions and to shape their own social role? As stated, Social Role-Responsibility is not the same as Functional Responsibility. One should not ask "What am I supposed to do according to my contract or boss?" Instead, and more importantly, one should ask: "What is morally the best action, given the social role I want to fulfill in these different practices?"7

From Freedom of Action to Co-shaping Social Roles

By arguing that one can want a certain social role, I presume the actors have some special kind of freedom. Social constructivists often argue that social roles are predefined: one is born into a certain family, in a certain country, and with a certain gender. Further, education, personal history, social environment and profession partially limit the freedom of an actor. But this does not mean there is no freedom at all to shape our individual social roles. On the contrary: since the notion of practice is included, the extended model of Social Role Responsibility presumes a co-construction of action: people shape practices and, at the same time, practices shape people. In other words, within the social context of practices, there is still room for individual responsibility [33]. However, to be able to be responsible, the individual needs a social environment that leaves some room for this social responsibility. Therefore, to successfully claim individual responsibility in a technological or scientific practice, an atmosphere that encourages social role responsibility must be created [34].

The problem of many hands is a complicating factor in this phase of the enquiry, but it should not be an excuse to escape all responsibility. As argued, Social Role Responsibility should not be limited to engineers; managers, users, policymakers and commissioners are obvious examples of actors who also share responsibility. In other words, they also fulfill an influential social role, so we should not simply accuse the engineers or scientists who actually develop the devices [35, p. 65]: "The very elasticity of the engineer's role produces puzzles about the scope of engineers' responsibilities. Moreover, the extent to which engineers have room to exercise independent judgment within the requirements for cooperation of business organizations is similarly elastic, depending on the local circumstances." But since the scientists and engineers are the ones developing the technological devices we are concerned with, there are situations in which they must fulfill a role that asks for extra effort. Several authors have recently argued that engineers should not passively adopt their given social role. Instead, a forward-looking attitude, sensitive to the creative changes of social circumstances, is needed [36]. Or, in Mitcham's words [27]: "This is role responsibility not as passive acceptance but as active agency, recognizing the extent to which we create roles at the same time that we are created by them". Such claims are based on the presumption that people do not simply have a social role, but rather co-create their own roles.

From Knowing to Visualizing

The different forms of mediation distinguished by Latour—which accumulate in a process of black boxing—create a real problem for foreseeing what will happen. Still, even within this model, when one acts one has the responsibility to "enquire" into and "reflect" on future events. In the absence of absolute knowledge, this becomes a matter of imagination. Many authors who discuss technology show us the importance of imagining different possible futures [21, 29]. Or, in Jonas' words [21, p. 27]: "First duty of ethics of the future is visualizing the future." However, these authors do not provide us with an understanding of the social role of technological artifacts. They do, of course, refer to unexpected and unintended results of technologies, but they do not describe the mechanisms of technological mediation in the way Latour and Verbeek do. Understanding mediation is important, as it can help us to imagine how the technology-human interaction may evolve [37]. Moreover, a full understanding of mediation reveals that our technological culture and society are shaped not only by engineers and scientists but by a complex network of human actors and technological artifacts. Policymakers, funding agencies and users are therefore also included in a model of Social Role Responsibility. Most writings in engineering ethics do not address the responsibilities of these other actors. As a result, most of the burden is placed on the engineers and scientists, while in organized public debates other actors can limit their social role responsibility by simply referring the problems to the engineers and scientists.

To be fair, we need to address everyone involved in the development, management and use of technologies, not only the engineers. Of course, there are cases in which well-informed scientists and engineers bear more responsibility, since they have the capabilities and possibilities to access and process the relevant information. For instance, Spier [23] shows that nuclear projects for military or energy uses are often highly confidential. The public, journalists or the government are often not fully informed about all pros and cons, and it becomes difficult for them to exert social power effectively. Therefore, the well-informed managers, scientists and engineers carry a greater burden on their shoulders [23]. The less access one has to information, the harder it becomes to assign responsibility. This unequal distribution of responsibility may be fair in particular cases, but one has to bear in mind that it is not fair to make such a principled distinction from the start, since other actors exert their social power as well.

From Static Ethics to Perceiving Different Moral Norms

In daily life, we are inclined to expect people to adopt the norms and values of the practice in which they are involved. So the actors at least need to consider the norms and values of the practice in which they act. But as explained earlier, in dissociated technological delegation the consequences of the act are remote from the subject of responsibility, and they affect uninvolved, anonymous people. This means that the petitioners are often not part of the practice in which the subject is acting. So if actors only consider the norms and values of the practice in which they participate, they do not take their social responsibility seriously. The practices affected by their delegated actions should therefore be part of their reflections as well.

As explained, remoteness is only a matter of time and space; a deeper understanding reveals that the initial actors find themselves in a different practice—with other moral norms and ends—than the petitioners. By considering the moralities of the different practices in the enquiry, one can gain deeper insight into the moral desirability of certain aspects of the practice in which one is acting.

Given the complexity of the problem described so far, such an enquiry should involve at least the following three points. First, the morals of the involved practices should become clear. This consists of an analysis of at least two practices: the practice in which the actor is acting and the practice in which the actions have their consequences. The notion that the moralities of practices evolve has to be considered in this step; developing some ethical scenarios can be helpful (see also [17]). Second, one has to be willing to reflect on the effects of the technological delegation. What forms of mediation are imaginable under what scenarios? One has to be willing to try to open the black box—even though there is always the risk of finding it empty, as Winner warns [25]. Here again, sketching scenarios can be useful. Third, once the involved practices and some possible outcomes have become clearer, the actors have to identify the social role they fulfill in the practices in which they are involved.

The Model in Practice

In this article, I raised some issues that need to be discussed in relation to dissociated technological delegation and argued for the further development of a social role model. First, I showed how technologically delegated actions are morally and emotionally different from direct actions. Second, I argued, in four parts, why classical traditions of moral responsibility are not helpful when dealing with questions about responsibility. Third, I showed that the currently popular solutions of task or functional responsibility, such as moral codes, can only solve part of the problem. Finally, having defined the problem at hand, I composed a preliminary structure of a model of Social Role Responsibility. The four most distinguishing features of such a model can be summarized as follows: (1) the willingness and the effort taken to reflect on one's social role become the object of responsibility, instead of mere actions; (2) the model presumes an ability to actively co-shape one's moral role, instead of a more absolute freedom to act; (3) technological artifacts are not considered neutral tools but entities that mediate human actions and intentions; (4) instead of referring to generally applicable rules, the model asks one to consider the dynamics of several interacting practices.

Let us return to the case of the traffic locks to see what these features imply. For the sake of brevity, the focus will be on the responsibilities of the Council and the civil servant. The first characteristic of the model draws our attention to their considerations regarding their social roles. What were the Council's motives for granting the committee's request? Were they only concerned about the political and economic support of the more well-to-do citizens, or did they honestly endeavor to make all neighborhoods comfortable places to live? Was it just an unfortunate coincidence that serving the interests of the richer area harmed those of the poorer area? The second type of question concerns the effort taken to co-shape one's social role. Whether the civil servant merely carried out his tasks under the town planning rules, or considered his role in a broader social perspective, is highly relevant. The third and fourth categories of questions are both about whether those involved could imagine what would happen. (a) How much effort was undertaken to explore and imagine the possible outcomes of this redirection of traffic? (b) Did the Council or the civil servant consider the possible scenarios? Or, in more practical terms: did anyone research the expected driver behavior and the ensuing traffic conditions once the original route was closed? (c) Was any attention paid to the morals and ends of the other involved practices—for example, the practice of traffic flow (being quick, being free to drive where one pleases, taking the shortest route) and the practice of raising children in the different neighborhoods? These practices needed special reflection and consideration.

Of course, within the scope of one article it is not possible to develop a full model of moral responsibility. To craft a full theory of Social Role Responsibility applicable to technologically mediated actions, and to ask people to make the effort that comes with it, all four features need further development and discussion. Especially urgent are questions concerning the limits of the responsibilities assigned. How much effort to reflect on their social roles can we ask of people who are already overloaded with work? What are the limits of the responsibility to co-shape one's moral role: should people be willing to sacrifice their jobs and careers when it is obvious that someone else will do the job anyway? How much effort and money can we expect people to invest in carrying out all the technical, social and future-oriented enquiries? Though these questions have not been fully addressed in this article, the four distinguishing features of the model provide some indication of the questions we should address in our modern high-tech societies in order to deal responsibly with technological artifacts.


  1. In this article I often use just the term "technology" to refer to technological artifacts. In his influential book Thinking through Technology, Carl Mitcham distinguishes four different meanings of the concept of technology [1]: 1. technology as object, 2. technology as knowledge, 3. technology as activity, and 4. technology as volition. In this article, I focus on the first category: technological artifacts defined as material objects such as tools, machines and consumer products.

  2. The actor can be an individual or a collective.

  3. Or deliberately refrained from acting.

  4. In some literature the historical credibility of this case is questioned, e.g. [14]. Nonetheless, the example (whether fictional or real) beautifully illustrates the political force of technologies and technological artifacts.

  5. Often-used synonyms of Role Responsibility are Professional Responsibility and Functional Responsibility [5].

  6. Carl Mitcham uses the term Role Responsibility Plus [27].

  7. Proactive forms of responsibility often have a prospective character, in contrast to the retrospective character of the traditional responsibility models [28]. However, a retrospective account is also possible, since one can ask in retrospect whether the attitude was a responsible one. Or, to put it differently: one can retrospectively enquire into the prospective attitude.



I would like to thank Peter-Paul Verbeek and Tsjalling Swierstra for suggestions that led to substantial improvements to an earlier draft of this paper. Further, I am greatly indebted to Julie Bytheway and Edward Spence for proofreading the final version.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.


  1. Mitcham, C. (1994). Thinking through technology: The path between engineering and philosophy. Chicago: The University of Chicago Press.
  2. Jonas, H. (1979). Das Prinzip Verantwortung. Frankfurt am Main: Insel Verlag.
  3. Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.
  4. Verbeek, P. P. (2005). What things do: Philosophical reflections on technology, agency and design. University Park: Pennsylvania State University Press.
  5. Lenk, H. (1993). Über Verantwortungsbegriffe und das Verantwortungsproblem in der Ethik. In H. Lenk & G. Ropohl (Eds.), Technik und Ethik (2nd ed., Vol. 8395, pp. 112–148). Stuttgart: Reclam.
  6. Lenk, H. (2005). German perspectives. In C. Mitcham (Ed.), Encyclopedia of science, technology, and ethics. Detroit: Thomson Gale.
  7. Ropohl, G. (1993). Neue Wege, die Technik zu verantworten. In H. Lenk & G. Ropohl (Eds.), Technik und Ethik (2nd ed., Vol. 8395, pp. 149–176). Stuttgart: Reclam.
  8. Zimmerli, W. C. (1993). Wandelt sich die Verantwortung mit dem technischen Wandel? In H. Lenk & G. Ropohl (Eds.), Technik und Ethik (2nd ed., Vol. 8395, pp. 92–111). Stuttgart: Reclam.
  9. Bayertz, K. (1995). Eine kurze Geschichte der Herkunft der Verantwortung. In K. Bayertz (Ed.), Verantwortung: Prinzip oder Problem? (pp. 3–72). Darmstadt: Wissenschaftliche Buchgesellschaft.
  10. Swierstra, T. E. (2005). Trapped in the duality of structure: An STS approach to engineering ethics. In H. Harbers (Ed.), Inside the politics of technology (pp. 199–227). Amsterdam: Amsterdam University Press.
  11. Anders, G. (1987). Die Antiquiertheit des Menschen. München: C.H. Beck.
  12. van Dijk, P. (2000). Anthropology in the age of technology: The philosophical contribution of Günther Anders. Amsterdam: Rodopi.
  13. Winner, L. (2004). Do artifacts have politics? Lanham: Rowman & Littlefield.
  14. Woolgar, S., & Cooper, G. (1999). Do artefacts have ambivalence? Social Studies of Science, 29(3), 433–449. doi:10.1177/030631299029003005.
  15. Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artefacts. In W. Bijker & J. Law (Eds.), Shaping technology, building society. Cambridge: MIT Press.
  16. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14, 349–379. doi:10.1023/B:MIND.0000035461.63578.9d.
  17. Swierstra, T. E., & Rip, A. (2007). Nano-ethics and NEST-ethics: Patterns of moral argumentation about new and emerging science and technology. NanoEthics, 1(1). doi:10.1007/s11569-007-0005-8.
  18. Noorman, M. (2008). Limits to the autonomy of agents. In A. Briggle, K. Waelbers, & P. Brey (Eds.), Current issues in computing and philosophy. Amsterdam: IOS Press.
  19. Latour, B. (1994). On technical mediation: Philosophy, sociology, genealogy. Common Knowledge, 94(4), 29–64.
  20. Jonas, H. (1982). Technology as a subject for ethics. Social Research, 49(4), 891–898.
  21. Jonas, H. (1984). The imperative of responsibility. Chicago: The University of Chicago Press.
  22. Smits, R., & Leyten, A. (1991). Technology assessment: Waakhond of speurhond? Naar een integraal technologiebeleid. Zeist: Kerckebosch.
  23. Spier, R. (2002). Ethical issues engendered by engineering with atomic nuclei. In R. Spier (Ed.), Science and technology ethics. London: Routledge.
  24. Collingridge, D. (1980). The social control of technology. London: Frances Pinter.
  25. Winner, L. (1993). Upon opening the black box and finding it empty: Social constructivism and the philosophy of technology. Science, Technology, & Human Values, 18(3), 362–378.
  26. Swierstra, T. E. (1999). Moeten artefacten moreel gerehabiliteerd? K&M—Tijdschrift voor empirische filosofie, 23(4), 317–326.
  27. Mitcham, C. (2003). Co-responsibility for research integrity. Science and Engineering Ethics, 9(2), 273–290. doi:10.1007/s11948-003-0014-0.
  28. Meijboom, F., Visak, T., & Brom, F. (2003). Verantwoord vertrouwen: Een onderzoek naar overheidsverantwoordelijkheid voor een betrouwbare agro-food sector. Utrecht: Universiteit Utrecht, Centrum voor Bio-ethiek en Gezondheidsrecht.
  29. Pritchard, M. (2001). Responsible engineering: The importance of character and imagination. Science and Engineering Ethics, 7(3), 391–402. doi:10.1007/s11948-001-0061-3.
  30. Bird, S. (1998). Educational forum: Stimulating a sense of responsibility. Science and Engineering Ethics, 4(2), 213–214. doi:10.1007/s11948-998-0051-9.
  31. Cooper, T. (1987). Hierarchy, virtue, and the practice of public administration: A perspective for normative ethics. Public Administration Review, 47(4), 320–328. doi:10.2307/975312.
  32. MacIntyre, A. (1985). After virtue (2nd ed.). London: Duckworth.
  33. MacIntyre, A. (1999). Dependent rational animals: Why human beings need the virtues. London: Duckworth.
  34. Salzberg, A. (1997). Commentary on "The social responsibilities of biological scientists". Science and Engineering Ethics, 3(2), 149–152. doi:10.1007/s11948-997-0006-6.
  35. Weil, V. (2002). Engineering ethics. In R. Spier (Ed.), Science and technology ethics. London: Routledge.
  36. Richardson, H. (1999). Institutionally divided moral responsibility. In E. Frankel Paul, F. Miller, & J. Paul (Eds.), Responsibility (pp. 218–249). New York: Cambridge University Press.
  37. Verbeek, P. P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technology & Human Values, 31(3), 361–380. doi:10.1177/0162243905285847.

Copyright information

© The Author(s) 2008

Authors and Affiliations

  Department of Philosophy, Faculty of Behavioural Sciences, University of Twente, Enschede, The Netherlands
