
Design for the Value of Responsibility

  • Jessica Nihlén Fahlquist
  • Neelke Doorn
  • Ibo van de Poel

Abstract

The responsibility of engineers and designers for the products they design is a common topic in engineering ethics and the ethics of technology. In this chapter, however, we explore what designing for the value of responsibility could entail. The term “design for the value of responsibility” can be interpreted in (at least) two ways. First, it may be interpreted as a design activity that explicitly takes into account the effect of technological designs on the possibility for users (and others) to assume responsibility or to be responsible. Second, it may refer to a design activity that explicitly affects the allocation of responsibility among those operating or using the technology and other affected people. In this chapter, we discuss both interpretations of design for the value of responsibility. On both interpretations, a technological design can be said to affect a person’s responsibility. As there are no explicit methods or approaches to guide design for responsibility, this chapter explores three cases in which design affected responsibility and develops, on the basis of them, design heuristics for design for responsibility. These cases are the alcohol interlock for cars in Sweden, the V-chip for blocking violent television content, and podcasting devices for development in rural Zimbabwe. We conclude by raising some open issues and suggesting future work.

Keywords

Conditions of responsibility · Individual and collective responsibility · Distribution of responsibility · Responsibility as a virtue

Introduction

The responsibility of engineers and designers for the products they design is a common topic in engineering ethics and the ethics of technology. Designing for the value of responsibility, which we discuss in this chapter, is however an under-theorized topic. The term “design for the value of responsibility” can be interpreted in (at least) two ways. First, it may refer to a design activity that explicitly affects the possibility for users to assume responsibility or to be responsible. Second, it may be interpreted as a design activity that explicitly takes into account the effect of technological designs on the allocation of responsibility among those operating or using the technology and other affected people. In this chapter, we discuss both interpretations of design for the value of responsibility. On both interpretations, a technological design can be said to affect a person’s responsibility. As there are no explicit methods or approaches to guide design for responsibility, this chapter explores designs that do affect responsibility. Through a discussion of three examples of designs that affect responsibility in different ways, we tentatively propose heuristics for designing for responsibility.

Given that design decisions affect the possibilities for assuming and discharging responsibility, as well as the allocation of responsibility, we could in principle also deliberately design for responsibility. Several authors have made the general point that designs can delegate tasks to technologies or to humans and that technological artifacts affect the moral behavior of people and may shift the balance of power and control between groups (Winner 1980; Latour 1992; Verbeek 2011). All of these points are obviously important for the allocation of responsibility, but they are usually not discussed in such terms. There are, of course, some exceptions. Wetmore (2004), for example, discusses car design issues in relation to responsibility for safety. Some of our own work has also focused on how responsibility may be affected by technology and design (Grill and Nihlén Fahlquist 2012; Nihlén Fahlquist and Van de Poel 2012; Doorn and Van de Poel 2012; Nihlén Fahlquist 2013).

The chapter is structured as follows. First, we explicate the concept of responsibility. Second, we discuss three examples in which design, more or less implicitly, affected responsibility; with a picture of how design can and does affect responsibility, we gain a better understanding of how design should explicitly take responsibility into account. Third, we discuss what designing for responsibility could mean. We end by discussing open issues and challenges and by drawing some conclusions.

Explication of Responsibility

The concept of responsibility covers many different notions and is used differently in different contexts (Hart 1968; Davis 2012; Van de Poel 2011). Davis (2012) distinguishes nine senses of the term. Some of these notions are more relevant than others in the context of design. In this section, we explicate those aspects of responsibility that we believe are most relevant to design. Design can affect both individual responsibility and the distribution of responsibility. We discuss individual responsibility in section “Individual Responsibility” and the distribution of responsibility in section “Distribution of Responsibility”.

Individual Responsibility

We distinguish between backward-looking and forward-looking responsibility (Van de Poel 2011). Backward-looking responsibility refers to responsibility for things that happened in the past; the focus is usually on undesirable outcomes, although that is not necessary. Forward-looking responsibility refers to things not yet attained. In this context, we will understand forward-looking responsibility as responsibility as a virtue.

Backward-Looking Responsibility

In the traditional philosophical literature on responsibility, being morally responsible is usually understood as meaning that the person is an appropriate candidate for reactive attitudes, such as blame or praise (Strawson 1974; Fischer and Ravizza 1993; Miller 2004). Being morally responsible (i.e., being eligible for reactions of praise and blame) is usually taken to depend on certain conditions that have to be met before it is fair to ascribe responsibility to someone. Although academics disagree on the precise formulation, the following conditions together capture the general notion of when it is fair to hold an agent morally responsible for (the consequences of) their actions (see Feinberg 1970; Hart and Honoré 1985; Bovens 1998; Fischer and Ravizza 1998; Corlett 2006; Doorn 2012a; Van de Poel et al. 2012):
  1. Moral agency: The responsible actor is an intentional agent with respect to the action. This means that the agent must have adequate possession of her mental faculties at the moment of engaging in the action. Young children and people whose mental faculties are permanently or temporarily disturbed are usually not held fully responsible for their behavior because they do not fulfill this condition. However, knowingly and voluntarily putting oneself into a situation of limited mental capacity (e.g., by drinking alcohol or taking drugs) does not, in general, exempt one from responsibility for the consequences of one’s behavior. Some people phrase this condition in terms of intention, meaning that the action was guided by certain desires or beliefs.

  2. Voluntariness or freedom: The action resulting in the outcome was voluntary; if the actor performed the action under compulsion, external pressure, or circumstances outside her control, she is not held responsible. The person must be in a position to determine her own course of action (cf. condition 1) and to act accordingly.

  3. Knowledge of the consequences: The actor knew, or could have known, the outcome. Ignorance due to negligence, however, does not exempt one from responsibility.

  4. Causality: The action of the actor contributed causally to the outcome; in other words, there has to be a causal connection between the agent’s action or inaction and the damage done.

  5. Transgression of a norm: The causally contributory action was faulty, which means that the actor in some way contravened a norm.

Note that especially the first two conditions are closely interrelated. Being an intentional agent means that one has the opportunity to put one’s will into effect and that one is free from external pressure or compulsion (Thompson 1980; Lewis 1991). With regard to the fifth condition, there has been extensive debate about what counts as a norm. In daily life the norm can be much vaguer than in criminal law, where the norm must be explicitly formulated beforehand.

Forward-Looking Responsibility: Responsibility as a Virtue

In this context, i.e., designing for values, we conceive of forward-looking responsibility as responsibility as a virtue. In daily life, we talk about “responsible” people, that is, people who have certain character traits associated with a certain kind of behavior and certain attitudes. The word virtue means “excellence,” “capacity,” or “ability,” and being virtuous is being able, or having the power, to achieve something (Van Hooft 2006). Possessing virtues is the basis of being a good person (Swanton 2005). According to Williams, a responsible person, in this sense, is able and willing to respond to a plurality of normative demands (Williams 2008). According to Van Hooft, taking responsibility involves personal involvement and commitment: a responsible person does not leave things to others, but feels that it is “up to me” and is willing to make sacrifices in order to get involved (Van Hooft 2006). Applied to different real-life contexts, we all more or less have an image of what a responsible person is or, more contextualized, of what a “responsible driver” or a “responsible leader” is. One of us has argued that being a responsible person requires that one also cares about fellow human beings (Nihlén Fahlquist 2010).

Because virtues are often seen as something an agent acquires over time, through upbringing, habituation, and experience – and hence as something that can be affected – we are interested in whether design can promote responsible, or irresponsible, operators and users.

Distribution of Responsibility

We will now look at the distribution of responsibility. Responsibility can be distributed over various individuals, but it can also be distributed to collectives (collective responsibility), instead of, or in addition to, the distribution of responsibility to individuals. Since collective responsibility does not exclude the attribution of responsibility to individuals within a collective, we will treat the attribution of responsibility to collectives as a special case of the distribution of (individual and collective) responsibility.

Traditionally, in the philosophical literature, individuals were seen as the sole bearers of responsibility, but as the world becomes more and more collectively organized, scholars have seen the need to assign responsibility to collectives, for example, organizations and nations (May and Hoffman 1991b). The question is how the relation between individual and collective responsibility should be conceived. For example, what does collective responsibility imply for the individuals who make up that collective? If an individual’s organization, or a group she is part of, does something wrong, does this mean that she is partly responsible (e.g., May and Hoffman 1991b; May 1992; Kutz 2000; Pettit 2007; Bovens 1998)? This question requires a definition of organization, and the issue of how organized a group of people needs to be in order to be assigned responsibility has also been discussed (French 1984; May 1992). In some cases, we are dealing with the responsibility of a company or government agency; in other cases we are discussing nations (Miller 2004) or humanity as a whole. In relation to the latter, consider climate change, which many people probably think should be dealt with by governments, but to which individuals can contribute through everyday choices. To what extent climate change is an individual or a collective responsibility has been discussed by philosophers in recent years (cf. Sinnott-Armstrong 2005; Johnson 2003; Van de Poel et al. 2012; Nihlén Fahlquist 2010). Responsibility for social problems is probably partly individual and partly collective.

In addition to responsibility ascribed to a collective of people, responsibility can also be distributed over different people. When a large number of people are involved, it may be problematic to identify the person responsible for a negative outcome. Dennis Thompson referred to this situation as the “problem of many hands” (Thompson 1980). Thompson formulated the problem in the context of the moral responsibility of public officials. Because many different officials, at various levels and in various ways, contribute to the policies and decisions of an organization, it is difficult to ascribe moral responsibility for the organization’s conduct. For outsiders who want to hold someone responsible for a certain conduct, it is particularly difficult or even impossible to find any person who can be said to have independently formed and carried out a certain policy or taken a certain decision. The problem of many hands is now widely discussed in the literature on engineering and business ethics (Harris et al. 2005/1995; Bovens 1998; Nissenbaum 1994, 1996; Doorn and Van de Poel 2012).

Thompson’s definition of the problem of many hands is rather broad and leaves room for many different interpretations. It is therefore not surprising to find many different interpretations of this problem in the applied ethics literature. Some see the problem of many hands primarily as the epistemic problem of identifying the person responsible for harm, because one does not know who actually made what contribution. This is mainly a problem for outsiders, Davis (2012) argues, because insiders generally know very well who made what contribution. The problem could therefore be avoided by making each individual’s causal contribution more transparent. Other authors offer a metaphysical interpretation of the problem of many hands (Bovens 1998; Nissenbaum 1994, 1996): since our conditions for individual responsibility do not easily generalize to collective action, these authors argue, we need a different conception of responsibility. In the philosophical literature, this track is also followed by philosophers such as Peter French (1984, 1991) and Larry May (1992), who have tried to develop special principles that hold in situations where many actors are involved. In this chapter, we hold the view that the problem of many hands refers to a situation in which none of the individuals is (or can be held) responsible but in which the collective of individuals is responsible. The challenge is to prevent this problem from occurring by distributing responsibility over different individuals in an appropriate way.

Three Examples

In this section, we discuss three examples of technological designs that affected either individual responsibility or the distribution of responsibility, or both. Our discussion of the examples is explorative. The three examples are the alcohol interlock, the V-chip for blocking violent television content, and podcasting devices in rural Zimbabwe. A description of the three cases is followed by a critical comparison.

The Alcohol Interlock

In Sweden, according to a bill adopted by a previous government and parliament, alcohol interlocks were to be made mandatory in all new cars from 2012 (Grill and Nihlén Fahlquist 2012). Although this bill has not been implemented, interlocks are now common in coaches and taxis as a result of decisions made by individual companies. Additionally, since 2012, convicted drunk drivers have been able to get their driver’s license back on the condition that they drive a car with an interlock installed (http://www.transportstyrelsen.se/sv/Vag/Alkolas/Alkolas-efter-rattfylleri/). Outside Sweden, alcohol interlocks are used in many European countries, for example, as a voluntary measure by transport companies, as part of rehabilitation programs, or in school buses (http://www.etsc.eu/documents/Drink_Driving_Monitor_July_2011.pdf).

The mandatory use of alcohol interlocks would probably reduce the problem of drunk driving. However, it would also entail a different conception of who is responsible for drunk driving. Instead of seeing it mainly as an individual responsibility (see footnote 1), responsibility for drunk driving would be considered a responsibility shared between the traffic “system designers” of the government and car industry, on the one hand, and the drivers on the other.

There are some counterarguments to using this technology that are related to responsibility. The first refers to the distinction between individual and collective responsibility. In Sweden, the alcohol interlock should be seen against the background of a policy change in traffic safety, defined in the so-called Vision Zero (Swedish Government 1997; Nihlén Fahlquist 2006). According to Vision Zero, the system designers are ultimately responsible for traffic safety, which makes it largely a collective responsibility. System designers are defined as “those public and private organizations that are responsible for the design and maintenance of different parts of the road transport system such as roads, vehicles and transportation services as well as those responsible for different support systems for safe road traffic such as rules and regulations, education, surveillance, rescue work, care and rehabilitation” (ibid.). Hence, system designers are primarily local and national government bodies and private companies. This is not uncontroversial: driving has traditionally been associated with ideas of freedom of movement and autonomy, and shifting the balance to make traffic safety a societal concern may be considered undesirable. From a libertarian perspective, the alcohol interlock could be seen as reducing individual freedom and responsibility (Grill and Nihlén Fahlquist 2012; Ekelund 1999). On the other hand, most libertarians would agree that the government should protect the lives and health of individuals against threats from other individuals. The question is where to draw the line between protection from harm and paternalism (Grill and Nihlén Fahlquist 2012; Nihlén Fahlquist 2006).

As argued in Grill and Nihlén Fahlquist (2012), the fact that collective responsibility is introduced does not necessarily mean that individual responsibility is removed. Responsibility does not have to be seen as a zero-sum game. Individuals and collectives can both be responsible for the same problem.

A second possible point of criticism is that the alcohol interlock actually deprives people of (individual) responsibility. After all, the alcohol interlock may remove the option to drive while intoxicated and as such affect the freedom/voluntariness condition. Do technologies that exclude certain behavior, or persuade the user to behave in a particular way, affect responsibility, and if so, is this desirable from a moral point of view? Recent work on persuasive technologies is relevant in this context. Persuasive technologies are intentionally designed to change the user’s attitudes, behavior, or beliefs, often by giving the user feedback on her actions (or omissions) and by trying to “suggest” to her a desired pattern of behavior (Fogg 2003; Spahn 2012). The question of when behavior-steering technologies are morally acceptable appears somewhat analogous to the question of when a technology can be considered to enhance rather than hamper responsibility. Although a generally agreed framework for assessing how and when to use behavior-steering technologies is still lacking (Spahn 2012), the fact that they are positioned between the extremes of manipulation, on the one hand, and convincing, on the other, may already point to some tentative answers to the question of when technologies affect responsibility in a desirable way. The alcohol interlock does not just persuade the user not to drive while intoxicated; it actually blocks the possibility of doing so. It could be argued that technologies that leave the user no choice but to behave in the “morally desirable way” are undesirable. Moral freedom – that is, the freedom to behave in either a morally desirable or undesirable way and to either praise or blame people for their behavior – is “crucial for our self-understanding as rational beings” (Yeung 2011, p. 29). If we consider the alcohol interlock to be such a technology, it may not be the most desirable technology.
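
To make the contrast concrete, the following minimal sketch (in Python, purely illustrative) models the design choice between persuading and blocking. The limit value and function names are our own assumptions for illustration, not a specification of any real interlock.

```python
# Illustrative sketch: the same breath-sensor reading can be wired to a
# persuasive design, which warns but leaves the choice (and hence the blame)
# with the driver, or to a blocking design, which removes the option entirely.
# The limit and names below are hypothetical, for illustration only.

BAC_LIMIT = 0.02  # assumed legal blood-alcohol limit; varies by jurisdiction


def persuasive_ignition(bac: float) -> bool:
    """Persuasive design: feedback only; the driver keeps moral freedom."""
    if bac > BAC_LIMIT:
        print("Warning: measured BAC exceeds the limit. Please do not drive.")
    return True  # ignition is always permitted; responsibility stays individual


def blocking_ignition(bac: float) -> bool:
    """Blocking design (the interlock): the undesired option is designed away."""
    return bac <= BAC_LIMIT  # above the limit, the car simply will not start
```

In the chapter’s terms, the first design leaves the freedom/voluntariness condition intact but relies on the driver’s virtue, whereas the second secures the outcome at the cost of moral freedom.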

The V-Chip

The V-chip is a technological device designed to prevent children from watching violent television content. TV stations broadcast a rating as part of each program. Parents program the V-chip by setting a threshold rating, and all programs rated above the threshold are blocked as long as the V-chip is turned on. The V-chip provision was added to the US Telecommunications Act of 1996, signed by President Bill Clinton, and the device is mandatory in all television sets with screens of 13 in. and larger, but parents can decide whether or not to use it. Interestingly, in the debate about violent television content and children, it has been argued both that the V-chip removes parental responsibility and that it facilitates parental responsibility (Nihlén Fahlquist and Van de Poel 2012).
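
The blocking mechanism just described can be summarized in a short sketch. The numeric encoding of the ratings and the function names below are hypothetical illustrations, not the actual encoding used in the US rating system.

```python
# Hypothetical sketch of the V-chip logic: broadcasters attach a rating to each
# program, parents set a threshold, and the chip blocks anything rated above it.
# The numeric scale is an assumption for illustration only.

RATING_LEVEL = {"TV-Y": 0, "TV-Y7": 1, "TV-G": 2, "TV-PG": 3, "TV-14": 4, "TV-MA": 5}


def v_chip_allows(program_rating: str, parental_threshold: str,
                  chip_enabled: bool = True) -> bool:
    """Return True if a program may be shown under the parents' settings."""
    if not chip_enabled:  # parents decide whether the chip is used at all
        return True
    return RATING_LEVEL[program_rating] <= RATING_LEVEL[parental_threshold]
```

Even in this toy version, control is visibly shared: the rating committee supplies the program rating, while the parents supply the threshold and decide whether the chip is enabled at all – a division that drives the discussion below.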

So how can this technology be interpreted as affecting parental responsibility both negatively and positively at the same time? The first thing to acknowledge is that in debates about the V-chip, three different senses of responsibility are at stake. First, some arguments about the effects of the V-chip on responsibility refer to the distribution of tasks. It could be argued that the task of parents (i.e., deciding what their children watch on television) is partly taken over by the program makers and rating committees that apply ratings to programs and by the V-chip that blocks programs with certain ratings. In terms of tasks, there is thus a shift from parents to program makers and the rating committee. Second, some arguments refer to responsibility as causal control over the “outcomes” of the technology at hand. In terms of control, it might be argued that program makers and the rating committee gain more control over what children watch on TV. However, the parents are still in control of whether the V-chip is used and what content is blocked; the ultimate control remains with the parents. Rather than shifting control, the V-chip seems to increase the total amount of potential control. As control can be related to the causality condition of responsibility, the V-chip seems to increase the total amount of responsibility rather than to diminish parental responsibility. Third, some arguments refer to parental responsibility as a virtue. Parental responsibility, conceived in this way, includes two relations, i.e., the custodial relation between the parent and the child and the trustee relation between the parent and society (Bayne and Kolers 2008). The V-chip concerns both of these, because the device is intended to protect children against threats to their well-being and development, but also to protect society by preventing children from becoming more violent as a result of consuming extensive media violence. Nihlén Fahlquist and Van de Poel argue that the duties involved in this case are sharable and that the non-sharable, long-term parental responsibility to see to it that the more specific duties are performed is not threatened by the V-chip (Nihlén Fahlquist and Van de Poel 2012).

Podcasting Devices in Rural Farming in Zimbabwe

The last example comes from the “ICT for Development” (ICT4D) movement and concerns the introduction of podcasting devices in the Lower Guruve area in Zimbabwe. Information and communication technologies (ICT) are increasingly used in the field of development aid as a tool for empowerment (Johnstone 2007). In the past, the introduction of technical devices as development aid often failed because the devices were either not suitable for the context of developing countries or were used in a way different from that intended by the development organization or NGO (Oosterlaken et al. 2012).

The devices in this example were developed in the context of the Local Content, Local Voice project, funded by the European Commission. The Lower Guruve area in Zimbabwe is a remote, semiarid area. Most people living there depend on small-scale subsistence farming (livestock production and drought-resistant crop cultivation). The district has a low literacy rate and lacks adequate infrastructure services; there is no electricity, running water, telephone landline, mobile phone network, or FM radio network. For economic reasons, the district does not receive appropriate agricultural training from the governmental livestock officers. In order to empower the local citizens to improve their farming practices, the NGO Practical Action wanted to introduce an ICT-based device that would enable people to share information while keeping the impact of the technology on the power balance in the communities to a minimum. After consultation with the local stakeholders, podcasting devices with content on cattle management were chosen as the most appropriate option. In addition to headphones, the podcasting devices came with loudspeakers in order to enable collective listening while sitting under a tree in the village (Oosterlaken et al. 2012, p. 116).

This case is discussed in the development ethics literature as a successful example of how to strengthen people’s capacity to support themselves without falling into the trap of paternalism. Although the technological design may not, in itself, be innovative, the use of loudspeakers is an interesting addition from a responsibility point of view. Without the loudspeakers, individual farmers can use the podcasting devices to listen and gain knowledge; as such, the devices strengthen the knowledge condition mentioned in section “Individual Responsibility.” However, by listening to the podcasts collectively, which the loudspeakers facilitate, cattle management may become a collective responsibility. One of the local farmers indicated that the podcasting devices fostered “group work and group harmony that did not exist before” (quoted in Oosterlaken et al. 2012, p. 117). This indicates that the technology may shift the focus from individual to collective responsibility. In this particular situation, the shift to the collective level was also considered a positive change (possibly also because the collective listening made the training on cattle management itself more effective). It is of course conceivable that for certain technologies, collective responsibility is not a desirable solution. In a hierarchical setting, for example, devices designed for collective responsibility may not be the most appropriate ones.

Comparison and Critical Evaluation

If we compare the three examples, we see that responsibility plays a central role in each of them. We also notice a tension between different senses of responsibility. In the case of the alcohol interlock, for example, the option of driving drunk is removed if the device is used. The driver’s freedom, in a sense, is limited. If we discuss the driver’s responsibility primarily in terms of backward-looking individual responsibility, the alcohol interlock can – at first sight – be considered to hamper rather than enhance responsibility. However, in terms of forward-looking responsibility, it could also be argued that freedom is expanded if convicted drunk drivers, who would otherwise not be allowed to drive, are allowed to drive with an interlock installed. And in terms of virtue, the alcohol interlock could, although this is far from necessary, possibly enhance responsibility. The alcohol interlock allows the driver to drive more responsibly, or at least prevents her from driving while intoxicated. Virtues are often seen as being developed through habituation, and it is possible that the interlock gradually affects the character and attitudes of some drivers. Although one could argue that the alcohol interlock only prevents wrong behavior and does not enhance virtue (cf. Yeung 2011), voluntarily installing an alcohol interlock in one’s car, or installing alcohol interlocks in school buses, could be seen as a sign that one cares about the effect of one’s actions (or the actions of one’s employees) on other human beings.

The relation between technology and responsibility proved even more complex in the discussion of the V-chip. In terms of task responsibility, it can be argued that the V-chip reduces parental freedom. However, the rating system in general provides parents with information on the basis of which they can form opinions about which programs their children are allowed to watch and which not, and it might thus increase control and responsibility. If interpreted in terms of virtue, parental responsibility for children’s viewing behavior can be considered enhanced by the V-chip. This could be the case since that responsibility arguably entails protecting one’s children against violent content, which potentially affects their well-being and development, and protecting society against children who may become violent partly as a consequence of watching such content; the V-chip facilitates both kinds of protection.

Both the alcohol interlock and the V-chip are devices used in developed countries. The discussion of the paternalistic effects of technologies in a developed-world context is relatively new. It has become urgent in the context of behavior-steering technologies that deprive actors of certain possibilities. In development ethics, the risk of paternalism has been discussed for quite some time, and the list of failed development aid examples is extensive. These lessons seem to have been taken into account in the example of the podcasting devices. This example shows that adding technological options to a device instead of removing options (here, providing loudspeakers in addition to headphones rather than replacing headphones with loudspeakers) may avoid the problem of paternalism. This is also the reason why, from the perspective of avoiding paternalism, the V-chip is less problematic than the alcohol interlock: the V-chip leaves more freedom to the end user.

One important observation that follows from the discussion of all three examples is that responsibility is not a zero-sum game. As the case of the alcohol interlock indicated, adding a collective dimension to responsibility does not necessarily mean that individuals are deprived of all responsibility. Individual and collective responsibility can coexist. This means that by clever technological design, one could increase the total amount of responsibility, just as one could decrease the total amount of responsibility by poor design.

Designing for Responsibility

In this section, we develop some tentative proposals for design for responsibility. As indicated in the introduction, there are currently no approaches to design for responsibility. On the basis of the explication of responsibility (section “Explication of Responsibility”) and the examples (section “Three Examples”), we make some proposals about what a design-for-responsibility approach might look like.

Design for Individual Responsibility

Individual Backward-Looking Responsibility

As we have seen in the examples in section “Three Examples,” design can affect the conditions that have to be met before someone can be held responsible. The challenge when designing for the value of responsibility is to assess which of the conditions needs to be “improved” by the technological design, especially if a particular condition conflicts with other conditions or with other values. If we look at the freedom condition, for example, a technological design that offers a person more options for action can be said to increase that person’s freedom (and, as such, strengthen the person’s responsibility by way of the second condition mentioned in section “Individual Responsibility”). At the same time, the increased possibilities may also provide new possibilities for using the technological artifact in a wrongful way. Conversely, taking away possibilities for immoral behavior, as in the example of the alcohol interlock, may further responsible behavior and forward-looking responsibility as a virtue but diminish backward-looking responsibility. As such, the normative challenge of improving the responsibility conditions is not a trivial one.

Van den Hoven (1998) has argued that designers have what he calls a meta-task responsibility to see to it that the technologies they design allow users and operators to fulfill their responsibilities. In terms of the responsibility conditions, this means that designers have to see to it that all the relevant responsibility conditions for users and operators are fulfilled. For example, when an operator of a chemical plant is responsible for shutting down the plant under certain conditions that cause a safety hazard, the system should be designed so that the operator receives the relevant information in time, in an understandable way, and in a way that can easily be distinguished from less relevant information. This specific design requirement follows from the knowledge condition of responsibility.
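
As a concrete illustration of this requirement, consider the following sketch of an operator display that orders alarms by safety relevance. The severity levels, class, and function names are hypothetical; the point is only that meeting the knowledge condition is partly a matter of design.

```python
# Minimal sketch (hypothetical names and levels): a control-room display that
# supports the knowledge condition by showing safety-critical alarms first,
# so the operator can actually discharge her responsibility to shut down the
# plant in time.

from dataclasses import dataclass

SEVERITY = {"info": 0, "warning": 1, "safety-critical": 2}


@dataclass
class Alarm:
    message: str
    level: str  # "info", "warning", or "safety-critical"


def display_order(alarms: list[Alarm]) -> list[Alarm]:
    """Order alarms so the most responsibility-relevant information comes first."""
    return sorted(alarms, key=lambda a: SEVERITY[a.level], reverse=True)
```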

In terms of the five responsibility conditions discussed in section “Explication of Responsibility,” we could think of the following tentative design heuristics for design for individual backward-looking responsibility:

  H1. Moral agency: Design should not diminish the moral agency of users, operators, and other stakeholders. From this viewpoint, a moral pill that made people behave as moral automatons, without the ability to reason about what is morally desirable, would be undesirable.

  H2. Voluntariness or freedom: Designs should respect or improve the voluntariness of actions, e.g., by increasing the options for action open to users, operators, and other stakeholders.

  H3. Knowledge: Designs should provide the knowledge required for responsibility, in the right form.

  H4. Causality: Designs should increase users’ control over the outcomes of the actions they perform with the design.

  H5. Transgression of a norm: Designs should make people aware of relevant moral norms and of potential transgressions of them. This can, for example, be done through feedback on users’ actions when they use a design; think of the warning signal in a car when the safety belt is not used.

These heuristics are only tentative and may be overridden in certain circumstances, especially because they may conflict with each other or with other design heuristics (derived from other relevant normative demands), such as those relating to design for forward-looking individual responsibility or design for the distribution of responsibility listed below.

Individual Forward-Looking Responsibility as a Virtue

Responsibility as a virtue is least problematic if a technological artifact offers a user more options for action, which may in turn facilitate a way out of moral dilemmas (cf. Van den Hoven et al. 2012). Washing machines that can be run in eco-mode offer the user more freedom (in the sense of more options for action), while at the same time offering the possibility of running the household more “responsibly.” In the same way, we have seen that the V-chip may enhance responsibility as a virtue. Designing for responsibility as a virtue may become problematic when a particular technology steers our behavior in a particular direction or when the behavior encouraged by a particular technology conflicts with our ideas about behaving responsibly. In a driving situation, we probably tend to think of behaving responsibly in terms of avoiding harm (non-maleficence). In medical practice (e.g., in a psychiatric setting), avoiding harm is known to be potentially at odds with our idea of freedom in the sense of being free from constraints. However, if we conceive of freedom as the possibility to do something (i.e., freedom to do things), treatment against one’s will can also be considered to enhance a person’s freedom: after treatment, even against the patient’s will, the patient may have the capacity to do certain things she was not able to do without the treatment, because her physical condition has improved thanks to, for example, medication. Similarly, one could argue that preventing a person from driving while she is, for example, drunk may in fact not diminish responsibility if responsibility is understood analogously, that is, as a capacity to behave responsibly.

We would propose the following tentative design heuristics for design for individual forward-looking responsibility:

  H6. Behavior: Design should encourage morally desirable behavior in users. It should, however, do so in a way that respects design heuristics H1, H7, and H8.

  H7. Capacity: Design should encourage the capacity of users, operators, and other stakeholders to assume responsibility as a virtue, i.e., their ability to reflect on their actions and to behave responsibly.

  H8. Virtue: Design should foster virtues in users, operators, and other stakeholders. Since virtues are acquired character traits, design can in principle help to foster them.

Design for the Distribution of Responsibility

A main issue here is what the right balance is between individual and collective responsibility. The desirable balance may well depend on the case and the circumstances, and cultural differences between countries are probably also relevant here, as the podcasting case testifies. Still, there might be some criteria for judging the balance. One, again, is effectiveness: how effective is the balance struck between individual and collective responsibility in avoiding harm and doing good? Another criterion is moral fairness: is the balance morally fair? Some people, for example, may consider it morally inappropriate to hold the collective responsible for negative consequences caused by individuals. Conversely, it may sometimes seem morally inappropriate to single out individuals for blame rather than the collective.

In relation to distributions of responsibility, a number of criteria might be employed. As a kind of minimal condition, we might want to require that what we have called the problem of many hands does not occur. This might be understood as requiring that, for each relevant issue, at least someone is responsible. In addition to such a completeness requirement, we want a distribution of responsibility to be fair. Fairness relates to the question of whether the distribution of responsibility reflects people’s intuitions about when it is justified to ascribe a certain responsibility. It is unlikely that a purely consequentialist approach is psychologically feasible: the motivational force of responsibility ascriptions that are inconsistent with basic intuitions of fairness will be undermined (Kutz 2000, p. 129). These basic intuitions of fairness may also differ between people (Doorn 2012b). Finally, we often want a responsibility distribution not only to be complete and fair but also to be effective in achieving some desirable end, such as avoiding harm (Doorn 2012a). Conceivably, some ways of distributing responsibility are more likely than others to avoid harm and to foster good. Completeness seems a minimal or necessary condition for effectiveness, but it is certainly not sufficient: even if someone is responsible for each issue, the resulting responsibility distribution is not necessarily the most effective. More generally, the criteria of completeness, fairness, and effectiveness may conflict, in the sense that they single out different responsibility distributions as best. Technology may play a role in distributing responsibility in a particular way. Since a technological artifact may give the user control in varying degrees, the technology may lead to deliberate distributions of responsibility between different users, between intermediate and end users, or between producers and users. The normative challenge is to single out the relevant distribution criteria and, if there are several, to prioritize or strike a balance between them.

On the basis of the above considerations, we suggest the following design heuristics for design for the distribution of responsibility:

  H9. Completeness: The design should distribute responsibility in such a way that for each relevant issue at least one individual is responsible.

  H10. Fairness: The design should distribute responsibilities over individuals in a fair way.

  H11. Effectiveness: The design should distribute responsibility in such a way that harm is minimized and good is achieved as much as possible.

  H12. Cultural appropriateness: Design should strike the balance between individual and collective responsibility in a way that is culturally appropriate.

Open Issues and Future Work

As indicated above, there is currently no methodology available for systematically designing for the value of responsibility. What we have done in this chapter is to explicate the different aspects of responsibility and to identify possible challenges. The remaining challenges and open issues are of a descriptive, normative, and engineering nature.

Descriptively, the main challenge is to describe how particular designs affect responsibility. Methods are needed for describing how design affects (1) individual responsibility (backward-looking as well as forward-looking) and (2) distributions of responsibility. With respect to individual responsibility, there is relevant work and methodology in, for example, cognitive ergonomics (relevant for the knowledge condition) and in persuasive technology (relevant for the freedom and norm conditions). With respect to distributions of responsibility, the point has often been made that design affects these, but most analyses are based on retrospective case studies, and there exists no methodology, as far as we know, to predict such effects prospectively.

Normatively, we have identified various heuristics which need to be further developed and specified. Also on basis of the examples we gave, we see the following normative challenges here:
  • How to deal with the normative tensions that follow from the different aspects of responsibility. One pertinent question is how responsible behavior (forward-looking responsibility as a virtue) can be encouraged without falling into the trap of paternalism (which threatens the freedom condition for backward-looking responsibility). Another issue is how responsibility can best be distributed between individuals and collectives.

  • How such normative challenges are best resolved most likely depends partly on context. Different resolutions may be desirable for different technologies or in different countries or cultures. So a normative framework or approach is required that can do justice to relevant contextual differences.

The engineering challenge is to translate the relevant descriptive and normative insights into design methodologies and engineering solutions. While finding good engineering solutions is probably a task for the engineers who partake in specific design projects, one that will require a good deal of creativity, the development of a design methodology for design for responsibility is a more general task. We have formulated some tentative design heuristics that make a beginning with this task; in our view, however, the development of a sound design methodology would first require the resolution of some of the abovementioned descriptive and normative challenges.

Conclusion

In this chapter, we have analyzed how one could design for the value of responsibility. There is, however, currently no methodology available for systematically designing for the value of responsibility. Based on an explication of different notions of responsibility and the ways in which the term is used, we identified two main ways in which design can affect responsibility, i.e., (1) individual responsibility (backward-looking as well as forward-looking) and (2) the distribution of responsibility. We further elaborated three cases and, on the basis of these, developed a number of design heuristics for design for responsibility. We also identified a number of challenges for design for responsibility. These challenges are both empirical (How do design choices affect responsibility?) and normative (What is a desirable way of affecting responsibility?). It was shown that the different heuristics for design for responsibility may conflict in some situations, especially when a technological artifact limits the user’s freedom. Further research is needed to develop a methodology for designing for responsibility. This may be complemented with work on behavior-steering technologies and insights from cognitive ergonomics.

Footnotes

  1. Even in this case, agencies may be seen as responsible in the sense that they inform the public about the risks involved in drunk driving, etc., but the general idea is that the individual driver is the main responsible actor.

References

  1. Bayne T, Kolers A (2008) Parenthood and procreation. In: Zalta EN (ed) Stanford encyclopedia of philosophy (fall 2008 edn). http://plato.stanford.edu/archives/fall2008/entries/parenthood/
  2. Bovens M (1998) The quest for responsibility. Accountability and citizenship in complex organisations. Cambridge University Press, Cambridge
  3. Corlett JA (2006) Responsibility and punishment. Springer, Dordrecht
  4. Davis M (2012) “Ain’t no one here but us social forces”: constructing the professional responsibility of engineers. Sci Eng Ethics 18(1):13–34
  5. Doorn N (2012a) Responsibility ascriptions in technology development and engineering: three perspectives. Sci Eng Ethics 18(1):1–11
  6. Doorn N (2012b) Exploring responsibility rationales in research and development (R&D). Sci Technol Hum Values 37(3):180–209
  7. Doorn N, Van de Poel IR (2012) Editors’ overview: moral responsibility in technology and engineering. Sci Eng Ethics 18(1):69–90
  8. Ekelund M (1999) Varning – livet kan leda till döden. En kritik av nollvisioner [Warning – life may lead to death. A critique of zero visions]. Timbro, Stockholm
  9. European Transport Safety Council (ETSC) Newsletter, 14 July 2011. http://www.etsc.eu/documents/Drink_Driving_Monitor_July_2011.pdf. Accessed 16 Jan 2012
  10. Feinberg J (1970) Doing and deserving. Essays in the theory of responsibility. Princeton University Press, Princeton
  11. Fischer JM, Ravizza M (1993) Introduction. In: Fischer JM, Ravizza M (eds) Perspectives on moral responsibility. Cornell University Press, Ithaca, pp 1–41
  12. Fischer JM, Ravizza M (1998) Responsibility and control. A theory of moral responsibility. Cambridge University Press, Cambridge
  13. Fogg BJ (2003) Persuasive technology: using computers to change what we think and do. Morgan Kaufmann, Amsterdam/Boston
  14. French PA (1984) Collective and corporate responsibility. Columbia University Press, New York
  15. French PA (1991) The corporation as a moral person. In: May L, Hoffman S (eds) Collective responsibility: five decades of debate in theoretical and applied ethics. Rowman & Littlefield, Savage
  16. Grill K, Nihlén Fahlquist J (2012) Responsibility, paternalism and alcohol interlocks. Publ Health Ethics 5(2):116–127
  17. Harris CE, Pritchard MS, Rabins MJ (2005/1995) Engineering ethics: concepts and cases, 3rd edn. Wadsworth, Belmont
  18. Hart HLA (1968) Punishment and responsibility. Essays in the philosophy of law. Clarendon, Oxford
  19. Hart HLA, Honoré T (1985) Causation in the law. Clarendon, Oxford
  20. Johnson BL (2003) Ethical obligations in a tragedy of the commons. Environ Values 12(3):271–287
  21. Johnstone J (2007) Technology as empowerment: a capability approach to computer ethics. Eth Inf Technol 9(1):73–87
  22. Kutz C (2000) Complicity: ethics and law for a collective age. Cambridge University Press, New York
  23. Latour B (1992) Where are the missing masses? In: Bijker W, Law J (eds) Shaping technology/building society: studies in sociotechnical change. MIT Press, Cambridge, MA, pp 225–258
  24. Lewis HD (1991) Collective responsibility. In: May L, Hoffman S (eds) Collective responsibility: five decades of debate in theoretical and applied ethics. Rowman & Littlefield, Savage
  25. May L (1992) Sharing responsibility. University of Chicago Press, Chicago
  26. May L, Hoffman S (1991a) Introduction. In: May L, Hoffman S (eds) Collective responsibility: five decades of debate in theoretical and applied ethics. Rowman & Littlefield, Savage
  27. May L, Hoffman S (eds) (1991b) Collective responsibility: five decades of debate in theoretical and applied ethics. Rowman & Littlefield, Savage
  28. Miller D (2004) Holding nations responsible. Ethics 114:240–268
  29. Nihlén Fahlquist J (2006) Responsibility ascriptions and vision zero. Accid Anal Prev 38(6):1113–1118
  30. Nihlén Fahlquist J (2010) The problem of many hands and responsibility as the virtue of care. In: Managing in critical times – philosophical responses to organisational turbulence, proceedings. St Anne’s College, Oxford, 23–26 July 2009
  31. Nihlén Fahlquist J (2013) Responsibility and privacy – ethical aspects of using GPS to track children. Child Soc (in press)
  32. Nihlén Fahlquist J, Van de Poel IR (2012) Technology and parental responsibility – the case of the V-chip. Sci Eng Ethics 18(2):285–300
  33. Nissenbaum H (1994) Computing and accountability. Commun ACM 37(1):73–80
  34. Nissenbaum H (1996) Accountability in a computerized society. Sci Eng Ethics 2(1):25–42
  35. Oosterlaken ET, Grimshaw D, Janssen P (2012) Marrying the capability approach with appropriate technology and STS – the case of podcasting devices in Zimbabwe. In: Oosterlaken ET, Van den Hoven MJ (eds) The capability approach, technology and design. Springer, Dordrecht
  36. Pettit P (2007) Responsibility incorporated. Ethics 117:171–201
  37. Sinnott-Armstrong W, Howarth RB (eds) (2005) Perspectives on climate change: science, economics, politics, ethics. Elsevier, Amsterdam
  38. Spahn A (2012) And lead us (not) into persuasion…? Persuasive technology and the ethics of communication. Sci Eng Ethics 18(4):633–650
  39. Strawson PF (1974) Freedom and resentment. In: Strawson PF (ed) Freedom and resentment and other essays. Methuen, London, pp 1–25
  40. Swanton C (2005) Virtue ethics. A pluralistic view. Oxford University Press, Oxford
  41. Swedish Government (1997) Nollvisionen och det trafiksäkra samhället [Vision zero and the traffic-safe society]. Regeringsproposition 1996/97:137
  42. Thompson DF (1980) Moral responsibility and public officials. Am Polit Sci Rev 74:905–916
  43. Van de Poel IR (2011) The relation between forward-looking and backward-looking responsibility. In: Vincent N, Van de Poel I, Van den Hoven J (eds) Moral responsibility. Beyond free will and determinism. Springer, Dordrecht, pp 37–52
  44. Van de Poel IR, Nihlén Fahlquist J, Doorn N, Zwart SD, Royakkers LMM (2012) The problem of many hands: climate change as an example. Sci Eng Ethics 18(1):49–67
  45. Van den Hoven MJ (1998) Moral responsibility, public office and information technology. In: Snellen ITM, Van de Donk WBHJ (eds) Public administration in an information age. A handbook. IOS Press, Amsterdam, pp 97–111
  46. Van den Hoven MJ, Lokhorst GJ, Van de Poel IR (2012) Engineering and the problem of moral overload. Sci Eng Ethics 18(1):143–155
  47. Van Hooft S (2006) Understanding virtue ethics. Acumen, Chesham
  48. Verbeek P-P (2011) Moralizing technology: understanding and designing the morality of things. University of Chicago Press, Chicago/London
  49. Wetmore JM (2004) Redefining risks and redistributing responsibilities: building networks to increase automobile safety. Sci Technol Hum Values 29(3):377–405
  50. Williams G (2008) Responsibility as a virtue. Ethical Theory Moral Pract 11(4):455–470
  51. Winner L (1980) Do artifacts have politics? Daedalus 109:121–136
  52. Yeung K (2011) Can we employ design-based regulation while avoiding brave new world? Law Innov Technol 3(1):1–29

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  • Jessica Nihlén Fahlquist
  • Neelke Doorn
  • Ibo van de Poel

TU Delft / 3TU Centre for Ethics and Technology, Delft, The Netherlands
