
Mediation in Design for Values

  • A. Spahn
Living reference work entry

Abstract

Mediation is the claim that technologies have an impact on the way in which we perceive the world and on the way we act in it. Often this impact goes beyond human intentions: it can hardly be understood only in terms of “intentions of the user” or “intentions of the designer.” The mediation approach argues that technologies have “agency” themselves and tries to explicate the way in which technological objects and human subjects form a complex relation and constitute each other. Designers should anticipate mediation effects and can use mediation to moralize technologies. However, questions can be raised about how far the moralizing of technologies is compatible with user autonomy.

Keywords

Autonomy · Mediation · Nudges · Persuasive technologies · Phenomenology · Verbeek

Introduction

Technological artifacts have, without a doubt, a large influence on our lives and on social reality. Still, technologies are sometimes regarded as mere neutral tools for reaching human goals. A knife can be used to cut cheese or to severely hurt or kill someone. It is the user (or so it seems) who is in full control of the action he intends to perform. Therefore only users can shoulder responsibility: we do not put knives into prison, but only people. However, the real picture of the relation between technology and users is more complex than the idea of neutral tools suggests. The framework of mediation suggests that technologies play a much more active role in influencing the context in which they fulfill their function. Mediation is the claim that technologies have an impact on the way in which we perceive the world and on the way we act in it. Often this impact goes beyond human intentions: it can hardly be understood only in terms of “intentions of the user” or in terms of “intentions of the designer.” Mediation theorists argue that technologies have “agency” themselves and try to explicate the way in which technological objects and human subjects form a complex relation, in which they constitute each other.

This chapter introduces the idea of mediation as a philosophical attempt to account for the role that technologies play in shaping human perceptions and actions. Mediation will be placed in the context of its philosophical (postphenomenological) tradition and related to similar attempts to comprehend the active influence of technologies on human behavior. Finally, the consequences for ethical technology design will be discussed and suggestions for further research will be made.

Mediation in Design for Values

Philosophical Approaches to Mediation

The notion of mediation is meant to overcome a too simplistic understanding of the influence technologies have on humans. Accepting that technologies mediate our perception of the world and our action in it means going beyond the idea of merely neutral technology. Within philosophy of technology, there is an extensive debate about the moral status of technology and whether or not technology is morally neutral (cf. Radder 2009; Kroes and Verbeek 2014). The notion of mediation has been established in philosophy of technology to overcome the neutrality thesis and take a clear stance for the non-neutrality of technology. The idea of mediation was originally developed within the philosophical schools of postphenomenology and STS in order to analyze the role that technologies play in shaping human behavior and human perception.

The main argument from this perspective is that philosophers of science and technology have long neglected the ethical role of technology and interpreted it mainly as a functional, neutral tool for reaching any given aim of human actors. In this sense technology would be morally neutral, as it is only humans who act and decide which aims to choose and which means to employ. “Guns don’t kill people, people kill people” is a popular version of the neutral-technology thesis. If one starts from the assumption of the moral neutrality of technology, ethics and the ethics of design can largely be dealt with using traditional ethical approaches: moral philosophers can focus mainly on human agents, ignoring to a large extent the contribution of technologies to moral and immoral human behavior.

The framework of mediation challenges this view by rejecting one key assumption that underlies the neutrality thesis of technology. This assumption is often linked to the Cartesian dualism between res cogitans and res extensa, or, in the field of ethics of technology, between human agents and technological objects. According to this dualism, a clear distinction can be drawn between “humans” (subjects) and “technological artifacts” (objects) in a Cartesian sense: only humans have agency and intentionality, whereas objects are passive and lack both. Furthermore, according to the dualistic viewpoint, humans and technology can each be defined independently of the other, and both have separate essential features. Scholars in the tradition of postphenomenology and STS have strongly criticized this Cartesian dualism and suggested “mediation” as a more appropriate way to analyze the relation between technology and human behavior. In this section the most influential philosophical contributions in this field will be briefly sketched, before the next section links the debate about “mediation” to related theoretical frameworks that account for the non-neutrality of technology and its impact on human behavior, actions, and attitudes.

STS

Within STS, various attempts have been made to account for the active role of technology in human decision making. Winner has famously argued that artifacts can have politics and are thus not mere neutral tools. According to him, artifacts can have politics in two different ways. On the one hand, they can “settle an issue” of political relevance. Winner argues, e.g., that Robert Moses deliberately built the overpasses on Long Island’s parkways in New York so low that buses could not pass under them, thus making it difficult for poor, predominantly black people to access the recreational area (Winner 1980).1 On the other hand, artifacts might be highly compatible with, or even require, certain power structures to operate properly: nuclear energy requires a top-down hierarchical organization, whereas solar panels lend themselves to more democratic, decentralized bottom-up structures (Winner 1980). Technologies are therefore not simply neutral means to realize a given end, but carry a political charge with them.

Actor-Network Theory

To overcome the vision of technologies as neutral, passive tools, actor-network theory (ANT) goes a step further and explicitly introduces the idea of agency of non-human actors (Latour 1979; Law 1999; Latour 2005). ANT proposes the notion of “actant” to cover both human and non-human actors. This is done to illustrate that agency is often distributed over various elements of a network, including the “agency” of non-humans. To avoid presupposed ideas about the items that constitute a network, all elements should be described in the same terms. ANT therefore defends a general principle of symmetry in the network in order to overcome the Cartesian dualism between humans and non-humans. As a constructivist theory, ANT thus tries to avoid all essentialism and therefore regards non-human elements of a network as “acting” as well. The key idea is that humans do not act alone as independent entities, but in a broader context, in which other elements of the network play a key role in determining the outcome of any action. A bulky hotel key fob can be regarded as delegating the hotel owner’s wish that guests not forget to return their keys into the design of the artifact. In this way, the action of returning the hotel key is the result of both the actions of the hotel guest and the contribution of the bulky fob, which serves as a physical “reminder” precisely because it is inconveniently large.

Madeleine Akrich has introduced the notion of a “script” to account for the impact that technologies have on human agency (Akrich 1992; Akrich and Latour 1992). Like the script of a theater play, technologies pre-scribe to some extent the actions that should be performed with the artifact in question. The speed bump has the script “approach me slowly” built in; the bulky hotel key incorporates the script “I will annoy you with my heavy and unhandy fob, so please return me at the reception.” Akrich analyses the extent to which scripts are in-scribed into technologies by designers. In the design context, a script is an anticipation of the envisaged user of a given artifact and contains assumptions about the context of usage and the series of actions that the user is likely to perform. Assumptions about, e.g., the knowledge of the user, the level of familiarity with the technology, and the division of work between technology and user can also be “in-scribed” into technology. According to Latour, designers “delegate” responsibilities to the artifact: a speed bump is supposed to make sure that car drivers slow down (Latour 1992).

But next to the inscription of a script by the designer, artifacts can also be (re-)used by users in very creative ways that have not been foreseen by the designer or implementer of a given technology (de-scription). In the context of her analysis of the script, Akrich also emphasizes the influence scripts have on the division of power and on power relations, especially in developing countries (Akrich 1992).

Postphenomenology

The most comprehensive analysis of mediation can be found in the works of Verbeek (cf., e.g., Verbeek 2000, 2005, 2006a), who – following the work of Martin Heidegger, Don Ihde, Bruno Latour, and others – develops a systematic approach in What Things Do (2005). Whereas much previous work focused on a theoretical understanding of the mediating role of technology, Verbeek explicitly analyses the normative dimension that, according to him, has been neglected in prior work (Verbeek 2006a). He continues the tradition of phenomenology, which he broadly defines as the “philosophical analysis of human world relationships” (Verbeek 2006b). In line with the above-mentioned criticism of Cartesian dualism he states:

Humans and reality are always interrelated, phenomenology holds. Human beings cannot but be directed at the world around them; they are always experiencing it and it is the only place where they can realize their existence. Conversely, their world can only be what it is when humans deal with it and interpret it. In their interrelation, both the subjectivity of humans and the objectivity of their world take shape. What human beings are and what their world is, is co-determined by the relations and interactions they have with each other. (Verbeek 2006a)

The sharp subject-object distinction that underlies the technologies-as-neutral-tools thesis is thus rejected by the phenomenological tradition as interpreted by Verbeek. According to Verbeek, mediation urges us to “rethink both the status of the object and the subject in ethical theory” (Verbeek 2008a, p. 12). At the same time, however, both Ihde and Verbeek want to overcome the uneasiness about and critical stance toward modern technology that, from Heidegger on, played an important role in the continental tradition. Furthermore, whereas classical phenomenology aimed to grasp “the things themselves,” postphenomenology no longer accepts given relations “between pre-existing subjects who perceive and act upon a world of objects” (Verbeek 2008a, p. 13). Postphenomenology rather aims at investigating the constitution of subjectivity and objectivity in the relation between humans and reality. This relation is not stable or given, but in fact mediated by technology (Ihde 1993; Ihde 2009; Verbeek 2008a). In this line, Verbeek argues that “[t]echnological artifacts mediate how human beings are present in their world, and how their world is present to them.” Accordingly, the way humans act in the world and the way in which the world is represented and constituted in human perception are mediated by technology. What does this mean in detail?

Mediation of Perception

Don Ihde analyses the embodiment relation between humans and technologies. Technologies can help humans perceive the world without being perceived themselves. When looking through a pair of glasses, the world becomes visible, while the artifact (the pair of glasses) is not itself perceived. In this example, a technological artifact becomes, as it were, an extension of the human body.

However, technologies can also represent reality in a way that requires interpretation – which Ihde identifies as the hermeneutic relation. A thermometer represents a part of reality: the temperature. But unlike a direct sensory experience of cold or heat, this representation needs to be interpreted. When a technological artifact provides a representation of reality, there is thus almost always a selection of which parts of reality are represented and which are not. Furthermore, the designer of the artifact has to choose how to represent different aspects of reality. In this way, there is what Ihde calls a structure of “amplification” and “reduction” at play, which transforms our perception of reality. The mere fact that only certain aspects of reality are represented amplifies their significance in the interaction with the technology, while all other possible aspects are reduced. This transformation can go so far that technologies help shape what counts as real (Verbeek 2006a, p. 366). Verbeek analyses the example of obstetric ultrasound to point at the various elements of the technological mediation of perception in the case of pregnancy. Ultrasonic images shape the way the unborn child is given in human experience. For example, they isolate the fetus from the female body and thus represent it as “independent,” rather than as “united with the mother.” Furthermore, they place the image of the fetus in the context of medical norms, thus emphasizing “choices” and redefining pregnancy as a medical phenomenon. In this way, Verbeek claims, this mediation of perception creates a new ontological status of the fetus that is of moral significance, as it influences human decisions (Verbeek 2008a).

Another striking, and ethically highly relevant, example of the technological mediation of perception is the representation of the battlefield or of terror suspects on the control display of remotely operated military drones. There is an ongoing debate about whether the representation of remote surveillance contributes to a dehumanization of warfare, due to an alleged video-game-like experience, or whether, on the contrary, it leads to more compassion, as these observations often last very long and the suspect is seen doing daily activities such as playing with his children and interacting with family and friends (Singer 2009; Wall and Monahan 2011; Royakkers and van Est 2010; Gregory 2011; Sharkey 2010).

These two examples make clear that the mediation of perception plays an important role in human decision making, as the way reality is (re)presented influences moral decisions. This also raises issues for the design of technological artifacts, since the way technologies represent reality is often a design choice, as, e.g., in the case of medical imaging systems that support doctors in making decisions about health issues (Kraemer et al. 2011). Designers should be aware of this mediating role with regard to the human perception of reality, and they can actively use insights from the mediation of perception in designing technological artifacts. In the example of the remotely operated military drone, they need to reflect on whether the interface can and should be designed such that it reduces the stress level of the operator, or whether, on the contrary, the design of the human-technology interface should avoid the dehumanizing effects of remote warfare. The application of insights from the mediation of perception in other fields of technology might be less controversial: designers can use mediation of perception to highlight morally important aspects of, e.g., consumer choices, by creating smartphone apps that give visual feedback on the ecological footprints of products, their nutritional values, and other morally significant features.
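
To make this design option concrete, here is a minimal sketch (in Python) of how such an app could translate a product’s carbon footprint into a traffic-light signal. The product data, thresholds, and names are hypothetical illustrations, not taken from the chapter or from any established standard:

```python
# Hypothetical sketch: a footprint "traffic light" for a shopping app.
# All data and threshold values below are illustrative assumptions.

FOOTPRINTS_KG_CO2 = {
    "beef burger": 3.0,
    "lentil soup": 0.3,
    "cheese sandwich": 1.2,
}

def footprint_signal(product: str) -> str:
    """Map a product's footprint to an immediately perceivable color.

    Selecting CO2 (and nothing else) amplifies one aspect of the product
    while reducing all others, mirroring Ihde's structure of
    amplification and reduction."""
    kg_co2 = FOOTPRINTS_KG_CO2[product]
    if kg_co2 < 0.5:
        return "green"
    if kg_co2 < 1.5:
        return "yellow"
    return "red"

for product in FOOTPRINTS_KG_CO2:
    print(product, "->", footprint_signal(product))
```

The ethically significant design choice in such a sketch is precisely the selection and the thresholds: they determine which part of reality is amplified in the user’s perception.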

Mediation of Action

Next to the mediation of perception, technology also mediates the actions of humans. This insight builds on the ideas of “scripts” and “agency” of artifacts discussed above (see section “Actor-Network Theory”). The actions of humans are determined not only by their intentions but also by their environment. The speed bump is again a classical example: the human action (of driving) is mediated such that it becomes an action of slow driving, thereby increasing safety. The action of speeding is made (almost) impossible. In a similar way, technologies invite certain actions, while at the same time making other actions more difficult. A paper cup suggests that it should be thrown away after usage, whereas a porcelain cup is designed to be used for a longer time. The design choices that shape these technologies thus affect human actions.

It is important to see, however, that only part of the mediation of action is actually intended by the designer. In many cases, the mediation of action is in fact an unintended consequence of technology design. Mobile telephones and email have, e.g., changed the culture of interaction and communication; microwave ovens may have promoted the habit of regularly eating instant meals alone; revolving doors were designed to keep out the cold, but have the unintended side effect of making buildings inaccessible to people with a wheelchair or a walking stick. An important insight for design is thus that technologies are “multi-stable” (Ihde 2008) and can have various unforeseen usages, including at times the complete opposite of the originally intended usage (Tenner 1997).

One example of a mediation effect that runs counter to the original intention of the design is the “rebound effect” that is often discussed in the context of design for sustainability (Herring and Sorrell 2009): energy-saving light bulbs, e.g., were designed to save energy, but it is often claimed that, due to their low energy consumption, they led people to add lighting to previously unlit areas of their homes and gardens, leading in fact to an increase in energy usage.

This makes the task of the designer more complex, as she needs to be aware not only of the anticipated usage of the technology but also of possible unintended consequences. She needs to avoid falling into the “designer’s fallacy” (Ihde 2008) of assuming that the intentions of the designer are enough to anticipate the usage of a new technology and that these intentions alone should guide the design process. In a similar vein, Verbeek has urged designers to use their creativity to perform a mediation analysis to account for the otherwise unforeseen mediation effects of new technologies (Verbeek 2006a).

Related Frameworks for Mediation: Persuasive Technologies and Nudges

The insight that technologies have a fundamental impact on human attitudes and behavior has also been discussed recently in various disciplines, from psychology, human-technology interaction, and design studies to sociology, economics, and philosophy of technology, without necessarily using the mediation framework to account for it. The debates on “persuasive technologies” and on “nudging” are two examples that can easily be related to the phenomenon of mediation in Design for Values, despite the difference in terminology (Verbeek 2006a, 2009b). Let us thus look at the notions of “nudging” (with the help of technology) and “persuasive technologies” in turn.

Nudges

Thaler and Sunstein have introduced the term “nudge” into the debate about the possibilities of changing human behavior via the design of the environment in which choices take place (Thaler and Sunstein 2008). They start by criticizing the idea of homo economicus: real humans should not be viewed as making rational, well-considered choices; in fact they often lack the capacity to make choices that would be in their best interest. Following the psychological tradition of dual-process theory, they distinguish two processes of decision making. The rational, slow process is good for analyzing a problem in depth and carefully weighing arguments. Often, however, we in fact rely on a quick and intuitive mode of making choices, which is guided by psychological heuristics and biases. Thaler and Sunstein argue that many of these biases often lead us away from choices that we would benefit from, thus resulting in many suboptimal outcomes.

The claim that real humans are often poor decision makers leaves in principle two strategies open: one can try to improve the abilities of humans, or change the environment in which they make choices. The first strategy would try to educate people so that they become better capable of making “good decisions,” e.g., by training their decision-making skills. The second strategy accepts that humans often tend to make bad choices and therefore aims to adapt the environment in which humans make choices such that it “nudges” them toward better ones. Thaler and Sunstein advocate the latter option. We might, e.g., know in general that it is good to eat healthily, but this abstract knowledge alone is often not enough to motivate us in concrete situations to make healthy eating choices. This is where nudges come in: designers can structure choices such that humans decide in their best interest after all. It might turn out, e.g., that people eat healthier if salad and fruit are placed more prominently in a canteen, e.g., at the beginning of the counter rather than at the end.
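
To illustrate the second strategy in a digital setting, the following sketch (in Python; the menu items and health scores are hypothetical, and the chapter itself discusses physical canteens rather than software) rearranges the environment so that healthier options appear first, while every option remains available:

```python
# Hypothetical sketch of a digital "canteen" nudge: healthier items are
# displayed more prominently (listed first), but nothing is forbidden.

menu = [
    {"name": "chocolate cake", "health_score": 2},
    {"name": "salad", "health_score": 9},
    {"name": "fries", "health_score": 3},
    {"name": "fruit bowl", "health_score": 8},
]

# The nudge is purely a matter of ordering: the full choice set is
# preserved, only the prominence of the options changes.
nudged_menu = sorted(menu, key=lambda item: item["health_score"], reverse=True)

for position, item in enumerate(nudged_menu, start=1):
    print(position, item["name"])
```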

Thaler and Sunstein thus advocate what they call choice architecture: designers should create environments (including technologies) that help humans make better choices as judged by themselves. These “nudges” should still leave people free to decide (you can still ignore the salad and go for the unhealthy chocolate), but they should make the “better” choice more prominent. One can see that the idea of “nudging” has many consequences for technology design. At the same time, it raises worries about paternalism (see section “Moralizing Technology and Mediation in Design for Values: Research Questions from the Perspective of Ethics of Technology”).

Persuasive Technology

The term “nudge” thus refers to intentional attempts at structuring human choices such that they lead to better outcomes (as judged by the individuals themselves). The term is very broad; a nudge can be a tax incentive, the setting of a default option on a paper form, the choice between opt-out and opt-in strategies, the pre-structuring of complex choices, etc. One way of nudging people is to use “persuasive” strategies and embed them in technologies. With the emergence of ubiquitous ICT, technology can actively take over the role of persuading people to change their behavior: persuasive technologies are technologies that are intentionally designed to change human behavior and/or attitudes (Fogg 2003; IJsselsteijn 2006). Examples include blinking warning lights in cars that remind the driver to put on the seat belt, e-coaching apps that help users lose weight, and software that helps prevent repetitive strain injury (RSI).

From the perspective of mediation, one can argue that persuasive technologies exploit the mediating role of technologies, although mediation is the broader term: mediation also captures unintended influences on the actions and perceptions of the user, whereas in the case of persuasive technologies the behavior and/or attitude change is an intended effect of the designer or implementer of the persuasive technology in question. Furthermore, “persuasion” suggests that the change in behavior or attitude is voluntary (Smids 2012) and that persuasive technology goes beyond merely providing information or arguments for behavior change. Persuasion can thus be placed on a continuum between mere informing or “convincing through arguments” on the one side and “manipulation” or “coercion” on the other (Spahn 2012).

One can therefore argue that persuasive technologies are a subclass of mediation, in which (i) the behavior and/or attitude change of the user is intended by the designer; (ii) the means of behavior change is persuasion, which implies establishing (iii) a minimal type of communication or feedback, which is (iv) often both evaluative and (v) motivating, and which finally allows for (vi) a voluntary change of behavior. Let us explain these elements of persuasion with one example. Take, e.g., an eco-feedback mechanism in a hybrid car that gives the driver feedback on fuel consumption while driving by changing the background color of the speedometer. A blue color signals a fuel-efficient driving style, while red indicates suboptimal performance. Next to this colored background, little symbolic flowers grow on the display if the driver keeps driving in a sustainable manner for a longer period. The intention of this design is to influence the behavior (driving) and perception (making fuel consumption visible) of the driver. This is done by communicating evaluative feedback: red signals “bad” behavior. The little symbolic flowers are meant to motivate the driver to keep driving sustainably. But the user is still in principle free to ignore the feedback and drive recklessly if he so chooses. In designing eco-feedback systems, designers can thus actively use the mediating effect of artifacts by trying to evoke more sustainable behavior. In a similar vein, designers can try to encourage users to adhere to other values, such as a healthy diet, by creating persuasive technologies that support people in their eating choices.
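
The logic of such an eco-feedback mechanism can be made explicit in a short sketch. The following Python code is a hypothetical illustration of the hybrid-car example above; the threshold, timing, and names are assumptions, not a description of any actual car interface:

```python
# Minimal sketch of the eco-feedback display described above.
# Threshold and interval values are hypothetical.

EFFICIENT_THRESHOLD = 6.0  # litres per 100 km counted as "efficient"
FLOWER_INTERVAL = 60       # seconds of sustained efficiency per flower

class EcoFeedback:
    def __init__(self) -> None:
        self.efficient_seconds = 0.0
        self.flowers = 0

    def update(self, consumption: float, dt: float = 1.0) -> str:
        """Return the background color for the current consumption reading."""
        if consumption <= EFFICIENT_THRESHOLD:
            self.efficient_seconds += dt
            # (v) Motivate: reward sustained sustainable driving with a flower.
            if self.efficient_seconds >= FLOWER_INTERVAL:
                self.flowers += 1
                self.efficient_seconds = 0.0
            return "blue"  # (iv) evaluative feedback: "good" driving style
        self.efficient_seconds = 0.0  # streak broken
        return "red"       # (iv) evaluative feedback: suboptimal performance

# (vi) The driver remains free to ignore the feedback: the display only
# communicates an evaluation; it does not constrain the driving itself.
display = EcoFeedback()
color = display.update(consumption=5.2)  # -> "blue"
```

Note that the persuasive element lies entirely in the communication layer: the car’s controls are untouched, which is what places this design on the “persuasion” rather than the “coercion” end of the continuum sketched above.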

In fact, many persuasive technologies are clear examples of using mediation in Design for Values. Recent literature has covered the application of persuasive technologies in various new domains (e.g., mobility, health care, individual coaching, and advertisement) and for different moral aims (e.g., health, sustainability, well-being, and safety). At the same time, persuasive technologies raise ethical issues similar to those of mediation and nudging. It can be argued that both nudges and persuasive technologies form subclasses of mediation, as they can be regarded as intentional attempts by the designer to exploit the mediating effects of technology for a moral aim.

Open Questions and Future Work

The phenomenon of mediation (including nudges and persuasion) raises many challenges, both for engineering design and for the philosophical and scientific analysis of the impact of technology. Roughly, one can distinguish two research areas that future research needs to address. The first concerns foundational philosophical work on the framework of mediation: questions about how to develop the framework further, make it more fruitful for the analysis of concrete technologies, and apply it systematically to issues in the design of technologies. The more fundamental philosophical debate on terminology and critical engagement with the framework also belong to this first research area. It can be summarized as the ongoing quest to further develop, sharpen, and extend the framework itself, and it thus mainly concerns questions on the level of (theoretical) philosophy of technology.

Next to this, mediation raises many questions on the level of praxis, i.e., the application of mediation in technology design. If technologies have mediating effects, how should designers deal with this insight? Who is responsible for mediation effects: the user, the designer, both, or no one? Should we use mediation to moralize technologies? These questions mainly concern the research field of applied ethics of technology.

Let us look at both research fields in turn. Since the focus of this article is on the impact that mediation has on Design for Values, the first strand of questions will be dealt with very briefly, before turning to the questions about design methodology and the moral issues surrounding mediation in Design for Values.

Mediation and Persuasion: Methodological and Meta-Theoretical Research Questions

The mediation framework has been used to shed light on the design of concrete technologies in different domains and to advance a philosophical understanding of issues in ethical technology design (cf. Verbeek 2008b). Swierstra and Waelbers have suggested a matrix approach to the technological mediation of morality to help designers anticipate unforeseen and unintended consequences of technologies (Swierstra and Waelbers 2012). Recently, Dorrestijn has presented an in-depth analysis of technical mediation and subjectivation (Dorrestijn 2012a; Dorrestijn and Verbeek 2013; Dorrestijn 2012b). His approach draws on Foucault’s analysis to account for the theoretical and ethical issues linked to the mediating role of technologies. In a similar line, scholars have investigated the potential of persuasive technologies to change human behavior (Fogg 2003; IJsselsteijn 2006; Kaptein and Eckles 2010; Verbeek 2009b; Spahn 2013).

Despite these fruitful applications, the idea of mediation has also received some criticism within the community of philosophers of technology, mainly because of the ascription of agency and intentionality to non-human entities. Winner already insisted, against ANT, that intentionality significantly distinguishes humans from other “things” and that this difference should be taken into account when analyzing the influence of artifacts on humans (Winner 1993). Feenberg and Kaplan both argue that the condition of technology should be considered more carefully in the mediation framework, be it from a transcendentalist (Feenberg 2009) or a non-transcendentalist perspective (Kaplan 2009). In a similar vein, Waelbers critically discusses, on the one hand, the differences and similarities between human and technological agency and intentionality and, on the other hand, the consequences that mediation has for the ethical design of technology (Waelbers 2009). Peterson and Spahn have argued for re-introducing the “neutrality thesis” of technology in a weaker form (Peterson and Spahn 2011). Illies and Meijers have developed the framework of action schemes to express the phenomenon that the terminology of mediation is meant to capture without going beyond established action-theory terminology, especially without ascribing agency to artifacts (Illies and Meijers 2009). Pols has attempted to capture the mediating influence of technology by linking it to Gibson’s notion of affordances (Pols 2013).

Verbeek has responded to some of these and other critics, among other things by further elaborating the notions of technological intentionality and agency (Verbeek 2006c, 2009a, 2014). These terminological debates will most likely continue to occupy philosophy of technology (Kroes and Verbeek 2014) and will give room to further develop the framework of mediation or alternatives to it. Independent of these debates, one can conclude that the fact that technologies influence human decision making and human behavior in various ways certainly needs to be taken into account in any framework in the philosophy and ethics of technology.

Moralizing Technology and Mediation in Design for Values: Research Questions from the Perspective of Ethics of Technology

Within the field of ethics of technology, one can also identify various questions in need of further research: whether or not to moralize technologies; the responsibility of the designer; and the practical issue of anticipating mediation effects. Most of these ethical questions concern mediation, nudging, and persuasive technologies alike. These issues are not exhaustive, but all of them have triggered an ongoing debate and deserve attention in future work. Let us briefly look at these three questions in more detail.

The first challenge for mediation in Design for Values is the question of whether we should moralize technology in the first place or whether we should avoid or minimize mediation effects. As seen above, Verbeek has argued that designers indeed have a moral obligation to try to anticipate the moral consequences of the mediating role of technology in order to avoid negative unintended consequences. He goes further, however, and argues that designers should actively use the mediating role of technologies to make artifacts more moral (Verbeek 2006a, 2011). The main argument is that all technologies have a mediating role: technologies are not neutral tools but have an impact on human actions, perceptions, and (moral) decision making. We therefore do not have the option of avoiding mediation effects; we should rather accept that all technologies have them. Since this is the case, designers have a responsibility to anticipate mediation and take it into account in the design process. They should not try to create “neutral” technologies but rather actively use mediation to create more moral technologies.

A similar argument has been made in the debate on nudging: there is no neutral design. Every design will influence the choices that humans make. If you, e.g., plan the layout of a canteen, you have to put the healthy food somewhere, and where you put it will inevitably influence the choices people make, as has been argued above. Therefore, nudges should not be seen as something to avoid; rather, they should be designed such that they both lead to better choices and preserve freedom, by giving the user the option to overrule the nudge if he chooses to do so (Thaler and Sunstein 2008).

This idea has, however, met some resistance. Counter-arguments can take two forms: either one denies the premise that there is no neutral design by embracing a strong or weak neutrality thesis of technology (Peterson and Spahn 2011), or one accepts that all technologies have mediating effects and come loaded with values, but argues that one could still design technologies such that they maximize not specific values such as “health,” “sustainability,” and “well-being,” but general values such as “autonomy” and “free choice.” Persuasive technologies, e.g., could be designed either to nudge the user into a desired behavior or to prompt him to reflect and make a conscious choice. The mere fact that technology is not neutral and has mediating effects can thus still be seen as compatible with the idea that designers should avoid paternalism and try to maximize user autonomy and free choice (Anderson 2010; Hausman and Welch 2010; John 2011). A growing body of literature therefore tries to sketch guidelines on how to take mediation and/or persuasion into account in technology design, mainly trying to balance (individual) user autonomy on the one side against design for (social) values on the other (e.g., Berdichevsky and Neunschwander 1999; Baker and Martinson 2001; Brey 2006b; Pettersen and Boks 2008; Verbeek 2009b; Kaptein and Eckles 2010; Spahn 2011; Smids 2012; Karppinen and Oinas-Kukkonen 2013).

A second question for applied ethics research is to what extent mediation changes the distribution of responsibility between designer, user, and technology. Under the neutral-technology assumption, the user is always responsible for the choices he makes; the technology is just a neutral tool that can often be used for different purposes or not used at all. The mediation framework suggests a more complex distribution of responsibility, as technologies change the perceptions and actions of users. Future research needs to clarify who can legally and morally be held accountable for technologically mediated behavior. Here one should distinguish between the broader phenomenon of mediation, which also covers unintended effects, and persuasion, where the change in attitude and behavior is explicitly intended by the designers (Fogg 2003). It seems that the different ways in which technology affects both individual users and societal structures or culture could be classified beyond the broader notion of mediation. What are the exact definitions, relations, and differences of the various influence types, such as technological mediation, affordances, persuasive technologies, and nudges (to name a few)? Such an overarching typology is still missing, even though some initial efforts have been made to develop a coherent framework to cover the various technological influence types (e.g., Brey 2006a; Tromp et al. 2011). Such a typology could in turn help solve the responsibility question.

A final open question concerns the development of a systematic method to anticipate mediation effects in the design of technologies. One suggestion could be to link mediation analysis to other design approaches that try to overcome the isolated designer’s choice situation by bringing stakeholders into the design process, such as participatory technology design or constructive technology assessment. Still, the specific phenomenon of mediation might require a dedicated methodological tool to help designers take mediation effects into account. An elaborated methodological tool and a systematic reflection on the best ways to address mediation in the design phase are, to the best of my knowledge, still missing, even though many researchers have made first attempts to be more specific about how to engage in a mediation analysis in the design phase (Swierstra and Waelbers 2012; Verbeek 2006a).

Conclusions

Technologies are more than neutral tools; they affect the user, his perception of the world and his actions in it. Mediation offers a framework to systematically account for this impact of technology on our lives. Insights from the mediation framework can be used to benefit Design for Values in various ways. Firstly, designers must be aware of the often unintended ways in which technologies shape our life. Secondly, designers can actively use mediation to moralize technologies and help people adhere to their own ethical values or to socially shared moral convictions. Designers can use the mediation framework to go beyond the neutral tool paradigm and actively shape technologies in a morally beneficial way. This raises ethical questions about how far designers can and should go in these attempts to moralize technologies, and how the balance between autonomy and social values should be settled in technology design.

Theoretical philosophy of technology has created a rich literature on mediation and on various related influences of technologies on users, such as affordances, persuasion, and nudges. The fact that mediation theory is rooted in postphenomenology makes it a coherent and systematic approach that can account for a variety of phenomena and allows them to be integrated into a joint framework, while at the same time offering insights both for ethics of technology and for theoretical philosophy of technology. Philosophers who are critical of the fundamental assumptions of postphenomenology will, however, obviously take a more critical stance toward the mediation framework. It is up to them to develop a fruitful alternative.

Footnotes

  1. This particular example of Robert Moses has been challenged; see (Woolgar and Cooper 1999; Joerges 1999).

References

  1. Akrich M (1992) The de-scription of technical objects. In: Bijker WE, Law J (eds) Shaping technology/building society: studies in sociotechnical change. MIT Press, Cambridge, pp 205–224
  2. Akrich M, Latour B (1992) A summary of a convenient vocabulary for the semiotics of human and nonhuman assemblies. In: Bijker WE, Law J (eds) Shaping technology/building society: studies in sociotechnical change. MIT Press, Cambridge, pp 259–264
  3. Anderson J (2010) Review: Nudge: improving decisions about health, wealth, and happiness by Richard H. Thaler and Cass R. Sunstein. Econ Philos 26(3):369–375
  4. Baker S, Martinson DL (2001) The TARES test: five principles for ethical persuasion. J Mass Media Ethics 16(2&3):148–175
  5. Berdichevsky D, Neunschwander E (1999) Toward an ethics of persuasive technology. Commun ACM 42(5):51–58
  6. Brey P (2006a) The social agency of technological artifacts. In: Verbeek P-P, Slob A (eds) User behavior and technology development. Springer, Dordrecht, pp 71–80. http://link.springer.com/chapter/10.1007/978-1-4020-5196-8_8
  7. Brey P (2006b) Ethical aspects of behaviour-steering technology. In: Verbeek P-P, Slob A (eds) User behaviour and technology development. Springer, Dordrecht, pp 357–364
  8. Dorrestijn S (2012a) The design of our own lives: technical mediation and subjectivation after Foucault. Dissertation, Universiteit Twente. http://purl.utwente.nl/publications/81848
  9. Dorrestijn S (2012b) Technical mediation and subjectivation: tracing and extending Foucault’s philosophy of technology. Philos Technol 25(2):221–241. doi:10.1007/s13347-011-0057-0
  10. Dorrestijn S, Verbeek P-P (2013) Technology, wellbeing, and freedom: the legacy of utopian design. http://purl.utwente.nl/publications/88125
  11. Feenberg A (2009) Peter-Paul Verbeek: review of What things do. Hum Stud 32(2):225–228. doi:10.1007/s10746-009-9115-3
  12. Fogg BJ (2003) Persuasive technology: using computers to change what we think and do. The Morgan Kaufmann series in interactive technologies. Morgan Kaufmann, Amsterdam/Boston
  13. Gregory D (2011) From a view to a kill: drones and late modern war. Theory Cult Soc 28(7–8):188–215. doi:10.1177/0263276411423027
  14. Hausman DM, Welch B (2010) Debate: to nudge or not to nudge. J Polit Philos 18(1):123–136. doi:10.1111/j.1467-9760.2009.00351.x
  15. Herring H, Sorrell S (2009) Energy efficiency and sustainable consumption: the rebound effect. Palgrave Macmillan, Basingstoke/New York
  16. Ihde D (1993) Postphenomenology: essays in the postmodern context. Northwestern University Press, Evanston
  17. Ihde D (2008) The designer fallacy and technological imagination. In: Philosophy and design. Springer, Dordrecht, pp 51–59. http://link.springer.com/chapter/10.1007/978-1-4020-6591-0_4
  18. Ihde D (2009) Postphenomenology and technoscience. SUNY Press, Albany
  19. IJsselsteijn W (ed) (2006) Persuasive technology: first international conference on persuasive technology for human well-being, PERSUASIVE 2006, Eindhoven, The Netherlands, May 18–19, 2006: proceedings. Springer, Berlin/New York
  20. Illies C, Meijers A (2009) Artefacts without agency. The Monist 92(3):420–440
  21. Joerges B (1999) Do politics have artefacts? Soc Stud Sci 29(3):411–431
  22. John P (2011) Nudge, nudge, think, think: experimenting with ways to change civic behaviour. Bloomsbury Academic, London
  23. Kaplan DM (2009) What things still don’t do. Hum Stud 32(2):229–240. doi:10.1007/s10746-009-9116-2
  24. Kaptein M, Eckles D (2010) Means to any end: futures and ethics of persuasion profiling. In: Ploug T, Hasle P, Oinas-Kukkonen H (eds) Persuasive technology: PERSUASIVE 2010. Springer, Berlin/Heidelberg/New York, pp 82–93
  25. Karppinen P, Oinas-Kukkonen H (2013) Three approaches to ethical considerations in the design of behavior change support systems. In: Berkovsky S, Freyne J (eds) Persuasive technology, vol 7822. Springer, Berlin/Heidelberg, pp 87–98. http://link.springer.com/content/pdf/10.1007%2F978-3-642-37157-8_12
  26. Kraemer F, van Overveld K, Peterson M (2011) Is there an ethics of algorithms? Ethics Inf Technol 13(3):251–260. doi:10.1007/s10676-010-9233-7
  27. Kroes P, Verbeek P-P (eds) (2014) The moral status of technical artefacts. Philosophy of engineering and technology. Springer, Dordrecht
  28. Latour B, Woolgar S (1979) Laboratory life: the social construction of scientific facts. Sage, Beverly Hills
  29. Latour B (1992) Where are the missing masses? The sociology of a few mundane artifacts. In: Bijker WE, Law J (eds) Shaping technology/building society: studies in sociotechnical change. MIT Press, Cambridge, pp 225–258
  30. Latour B (2005) Reassembling the social: an introduction to actor-network-theory. Oxford University Press, Oxford/New York
  31. Law J (ed) (1999) Actor network theory and after. Blackwell/Sociological Review, Oxford/Malden
  32. Peterson M, Spahn A (2011) Can technological artefacts be moral agents? Sci Eng Ethics 17(3):411–424
  33. Pettersen IN, Boks C (2008) The ethics in balancing control and freedom when engineering solutions for sustainable behaviour. Int J Sustain Eng 1(4):287–297. doi:10.1080/19397030802559607
  34. Pols AJK (2013) How artefacts influence our actions. Ethical Theory Moral Pract 16(3):575–587
  35. Radder H (2009) Why technologies are inherently normative. In: Meijers A (ed) Philosophy of technology and engineering sciences. Elsevier, Amsterdam/Boston, pp 887–921
  36. Royakkers L, van Est R (2010) The cubicle warrior: the marionette of digitalized warfare. Ethics Inf Technol 12(3):289–296. doi:10.1007/s10676-010-9240-8
  37. Sharkey N (2010) Saying ‘No!’ to lethal autonomous targeting. J Mil Ethics 9(4):369–383. doi:10.1080/15027570.2010.537903
  38. Singer PW (2009) Wired for war: the robotics revolution and conflict in the twenty-first century. Penguin Press, New York
  39. Smids J (2012) The voluntariness of persuasive technology. In: Bang M, Ragnemalm EL (eds) Persuasive technology: design for health and safety, vol 7284. Lecture notes in computer science. Springer, Berlin/Heidelberg, pp 123–132. http://link.springer.com/chapter/10.1007/978-3-642-31037-9_11
  40. Spahn A (2011) Moralische Maschinen. In: Proceedings XXII. Deutscher Kongress für Philosophie, Ludwig-Maximilians-Universität München (e-pub). http://epub.ub.uni-muenchen.de/12596/
  41. Spahn A (2012) And lead us (not) into persuasion? Persuasive technology and the ethics of communication. Sci Eng Ethics 18(4):633–650
  42. Spahn A (2013) Moralizing mobility? Persuasive technologies and the ethics of mobility. Transfers 3(2):108–115. doi:10.3167/TRANS.2013.030207
  43. Swierstra T, Waelbers K (2012) Designing a good life: a matrix for the technological mediation of morality. Sci Eng Ethics 18(1):157–172. doi:10.1007/s11948-010-9251-1
  44. Tenner E (1997) Why things bite back: technology and the revenge of unintended consequences. Vintage Books, New York
  45. Thaler R, Sunstein C (2008) Nudge: improving decisions about health, wealth, and happiness. Yale University Press, New Haven
  46. Tromp N, Hekkert P, Verbeek P-P (2011) Design for socially responsible behavior: a classification of influence based on intended user experience. Design Issues 27(3):3–19. doi:10.1162/DESI_a_00087
  47. Verbeek P-P (2000) De daadkracht der dingen: over techniek, filosofie en vormgeving. Boom, Amsterdam
  48. Verbeek P-P (2005) What things do: philosophical reflections on technology, agency, and design. Pennsylvania State University Press, University Park
  49. Verbeek P-P (2006a) Persuasive technology and moral responsibility: toward an ethical framework for persuasive technologies. In: Persuasive technology 2006, Eindhoven University of Technology, The Netherlands. http://www.utwente.nl/gw/wijsb/organization/verbeek/verbeek_persuasive06.pdf (accessed 29 January 2014)
  50. Verbeek P-P (2006b) Materializing morality: design ethics and technological mediation. Sci Technol Hum Values 31(3):361–380
  51. Verbeek P-P (2006c) Acting artifacts. In: Verbeek P-P, Slob A (eds) User behavior and technology development: shaping sustainable relations between consumers and technologies. Springer, Dordrecht, pp 53–60
  52. Verbeek P-P (2008a) Obstetric ultrasound and the technological mediation of morality: a postphenomenological analysis. Hum Stud 31(1):11–26. doi:10.1007/s10746-007-9079-0
  53. Verbeek P-P (2008b) Cyborg intentionality: rethinking the phenomenology of human–technology relations. Phenomenol Cogn Sci 7(3):387–395. doi:10.1007/s11097-008-9099-x
  54. Verbeek P-P (2009a) Let’s make things better: a reply to my readers. Hum Stud 32(2):251–261. doi:10.1007/s10746-009-9118-0
  55. Verbeek P-P (2009b) Ambient intelligence and persuasive technology: the blurring boundaries between human and technology. NanoEthics 3(3):231–242. doi:10.1007/s11569-009-0077-8
  56. Verbeek P-P (2011) Moralizing technology: understanding and designing the morality of things. University of Chicago Press, Chicago
  57. Verbeek P-P (2014) Some misunderstandings about the moral significance of technology. In: Kroes P, Verbeek P-P (eds) The moral status of technical artefacts, vol 17. Philosophy of engineering and technology. Springer, Dordrecht, pp 75–88. http://link.springer.com/chapter/10.1007/978-94-007-7914-3_5
  58. Waelbers K (2009) From assigning to designing technological agency. Hum Stud 32(2):241–250. doi:10.1007/s10746-009-9117-1
  59. Wall T, Monahan T (2011) Surveillance and violence from afar: the politics of drones and liminal security-scapes. Theor Criminol 15(3):239–254. doi:10.1177/1362480610396650
  60. Winner L (1980) Do artifacts have politics? Daedalus 109(1):121–136
  61. Winner L (1993) Upon opening the black box and finding it empty: social constructivism and the philosophy of technology. Sci Technol Hum Values 18(3):362–378
  62. Woolgar S, Cooper G (1999) Do artefacts have ambivalence? Moses’ bridges, Winner’s bridges and other urban legends in S&TS. Soc Stud Sci 29(3):433–449

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. School of Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands
