Design for the Values of Democracy and Justice
In this chapter, we provide an overview of literature on the relation between technology and design and the values of democracy and justice. We first explore how philosophy has traditionally conceptualized democracy and justice. We then examine general philosophical theories and arguments about this relation, dealing with the conception of technology as being “value-free” as well as with pessimistic and more optimistic assessments with regard to technology’s potential for advancing democracy and justice. Next, we turn to three concrete design methods that seek to promote democracy and justice in the design process, namely, participatory design, technology assessment, and value-sensitive design. Finally, we examine two cases of technology influencing democracy and justice: one regarding the relation between energy technology and democracy and one regarding the use of social media during the Arab Spring. We conclude that many pessimists focus on the “technological mind-set” as a problem that undermines democracy and justice; that in the absence of general design guidelines for democracy and justice, a focus on democracy and justice in the design process seems all the more important; and that design methods tend to include values rather than theories of democracy and justice, which suggests that a further integration of philosophy and the design sciences could create added value for both disciplines.
Keywords: Democracy, Equality, Justice, Non-neutrality of technology, Participatory design
From voting machines and solar panels for decentralized energy generation to the role of social media in helping to coordinate protests during the Arab Spring and spread information and footage over the globe, technology design has a clear impact on the values of democracy and justice. This impact has two main forms. First, the design of a particular technology can weaken or strengthen the values of democracy and justice, whether as an intentional choice or an unintended effect. That technology has this impact, however, does not mean that designing for democracy and justice is an easy job. Positive stories about communication technology spreading democracy and the “Facebook revolution” are balanced with cautionary tales about nondemocratic regimes using those same media for propaganda and tracking down protesters. Second, the values of democracy and justice can influence the design process itself, for example, through stakeholder consultation for inclusiveness or by arranging representation for stakeholders who do not have the power or capabilities to defend their own interests, such as very young children. This is not an easy job either, as a just and democratic process requires answers to thorny questions such as who exactly should be considered a stakeholder and whether the consultation process should aim for consensus or rather a compromise.
In this chapter, we provide an overview of the literature on the relation between technology/design and the values of democracy and justice. Particularly, in section “The Values of Democracy and Justice,” we analyze the values of democracy and justice as they have been explicated in philosophy and present the main positions and fields of inquiry. In section “Topics in the Design for Democracy and Justice,” we explore different philosophical analyses of the general relation between technology/design and democracy/justice. This section also discusses some of the mechanisms by which technology design can weaken or strengthen the values of democracy and justice. We analyze the conception of technology as being value neutral, as well as critical and more optimistic positions about the influence technology has on democracy and justice. In section “Democratic Technology Design/Participation,” we examine the philosophical ideas behind the practice of incorporating the values of democracy and justice in the design process and examine three design methods that seek to do exactly that: participatory design, technology assessment, and value-sensitive design. In section “Experiences and Examples,” we elaborate two cases on the influence of technology on democracy and justice: one on energy production and networks and one on the role of social media during the Arab Spring. In section “Open Questions and Conclusions,” we draw our conclusions and present a number of open questions for further research into designing for democracy and justice.
The Values of Democracy and Justice
In this section, we analyze the values of democracy and justice from a philosophical perspective by indicating some of the key elements of these two values and by pointing out classical topics of the philosophical debates that circle around these two notions. The aim is obviously not to present an exhaustive overview, but to highlight the aspects that are relevant in the context of technology design and its potential relation to questions of justice and democracy. We start by elaborating the notion of democracy before turning to justice. In sections “Topics in the Design for Democracy and Justice” and “Democratic Technology Design/Participation,” we will look at the relation between technology and technology design and these two values.
The Value of Democracy
Democracy is the ideal of a political system in which the citizens of a state are seen as essentially contributing to and determining the political power. Theories of democracy are as old as philosophy, although the modern understanding of democracy has its roots in Enlightenment ideals and later political philosophy. Greek political theory already distinguished between six forms of the state, depending on who is ruling (“one,” “many,” or “all”) and whether the government strives for the greater good of all or departs from this ideal. These two criteria allow Aristotle to distinguish not only between tyranny and monarchy but also between politeia – when the government of the people strives for the good – and democracy, which fails to meet the moral goal. Contemporary political theory is mainly interested in distinguishing different forms of government according to formal, value-free criteria, and the history of political philosophy can be read as a shift from moral evaluations to a separation of evaluative and descriptive claims (Hösle 2004). But next to sociological and political science approaches to democracy, philosophers are often concerned with normative democratic theory, i.e., the attempt to evaluate which elements belong to a democratic society and what the main arguments are for and against democratic structures of decision making (Christiano 1996).
In its broadest definition, democracy refers to a process of collective decision making in which the members have equality in participating and in which the decisions are made by the group and are binding for all its members (Christiano 2008). In this sense, organizations just as well as states can be “democratic” to varying degrees. In political philosophy, a distinction is often made between direct and representative democracy, depending on whether citizens cast a direct vote in a referendum on political issues or elect representatives for a set period. A central question in both forms of democracy is who counts as a citizen and thus has the right to vote, as well as the question of how the election process is organized: equality, freedom, and secrecy are seen as crucial to guarantee a fair process (Hösle 2004, 526 ff).
Democracies are favored in many ethical theories as the preferred form of organization of a state, for both intrinsic and instrumental reasons. Among the instrumental reasons are arguments that point out positive consequences of a democratic form of government. In that sense, democracies are said to lead to better laws, as the government depends on the acceptance of the citizens for reelection and thus has an incentive to take their values, interests, and needs into account (e.g., Mill 1861; Sen 1999, p. 152). However, equality and majority votes as such do not guarantee fair outcomes, especially since majorities could systematically overrule minorities (Mill 1859). It has thus often been argued that democratic decision-making processes must be combined with solid protection of minority rights (Donovan and Bowler 1998; Hösle 2004).
Next to instrumental arguments in favor of democracy, philosophers have also defended democracy as having intrinsic value. It is a form of government that treats individuals as equals (Singer 1973, pp. 33–41) and is in line with the ideals of liberty (Christiano 2008) and autonomy of individuals. Habermas in particular has defended democratic decision making from an ethical perspective in which private autonomy and public collective legitimacy are closely linked. Political decisions need the actual support of the affected citizens in order to be legitimate (Habermas 1994; Young 1990).
Both instrumental and intrinsic normative theories of democracy can be linked to the ideal of a just society, in which the decisions taken are fair and in line with the basic rights of the individuals, most importantly their general equality. Let us therefore turn to the value of justice.
The Value of Justice
Humans are vulnerable, and the things that may benefit or harm them, whether goods, opportunities, or emotional states, are not evenly distributed among all. Investigating which distributions are morally better than others is the domain of theories of justice. This section will provide a brief overview of philosophical discussions of justice and show several ways in which technology can be relevant for considerations of justice.
Discussions of justice can be grouped by topic. The two major topics are evaluating whether a given state of affairs is more just than another, which is the subject of the field of distributive justice, and evaluating whether a procedure or process is more just than another, which is the subject of the field of procedural justice.
Distributive justice investigates distributions within states of affairs, where the main questions are (a) what exactly should be distributed (goods, opportunities, welfare, etc.), (b) who are the donors/receivers of those goods (individuals, groups, nations, etc.), and (c) what principle should determine the distribution (equality, desert, free trade, etc.). Technology can play an important role in all three respects: as a good to be distributed, as part of the system that facilitates distribution, and in applying distribution rules, e.g., through registering income tax or determining an individual’s identity or age, which may be relevant in, for example, immigration procedures (Dijstelbloem and Meijer 2011).
Traditionally, philosophers working on distributive justice have focused on how nation-states should distribute goods or opportunities among citizens. The most important work in this tradition is John Rawls’ A Theory of Justice (1971, 1999). Rawls proposes a basic distribution in two parts, where (1) everyone should have equal basic rights and liberties and (2) any social and economic inequalities may only result from a situation where there is fair equality of opportunity and when these inequalities are to the greatest benefit of the least advantaged members of society.
More recently, philosophers of justice have extended their research into distributions on an international level (international or global justice), which raises unique problems. Discussions here have also been inspired by Rawls (1971) and especially by his (1999) The Law of Peoples. Rawls, however, argues that there are no international structures that resemble the basic (democratic, liberal, legal, economical) structures of the nation-state sufficiently to create a demand for principles of justice on an international level. Rather, he focuses on the principles by which nation-states should set their international policies (cf. Nagel 2005; Freeman 2006).
Critics of Rawls have argued that his focus on nation-states misses the point here. They point out that globalization has given rise to many international systems that all affect the global distribution of burdens and benefits, such as UN institutions, trade regimes and embargoes, transnational corporations, NGOs, etc. They claim that the fact that our participation in these systems affects this global distribution itself raises obligations of justice, irrespective of how these systems are structured (Pogge 2002; Caney 2005; Cohen and Sabel 2006; Young 2006). It is important to note here that many of these systems are socio-technical and rely especially on monitoring, communication, and transport technologies as means for their day-to-day functioning. Other systems use technology as an end in the pursuit of decreasing unjust distributions, e.g., international technology transfer or design for development (Oosterlaken 2009).
Finally, the immense power that technology has given us not only spans the globe but reaches far into the future as well. The way we deal with nuclear waste and climate change, for example, and how we shape our socioeconomic systems, have consequences for the rights and quality of life of future generations (cf. Jonas 1979/1984). This raises demands of intergenerational justice: how should we distribute burdens and benefits over generations across time?
As with other questions of justice, John Rawls (1971) has opened the discussion on this, proposing a “just savings” principle. This requires currently living people to do two things: establish lasting and just institutions and save enough resources for future generations so that they will enjoy at least a minimum level of well-being. This makes Rawls’ account sufficientarian, in that we should leave our descendants sufficient resources for a life worth living. Moreover, once just institutions are in place, saving sufficient resources should become “automatic.” Unfortunately, realizing this is a major challenge, as many global institutions are not only unjust in space, as Pogge (2002) has noted, but also unjust in time, providing us with benefits (e.g., nuclear energy, fossil fuels) while passing the burdens (nuclear waste, climate change, and energy scarcity) on to future generations (Gardiner 2011).
International and intergenerational theories of justice are all branches of distributive justice, which evaluates states of affairs. In contrast, procedural justice is concerned with just processes or procedures. This idea underlies, for example, the democratic system, where governments are constituted not according to abstract ethical ideals but according to the outcome of an election procedure following prespecified rules. Procedural justice is a fundamental part of discourse ethics (Habermas 1984, 1987) and deliberative democracy (Bohman and Rehg 1997). The basic idea is that a decision or policy is just and legitimate if it is the result of a public deliberation based on rational arguments. This implies that every participant has the right to speak and be heard and that no force is exerted except the force of the better argument (cf. Habermas 1990). Discussion should not only be about means and indicators, where goals are set in advance by the organizing stakeholder (such as the state). Rather, it should address both values/goals and indicators/means. A diversity of perspectives tends to be encouraged, as this increases the chance that new facts, conflicts, but also opportunities regarding the topic are brought into the discussion (Swierstra and Rip 2007).
Engineers and philosophers of technology tend to be quite interested in possibilities for making design procedurally just, and we will examine various proposed methods for this in section “Democratic Technology Design/Participation.”
Topics in the Design for Democracy and Justice
In this section, we first look at how the relation between modern technology and democracy and justice is analyzed in different contributions to the philosophy of technology. If one intends to strengthen the values of democracy and justice through design, one is well advised to first reflect on the possible impacts of technology on democracy and political power structures as such. Many philosophers of technology – especially from the continental tradition – address issues of design for democracy in the context of a broader analysis of the relation between technology and society, while other thinkers – especially from the analytic tradition – focus more on concrete technologies and their relation to democracy and justice.
In principle, there are at least three possible positions, according to the ideas of neutrality and positive or negative influence: (i) technology could be seen as mostly neutral and value-free; (ii) it could be argued that certain technologies endanger democracy and justice; or (iii) that certain technologies promote and foster democracy and justice.
(i) Technology and democracy could be regarded as completely independent entities, such that issues of technology design and, e.g., social power structures do not interfere with each other. Even if social power structures were to determine technology design (but not the other way around), it could still be maintained that technology is value neutral. If technology were value neutral in this way with regard to political aspects, such as the division of power, the exercise of individual autonomy, and the social distribution of resources, ethics of technology could largely be done without analyzing the underlying social structures of political power. Accordingly, a theory of society could for the most part either ignore technology or treat it as one social reality next to other social phenomena. In short, this position presupposes the idea that technology is largely apolitical and can be used in different social contexts. This position is still compatible with the idea that both modern technology and modern societies are expressions of an underlying common spirit (e.g., the same “episteme” in the terms of Foucault) or the result of similar processes (such as the process of “rationalization” in the terms of Max Weber). The key idea of the neutrality thesis is rather to deny that there is a strong relation of causal influence between the two phenomena of social order and technology. Therefore, both spheres – technology and democracy – can (and probably should) be studied separately from each other. This position emphasizes the ethical, or at least political, neutrality of technology.
The other two positions (ii and iii) assume that technology is not value neutral with regard to political issues. In his influential book Critical Theory of Technology, Feenberg (1991) distinguishes substantive and instrumental theories of technology. According to instrumental views, technologies are neutral tools that can be used in all different political and cultural circumstances without affecting or influencing the political order. According to substantive theories (such as those of Heidegger or Ellul), technology has an impact on the social order and is politically not innocent. Positions that see technology (or certain technologies) either as a threat to democracy and justice or as a great promoter of justice and democracy thus often – though not necessarily – belong to the family of substantive theories of technology.
If technology affects important social values and leads to, or even presupposes, specific relations of power, then the relation between technology and democracy becomes more complex. For the sake of brevity, we will distinguish an optimistic and a pessimistic stance on this relation. The optimistic position would argue that modern technologies often amplify democratic structures and thus have a positive effect on the implementation and flourishing of democracy. The Internet and new (social) media, especially, are often believed to have a positive impact on democracy (see section “Technology as an Amplifier of Democracy and Justice”). The pessimistic position would, however, argue that modern technologies often undermine, or at least endanger, social justice and democratic structures, as they contribute to the establishment of fixed power relations in which a technocratic elite is needed to control complex technological systems such as railroads and nuclear power plants (see section “The Critical Stance: Technology as a Threat to Democracy and Justice”).
Even though the affirmative and the critical positions point in different directions, they do not necessarily contradict each other. Both positions assume that technologies are not value neutral (and, more specifically, not neutral with regard to social and political values). One could therefore argue that certain types of technologies – such as social media – are beneficial to democratic systems, whereas other types of technologies – such as nuclear power plants – require central control and therefore stand in tension with the central aspects of democracy. In the following, we will nevertheless present both positions separately for the sake of analytical clarity. We will present their main arguments and illustrate the consequences for the ethics of technology design. We will begin with the critical position before turning to the optimist’s camp.
The Critical Stance: Technology as a Threat to Democracy and Justice
If one puts the affirmative and the critical stance into the context of the recent history of the philosophy of technology, one can trace their roots back to the difference between the Enlightenment embrace of science and technology as important elements of social progress and the Romantic discontent with radical changes within society that led to a loss of traditional values and cherished belief systems (Mitcham 1994, 275 ff.; Spahn 2010). The optimists – or the “modernists” – in this debate embrace the Enlightenment ideals of reason and rational inquiry as a key to social and moral progress. They mainly regard modern science and scientific knowledge that is based on observation, experimentation, and mathematization as a powerful tool that the ancient world did not bring forth. As Mitcham has noted, this optimistic focus on the positive aspects of modern science and technology is a very natural position for engineers and, accordingly, for a philosophy of technology that is close to the engineering perspective (Mitcham 1994; cf. Snow 1959). It is important to notice that the affirmative position – which will be discussed in more detail in the next section – often highlights moral reasons for embracing modern science and technology. Technological progress contributes to the taming of nature and frees humans from many calamities and burdens. Modern technology can contribute to overcoming poverty, help in the fight against diseases, free man from the hardship of labor, and contribute to many luxuries of modern life: an optimistic perspective on technology that we find as early as Francis Bacon (1620) and that is still vivid in technology futurists such as Kurzweil (2005).
This optimism has, however, not been without criticism. More pessimistic views on technology can be found as early as antiquity (Mitcham 1994, p. 277) but can mainly be traced back to philosophers who are skeptical with regard to the project of modernity and its focus on science, such as Vico (1709) and Rousseau (1750). Mitcham calls this position Romantic Uneasiness about Technology; it includes thinkers who reflect on the radical changes in society due to industrialization and continues up to the present in philosophers who worry about the destructive potential of modern technologies, as reflected in nuclear bombs and the environmental damage of overconsumption.
Technology is more than a neutral tool. It is a form of approaching reality that is in itself dubious and problematic, reducing nature to a “standing resource” and an object of manipulation.
This view is most prominently developed in continental philosophy of technology by Martin Heidegger in his essay The Question Concerning Technology (1977/1953). For him, technology is not so much the realm of artifacts created by humans, but a way of disclosing reality under a very specific perspective. Heidegger rejects the instrumental view, according to which technology is a human tool for reaching certain aims. Rather, technology is in essence a mind-set, a perspective on the world under which everything that exists is seen as a potential resource for manipulation. Technology discloses (“entbergen”) reality in a very different form than, e.g., art does: it grasps reality under the perspective of usability for external purposes. Nature thus gets reduced to a standing resource (“Bestand”) for human activity – a very different way to approach nature from the one taken by an artist. Heidegger also opposes the anthropological view of technology, according to which technology must be mainly understood as a human creation. He rather urges us to regard technology not so much as a human activity but as a force greater than us: human beings are not so much in control of technology as driven by its powers. That poses a radical danger to humans, not primarily because of the concrete risks of any given technology, but because we lose sight of other ways to approach nature. Everything, including humanity itself, gets converted into a mere object of manipulation.
This idea of linking technology with a mind-set of control, manipulation, and domination became a frequent topic in many different approaches to the philosophy of technology and the relation between technology and democracy, and it was taken up, among other domains, within the environmental ethics movement (cf. DeLuca (2005) for an analysis of the relation between Heidegger’s work and environmentalism). Hans Jonas (1979/1984), a pupil of Heidegger, describes the difference between ancient and late modern technologies as one of the key challenges of modern ethics. Whereas in earlier times technology was mainly a tool helping man to survive in a dangerous and hostile nature, it is now our task to protect nature from the far-reaching, irreversible, and potentially disastrous consequences of modern technologies. In a similar line, Ellul (1964/1954) analyzes the threat that modern technology poses to human freedom and broader humanity, pointing out the dangers of a modern technology that strives for absolute efficiency in all human domains.
Modern technology leads to substantial changes in social order, benefiting a small elite at the expense of deskilling workers and creating social injustices.
While Heidegger, Jonas, Anders, and Ellul comment on the essence of technology and its impact on modern civilization, it is especially the Marxist tradition including the early and later critical theory that draws attention to the political and social justice implications of modern technology.
Following Hegel, Marx links technology to the necessity of labor to sustain human life (Marx 1938). But his main focus is on the analysis of the immense social and political consequences of modern industrial labor. Modern mass production leads to a division of labor and a deskilling of the workforce, as industrial technologies no longer require skillful craftsmanship. This leads to an alienation from work, as little expertise is needed to work in factories. Following Marx, Feenberg thus argues that it is not enough to focus on the question of who owns the means of production; we must rethink the design of technology itself, especially in the context of labor theory (Feenberg 2002). In a similar vein, Noble tries to show how certain design choices lead to deskilling and social injustice (Noble 1984).
According to the Marxist tradition, these political implications are tremendous, creating great injustices in society. The means of production are possessed by a small elite that exploits the workforce, creating a class struggle for political power. As such, however, the Marxist theory is not a substantive theory of technology, as it is not technology in general that stands in opposition to a just distribution and democratic ideals, but the underlying capitalist economic system of distribution of power and economic resources. The influence of Marxist thinking on the political aspects of modern technology can hardly be overestimated, both in the political arena and in philosophical analysis. In this brief summary, we only focus on a few later contributions, mainly from the Frankfurt School and critical theory, before moving to the STS approach to technology, justice, and democracy.
Technology is rooted in the will to dominate nature and will inevitably lead to the domination of man as well, unless counter-measures are taken.
Adorno and Horkheimer (1979) place modern technology in the greater context of enlightenment rationalism and political philosophy. They interpret modern rationality as a means-end rationality that aims at the control and domination of the external world. In essence, technology is a powerful tool to ensure human self-preservation against nature. The striking feature of technological-scientific rationality is the quantification of natural relations in order to make a controlled and predictable use of natural laws for the exploitation of nature for human means. This strategy of control and domination of the outer nature will inevitably also lead to a domination and degeneration of humans’ inner nature. Since Adorno identifies modern enlightenment rationality with this strategic tool of domination and exercise of power, a countering force can only come from outside of rationality. Much like Heidegger, Adorno seeks remedy in the arts, as a different approach to reality that is not inspired by the will to dominate (Adorno 1999).
Habermas builds on many insights of the early Frankfurt School. However, he argues for a more nuanced theory of rationality and thus a more nuanced approach to technology that comes closer to the instrumental theory of technology. He distinguishes between different types of rationality, which are rooted in different basic anthropological interests and lie at the heart of different types of sciences (Habermas 1971). He tends to agree with Adorno that science and technology are mainly rooted in the will to dominate nature; they have their background in instrumental knowledge. But next to this impulse, there are other types of rationality rooted in different anthropological features: the humanities are rooted in practical knowledge and aim at communication, understanding, and agreement on shared norms and values. The emancipatory interest, finally, strives to free oneself from dogmatic dependencies and lies at the heart of psychoanalysis and rational criticism. Since these other perspectives (next to scientific and technological knowledge of nature) are also seen as types of rational discourse, it follows that a rational critique and guidance of technology is possible without referring to sources of knowledge outside of rationality. Habermas’ main contribution lies accordingly in the development of a theory of communicative rationality (Habermas 1987) and discourse ethics, which has inspired visions of participatory technology design and participatory technology assessment (e.g., Kowalski 2002, p. 14). In his theoretical works, Habermas defends the lifeworld (Lebenswelt) as a realm of communicative action, in which we deliberate about and agree upon social values. In the world of modernity, the lifeworld is under constant threat of being invaded by the world of strategic rationality, a process that Habermas has termed the “colonization” of the lifeworld (Habermas 1987).
This thesis mirrors the worry that we also find in Heidegger and Adorno and that has been discussed above: the concern that instrumental, technological rationality, which centers on efficiency and strategic manipulation, dominates more and more aspects of society that should not follow the logic of strategic rationality. For the values of democracy and justice, this would mean that the realm of deliberation, which belongs to communicative rationality, is under constant threat of being replaced or invaded by the realm of strategic rationality. Democracy must thus always actively strive to defend itself against the pitfalls of technocracy and the rule of a select class of technocratic experts (Habermas 1971; Feenberg 1992).
The most elaborate application of critical theory to technology has been presented by Andrew Feenberg (1991, 2002). He rejects both substantive and instrumental accounts of technology and regards critical theory as holding the middle ground between the two (2002, p. 5). Technology and the spread of instrumental rationality are not a destiny beyond human intervention or repair. Critical theory thus wants to avoid both the utopianism often associated with Marxist perspectives on technology and the resignation that Feenberg sees in views claiming that an alternative to the Western capitalist system of technology is not possible. Rather, he denies that "modernity is exemplified once and for all by our atomistic, authoritarian, consumerist culture. The choice of civilization is not decided by autonomous technology, but can be affected by human action" (ibid., p. 14). Currently, Feenberg sees the values of a specific social system and the interests of its ruling classes installed in the very design of current technology. But through a radical democratization of technology design, a "shift in the locus of technical control" (ibid., p. 17) is possible that avoids problematic capitalist phenomena such as the deskilling of labor and can thus help realize suppressed human potentials.
Feenberg's view thus already points to the possibility that technology may indeed, under favorable circumstances, be a positive contributor to democracy and justice (see section "Technology as an Amplifier of Democracy and Justice"). According to Feenberg, technology design matters because of the crucial impact it has on the division of power within society.
Technologies can be inherently political by settling an issue or by being highly compatible with (if not requiring) a given structure of power.
Within STS, the point has often been made that the instrumentalist perspective on technology is mistaken and that technologies are in fact not morally neutral tools. According to Winner (1980), artifacts can be political in two ways: either they directly settle a political issue, or the operation of certain technologies requires or at least suggests a certain structure of political organization. Artifacts can directly settle an issue by, e.g., excluding certain users from access to or benefits of these technologies or by contributing to a redistribution of influence and power in favor of a capitalist elite. With regard to the second aspect, Winner has argued that certain technologies require strict hierarchical and authoritative control in order to work: nuclear energy is one example that Winner discusses, whereas decentralized small solar cells can easily be combined with more localized, decentralized, individual, and democratic control. In a similar vein, Shelley (2012) has argued that for monitoring technologies or risky technologies, issues of fairness can result from choosing the "prediction cutoff" point, which determines the balance between false positives and false negatives. She gives the example of the ShotSpotter, a system to detect and locate gunshots in urban areas. If the system is hypersensitive, it will give more false alarms, which will result in increased police presence and possible disruption of social life. If the system is not sensitive enough, public security may be compromised due to gunshots not being responded to. The risk here is that the dominant user or designer may establish the prediction cutoff point based on private interests rather than on more objective considerations of fairness.
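Shelley's point about the prediction cutoff can be made concrete with a small sketch. The scores and labels below are invented for illustration and have nothing to do with the actual ShotSpotter system; the point is only that moving the cutoff trades false alarms against missed gunshots, so the choice is a value judgment, not a purely technical one.

```python
# Hypothetical illustration of a detection cutoff (not the ShotSpotter algorithm).
# A detector assigns each acoustic event a confidence score; events at or above
# the cutoff trigger an alarm.

def classify(scores, labels, cutoff):
    """Count false positives (alarm, but no gunshot) and false negatives
    (no alarm, but a real gunshot) at a given cutoff."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    return fp, fn

# Invented detector scores for ten events; label 1 = real gunshot, 0 = other noise.
scores = [0.95, 0.80, 0.75, 0.60, 0.55, 0.45, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0,    1,    0]

fp_low, fn_low = classify(scores, labels, cutoff=0.35)    # hypersensitive setting
fp_high, fn_high = classify(scores, labels, cutoff=0.70)  # insensitive setting

# The low cutoff produces more false alarms but misses fewer gunshots;
# the high cutoff does the reverse. Who gets to pick the cutoff, and on
# what grounds, is precisely the fairness question Shelley raises.
```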
Akrich (1992) analyzes the way in which the expected use context is inscribed into technologies through what she calls the "script" of a technology. As in a theater play, the designer "in-scribes" a vision of how the technology is supposed to be used, which pre-scribes to the user how to deal with it. Users can of course creatively ignore this script ("de-scription"), but nevertheless, scripts in technologies always distribute and delegate responsibilities (cf. section "Energy Production, Justice, and Democracy" for an application of these ideas to energy technologies).
In line with this, actor-network theory (Latour 1979, 2005; Law 1999) emphasizes that technologies have "agency" and influence the network. Scholars in STS have thus tried to identify the role of power differences and power distribution in actor-networks, which are in part reinforced or materialized in given technologies. In order to avoid stabilizing one-sided hierarchical power relations and social injustices, the argument has been made that technology design needs to include the neglected interests of marginalized groups. Participatory technology design thus tries to include the values and interests of all affected parties prior to developing and implementing major new technological systems (see section "Democratic Technology Design/Participation").
Technology as an Amplifier of Democracy and Justice
Technology contributes to human welfare and capabilities.
In order to be able to participate successfully in a democratic society, citizens need to have their basic needs met with regard to food, water, shelter, etc. Meeting these basic needs, or a certain level of well-being, is also a demand of justice under a sufficientarian conception. Technology can help increase well-being, keep us safe from harm, etc. Thus, it can help fulfill the conditions for democracy and justice.
The argument that technology contributes to human welfare is the oldest and perhaps the least controversial argument for technology as enhancing democracy and justice. Its best-known historical proponent is Francis Bacon (1620, 1627; cf. Mitcham 1990), who extolled the virtues of modern technology in conquering nature and bringing about social and moral progress. Though this optimism was tempered later, partly in reaction to the negative aspects of industrialization (see previous section), many engineers still subscribe to it. In Winner’s words, “The factory system, automobile, telephone, radio, television, space program, and of course nuclear power have all at one time or another been described as democratizing, liberating forces.” (1986, p. 19). Those who take this argument to its furthest extreme argue that our technology will someday turn us into super- or post-humans, helping us to fully overcome our biological limitations and making considerations of justice obsolete (Kurzweil 2005; Savulescu and Bostrom 2009).
Criticism against this argument takes two main forms. First, it can be argued that technology can contribute to well-being but does not necessarily do so: it can also diminish well-being through direct or indirect harm, new risks, the exploitation of humans and nature, etc. Second, it can be argued that even if technology contributes to welfare, this is no guarantee that it will contribute to democracy and justice. For example, if a technology were to contribute a massive increase in welfare for the rich but only a marginal increase for the poor, it would not be just according to Rawls, as it would violate his condition that any inequalities in distribution should deliver the greatest benefits to those who are worst off (see section "The Value of Justice").
Technology can enhance one’s skills and knowledge regarding how to participate in a democratic system / further justice.
Having one’s basic needs met can be regarded as a condition for democratic participation but so is knowing what is going on and knowing how to participate. With regard to the knowledge of what is going on, technology has greatly advanced methods of communicating and sharing information, from the advent of the printing press to newspapers and television and later information and communication technology (ICT) and social media. With regard to knowing how to participate, technology has contributed to education and the dissemination of information about procedures, contact details of civil servants, etc.
Criticism against this argument is very similar to that against the previous argument. Johnson has argued that our global information infrastructure does not automatically empower the people. She argues that more power does not simply come through more information, but through accurate, reliable, and relevant information (1997, p. 25). Indeed, without any filter, the deluge of information we get through ICT can easily distract rather than empower us (Floridi 2005), and propaganda spread by authoritarian regimes, as well as entertainment to placate the masses, may hamper meaningful democratic engagement. Consequently, the democratic potential of ICT depends very much on who filters the information and what criteria they use. See, for example, Massa and Avesani (2007) on how different kinds of trust metrics, ways to rate how trustworthy a participant in an online community is, can introduce different biases in the information exchanged in those communities.
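The point about trust metrics can be illustrated with a toy sketch. All names, ratings, and the one-step propagation rule below are invented for illustration and are not taken from Massa and Avesani; they only show how a *global* metric assigns a controversial user a single community-wide score, while a *local* metric rates that same user very differently depending on whose trust network is consulted.

```python
# Hypothetical trust network: trust[rater][ratee] is a statement in [0, 1].
# "dan" is controversial: alice's camp distrusts him, erin's camp trusts him.
from statistics import mean

trust = {
    "alice": {"bob": 1.0, "erin": 0.0, "carol": 1.0, "dan": 0.1},
    "bob":   {"alice": 1.0, "carol": 0.9, "dan": 0.2},
    "erin":  {"frank": 1.0, "alice": 0.0, "carol": 0.2, "dan": 0.9},
    "frank": {"erin": 1.0, "carol": 0.1, "dan": 1.0},
}

def global_trust(ratee):
    """One community-wide score: the average of all statements about `ratee`."""
    return mean(r[ratee] for r in trust.values() if ratee in r)

def local_trust(rater, ratee, threshold=0.5):
    """Crude one-step local metric: average the statements about `ratee` made
    by raters whom `rater` trusts (above `threshold`), plus the rater's own."""
    vals = [trust[w][ratee]
            for w, t in trust[rater].items()
            if t >= threshold and ratee in trust.get(w, {})]
    if ratee in trust[rater]:
        vals.append(trust[rater][ratee])
    return mean(vals)

# Globally, dan looks middling; locally, alice's camp sees him as untrustworthy
# while erin's camp sees him as highly trustworthy. Which metric a platform
# adopts determines whose information gets amplified.
```

The design choice between the two metrics is exactly the kind of bias Massa and Avesani discuss: a global metric penalizes controversial voices uniformly, while a local metric can entrench camp-specific views.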
A broader problem behind this is the practical issue that even engaged citizens are limited in the time and resources they can invest in gathering and judging information and in discussing and taking action on the basis of that information (Bimber 1998; Van den Hoven 2005). Thus, more information and more possibilities for digital activism cannot increase political activity beyond a certain level unless this "information cost" is brought down, making the unbiased design of filters and discussion-facilitating tools all the more important. Van den Hoven argues that this problem necessitates a rethinking of what our democracy should look like. Rather than striving for the unrealistic ideal of the Well-informed Citizen, he argues, we should aim at the more practical Monitoring Citizen (Schudson 1998). The Monitoring Citizen does not know everything that is going on but can monitor it successfully and can investigate and contest policy when needed.
Technology can facilitate decentralized communication and coordinated action.
Technology can not only help us decide what to do but also help us actually do it. This can advance democracy and justice by giving more power to the people, even in nondemocratic states. For example, during the Arab Spring, social media were widely used to coordinate protests even where official media channels were blocked by authoritarian regimes (see section "Arab Spring/ICT").
Criticism against this argument is that technology does not automatically increase communication and coordinated action; it may rather change communication and action patterns, connecting some social groups while isolating others. Johnson (1997) argues that in the past, shared geographic space determined shared contact and action. With our global ICT network, she warns, individuals may increasingly contact and work with like-minded individuals rather than with others holding diverse and conflicting viewpoints, which may lead to less involvement in local or national communities and to ideological isolation. This warning is echoed by Sunstein (2001), who advocates (among other things) for new public electronic forums where discussions between people of different viewpoints are encouraged. Sclove (1995) is similarly ambivalent, seeing new options for cooperation, such as the creation of "virtual commons," but also risks of disintegration of local social groups. Furthermore, new technologies such as social media may offer opportunities for distributed communication, but these may be partly or fully offset by the opportunities they offer powerful actors such as states for monitoring and controlling this communication (Morozov 2011). Pariser (2011) links this ideological isolation and propaganda spreading to the workings of the algorithms that operate behind the scenes of Internet giants such as Google and Facebook. He warns that these algorithms, which most people are unaware of and do not think about, have a great influence on what we see and hear on the Internet, creating a "filter bubble" in which we only get to see information that corresponds with our own views (personalized information) or those of the state or the company (propaganda). Then again, others have suggested that personalized information may expand rather than constrain our views, which could further the values of democracy and justice (Hosanagar et al. 2014).
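Pariser's worry can be illustrated with a toy simulation. Every number and the boosting rule below are invented for illustration; no real recommender works exactly like this. The sketch only shows the feedback loop he describes: a recommender that simply boosts whatever gets clicked can turn an initially even feed into one dominated by a single viewpoint.

```python
# Toy filter-bubble simulation (all parameters hypothetical).
import random

random.seed(0)
viewpoints = ["left", "right", "center"]
weights = {v: 1.0 for v in viewpoints}  # the feed starts with an even mix

# Suppose the user is somewhat more likely to click stories from one viewpoint.
def click_probability(viewpoint):
    return 0.8 if viewpoint == "left" else 0.2

for _ in range(500):
    # Show a story drawn in proportion to the current weights.
    shown = random.choices(viewpoints, weights=[weights[v] for v in viewpoints])[0]
    # Naive engagement-driven personalization: boost whatever gets clicked.
    if random.random() < click_probability(shown):
        weights[shown] *= 1.05

share_left = weights["left"] / sum(weights.values())
# After 500 rounds, the preferred viewpoint dominates the feed: a modest
# click-rate difference plus a reinforcement loop yields ideological isolation.
```

The design point is that the isolation is not programmed in explicitly anywhere; it emerges from an apparently neutral "optimize for engagement" rule, which is why Pariser insists the algorithm itself deserves democratic scrutiny.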
For the relation between search engines and democracy, see also Introna and Nissenbaum (2000), Nagenborg (2005), and Tavani (2014).
Technology may draw previously disinterested parties into democratic processes.
In presenting her ideal of “collaborative democracy,” Noveck (2009) argues that technology can draw people into democratic processes who would normally not participate, e.g., by offering alternative procedures to classical deliberation and enabling people to enter those processes themselves rather than having to be selected as “experts” by policy makers who may be biased in their selection procedure.
Criticism against this argument could be that the threshold for democratic participation is not necessarily lowered, but rather changed: interested parties still need an Internet connection and the know-how to join and participate in digital forums. Furthermore, even if technology can lower the overall threshold for participation, some voices cannot be drawn in, such as those of disinterested parties or future generations who will be affected by the decisions that are taken now (Thompson 2010).
Technology does not discriminate according to human biases.
It is a well-documented fact that humans suffer from psychological biases that may lead to discrimination and injustice in decision making (Sutherland 1992/2007). Technology does not suffer from these biases: as Latour puts it, “no human is as relentlessly moral as a machine” (1992, p. 157).
Criticism against this argument, however, is also provided by Latour, who argues that technology contains scripts for prescribing behavior that can be inscribed by engineers both intentionally and unintentionally, and this prescription can lead to different forms of discrimination (cf. Winner 1980 on the architect Robert Moses’ “racist overpasses”). To borrow an example from Latour, hydraulic door closers will not discriminate based on gender or race, but they will discriminate against those who are too weak to push the door open, such as the young or the elderly. Introna (2005) gives the example of gender/race biases in face recognition systems and proposes “disclosive ethics” as a way to deal with biases generated by technologies that are relatively closed to scrutiny. Friedman and Nissenbaum (1996) give examples of biases in computer systems and offer several suggestions on how to identify and deal with them. Roth (1994) argues that complex ballot design may bias democratic elections by excluding or misleading undereducated voters. All in all, technology may not suffer from the same biases in the same way that humans do, but bias can still enter design in many ways and so lead to undemocratic or unjust situations.
To summarize, while technology certainly has the potential to contribute to democracy and justice, it can often just as easily be used for the opposite. Various design approaches have attempted to harness this positive potential and avoid its pitfalls by introducing democracy and justice into the design process, in the hope that this leads to a more democratic and just product. We will examine some of these approaches more closely in the next section.
Democratic Technology Design/Participation
Democratic technology design starts from two observations: technology has a major impact on human life, and design processes have in the past often not adequately included the values of democracy and justice. As discussed in the introduction, one of the arguments for democratic decision making is that affected persons need to have a say in the decision-making process in order to ensure that their values and interests are taken into account. Democratic technology design and visions of participatory technology assessment start from this idea and move it from the legal domain to the technological domain. Since technology is a major factor shaping human life, all sides should be heard in the design and/or implementation of important technological projects.
The ideas of participatory processes of decision making are a natural part of political theories that highlight democratic elements. With regard to ethical theories, visions of participatory design often go back to contractualism (Scanlon 1998) or discourse ethics (Apel 1973; Habermas 1993). Both ethical approaches regard the idea of a consensus that no one can reasonably reject, or that every affected party should be able to accept, as a key ideal of normative theory. As Scanlon argues, "An act is wrong if its performance under the circumstances would be disallowed by any set of principles for the general regulation of behaviour that no one could reasonably reject as a basis for informed, unforced, general agreement." (Scanlon 1998, p. 153). In a similar vein, Apel and Habermas aim at formulating the criteria for the ideal discursive community that should guide real processes of democratic decision making. Among these are a power-free dialogue, consistency, transparency, and a striving for the common interest (Habermas 1993). It has been objected that a consensus is often not possible, so that real participation should not always be consensus oriented; rather, it should follow a negotiation model of compromise-oriented discussion in order to be relevant for practice (e.g., Van den Hove 2006). Next to the question of whether the ideal of agreement or compromise should guide participatory processes, philosophers have analyzed the question of when participants may even have a moral right to leave the deliberative process and take measures of political activism, especially in the absence of equality among the participants and/or when some or all participants severely diverge from the ideal speech situation (e.g., Fung 2005).
Critics have also warned that participatory exercises may end up legitimizing, rather than challenging, the prevailing technological and political frameworks:
In all these ways, European democracy is biotechnologized. Participatory exercises help legitimize the neo-liberal framework of risk-benefit analysis, which offers us a free consumer choice to buy safe genetic fixes. (…) If we wish to democratize technology, I suggest that we must challenge the prevalent forms of both technology and democracy. (Levidow 1998, p. 223)
Participatory design as a research field arose in the 1970s and 1980s, when ICT entered the workplace. The early introduction of ICT was regularly met with hostility from workers who felt that their interests were not adequately taken into account in this transition. This sparked research into how the needs and interests of workers could be incorporated in these management-driven transitions (Kensing and Blomberg 1998). Three main issues in the participatory design literature have been (1) the politics of design, as it was quickly recognized that new technologies often supported existing power structures and management strategies of, e.g., centralized control, making it difficult to adapt them to workers' interests and needs; (2) the nature of participation, regarding when, how, and why workers should participate (e.g., Clement and Van den Besselaar 1993); and (3) developing methods, tools, and techniques for actually carrying out the participatory design process, such as Cooperative Experimental Systems Development, which includes cooperative prototyping and design for tailorability (Grønbæk et al. 1997), and MUST, which focuses on creating visions for change for both the technology to be adopted and the organization that intends to adopt it (Kensing et al. 1998).
Despite the apparently ethical motivations and the fact that many recommendations of participatory design are in line with prescriptions of procedural justice, ethical reflection on participatory design principles is fairly recent and mostly concerned with explicating ethical issues rather than prescribing particular courses of action. Steen (2011) connects participatory design with ethics of the other, pragmatist ethics, and virtue ethics. Regarding ethics of the other, he identifies a tension between the need to be open to others (e.g., the users) and the need to, at some point, close the discussion and finish the design. This connects to the more general question in procedural justice of who should have the authority to establish that a workable consensus has been reached: ideally, this conclusion is established by everyone involved, but constraints of time and resources as well as the possibility of fundamental disagreements can make full consensus unfeasible or impossible. Regarding pragmatist ethics, Steen draws on Dewey’s (1920) prescriptions for processes of inquiries, and regarding virtue ethics, he argues that it can help to reflect on the kind of person a designer should be in her or his role as discussion participant.
Robertson and Wagner (2013) explicitly explore the relations between ethics in general and participatory design, including the effects of design on the world we live in, difficulties in aligning ethical principles with politics in practice, dealing with value conflicts (e.g., whether severely ill children should be drawn into the participatory process or be protected from the strenuous task), and accounting for cultural differences in the participation process.
Technology assessment started out as a method to predict the societal and ethical impacts of new technologies at an early stage. While this in itself does not make it a democratic or just method, it has been developed in accordance with these values into, e.g., participatory technology assessment, where many societal parties are involved in the ethical evaluation (Kowalski 2002), and constructive technology assessment, where the results of the assessments are used not so much for regulatory purposes but rather for the redesign of the technology (Schot and Rip 1997). As with participatory design, however, while constructive technology assessment is based in part on the values of democracy and justice, it does not make use of explicit theories of democracy and justice. This criticism is also given by Palm and Hansson (2006), who propose ethical technology assessment as an alternative. Palm and Hansson explicitly avoid commitment to one particular ethical theory, but they identify a number of ethical aspects of new technologies that should be taken into account. Democratic aspects include the dissemination of information about and control over new technologies; justice aspects include biases of gender, race, sexuality, etc., and biases against handicapped people, as illustrated by the cochlear implant, which is perceived by some as a threat to deaf culture. Some authors also argue that keeping technology flexible and open for multiple uses and reconfiguration is part of design for democracy (e.g., Van der Velden 2009; Kiran 2012; Dechesne et al. 2013).
Another method that explicitly seeks to involve stakeholders and their values in the design process is value-sensitive design (VSD; Friedman 1996; Friedman et al. 2005). Value-sensitive design employs a tripartite methodology, consisting of conceptual investigations into who is affected and what values are implicated by a new technology, empirical investigations into stakeholder values and trade-offs and the use context, and technical investigations into how a particular technology can help or hinder certain values. Though VSD also does not explicitly draw on theories of democracy and justice, its concern with those values is evidenced by the fact that it seeks to include both direct and indirect stakeholders: not only the users of a technology but all those affected by it in some way.
One ethical criticism of VSD is that, while it argues for the inclusion of direct and indirect stakeholders, it does not offer a methodology for determining who can legitimately be considered a stakeholder (Manders-Huits 2011). It has also been argued that VSD does not explicitly support a legitimate deliberative procedure for discussing stakeholder input and justifying trade-offs, and that procedural justice theories, particularly discourse ethics, can help in this regard (Yetim 2011). To summarize, while design is increasingly oriented toward the values of democracy and justice, the integration of theories of democracy and justice into design methods and practices has only just started and still requires much work. We will examine some of the challenges for this integration in section "Open Questions and Conclusions."
Experiences and Examples
In this section, we discuss two short cases. We intend to contrast a classical field of debate about the design for democracy and justice with a more recent discussion. We thus first look at the discussion about nuclear or solar energy and the effect of these choices on social power distribution and political institutions. After that, we investigate the recent debate about the power of modern mass communication and social media to promote democracy and undermine authoritarian power.
Energy Production, Justice, and Democracy
Technologies for energy production have been at the center of a debate about "inherently political artifacts" (Winner 1980). In this section, we would like to illustrate the broader claims made in the previous section by sketching the debate about the politics of energy technology. If one accepts the premise of inherently political technologies, one can ask to what extent energy technologies are political.
The first consideration is that the modern lifestyle depends on energy supply. The energy demand of modern civilization is one of the major drivers of international conflicts, especially given the dependence of the West on fossil fuels. Similarly, the threats of climate change are a result of an ever growing worldwide energy demand. As we have seen above, Jonas (1979/1984) thus already argued that modern technology raises radically new questions with regard to responsibility and justice. How can we protect nature, and how can we strive for a just distribution of resource consumption? On the one hand, it has been argued that participation and civic engagement are essential in the implementation of major energy projects (cf. Ornetzeder and Rohracher 2006; Lewis and Wiser 2007); on the other hand, future generations by definition cannot participate in the process, as they do not yet exist. This has led to the question of how to represent future generations in policy making, especially in the case of sustainability and long-term planning (Hösle 1994; Gosseries 2008).
With regard to democracy, however, another aspect besides participation becomes relevant in the analysis of energy technologies. It has been claimed that different types of energy production suggest different types of political institutions. As Winner (1980) has argued, nuclear energy requires a centralized top-down system of political control, whereas solar energy lends itself more easily to decentralized, individual, bottom-up initiatives. In a similar vein, Akrich (1992) has analyzed the impact of electrification on traditional societies in developing countries and how different technologies redefine social roles. She argues that different types of energy technologies also define social roles and thus contribute to or emphasize differences in power structures. A power generator that can be used in rural villages strongly suggests a specific method of dividing costs: the investment cost of the generator and the cost of using it (the fuel) can easily be separated, suggesting a whole microcosm of economic relations between the "owner" and a "lender." Battery-driven lighting kits, likewise, are not only designed to meet certain technical functional requirements but also include assumptions about the knowledge of the user, about maintenance structures, and about the use context. Finally, the whole process of introducing electrification is not only a technology transfer but at the same time imposes a system needed for payment, a legal structure of property rights and landownership, control of consumption and payments, and the like. Akrich's analysis is thus meant to illustrate that technologies go beyond fulfilling their function: they also alter economic and political structures. The impact of energy technologies on social structures, justice, and civic engagement has since been analyzed in detail by philosophers and social science scholars (e.g., Chess and Purcell 1999; Devine-Wright 2013; Hoffman and High-Pippert 2005).
An illustrative case study by Nieusma and Riley (2010) shows that engineering-for-development initiatives that are explicitly aimed at social justice and implemented with careful consideration of the nontechnical aspects of decentralized small-scale energy technologies nevertheless often face serious challenges. One of the cases they present is the rural electrification campaign in the southwestern district of Monaragala carried out by the Energy Forum in Sri Lanka during 2000–2002. The aims of the Energy Forum are the promotion and implementation of renewable technologies, focusing on decentralized energy technologies, including dendro power for rural electrification. Even though rural electrification contributes to technological advancement, one of the central motivations of the Energy Forum is social justice: "How to more fairly distribute the nation's energy resources […] so that the rural poor will benefit as well" (ibid., p. 43). The implementation also aimed at strengthening civic community and included nurturing "effective working relationships with village leadership and community members, […] [and] two participatory design workshops where villagers shared their perspectives on the project" (Nieusma and Riley, p. 44). This is in line with the idea that decentralized, locally owned energy projects foster social community and may even strengthen democracy (Hoffman and High-Pippert 2005). However, Nieusma and Riley conclude that despite the "purity of its motives and regardless of the effort put into transferring control in a sensible way" (ibid., p. 50), the project failed to empower the community members in the targeted village. They argue that designing for social justice requires overcoming engineering project risks such as (1) overfocusing on technology, (2) the occlusion of power imbalances in social interaction, and (3) ignoring the larger structural (including social) context (ibid., pp. 51 ff.).
Arab Spring/ICT
ICT and social media have been heralded as saviors of as well as threats to democracy. Winner (1992) already pointed out that the fax machine helped revolutionaries hasten the demise of the USSR, while Chinese revolutionaries using fax machines during the 1989 Tiananmen Square protests were quickly tracked down and arrested by the authorities. This section takes a closer look at the role of social media in more recent events, particularly those known collectively as the Arab Spring.
The Arab Spring is said to have started with the self-immolation of the Tunisian fruit vendor Bouazizi, which quickly led to public protests and eventually the flight of the Tunisian president Ben Ali. The protests spread to other North African and Arab countries, and governments were overthrown in Egypt and Libya. Whether these revolutions will in the long run lead to more democratic governments is still an open question, with the civil war in Syria ongoing and a mass execution of more than 500 civilians planned in Egypt at the time of this writing. However, it is undeniable that social media played an important role during these revolutions.
While the Arab Spring has been called a "Facebook/Twitter revolution," most academics seem to agree that social media were a necessary but not sufficient part of the uprisings. Khondker (2011) argues that two factors were crucial: the presence of revolutionary conditions and the inability of the state to control the revolutionary upsurge. Revolutionary conditions include high income inequality, government corruption, and high youth unemployment: a large percentage of Tunisia's and Egypt's populations are young, and many of those young people are unemployed, tech-savvy, and free of the job or family responsibilities that might stand in the way of a willingness to participate in the protests (Howard et al. 2011; Lim 2012). In many ways, the Arab Spring uprisings were a culmination of social unrest, demonstrations, and social media protests that had been simmering for years (ibid.). An example of the inability of the state to control the upsurge is the Egyptian president Mubarak's shutdown of the telecommunications network, which was partly evaded by protesters with satellite phones and which hindered government agencies and ordinary citizens as well (Howard et al. 2011). In the absence of the aforementioned revolutionary conditions, however, social media protests may well be stamped out by authorities arresting protesters, infiltrating networks, and spreading propaganda.
One specific contribution of social media to the protests is the creation of spaces and networks connecting people with common interests (Allagui and Kuebler 2011), a possibility Sclove (1995) had already noted. Johnson (1997) warned that the Internet makes it easy for like-minded people to seek out and agree with each other rather than be exposed to divergent opinions and worldviews, but during the Arab Spring networking also occurred across ideological and religious boundaries on the basis of shared grievances against the regime (Lim 2012). Moreover, rather than merely producing “slacktivism” and Facebook rants (Morozov 2011), the Arab Spring showed that injustices could mobilize those networks to actually take to the streets and that social media helped protesters coordinate once they did. This happened, for example, after the brutal police murder in 2010 of Khaled Said, a young Egyptian allegedly targeted because he possessed evidence of police corruption (Lim 2012). Finally, social media have served as an alternative source of information in countries with strong media censorship and, conversely, as a channel to traditional media in other countries to get information and footage out into the world (Khondker 2011).
Many philosophers of technology have proclaimed that technology is not “morally neutral.” The role of social media during the Arab Spring, however, shows that a technology may well be neutral with respect to certain values: the openness and accessibility of information make social media a valuable tool for nondemocratic regimes and their opponents alike. There certainly are attempts to further “democratize” social media, for example, through participation in the Global Network Initiative, which has developed principles and guidelines for ICT companies to safeguard user access, privacy, and freedom of expression (at the time of this writing, Twitter is notably absent). There are, however, also attempts to bring social media and other software under state control, e.g., the NSA requiring US engineers to build in “back doors” for snooping. Security technologist Bruce Schneier has criticized this practice and issued an open call for software engineers to expose such practices and actively “take back the Internet” (Schneier 2013a, b). Clearly, there is more technical and ethical work yet to be done if social media are to be made (and kept) a truly democratic technology.
Open Questions and Conclusions
In this section, we identify three recurring themes and a number of open questions for both engineers and philosophers interested in furthering design for democracy and justice.
First, a general theme among the technology pessimists is the threat to democracy and justice posed by the “technological mind-set” that, as Mitcham (1994) has noted, is a natural perspective for engineers to take. While nothing is in itself wrong with the values of effectiveness and efficiency, or even with viewing nature as a “standing resource,” this mind-set becomes problematic in excess: when people become unable to view nature in other ways (Heidegger), when other values or viewpoints are derided as “lesser” or “irrational,” or when the technocratic mind-set is applied to areas where it does not belong (Habermas’ colonization of the lifeworld).
Interestingly, while Heidegger and Adorno both identify this as a grave problem, they seek the solution outside the domain that created it, namely in the arts. While the arts can certainly make us look at the world in a different way, this is of little consolation to engineers who would like to design for democracy and justice. Engineers interested in curbing the excesses of the technological mind-set can, however, draw on the work of Feenberg and Habermas, particularly the insight that other viewpoints represent different values and rationalities rather than simply being irrational, and therefore deserve serious consideration.
Second, while the technology pessimists discussed above tend at times toward wholesale pessimism about the influence of technology on democracy and justice, most optimists are more nuanced, arguing that technology has great potential to advance those values – but also great potential to hinder them. Overall, there seems to be no design rule (yet) that, when applied to a technology, will make it (more) democratic and just in itself. Some factors that determine a technology’s impact on democracy and justice may be technological. Many factors, however, are outside the engineer’s control, such as the willingness of the Tunisian populace to revolt during the Arab Spring. Still other factors, such as those in the realm of use and institutional contexts, are only partly under engineers’ control. It is no wonder, then, that design methods that seek to further democracy and justice tend to focus on what engineers do have control over (though not necessarily full control): the design process. The third conclusion is therefore devoted to this process and the open questions for research identified in this chapter.
Who are the stakeholders? Ideally, all those who “have a stake” in the design should be consulted (with possible exceptions, such as business competitors and criminals). However, especially for novel technologies not (yet) embedded in a clear use context, such as nanotechnology, it may be impossible to predict beforehand who will eventually be affected by the technology. Moreover, potential stakeholders may include those who are unable to participate in the discussion, such as future generations and animals, and it may not always be clear who could and should legitimately represent their interests.
Whose opinions should be taken into account? This question has practical aspects: should severely ill children, for example, be dragged into a consensus procedure involving technology that will affect them (Robertson and Wagner 2013)? It also has theoretical aspects, e.g., whether people with extremely divergent, uncommon, or incoherent worldviews should participate in the procedure (Taebi et al. 2014). Generally, a diversity of views is seen as beneficial to the process, as it may open up new possibilities. Moreover, excluding critical or unorthodox voices can easily become a tool for powerful participants to silence parties with opposing interests, or an excuse not to involve minority groups or local communities (Levidow 1998; Van der Velden 2009). The greater the diversity of viewpoints and values, however, the more difficult it may become to achieve consensus or closure.
How should different opinions be weighed? The classic democratic tenet is “one person, one vote.” However, this assumes that all stakeholders are affected equally by a particular decision or design, which may not always be the case. VSD, for example, distinguishes between direct and indirect stakeholders but does not elaborate on what this distinction implies for the weighing of interests.
If consensus is not reached, who should make trade-offs and achieve closure? Ideally, consensus is reached and trade-offs are made by mutual agreement. However, persistent disagreements and constraints of time and resources may make this impossible (Steen 2011). Giving particular groups (or particular ethical theories) the authority to achieve closure of a discussion may then be necessary, even though doing so runs counter to the prescriptions of procedural justice. Behind this practical problem thus lies a deeper philosophical problem that requires further investigation.
Acknowledgments
This research was supported by the MVI research programme “Biofuels: sustainable innovation or gold rush?”, financed by the Netherlands Organisation for Scientific Research (NWO).
References
- Adorno TW (1999) Aesthetic theory, new edn. Athlone, London
- Adorno TW, Horkheimer M (1979) Dialectic of enlightenment, new edn. Verso, London
- Akrich M (1992) The de-scription of technical objects. In: Bijker WE, Law J (eds) Shaping technology/building society. MIT Press, Cambridge, MA, pp 205–224
- Allagui I, Kuebler J (2011) The Arab Spring and the role of ICTs: editorial introduction. Int J Commun 5:1435–1442
- Apel KO (1973) Transformation der Philosophie: Sprachanalytik, Semantik, Hermeneutik. Das Apriori der Kommunikationsgemeinschaft. Suhrkamp, Frankfurt a. M.
- Bacon F (1620) The new organon. Cambridge University Press, Cambridge/New York
- Bacon F (1627) New Atlantis: a worke vnfinished. In: Bacon F (ed) Sylva sylvarum: or a naturall historie, in ten centuries. William Lee, London
- Bohman J, Rehg W (eds) (1997) Deliberative democracy: essays on reason and politics. MIT Press, Cambridge, MA
- Christiano T (1996) The rule of the many: fundamental issues in democratic theory. Westview Press, Boulder
- Christiano T (2008) Democracy. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, Fall 2008 edn. http://plato.stanford.edu/archives/fall2008/entries/democracy/. Accessed 30 Mar 2014
- Clement A, Besselaar P van den (1993) A retrospective look at PD projects. In: Muller M, Kuhn S (eds) Participatory design: special issue of the Communications of the ACM, vol 36, no 4, pp 29–39
- Devine-Wright P (ed) (2013) Renewable energy and the public: from NIMBY to participation. Routledge, London/Washington, DC
- Ellul J (1964) The technological society. Knopf, New York (French orig. 1954)
- Feenberg A (1991) Critical theory of technology. Oxford University Press, Oxford/New York
- Feenberg A (2002) Transforming technology: a critical theory revisited. Oxford University Press, Oxford/New York
- Friedman B, Kahn PH Jr, Borning A (2005) Value sensitive design and information systems. In: Zhang P, Galletta D (eds) Human-computer interaction in management information systems. M.E. Sharpe, New York, pp 348–372
- Grønbæk K, Kyng M, Mogensen P (1997) Toward a cooperative experimental system development approach. In: Kyng M, Mathiassen L (eds) Computers and design in context. MIT Press, Cambridge, MA, pp 201–238
- Habermas J (1971) Knowledge and human interests. Beacon Press, Boston
- Habermas J (1984) The theory of communicative action, vol I: reason and the rationalization of society (trans: McCarthy T). Beacon, Boston (German orig. 1981)
- Habermas J (1987) The theory of communicative action, vol II: lifeworld and system (trans: McCarthy T). Beacon, Boston (German orig. 1981)
- Habermas J (1990) Moral consciousness and communicative action. MIT Press, Cambridge, MA
- Habermas J (1993) Justification and application: remarks on discourse ethics. MIT Press, Cambridge, MA
- Habermas J (1994) Three normative models of democracy. Constellations 1(1):1–10
- Heidegger M (1977) The question concerning technology, and other essays. Harper & Row, New York (orig. 1953)
- Hösle V (1994) Philosophie der ökologischen Krise: Moskauer Vorträge. CH Beck, München
- Hösle V (2004) Morals and politics. University of Notre Dame Press, Notre Dame
- Howard PN, Duffy A, Freelon D, Hussain M, Mari W, Mazaid M (2011) Opening closed regimes: what was the role of social media during the Arab Spring? Project on Information Technology & Political Islam, Seattle, working paper 2011.1
- Introna LD (2005) Disclosive ethics and information technology: disclosing facial recognition systems. Ethics Inf Technol 7:75–86
- Johnson DG (1997) Is the global information infrastructure a democratic technology? In: Spinello RA, Tavani HT (eds) Readings in cyberethics, 2nd edn. Jones and Bartlett, Sudbury, pp 121–133
- Jonas H (1979/1984) Das Prinzip Verantwortung. Suhrkamp, Frankfurt am Main/The imperative of responsibility: in search of an ethics for the technological age. The University of Chicago Press, Chicago
- Kowalski E (2002) Technology assessment: Suche nach Handlungsoptionen in der technischen Zivilisation. vdf Hochschulverlag, Zürich
- Kurzweil R (2005) The singularity is near: when humans transcend biology. Viking, New York
- Latour B, Woolgar S (1979) Laboratory life: the social construction of scientific facts. Sage, Beverly Hills
- Latour B (1992) Where are the missing masses? The sociology of a few mundane artifacts. In: Bijker WE, Law J (eds) Shaping technology/building society: studies in sociotechnical change. MIT Press, Cambridge, MA, pp 225–258
- Latour B (2005) Reassembling the social: an introduction to actor-network-theory. Oxford University Press, Oxford/New York
- Law J (1999) Actor network theory and after. Blackwell/Sociological Review, Oxford/Malden
- Marx K (1938) Capital. Allen & Unwin, London
- Mill JS (1859) On liberty. In: Robson JM (ed) Collected works of John Stuart Mill, vol 18. University of Toronto Press, Toronto, pp 213–310 (1963ff)
- Mill JS (1861) Considerations on representative government. Prometheus Books, Buffalo (1991)
- Mitcham C (1990) Three ways of being-with-technology. In: Ormiston GL (ed) From artifact to habitat: studies in the critical engagement of technology. Lehigh University Press, Bethlehem, PA, pp 31–59
- Mitcham C (1994) Thinking through technology: the path between engineering and philosophy. University of Chicago Press, Chicago
- Morozov E (2011) The net delusion: the dark side of Internet freedom. PublicAffairs, New York
- Nagenborg M (2005) Search engines. Special issue of International Review of Information Ethics, vol 3
- Noble DF (1984) Forces of production: a social history of industrial automation. Knopf, New York
- Noveck BS (2009) Wiki government: how technology can make government better, democracy stronger and citizens more powerful. Brookings Institution Press, Washington, DC
- Pariser E (2011) The filter bubble: what the Internet is hiding from you. Penguin Press, New York
- Pogge TW (2002) World poverty and human rights: cosmopolitan responsibilities and reforms. Polity Press, London
- Rawls J (1971) A theory of justice. Harvard University Press, Cambridge, MA (revised edn 1999)
- Rawls J (1999) The law of peoples. Harvard University Press, Cambridge, MA
- Robertson T, Wagner I (2013) Ethics: engagement, representation and politics-in-action. In: Simonsen J, Robertson T (eds) Routledge international handbook of participatory design. Routledge, New York, pp 64–85
- Roth SK (1994) The unconsidered ballot: how design effects voting behaviour. Visible Lang 28(1):48–67
- Rousseau J-J (1750) The social contract and discourses. Everyman, London (1993)
- Savulescu J, Bostrom N (2009) Human enhancement. Oxford University Press, Oxford
- Scanlon T (1998) What we owe to each other. Harvard University Press, Cambridge, MA
- Schneier B (2013a) The US government has betrayed the internet. We need to take it back. The Guardian, 5 Sept 2013. http://www.theguardian.com/commentisfree/2013/sep/05/government-betrayed-internet-nsa-spying. Accessed 2 Apr 2014
- Schneier B (2013b) Why the NSA’s attacks on the internet must be made public. The Guardian, 4 Oct 2013. http://www.theguardian.com/commentisfree/2013/oct/04/nsa-attacks-internet-bruce-schneier. Accessed 2 Apr 2014
- Schudson M (1998) The good citizen: a history of American civic life. Harvard University Press, Cambridge, MA
- Sclove RE (1995) Democracy & technology. The Guilford Press, New York
- Sen A (1999) Development as freedom. Knopf, New York
- Singer P (1973) Democracy and disobedience. Oxford University Press, Oxford
- Snow CP (1959) The two cultures and the scientific revolution: the Rede lecture, 1959. Cambridge University Press, New York
- Spahn A (2010) Technology. In: Birx H (ed) 21st century anthropology: a reference handbook. SAGE Publications, Thousand Oaks, pp 132–144
- Steen M (2011) Upon opening the black box of participatory design and finding it filled with ethics. In: Proceedings of the Nordic design research conference no 4: Nordes 2011: making design matter. Helsinki, 29–31 May
- Sunstein C (2001) Republic.com. Princeton University Press, Princeton
- Sutherland S (1992/2007) Irrationality: the enemy within. Constable and Company/Pinter and Martin, London
- Tavani H (2014) Search engines and ethics. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, Spring 2014 edn. http://plato.stanford.edu/archives/spr2014/entries/ethics-search/. Accessed 15 July 2014
- Vico G (1709) De nostri temporis studiorum ratione, Lateinisch-Deutsche Ausgabe. Wissenschaftliche Buchgesellschaft, Darmstadt (1974)
- Winner L (1980) Do artifacts have politics? Daedalus 109(1):121–136. Also in: Winner L (1986) The whale and the reactor: a search for limits in an age of high technology. University of Chicago Press, Chicago, pp 19–39
- Yetim F (2011) Bringing discourse ethics to value sensitive design: pathways toward a deliberative future. AIS Trans Hum Comput Interact 3(2):133–155
- Young IM (1990) Justice and the politics of difference. Princeton University Press, Princeton